/ Resources / Annual Report
2026 Threat Landscape Report
Drawn from 12.4 billion events, 4,200 production agent deployments, and 180 incident engagements over the past 12 months.
Executive summary
2025 was the year adversaries stopped experimenting with AI and started shipping it. The WitOne SOC saw a 41% year-over-year increase in confirmed campaigns that used language models for reconnaissance, lure generation, or operational tempo. More importantly, we saw the first wave of attacks that targeted AI agents themselves — including three indirect prompt-injection campaigns that abused customer support chatbots to exfiltrate data through legitimate channels.
At the same time, the defender story improved. Detection-as-code adoption grew. SOC teams that adopted autonomous tier-1 saw real reductions in alert fatigue without the precision losses early skeptics warned about. The customers who fared worst were the ones who left their AI tooling unguarded — assuming runtime risk would be owned by “someone else.”
Key findings
- +41% year-over-year confirmed campaigns using AI for recon, lure generation, or operations.
- 3 in-the-wild indirect prompt-injection campaigns targeting customer-facing AI agents.
- 27% increase in healthcare-targeted ransomware, but a 19% reduction in median dwell time across customers using managed detection.
- 4.2x increase in identity-first attacks (token theft, OAuth abuse, conditional-access bypass) compared to 2023.
- 52% of all incidents involved at least one over-permissioned cloud identity. Least-privilege is back on the board agenda.
Table of contents
- Chapter 1 — The AI-augmented adversary
- Chapter 2 — Indirect prompt injection in the wild
- Chapter 3 — Identity is the new perimeter (still)
- Chapter 4 — Ransomware: down in dwell time, up in volume
- Chapter 5 — Healthcare under pressure
- Chapter 6 — Public sector and the FedRAMP-aligned baseline
- Chapter 7 — Cloud blast radius: lessons from 23 incidents
- Chapter 8 — Detection-as-code and the autonomous SOC
- Chapter 9 — What we got wrong in last year's report
- Chapter 10 — Predictions for 2027 (and how we'd know we were wrong)
- Appendix A — MITRE ATT&CK technique frequency
- Appendix B — MITRE ATLAS technique frequency
- Appendix C — Methodology and data sources
Chapter 1 — The AI-augmented adversary
The defining shift of the year was speed. Adversaries used commercial and open-source language models to compress the recon-to-lure cycle from days to minutes. We tracked phishing campaigns where the lure copy was being regenerated per-recipient inside the campaign, with personalization drawn from public sources that would have taken a human researcher hours per target.
The good news: speed cuts both ways. Defenders who adopted Astute-grade retrieval for hunt and triage closed the gap. The customers who were caught flat-footed tended to share one trait — they were still treating “AI in security” as a 2027 problem.
Chapter 2 — Indirect prompt injection in the wild
We documented three confirmed indirect prompt-injection campaigns in the wild during 2025. In two of them, the entry point was a customer-facing support agent that read scraped pages or uploaded documents without sanitization. In the third, an internal “productivity” AI assistant was tricked into exfiltrating data through a legitimate Slack integration. None of the three customers had runtime AI security in place at the time of compromise. All three do now.
Chapter 3 — Identity is the new perimeter (still)
Identity-first attacks grew 4.2x compared to 2023. Token theft, OAuth abuse, and conditional-access bypass made up the bulk. The single highest-leverage control we tracked across 2025 was “Sign-In Risk policy enforced AND session token binding.” Customers with that pair in place saw an order-of-magnitude lower account-takeover incident rate than peers without it.
How to read this report
We try to be specific. Where a number is reported, we cite the methodology. Where a prediction is made, we publish the metric we'd use to know we were wrong. Disagreements and corrections are welcome — email the team at research@witone.one.
Want the deep technical appendices? Talk to the team for a copy of the unredacted edition.