Blog · Threat Intelligence

What attackers are doing to AI systems — right now.

Field reports, red-team writeups, and threat-actor tradecraft from the Stack Vault Threat Research team. New research weekly.

212reports
Published
4.2B
Prompts Analyzed
38actors
Tracked
Wklynew
Research
Featured Research

Latest from the Threat Research team

Original investigations into how attackers compromise AI systems in production.

Threat 6 May 2026

Indirect Prompt Injection in Production RAG: A 2026 Field Survey

We sampled retrieval traffic across 142 production RAG deployments. The injection rate is higher than published estimates — and getting worse.

Read article
Tradecraft 2 May 2026

Anatomy of a Multi-Turn Jailbreak Campaign

One adversary spent 11 days incrementally drifting our decoy assistant past its guardrails. The full transcript and detection trace, annotated.

Read article
Research 28 Apr 2026

Vector Store Poisoning at Scale: 8 Real Attacks

From customer-support chatbots to medical RAG: eight cases where adversarial embeddings reached production retrieval indexes.

Read article
Topics We Cover

Where the threat surface is moving

We focus on the threats analysts can actually detect with their existing tooling — extended.

Prompt Injection

Direct, indirect, and behavioral injection patterns observed in the wild.

Vector & Retrieval

Embedding poisoning, retrieval manipulation, and corpus integrity attacks.

Agent Abuse

Tool-call hijacking, capability escalation, and chain-of-thought exfiltration.

Identity & Access

Token theft against model APIs, agent impersonation, and federated trust abuse.

Model Exfiltration

Membership inference, parameter extraction, and proprietary data recovery from LLMs.

Threat Actors

Tracked adversaries who are explicitly targeting AI infrastructure.

Ready to See It Live

Get the weekly threat brief

One email. Friday morning. The week's adversarial AI activity, distilled.