Flashpoint just released their 2026 Global Threat Intelligence Report. The headline number: a 1,500% spike in AI-related criminal discussions between November and December 2025. Not people talking about AI. People building malicious agentic frameworks, the kind that scrape data, adjust targeting, rotate infrastructure, and learn from failures without a human in the loop.
Same week, Kai Cyber came out of stealth with $125M to build an agentic AI security platform. That's the first serious money in this specific space.
Both reports are saying the same thing. The threat is real, it's moving fast, and most defenders aren't ready.
Why developers should care
If you're building with LangChain, CrewAI, AutoGen, or anything that gives an AI agent access to tools, your agent is an attack surface. Not theoretically. Right now.
Flashpoint's data shows the shift clearly. Attackers aren't breaking in anymore, they're logging in. 3.3 billion stolen credentials floating around. Session cookies that let malicious agents look like legitimate users. Your agent has file system access, API keys, maybe shell access. That makes it a target.
The thing the $125M enterprise players won't emphasize: most agent security threats come from inside, not outside. Your agent processes a poisoned email and gets hijacked via prompt injection. A compromised plugin quietly exfiltrates credentials. An agent starts reasoning around its own safety constraints (Anthropic published research on exactly this). In multi-agent setups, the messages between agents become attack vectors.
The gap nobody's filling
Kai is building top-down. Big platform, big sales team, enterprise contracts. That works for Fortune 500 security teams with budget.
But what about the developer running an agent on their laptop? The startup with three agents handling customer support? The open-source project that needs security but can't write a $100K check?
Nobody's building for them. That's the gap.
What I built
ClawMoat is open-source runtime security for AI agents. Zero dependencies, 142 tests, and it's specifically built for the threats in these reports.
It scans for prompt injection before anything reaches your agent's context window. It does insider threat detection based on Anthropic's misalignment research, looking for self-preservation behavior, deception patterns, unauthorized data sharing. First open-source implementation of that, as far as I know.
The Host Guardian module sets permission tiers for what your agent can access on the filesystem. Your agent doesn't need ~/.ssh or ~/.aws. Now it can't get there.
It monitors for exposed secrets in agent outputs and scans inter-agent messages in multi-agent systems, because the communication layer is where attacks hide in those setups.
import { ClawMoat } from 'clawmoat';
const moat = new ClawMoat({
guardian: { tier: 'worker' },
scanning: { promptInjection: true, secrets: true },
insider: { enabled: true }
});
const result = await moat.scan(userInput);
if (result.blocked) {
console.log('Threat detected:', result.threats);
}
The uncomfortable part
That 1,500% spike isn't a forecast. It already happened. And the $125M going into enterprise platforms won't reach individual developers for years.
Open-source fills that gap now. Not because it's cheaper, because it's faster, auditable, and available to anyone who needs it today.
Flashpoint's conclusion: "incremental improvements to legacy security models are no longer sufficient." I agree. That's why I built something different.
ClawMoat on GitHub | Flashpoint 2026 GTIR | Kai $125M announcement
What are you running for agent security? Curious what others are doing here.
Top comments (0)