DEV Community

AKAVLABS
AKAVLABS

Posted on

I mapped all 84 MITRE ATLAS techniques to AI agent detection rules — here's what I found

Today Linx Security raised $50M for AI agent identity governance.
It validates the market. But there's a gap nobody is talking about.

Identity governance tells you what agents are allowed to do.

Runtime security tells you what they're actually doing.

MITRE ATLAS documents 84 techniques for attacking AI systems.

Zero commercial products map detection rules to all 84.

I spent the last several months mapping them. The repo is open source,

Sigma-compatible YAML, LangChain coverage live.

The 3 most dangerous techniques right now:

AML.T0054 — Prompt Injection

Agent reads external content containing malicious instructions.

Executes them because it can't distinguish attacker input from task input.

Memory Poisoning

False instructions planted in agent memory activate days later.

The agent's future behavior is controlled by a past attacker.

A2A Relay Attack

Sub-agent receives instructions from a compromised parent.

No mechanism to verify the instruction chain wasn't hijacked.

Detection has to happen at inference time — before execution.

Not after the governance layer logs the completed action.

github.com/akav-labs/atlas-agent-rules

Full writeup on the Linx gap here:

AgentSentry Research

Top comments (0)