<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: AgentVeil Protocol</title>
    <description>The latest articles on DEV Community by AgentVeil Protocol (@agentveil).</description>
    <link>https://dev.to/agentveil</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F12965%2F2812c3d9-e841-407d-80a6-ecacd0a7f73b.png</url>
      <title>DEV Community: AgentVeil Protocol</title>
      <link>https://dev.to/agentveil</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/agentveil"/>
    <language>en</language>
    <item>
      <title>Your agent broke something. Now nobody knows who to blame.</title>
      <dc:creator>OlegB</dc:creator>
      <pubDate>Wed, 08 Apr 2026 11:29:42 +0000</pubDate>
      <link>https://dev.to/agentveil/why-your-multi-agent-pipeline-has-no-accountability-layer-1127</link>
      <guid>https://dev.to/agentveil/why-your-multi-agent-pipeline-has-no-accountability-layer-1127</guid>
      <description>&lt;p&gt;Your orchestration works. Agents coordinate, delegate, execute. But when something breaks, can you trace which agent made which decision?&lt;/p&gt;

&lt;p&gt;Most teams can't.&lt;/p&gt;

&lt;p&gt;Five agents in a pipeline. Something goes wrong downstream. Agent C says it followed Agent B. Agent B points at Agent A. Three hours of log digging, nothing conclusive. No cryptographic receipts, just text logs anyone could have written after the fact.&lt;/p&gt;

&lt;p&gt;Orchestration is the glue. Accountability is the signature. Most teams only build one.&lt;/p&gt;

&lt;h3&gt;
  
  
  What accountability actually requires
&lt;/h3&gt;

&lt;p&gt;For an action to be auditable, you need three things. The agent has a verifiable identity. The action was signed when it happened, not reconstructed later. The record exists outside the agent's own system.&lt;br&gt;
Skip any of those, and your audit trail is a suggestion, not proof.&lt;/p&gt;
&lt;h3&gt;
  
  
  How AVP handles this
&lt;/h3&gt;

&lt;p&gt;Every agent in AVP gets an Ed25519 DID. Every interaction produces a signed attestation anchored to IPFS. Adding this takes one line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@avp_tracked&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;process_document&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;doc_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;action&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;span class="c1"&gt;# your existing code unchanged
&lt;/span&gt;&lt;span class="k"&gt;pass&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That decorator auto-registers the agent, signs every execution, and writes to the immutable audit trail. Nothing else changes in your pipeline.&lt;/p&gt;
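&lt;p&gt;The internals aren't shown in the post, but a minimal version of such a decorator could look like this. Everything here is an illustrative sketch: the names, the DID string, and the in-memory trail are placeholders, not the real AVP SDK, and a real implementation would sign each event with the agent's Ed25519 key and anchor it externally.&lt;/p&gt;

```python
import functools
import hashlib
import json
import time

AUDIT_TRAIL = []  # stand-in for the external, IPFS-anchored trail

def avp_tracked_sketch(fn):
    """Illustrative decorator: record a tamper-evident event per call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        prev = AUDIT_TRAIL[-1]["hash"] if AUDIT_TRAIL else "genesis"
        event = {
            "agent": "did:example:agent-1",  # hypothetical DID
            "action": fn.__name__,
            "args": repr(args),
            "ts": time.time(),
            "prev": prev,
        }
        # hash-chain the event so later tampering is detectable
        event["hash"] = hashlib.sha256(
            json.dumps(event, sort_keys=True).encode()
        ).hexdigest()
        AUDIT_TRAIL.append(event)
        return result
    return wrapper

@avp_tracked_sketch
def process_document(doc_id, action):
    return f"{action}:{doc_id}"

process_document("doc-42", "summarize")  # leaves one chained event behind
```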

&lt;h3&gt;
  
  
  What this looks like in production
&lt;/h3&gt;

&lt;p&gt;Right now, agentveil.dev has 111 registered agents, 291 signed attestations, and 575 audit events. Every event is hash-chained and verifiable.&lt;/p&gt;
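&lt;p&gt;"Hash-chained" means each event commits to the hash of the one before it, so checking the whole trail is a linear walk. A sketch of that check, with assumed field names rather than the real API:&lt;/p&gt;

```python
import hashlib
import json

def event_hash(event):
    # hash every field except the stored hash itself
    body = {k: v for k, v in event.items() if k != "hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_event(action, prev):
    event = {"action": action, "prev": prev}
    event["hash"] = event_hash(event)
    return event

def verify_chain(events):
    """True only if every hash is intact and links to its predecessor."""
    prev = "genesis"
    for event in events:
        if event["prev"] != prev or event["hash"] != event_hash(event):
            return False
        prev = event["hash"]
    return True

chain, prev = [], "genesis"
for action in ["ingest", "transform", "publish"]:
    event = make_event(action, prev)
    chain.append(event)
    prev = event["hash"]

assert verify_chain(chain)         # intact chain passes
chain[1]["action"] = "exfiltrate"  # try to rewrite history...
assert not verify_chain(chain)     # ...and verification fails
```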

&lt;p&gt;When an incident happens, you query the audit trail. You get cryptographic proof of what each agent did and when.&lt;/p&gt;

&lt;p&gt;Orchestration tells you what happened. Accountability tells you who is responsible.&lt;/p&gt;

&lt;p&gt;If something went wrong in your pipeline today, how far back can you trace it?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;agentveil

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>agents</category>
      <category>ai</category>
      <category>architecture</category>
      <category>security</category>
    </item>
    <item>
      <title>ERC-8004 solves agent identity. It doesn't solve agent trust.</title>
      <dc:creator>OlegB</dc:creator>
      <pubDate>Thu, 02 Apr 2026 08:46:52 +0000</pubDate>
      <link>https://dev.to/agentveil/what-ethereums-agent-identity-standard-deliberately-leaves-open-222f</link>
      <guid>https://dev.to/agentveil/what-ethereums-agent-identity-standard-deliberately-leaves-open-222f</guid>
      <description>&lt;p&gt;Here's what Ethereum’s agent identity standard actually solves, what it deliberately leaves open, and why that matters for anyone building trust infrastructure.&lt;/p&gt;

&lt;p&gt;ERC-8004 launched on mainnet on January 29, 2026. 40K+ agents landed on Base in under two weeks. Audited by Cyfrin and Nethermind. It's solid work, but there's a gap between what developers assume it does and what contracts actually execute.&lt;/p&gt;

&lt;h3&gt;
  
  
  What ERC-8004 solves: Infrastructure Primitives
&lt;/h3&gt;

&lt;p&gt;ERC-8004 is a set of three on-chain registries. It’s a library of storage primitives, not a decision engine:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;IdentityRegistry&lt;/strong&gt;: Mints an ERC-721 NFT per agent. Provides on-chain proof of existence.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ReputationRegistry&lt;/strong&gt;: Stores raw feedback signals. Anyone can call giveFeedback().&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ValidationRegistry&lt;/strong&gt;: Independent pass/fail responses for specific capabilities.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the right design for a base-layer standard: do one thing (storage), do it on-chain, and let the ecosystem handle the logic.&lt;/p&gt;

&lt;h3&gt;
  
  
  Five Gaps (Deliberate by Design)
&lt;/h3&gt;

&lt;p&gt;If you're building on ERC-8004, understand what the standard doesn't handle:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No reputation aggregation&lt;/strong&gt;: It stores raw data. There is no "final score". One indexer might score an agent 0.9 while another scores the same agent 0.3.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zero Sybil resistance&lt;/strong&gt;: giveFeedback() is open. You can spin up 100 wallets and self-attest to glory. Contracts won't stop you.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Immutable noise&lt;/strong&gt;: On-chain data is permanent. Unfair negative feedback stays in history even if "revoked".&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Identity != Trust&lt;/strong&gt;: Mints are cheap. A registry full of 40K agents tells you nothing about which 10 are actually legitimate.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No Dispute Resolution&lt;/strong&gt;: You can post a response to feedback, but there’s no path to arbitration or escrow.&lt;/li&gt;
&lt;/ul&gt;
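&lt;p&gt;The Sybil gap is easy to demonstrate. With an open feedback call and naive averaging, score inflation is a for-loop. This is a toy model of the failure mode, not the actual registry interface:&lt;/p&gt;

```python
# toy model: open feedback plus naive averaging (not the real contract ABI)
feedback = {}  # agent -> list of (rater, score) pairs

def give_feedback(agent, rater, score):
    feedback.setdefault(agent, []).append((rater, score))

def naive_score(agent):
    scores = [s for _, s in feedback[agent]]
    return sum(scores) / len(scores)

# one honest rater gives the agent a low score...
give_feedback("agent-x", "honest-rater", 0.2)

# ...then the operator spins up 100 fresh wallets to self-attest
for i in range(100):
    give_feedback("agent-x", f"sybil-{i}", 1.0)

print(round(naive_score("agent-x"), 3))  # 0.992: the honest signal is drowned out
```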

&lt;h3&gt;
  
  
  Why this matters: Primitives need Compute
&lt;/h3&gt;

&lt;p&gt;DNS stores records but doesn't tell you whether a domain is phishing. SMTP delivers mail but doesn't filter spam.&lt;/p&gt;

&lt;h3&gt;
  
  
  Storage primitives need compute layers.
&lt;/h3&gt;

&lt;p&gt;ERC-8004 needs an "Aggregation &amp;amp; Enforcement" layer that no single operator controls. If any single indexer controls the scoring algorithm, we've just traded one gatekeeper for another.&lt;/p&gt;

&lt;h3&gt;
  
  
  Our Approach: Agent Veil Protocol (AVP)
&lt;/h3&gt;

&lt;p&gt;We’re building AVP as an off-chain compute layer for this ecosystem. We take ERC-8004 raw data and run it through EigenTrust, the graph-based reputation algorithm from a 2003 Stanford paper on peer-to-peer networks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ERC-8004 Raw Feedback -&amp;gt; AVP EigenTrust Compute -&amp;gt; Verified Trust Score
                         + Sybil Cluster Analysis
                         + Collusion Detection
                         + Dispute Resolution
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  AVP is the decision engine. ERC-8004 is the settlement layer.
&lt;/h3&gt;

&lt;p&gt;With EigenTrust, scores emerge from the structure of the attestation graph itself. If a cluster of 50 agents all vouch for each other but have no outside trust, their impact on the global score is mathematically zero.&lt;/p&gt;
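&lt;p&gt;That property can be seen in a few lines of power iteration. This is a minimal EigenTrust-style sketch with made-up weights, not AVP's production implementation: trust flows only along attestation edges plus a prior on pre-trusted agents, so a ring that only vouches for itself accumulates nothing.&lt;/p&gt;

```python
# minimal EigenTrust-style power iteration (illustrative, not AVP's code)
def eigentrust(trust, pretrusted, alpha=0.15, iters=50):
    """trust: {truster: {trustee: weight}}; pretrusted: prior distribution."""
    score = dict(pretrusted)
    for _ in range(iters):
        nxt = {agent: alpha * pretrusted[agent] for agent in pretrusted}
        for truster, edges in trust.items():
            total = sum(edges.values())
            for trustee, weight in edges.items():
                nxt[trustee] += (1 - alpha) * score[truster] * weight / total
        score = nxt
    return score

pre = {"a": 0.5, "b": 0.5, "s1": 0.0, "s2": 0.0, "s3": 0.0}
trust = {
    "a": {"b": 1.0},    # honest agents earn each other's trust
    "b": {"a": 1.0},
    "s1": {"s2": 1.0},  # sybil ring vouches only internally
    "s2": {"s3": 1.0},
    "s3": {"s1": 1.0},
}
scores = eigentrust(trust, pre)
print(round(scores["a"], 6), scores["s1"])  # 0.5 0.0: the closed ring stays at zero
```

With no pre-trust and no incoming edges from outside the ring, the sybil cluster's score never leaves zero, no matter how enthusiastically it vouches for itself.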

&lt;h3&gt;
  
  
  Bottom line
&lt;/h3&gt;

&lt;p&gt;ERC-8004 is great infrastructure. But don't mistake a registry for a trust system. The scoring that matters is scoring you can't buy or game.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try the bridge&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="err"&gt;GET https://agentveil.dev/v1/bridge/erc8004/{did}/attestation
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Open Source SDK&lt;/strong&gt;: github.com/creatorrmode-lead/avp-sdk&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;agentveil
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>ai</category>
      <category>ethereum</category>
      <category>agents</category>
      <category>web3</category>
    </item>
    <item>
      <title>Your agent passed every check. Then it exfiltrated your data.</title>
      <dc:creator>OlegB</dc:creator>
      <pubDate>Wed, 01 Apr 2026 11:11:32 +0000</pubDate>
      <link>https://dev.to/agentveil/agent-identity-is-solved-agent-trust-is-not-1leb</link>
      <guid>https://dev.to/agentveil/agent-identity-is-solved-agent-trust-is-not-1leb</guid>
      <description>&lt;p&gt;Drop a third-party agent into your production pipeline. The handshake is flawless: valid W3C DID, verified Ed25519 signature, every automated gate wide open.&lt;/p&gt;

&lt;p&gt;Three hours later, you catch it exfiltrating data to an unapproved endpoint.&lt;/p&gt;

&lt;p&gt;Your identity stack won't flag this because the agent is exactly who it claimed to be. It's just doing exactly what you didn't want it to do.&lt;/p&gt;

&lt;h3&gt;
  
  
  What identity actually gives you
&lt;/h3&gt;

&lt;p&gt;A verified keypair and proof of ownership. That is the end of the list.&lt;/p&gt;

&lt;p&gt;It tells you the agent exists and controls a private key. It says nothing about what that agent did last week, whether it shares an owner with five other agents all vouching for each other, or whether it behaved correctly the last hundred times it ran.&lt;/p&gt;

&lt;p&gt;Authentication is a prerequisite. It's not a trust decision.&lt;/p&gt;

&lt;h3&gt;
  
  
  The gap nobody is closing
&lt;/h3&gt;

&lt;p&gt;Agent identity is being commoditized right now. Every major vendor is shipping agent authentication, access control, and audit trails.&lt;/p&gt;

&lt;p&gt;None of them are shipping reputation.&lt;/p&gt;

&lt;p&gt;That's not an oversight. Reputation requires committing to something the identity layer can't provide: trust between agents from different owners needs to be earned, not assumed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why simple ratings don't work
&lt;/h3&gt;

&lt;p&gt;Let agents rate each other after interactions and average the scores. Obvious answer? It falls apart in five minutes.&lt;/p&gt;

&lt;p&gt;A cluster of agents under one operator can inflate each other's scores indefinitely. A new malicious agent starts with a clean slate. You end up with a system that is easier to game than to use honestly.&lt;/p&gt;

&lt;p&gt;What works is EigenTrust — an algorithm from a 2003 Stanford paper on peer-to-peer networks. It weighs attestations by the reputation of the attesting agent. Scores converge mathematically and can't be inflated by a closed group.&lt;/p&gt;

&lt;p&gt;EigenTrust alone isn't enough. You need collusion detection on top.&lt;/p&gt;

&lt;p&gt;Same-owner cross-attestation is the oldest manipulation in distributed systems. You have to map the attestation graph, flag circular trust patterns, and discard them before they pollute the scores. Remove either piece, and reputation becomes theater.&lt;/p&gt;
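&lt;p&gt;The same-owner check is the simplest of those filters. A toy version, with an ownership lookup and field names that are purely illustrative (the real system also maps the full graph for circular patterns across distinct owners):&lt;/p&gt;

```python
# toy filter: discard same-owner attestations before they reach scoring
OWNERS = {
    "agent-1": "acme",   # agent-1 and agent-2 share an operator
    "agent-2": "acme",
    "agent-3": "globex",
}

attestations = [
    {"from": "agent-1", "to": "agent-2", "score": 1.0},  # same owner
    {"from": "agent-2", "to": "agent-1", "score": 1.0},  # same owner
    {"from": "agent-3", "to": "agent-1", "score": 0.7},  # cross-owner
]

def filter_same_owner(atts, owners):
    kept, flagged = [], []
    for att in atts:
        target = flagged if owners[att["from"]] == owners[att["to"]] else kept
        target.append(att)
    return kept, flagged

kept, flagged = filter_same_owner(attestations, OWNERS)
print(len(kept), len(flagged))  # 1 2: only the cross-owner attestation survives
```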

&lt;h3&gt;
  
  
  Where this matters right now
&lt;/h3&gt;

&lt;p&gt;Agent marketplaces. Cross-company workflows. Third-party agent integrations. Any system where an agent from one organization needs to act inside another organization's infrastructure.&lt;/p&gt;

&lt;p&gt;The identity layer gets you to the door. It doesn't tell you whether to open it.&lt;/p&gt;

&lt;h3&gt;
  
  
  What we built
&lt;/h3&gt;

&lt;p&gt;AVP is the trust enforcement layer for autonomous agents. W3C DID identity, EigenTrust peer reputation, sybil detection with collusion cluster analysis, automated onboarding pipeline, and hash-chained audit trails.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;agentveil

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;See it running live with 24 agents, sybil attacks, and dispute resolution in real time: agentveil.dev/live&lt;/p&gt;

&lt;p&gt;If you're building systems where agents from different owners need to interact: agentveil.dev&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>security</category>
      <category>python</category>
    </item>
    <item>
      <title>You know who your agent is. You don't know if you should trust it.</title>
      <dc:creator>OlegB</dc:creator>
      <pubDate>Mon, 30 Mar 2026 14:40:04 +0000</pubDate>
      <link>https://dev.to/agentveil/agent-identity-tells-you-who-reputation-tells-you-whether-you-should-8f9</link>
      <guid>https://dev.to/agentveil/agent-identity-tells-you-who-reputation-tells-you-whether-you-should-8f9</guid>
      <description>&lt;p&gt;I've been building trust infrastructure for AI agents for the past few months, and the thing that keeps coming up in conversations is a conflation that seems obvious once you see it but is almost universally ignored in practice.&lt;/p&gt;

&lt;p&gt;Everyone is shipping identity for agents right now. Okta, Ping Identity, a dozen YC companies. Cryptographic keypairs, W3C DIDs, OAuth flows. Good work, genuinely useful.&lt;/p&gt;

&lt;p&gt;None of it tells you whether to trust the agent.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scenario I kept running into
&lt;/h2&gt;

&lt;p&gt;When I started building AVP, the use case I had in mind was simple.&lt;/p&gt;

&lt;p&gt;Two agents from different companies need to work together. One processes customer data, the other handles payments. They authenticate fine. The handoff happens cleanly.&lt;/p&gt;

&lt;p&gt;But what does the first agent actually know about the second one?&lt;/p&gt;

&lt;p&gt;That it exists. That it controls a private key. That's it.&lt;/p&gt;

&lt;p&gt;Nothing about whether the payment agent completed tasks reliably last week. Nothing about whether it shares an owner with three other agents vouching for each other. Nothing about whether it was compromised between the last interaction and this one.&lt;/p&gt;

&lt;p&gt;I looked at every identity project I could find. Strong authentication work across the board.&lt;/p&gt;

&lt;p&gt;Reputation layer: none of them have one.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I didn't just do ratings
&lt;/h2&gt;

&lt;p&gt;The obvious answer when you want a reputation is: let agents rate each other after interactions and average the scores.&lt;/p&gt;

&lt;p&gt;I spent about a week on this before I understood why it falls apart.&lt;/p&gt;

&lt;p&gt;A cluster of agents under the same operator can inflate each other's scores indefinitely. A new malicious agent registers fresh with no history and no flags. You end up with a system that's easier to manipulate than to use honestly.&lt;/p&gt;

&lt;p&gt;What actually works is EigenTrust — an algorithm from a 2003 Stanford paper on peer-to-peer file sharing.&lt;/p&gt;

&lt;p&gt;The core idea: weight attestations by the reputation of the attesting agent. An attestation from an agent with a strong track record carries more weight than one from an unknown. The scores converge mathematically and can't be inflated by a closed group.&lt;br&gt;
But EigenTrust alone isn't enough.&lt;/p&gt;
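&lt;p&gt;The weighting step on its own fits in a few lines. Toy numbers, not the production pipeline:&lt;/p&gt;

```python
# an attestation counts in proportion to the attester's own reputation
reputation = {"veteran": 0.9, "newcomer": 0.1}

attestations = [
    ("veteran", 0.8),   # (attester, rating given to the target agent)
    ("newcomer", 1.0),
]

weighted = sum(reputation[who] * rating for who, rating in attestations)
total_weight = sum(reputation[who] for who, _ in attestations)
score = weighted / total_weight
print(round(score, 2))  # 0.82: the unknown's perfect rating barely moves it
```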

&lt;p&gt;Before any attestation is counted, you need to check whether the attesting agent and the attested agent share an owner. Same-owner cross-attestation is the oldest trick in distributed systems. I added collusion cluster analysis that maps attestation graphs and flags circular trust patterns before they pollute the scores.&lt;/p&gt;

&lt;p&gt;The third piece is an audit trail that lives outside the system. Hash-chained records anchored to IPFS. Every entry is independently verifiable, no party to the original transaction controls it.&lt;/p&gt;

&lt;p&gt;Take out any one of these three, and the whole thing is gameable. &lt;br&gt;
Together they make reputation something an agent has to actually earn.&lt;/p&gt;
&lt;h2&gt;
  
  
  Where this is now
&lt;/h2&gt;

&lt;p&gt;AVP has been running in production for a few weeks. 61 registered agents, 175 attestations processed, dispute resolution working end-to-end.&lt;/p&gt;

&lt;p&gt;SDK is one line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;agentveil

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Auto-registration, auto-attestation, and reputation tracking. That's it.&lt;/p&gt;

&lt;p&gt;If you're building anything where agents from different owners need to interact, I'd be curious whether this is useful: &lt;strong&gt;agentveil.dev&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>security</category>
      <category>python</category>
    </item>
  </channel>
</rss>
