<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: OlegB</title>
    <description>The latest articles on DEV Community by OlegB (@oleg_avp).</description>
    <link>https://dev.to/oleg_avp</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3851736%2Fe613f048-b9b6-4997-a097-30cb35bb7891.jpg</url>
      <title>DEV Community: OlegB</title>
      <link>https://dev.to/oleg_avp</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/oleg_avp"/>
    <language>en</language>
    <item>
      <title>Trust-Gated Delegation in AWS Bedrock: Scoring AI Agents Before They Act</title>
      <dc:creator>OlegB</dc:creator>
      <pubDate>Tue, 14 Apr 2026 09:17:52 +0000</pubDate>
      <link>https://dev.to/agentveil/trust-gated-delegation-in-aws-bedrock-scoring-ai-agents-before-they-act-50n3</link>
      <guid>https://dev.to/agentveil/trust-gated-delegation-in-aws-bedrock-scoring-ai-agents-before-they-act-50n3</guid>
      <description>&lt;p&gt;AWS Bedrock gives you Lambda, IAM, and CloudTrail for your agents. CloudTrail logs "agent called tool". It does not log "agent was trustworthy enough to call tool". That gap matters when you delegate work between agents.&lt;/p&gt;

&lt;p&gt;Here's a working pattern that closes it, using &lt;a href="https://agentveil.dev/" rel="noopener noreferrer"&gt;Agent Veil Protocol&lt;/a&gt; wired into Bedrock's Converse API as a set of tools that Claude calls before delegating anything.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup
&lt;/h3&gt;

&lt;p&gt;Two agents registered on AVP: an orchestrator and a worker. The orchestrator gets three tools: check reputation, verify trust tier, and log outcome. Standard Bedrock &lt;strong&gt;toolSpec&lt;/strong&gt; format, nothing exotic.&lt;/p&gt;
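The post doesn't reproduce the toolSpec JSON, so here is a hedged sketch of what the check_reputation tool could look like. The toolConfig / toolSpec / inputSchema field names follow the Converse API; the description and schema are our reconstruction, not the example's actual source.

```python
# Hypothetical toolSpec for the check_reputation tool described above.
# Field names follow Bedrock's Converse API; the schema is an assumption.
tool_config = {
    "tools": [{
        "toolSpec": {
            "name": "check_reputation",
            "description": "Fetch the AVP reputation for an agent DID.",
            "inputSchema": {"json": {
                "type": "object",
                "properties": {
                    "did": {"type": "string",
                            "description": "W3C DID of the agent to score"},
                },
                "required": ["did"],
            }},
        },
    }],
}
```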

&lt;p&gt;When the orchestrator needs to hand off a code review, Claude calls the tools in sequence:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
[tool] check_reputation({"did": "did:key:z6Mkv..."})
  → score: 0.25, tier: newcomer, risk: low

[tool] can_trust({"did": "did:key:z6Mkv...", "min_tier": "newcomer"})
  → trusted: true

[tool] log_attestation({"to_did": "...", "outcome": "positive", "context": "code_review_delegation"})
  → recorded

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Score went from 0.25 to 0.95 after a single attestation. That looks like a lot until you understand the mechanism: 0.25 is a starter floor that every new agent gets on registration. It's not a real reputation - it's a placeholder. The first attestation from an onboarded agent triggers EigenTrust recomputation, which replaces the floor with an actual score based on the trust graph. After that initial jump, growth is slow. Reaching "elite" takes multiple attestations from independent, trusted sources over time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Handler
&lt;/h3&gt;

&lt;p&gt;The whole thing is about 25 lines. No SDK wrappers, no framework — &lt;strong&gt;just get_reputation()&lt;/strong&gt; and &lt;strong&gt;attest()&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;handle_tool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;orchestrator&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;check_reputation&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;rep&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;orchestrator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_reputation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;did&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rep&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;indent&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;can_trust&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;rep&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;orchestrator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_reputation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;did&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
        &lt;span class="n"&gt;tier_order&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;newcomer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;basic&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;trusted&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;elite&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="n"&gt;min_tier&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;min_tier&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;basic&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;agent_tier&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;rep&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tier&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;newcomer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;trusted&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tier_order&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;index&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;agent_tier&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="n"&gt;tier_order&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;index&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;min_tier&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;trusted&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;trusted&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tier&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;agent_tier&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;score&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;rep&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;score&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)})&lt;/span&gt;

    &lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;log_attestation&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;orchestrator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;attest&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;to_did&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;to_did&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="n"&gt;outcome&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;outcome&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="n"&gt;weight&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;context&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;recorded&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;outcome&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;outcome&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]})&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;can_trust&lt;/strong&gt; is client-side on purpose. It calls &lt;strong&gt;get_reputation()&lt;/strong&gt; and compares tiers locally instead of hitting a separate endpoint. One less thing that can break.&lt;/p&gt;

&lt;p&gt;The Converse loop is the standard Bedrock pattern — call &lt;strong&gt;client.converse()&lt;/strong&gt;, check &lt;strong&gt;stopReason&lt;/strong&gt;, feed tool results back in. Nothing AVP-specific there.&lt;/p&gt;
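For completeness, a minimal sketch of that loop against a stub client. The message and toolResult shapes follow the Converse API; in real use the client is boto3's bedrock-runtime, and everything stubbed here is illustrative.

```python
def converse_loop(client, model_id, messages, tool_config, orchestrator, handle_tool):
    """Standard Converse tool-use loop: call the model, dispatch tool
    requests, feed results back, repeat until a final answer."""
    while True:
        resp = client.converse(modelId=model_id, messages=messages,
                               toolConfig=tool_config)
        msg = resp["output"]["message"]
        messages.append(msg)
        if resp["stopReason"] != "tool_use":
            return msg  # final answer
        results = []
        for block in msg["content"]:
            if "toolUse" in block:
                tu = block["toolUse"]
                out = handle_tool(tu["name"], tu["input"], orchestrator)
                results.append({"toolResult": {
                    "toolUseId": tu["toolUseId"],
                    "content": [{"text": out}],
                }})
        messages.append({"role": "user", "content": results})

# Stub standing in for boto3's bedrock-runtime client, just to show the flow:
class StubClient:
    def __init__(self):
        self.calls = 0
    def converse(self, **kwargs):
        self.calls += 1
        if self.calls == 1:
            return {"stopReason": "tool_use", "output": {"message": {
                "role": "assistant", "content": [{"toolUse": {
                    "toolUseId": "t1", "name": "check_reputation",
                    "input": {"did": "did:key:example"}}}]}}}
        return {"stopReason": "end_turn", "output": {"message": {
            "role": "assistant", "content": [{"text": "done"}]}}}

stub = StubClient()
final = converse_loop(stub, "model-id",
                      [{"role": "user", "content": [{"text": "go"}]}],
                      {"tools": []}, None,
                      lambda name, args, orch: '{"score": 0.25}')
```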

&lt;h3&gt;
  
  
  Run it yourself
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;agentveil boto3
python examples/aws_bedrock.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You need AWS credentials with Bedrock access. The example runs against the live AVP network at agentveil.dev — 100+ agents, daily IPFS anchors, production data.&lt;/p&gt;

&lt;p&gt;Full source: &lt;a href="https://github.com/creatorrmode-lead/avp-sdk/blob/main/examples/aws_bedrock.py" rel="noopener noreferrer"&gt;examples/aws_bedrock.py&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Note on AWS Agent Registry
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/about-aws/whats-new/2026/04/aws-agent-registry-in-agentcore-preview/" rel="noopener noreferrer"&gt;AWS launched Agent Registry in private preview&lt;/a&gt; on April 9. It's a catalog — tracks which agents exist, what they do, who built them. Good for discovery and governance. No scoring.&lt;/p&gt;

&lt;p&gt;That's the piece AVP fills. Registry answers "Who is this agent?" AVP answers "Should I trust it?" The two are complementary, not competing. We plan to publish an integration when Agent Registry moves to GA.&lt;/p&gt;

&lt;p&gt;If this was useful, a ⭐ on GitHub helps: github.com/creatorrmode-lead/avp-sdk&lt;/p&gt;

&lt;h3&gt;
  
  
  Links
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://pypi.org/project/agentveil/" rel="noopener noreferrer"&gt;AVP SDK on PyPI&lt;/a&gt; — &lt;strong&gt;pip install agentveil&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://github.com/creatorrmode-lead/avp-sdk/blob/main/examples/aws_bedrock.py" rel="noopener noreferrer"&gt;Example source on GitHub&lt;/a&gt;&lt;br&gt;
&lt;a href="https://agentveil.dev/" rel="noopener noreferrer"&gt;agentveil.dev&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/about-aws/whats-new/2026/04/aws-agent-registry-in-agentcore-preview/" rel="noopener noreferrer"&gt;AWS Agent Registry announcement&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>bedrock</category>
      <category>ai</category>
      <category>agents</category>
    </item>
    <item>
      <title>How agent reputation actually works</title>
      <dc:creator>OlegB</dc:creator>
      <pubDate>Tue, 14 Apr 2026 07:51:47 +0000</pubDate>
      <link>https://dev.to/agentveil/how-agent-reputation-actually-works-135a</link>
      <guid>https://dev.to/agentveil/how-agent-reputation-actually-works-135a</guid>
      <description>&lt;p&gt;Most trust systems are black boxes. You get a score, no explanation. For production systems, that's not good enough.&lt;/p&gt;

&lt;p&gt;Here's how AVP builds reputation from scratch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Cryptographic identity&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every agent starts with an Ed25519 keypair and a W3C DID. Without a verifiable identity, reputation has nothing to attach to.&lt;br&gt;
Verification tiers add weight: GitHub OAuth bumps the trust multiplier to 0.7x, email verification to 0.3x. Each tier makes Sybil attacks more expensive.&lt;/p&gt;
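As a toy illustration of how tier multipliers might scale an attestation: the 0.7x and 0.3x values come from the post, but the "none" default and the combination rule are our assumptions.

```python
# Toy model of verification-tier weighting. 0.7x / 0.3x are from the post;
# the "none" fallback and the multiplication rule are assumptions.
VERIFICATION_MULTIPLIER = {"github_oauth": 0.7, "email": 0.3, "none": 0.1}

def effective_weight(raw_weight, verification_tier):
    """Scale an attestation's weight by the attester's verification tier."""
    return raw_weight * VERIFICATION_MULTIPLIER.get(verification_tier, 0.1)

# The same 0.8-weight attestation counts very differently by tier:
github_w = effective_weight(0.8, "github_oauth")  # roughly 0.56
email_w = effective_weight(0.8, "email")          # roughly 0.24
```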

&lt;p&gt;&lt;strong&gt;Step 2: Trust period&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;New agents enter a three-day probation. Seed Agents evaluate them before they can interact with the broader system. No cold starts with full permissions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: EigenTrust scoring&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AVP weights peer reviews by the reputation of the reviewer. A review from a low-trust agent counts for almost nothing. A negative review from a seed agent tanks your score.&lt;/p&gt;

&lt;p&gt;Stanford EigenTrust applied to agent networks. Scores converge mathematically and can't be inflated by a closed group.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;agentveil&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;AVPAgent&lt;/span&gt;
&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;AVPAgent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my_agent&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;rep&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_reputation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;did&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rep&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;score&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.87&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;confidence&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.72&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;tier&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;trusted&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4: Sybil detection&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AVP runs NetFlow max-flow analysis on the attestation graph. Coordinated rings get detected. Bots praising bots have zero impact on score.&lt;/p&gt;
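AVP's production detector isn't shown here, but the max-flow intuition is checkable with a self-contained Edmonds-Karp sketch: trust flowing from a seed into a ring is capped by the ring's thin inbound edge, no matter how heavily the ring vouches for itself.

```python
from collections import deque

def max_flow(edges, source, sink):
    """Edmonds-Karp max flow. edges: {(u, v): capacity}."""
    res, adj = {}, {}
    for (u, v), c in edges.items():
        res[(u, v)] = res.get((u, v), 0.0) + c
        res.setdefault((v, u), 0.0)          # residual reverse edge
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    total = 0.0
    while True:
        parent, q = {source: None}, deque([source])
        while q and sink not in parent:      # BFS for an augmenting path
            u = q.popleft()
            for v in adj.get(u, ()):
                if v not in parent and res[(u, v)] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            return total
        path, v = [], sink                   # recover path, find bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(res[e] for e in path)
        for u, v in path:
            res[(u, v)] -= push
            res[(v, u)] += push
        total += push

# A ring (s1..s3) vouching for itself heavily, with one weak inbound edge:
# trust reaching the ring from the seed is capped at 0.1 by that edge.
edges = {("seed", "s1"): 0.1, ("s1", "s2"): 5.0,
         ("s2", "s3"): 5.0, ("s3", "s1"): 5.0}
ring_inflow = max_flow(edges, "seed", "s3")
```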

&lt;p&gt;&lt;strong&gt;Step 5: Velocity monitoring&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Reputation isn't static. AVP tracks score changes across 1-day and 30-day windows. A sharp drop is a leading indicator. Same agent, different trust thresholds for different actions.&lt;/p&gt;
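A windowed velocity check is cheap to sketch. The 1-day and 30-day windows come from the post; the record shape and sample numbers are our invention.

```python
from datetime import datetime, timedelta

def score_velocity(history, now, window_days):
    """Score change over a window. history: (timestamp, score), oldest first."""
    cutoff = now - timedelta(days=window_days)
    in_window = [score for ts, score in history if ts >= cutoff]
    if len(in_window) < 2:
        return 0.0
    return in_window[-1] - in_window[0]

now = datetime(2026, 4, 14)
history = [
    (now - timedelta(days=25), 0.90),
    (now - timedelta(hours=20), 0.85),
    (now, 0.55),
]
drop_1d = score_velocity(history, now, 1)    # sharp 1-day drop, about -0.30
drop_30d = score_velocity(history, now, 30)  # 30-day trend, about -0.35
```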

&lt;p&gt;What's the last action your agent took that you can cryptographically prove was intentional?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;agentveil

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>ai</category>
      <category>agents</category>
      <category>security</category>
      <category>python</category>
    </item>
    <item>
      <title>Three teams, one agent incident. Nobody knows who is responsible.</title>
      <dc:creator>OlegB</dc:creator>
      <pubDate>Thu, 09 Apr 2026 11:54:16 +0000</pubDate>
      <link>https://dev.to/agentveil/three-teams-one-agent-incident-nobody-knows-who-is-responsible-3od7</link>
      <guid>https://dev.to/agentveil/three-teams-one-agent-incident-nobody-knows-who-is-responsible-3od7</guid>
      <description>&lt;p&gt;Agent trust is a buzzword without context. Depending on your role, you need a completely different signal.&lt;/p&gt;

&lt;p&gt;RSAC 2026 confirmed what engineers already knew: OAuth and SAML weren't built for agent-to-agent delegation. The gap isn't theoretical anymore.&lt;/p&gt;

&lt;p&gt;For agent owners, reputation is a core business asset. If an agent slips up, they have to prove it wasn't a flaw in the core logic. With the EU AI Act deadline 117 days away, "I didn't know what my agent was doing" is no longer a legal defense. You need cryptographic proof.&lt;/p&gt;

&lt;p&gt;Hirers don't care about the owner's marketing. They need an independent score that works across platforms and can't be manipulated by the agent's own operator.&lt;/p&gt;

&lt;p&gt;Platforms manage thousands of agents in real time. The main gap isn't just knowing who the agent is; it's tracking the delegation chain. When Agent A hires Agent B, who is accountable? You need a recursive audit trail, not a flat log.&lt;/p&gt;
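A recursive audit trail reduces to parent links you can walk. A sketch under an invented record shape (the event fields here are hypothetical, not AVP's format):

```python
# Hypothetical record shape: each event carries a pointer to the delegation
# event that spawned it, so incidents trace back to the original principal.
events = {
    "e3": {"agent": "agent-C", "action": "db_write", "delegated_by": "e2"},
    "e2": {"agent": "agent-B", "action": "delegate", "delegated_by": "e1"},
    "e1": {"agent": "agent-A", "action": "delegate", "delegated_by": None},
}

def delegation_chain(events, event_id):
    """Walk parent links from an incident back to the root delegator."""
    chain = []
    while event_id is not None:
        record = events[event_id]
        chain.append(record["agent"])
        event_id = record["delegated_by"]
    return chain

chain = delegation_chain(events, "e3")  # ['agent-C', 'agent-B', 'agent-A']
```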

&lt;h3&gt;
  
  
  Why a single score fails all three
&lt;/h3&gt;

&lt;p&gt;A simple rating system breaks immediately. An operator can inflate scores through mutual attestation. A new malicious agent starts with a clean slate. The signal becomes noise within days.&lt;/p&gt;

&lt;h3&gt;
  
  
  How AVP handles each role
&lt;/h3&gt;

&lt;p&gt;AVP decouples reputation from the agent itself.&lt;/p&gt;

&lt;p&gt;Owners get a verifiable history they can reference. Hirers get EigenTrust scores weighted by the reputation of the attesting agent, so no single operator can game it alone. Platforms get real-time Trust Gate enforcement.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;trust_score&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;required_threshold&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="n"&gt;gate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;block_action&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;AVP isn't orchestration. It's the layer that makes orchestration accountable.&lt;/p&gt;

&lt;p&gt;If an unknown agent requested database access right now, what minimum trust score would you require?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;agentveil
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>ai</category>
      <category>security</category>
      <category>agents</category>
      <category>python</category>
    </item>
    <item>
      <title>Your agent broke something. Now nobody knows who to blame.</title>
      <dc:creator>OlegB</dc:creator>
      <pubDate>Wed, 08 Apr 2026 11:29:42 +0000</pubDate>
      <link>https://dev.to/agentveil/why-your-multi-agent-pipeline-has-no-accountability-layer-1127</link>
      <guid>https://dev.to/agentveil/why-your-multi-agent-pipeline-has-no-accountability-layer-1127</guid>
      <description>&lt;p&gt;Your orchestration works. Agents coordinate, delegate, execute. But when something breaks, can you trace which agent made which decision?&lt;/p&gt;

&lt;p&gt;Most teams can't.&lt;/p&gt;

&lt;p&gt;Five agents in a pipeline. Something goes wrong downstream. Agent C says it followed Agent B. Agent B points at Agent A. Three hours of log digging, nothing conclusive. No cryptographic receipts, just text logs anyone could have written after the fact.&lt;/p&gt;

&lt;p&gt;Orchestration is the glue. Accountability is the signature. Most teams only build one.&lt;/p&gt;

&lt;h3&gt;
  
  
  What accountability actually requires
&lt;/h3&gt;

&lt;p&gt;For an action to be auditable, you need three things. The agent has a verifiable identity. The action was signed when it happened, not reconstructed later. The record exists outside the agent's own system.&lt;br&gt;
Skip any of those, and your audit trail is a suggestion, not proof.&lt;/p&gt;
&lt;h3&gt;
  
  
  How AVP handles this
&lt;/h3&gt;

&lt;p&gt;Every agent in AVP gets an Ed25519 DID. Every interaction produces a signed attestation anchored to IPFS. Adding this takes one line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@avp_tracked&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;process_document&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;doc_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;action&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;span class="c1"&gt;# your existing code unchanged
&lt;/span&gt;&lt;span class="k"&gt;pass&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That decorator auto-registers the agent, signs every execution, and writes to the immutable audit trail. Nothing else changes in your pipeline.&lt;/p&gt;

&lt;h3&gt;
  
  
  What this looks like in production
&lt;/h3&gt;

&lt;p&gt;Right now, agentveil.dev has 111 registered agents, 291 signed attestations, and 575 audit events. Every event is hash-chained and verifiable.&lt;/p&gt;
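The hash-chain construction itself is generic and worth seeing concretely. AVP's actual record format may differ; this is the textbook pattern with SHA-256.

```python
import hashlib
import json

GENESIS = "0" * 64

def append_event(chain, event):
    """Append an event whose hash commits to the previous record's hash."""
    prev = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    chain.append({"event": event, "prev": prev, "hash": digest})

def verify(chain):
    """Recompute every link; any tampered event breaks the chain."""
    prev = GENESIS
    for rec in chain:
        body = json.dumps(rec["event"], sort_keys=True)
        if rec["prev"] != prev:
            return False
        if rec["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

chain = []
append_event(chain, {"agent": "did:key:example", "action": "process_document"})
append_event(chain, {"agent": "did:key:example", "action": "publish"})
ok = verify(chain)  # True; editing any event flips it to False
```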

&lt;p&gt;When an incident happens, you query the audit trail. You get cryptographic proof of what each agent did and when.&lt;/p&gt;

&lt;p&gt;Orchestration tells you what happened. Accountability tells you who is responsible.&lt;/p&gt;

&lt;p&gt;If something went wrong in your pipeline today, how far back can you trace it?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;agentveil

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>agents</category>
      <category>ai</category>
      <category>architecture</category>
      <category>security</category>
    </item>
    <item>
      <title>ERC-8004 solves agent identity. It doesn't solve agent trust.</title>
      <dc:creator>OlegB</dc:creator>
      <pubDate>Thu, 02 Apr 2026 08:46:52 +0000</pubDate>
      <link>https://dev.to/agentveil/what-ethereums-agent-identity-standard-deliberately-leaves-open-222f</link>
      <guid>https://dev.to/agentveil/what-ethereums-agent-identity-standard-deliberately-leaves-open-222f</guid>
      <description>&lt;p&gt;Here's what Ethereum’s agent identity standard actually solves, what it deliberately leaves open, and why that matters for anyone building trust infrastructure.&lt;/p&gt;

&lt;p&gt;ERC-8004 launched on mainnet on January 29, 2026. 40K+ agents landed on Base in under two weeks. Audited by Cyfrin and Nethermind. It's solid work, but there's a gap between what developers assume it does and what contracts actually execute.&lt;/p&gt;

&lt;h3&gt;
  
  
  What ERC-8004 solves: Infrastructure Primitives
&lt;/h3&gt;

&lt;p&gt;ERC-8004 is a set of three on-chain registries. It’s a library of storage primitives, not a decision engine:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;IdentityRegistry&lt;/strong&gt;: Mints an ERC-721 NFT per agent. Provides on-chain proof of existence.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ReputationRegistry&lt;/strong&gt;: Stores raw feedback signals. Anyone can call giveFeedback().&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ValidationRegistry&lt;/strong&gt;: Independent pass/fail responses for specific capabilities.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the right design for a base-layer standard: do one thing (storage), do it on-chain, and let the ecosystem handle the logic.&lt;/p&gt;

&lt;h3&gt;
  
  
  Five Gaps (Deliberate by Design)
&lt;/h3&gt;

&lt;p&gt;If you're building on ERC-8004, be clear about what the standard doesn't handle:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No reputation aggregation&lt;/strong&gt;: It stores raw data. There is no "final score". One indexer might score an agent at 0.9 while another scores it at 0.3.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zero Sybil resistance&lt;/strong&gt;: giveFeedback() is open. You can spin up 100 wallets and self-attest to glory. Contracts won't stop you.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Immutable noise&lt;/strong&gt;: On-chain data is permanent. Unfair negative feedback stays in history even if "revoked".&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Identity != Trust&lt;/strong&gt;: Mints are cheap. A registry of 40K agents tells you nothing about which 10 are actually legitimate.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No Dispute Resolution&lt;/strong&gt;: You can post a response to feedback, but there’s no path to arbitration or escrow.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Why this matters: Primitives need Compute
&lt;/h3&gt;

&lt;p&gt;DNS stores records, but doesn't tell you whether a domain is phishing. SMTP delivers mail, but doesn't filter spam.&lt;/p&gt;

&lt;h3&gt;
  
  
  Storage primitives need compute layers.
&lt;/h3&gt;

&lt;p&gt;ERC-8004 needs an "Aggregation &amp;amp; Enforcement" layer that no single operator controls. If one indexer centrally controls the scoring algorithm, we've just traded one gatekeeper for another.&lt;/p&gt;

&lt;h3&gt;
  
  
  Our Approach: Agent Veil Protocol (AVP)
&lt;/h3&gt;

&lt;p&gt;We’re building AVP as an off-chain compute layer for this ecosystem. We take ERC-8004's raw data and run it through EigenTrust, a graph-based algorithm built for non-gameable peer-to-peer trust.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ERC-8004 Raw Feedback -&amp;gt; AVP EigenTrust Compute -&amp;gt; Verified Trust Score
                         + Sybil Cluster Analysis
                         + Collusion Detection
                         + Dispute Resolution
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  AVP is the decision engine. ERC-8004 is the settlement layer.
&lt;/h3&gt;

&lt;p&gt;With EigenTrust, scores emerge from the structure of the attestation graph itself. If a cluster of 50 agents all vouch for each other but have no outside trust, their impact on the global score is mathematically zero.&lt;/p&gt;
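That claim is easy to check with a toy power iteration: textbook EigenTrust with a pre-trust vector, not AVP's production parameters.

```python
def eigentrust(C, pretrust, alpha=0.15, iters=100):
    """Toy EigenTrust power iteration.
    C[j] maps agent j to its normalized outbound attestation weights;
    pretrust concentrates initial trust on seed agents."""
    t = dict(pretrust)
    for _ in range(iters):
        nxt = {i: alpha * pretrust[i] for i in pretrust}
        for j, out in C.items():
            for i, w in out.items():
                nxt[i] += (1 - alpha) * t[j] * w
        t = nxt
    return t

# seed vouches for a and b; the ring x -> y -> z -> x only vouches for itself.
C = {
    "seed": {"a": 0.5, "b": 0.5},
    "a": {"b": 1.0},
    "b": {"a": 1.0},
    "x": {"y": 1.0}, "y": {"z": 1.0}, "z": {"x": 1.0},
}
pre = {"seed": 1.0, "a": 0.0, "b": 0.0, "x": 0.0, "y": 0.0, "z": 0.0}
scores = eigentrust(C, pre)
# the ring's scores stay exactly zero: no outside trust ever flows in
```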

&lt;h3&gt;
  
  
  Bottom line
&lt;/h3&gt;

&lt;p&gt;ERC-8004 is great infrastructure. But don't mistake a registry for a trust system. The scoring that matters is scoring you can't buy or game.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try the bridge&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="err"&gt;GET https://agentveil.dev/v1/bridge/erc8004/{did}/attestation
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Open Source SDK&lt;/strong&gt;: github.com/creatorrmode-lead/avp-sdk&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;agentveil
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>ai</category>
      <category>ethereum</category>
      <category>agents</category>
      <category>web3</category>
    </item>
    <item>
      <title>Your agent passed every check. Then it exfiltrated your data.</title>
      <dc:creator>OlegB</dc:creator>
      <pubDate>Wed, 01 Apr 2026 11:11:32 +0000</pubDate>
      <link>https://dev.to/agentveil/agent-identity-is-solved-agent-trust-is-not-1leb</link>
      <guid>https://dev.to/agentveil/agent-identity-is-solved-agent-trust-is-not-1leb</guid>
      <description>&lt;p&gt;Drop a third-party agent into your production pipeline. The handshake is flawless: valid W3C DID, verified Ed25519 signature, every automated gate wide open.&lt;/p&gt;

&lt;p&gt;Three hours later, you catch it exfiltrating data to an unapproved endpoint.&lt;/p&gt;

&lt;p&gt;Your identity stack won't flag this because the agent is exactly who it claimed to be. It's just doing exactly what you didn't want it to do.&lt;/p&gt;

&lt;h3&gt;
  
  
  What identity actually gives you
&lt;/h3&gt;

&lt;p&gt;A verified keypair and proof of ownership. That is the end of the list.&lt;/p&gt;

&lt;p&gt;It tells you the agent exists and controls a private key. It says nothing about what that agent did last week, whether it shares an owner with five other agents all vouching for each other, or whether it behaved correctly the last hundred times it ran.&lt;/p&gt;

&lt;p&gt;Authentication is a prerequisite. It's not a trust decision.&lt;/p&gt;

&lt;h3&gt;
  
  
  The gap nobody is closing
&lt;/h3&gt;

&lt;p&gt;Agent identity is being commoditized right now. Every major vendor is shipping agent authentication, access control, and audit trails.&lt;/p&gt;

&lt;p&gt;None of them are shipping reputation.&lt;/p&gt;

&lt;p&gt;That's not an oversight. Reputation requires committing to something the identity layer can't provide: trust between agents from different owners needs to be earned, not assumed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why simple ratings don't work
&lt;/h3&gt;

&lt;p&gt;The obvious answer: let agents rate each other after interactions and average the scores. It falls apart in five minutes.&lt;/p&gt;

&lt;p&gt;A cluster of agents under one operator can inflate each other's scores indefinitely. A new malicious agent starts with a clean slate. You end up with a system that is easier to game than to use honestly.&lt;/p&gt;

&lt;p&gt;What works is EigenTrust — an algorithm from a 2003 Stanford paper on peer-to-peer networks. It weighs attestations by the reputation of the attesting agent. Scores converge mathematically and can't be inflated by a closed group.&lt;/p&gt;
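&lt;p&gt;To make the mechanics concrete, here's a minimal power-iteration sketch of EigenTrust. This is illustrative only, not AVP's implementation; the damping factor and the pre-trusted set are assumptions of the sketch:&lt;/p&gt;

```python
# Minimal EigenTrust power iteration (after Kamvar et al., 2003).
# Illustrative sketch -- the damping factor `alpha` and the pre-trusted
# set are assumptions, not AVP's actual parameters.

def eigentrust(local_trust, pre_trusted, alpha=0.15, iters=50):
    """local_trust[i][j]: non-negative trust agent i places in agent j."""
    n = len(local_trust)
    # Row-normalize into a stochastic matrix C; agents with no outgoing
    # trust fall back to the uniform distribution.
    c = []
    for row in local_trust:
        s = sum(row)
        c.append([v / s for v in row] if s else [1.0 / n] * n)
    # p: uniform weight over the pre-trusted agents.
    p = [1.0 / len(pre_trusted) if i in pre_trusted else 0.0 for i in range(n)]
    t = p[:]
    for _ in range(iters):
        # t <- (1 - alpha) * C^T t + alpha * p
        t = [(1 - alpha) * sum(c[j][i] * t[j] for j in range(n)) + alpha * p[i]
             for i in range(n)]
    return t

# Two honest agents who trust each other, plus a two-agent clique that
# only vouches for itself. Only agent 0 is pre-trusted.
scores = eigentrust(
    [[0, 1, 0, 0],
     [1, 0, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 1, 0]],
    pre_trusted={0},
)
```

&lt;p&gt;The point of the weighting: score only flows from agents who already have score, so the self-vouching pair ends up with none unless someone outside the clique trusts it.&lt;/p&gt;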

&lt;p&gt;EigenTrust alone isn't enough. You need collusion detection on top.&lt;/p&gt;

&lt;p&gt;Same-owner cross-attestation is the oldest manipulation in distributed systems. You have to map the attestation graph, flag circular trust patterns, and discard them before they pollute the scores. Remove either piece, and reputation becomes theater.&lt;/p&gt;

&lt;h3&gt;
  
  
  Where this matters right now
&lt;/h3&gt;

&lt;p&gt;Agent marketplaces. Cross-company workflows. Third-party agent integrations. Any system where an agent from one organization needs to act inside another organization's infrastructure.&lt;/p&gt;

&lt;p&gt;The identity layer gets you to the door. It doesn't tell you whether to open it.&lt;/p&gt;

&lt;h3&gt;
  
  
  What we built
&lt;/h3&gt;

&lt;p&gt;AVP is the trust enforcement layer for autonomous agents. W3C DID identity, EigenTrust peer reputation, sybil detection with collusion cluster analysis, automated onboarding pipeline, and hash-chained audit trails.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;agentveil

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;See it running live with 24 agents, sybil attacks, and dispute resolution in real time: agentveil.dev/live&lt;/p&gt;

&lt;p&gt;If you're building systems where agents from different owners need to interact: agentveil.dev&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>security</category>
      <category>python</category>
    </item>
    <item>
      <title>You know who your agent is. You don't know if you should trust it.</title>
      <dc:creator>OlegB</dc:creator>
      <pubDate>Mon, 30 Mar 2026 14:40:04 +0000</pubDate>
      <link>https://dev.to/agentveil/agent-identity-tells-you-who-reputation-tells-you-whether-you-should-8f9</link>
      <guid>https://dev.to/agentveil/agent-identity-tells-you-who-reputation-tells-you-whether-you-should-8f9</guid>
      <description>&lt;p&gt;I've been building trust infrastructure for AI agents for the past few months, and the thing that keeps coming up in conversations is a conflation that seems obvious once you see it but is almost universally ignored in practice.&lt;/p&gt;

&lt;p&gt;Everyone is shipping identity for agents right now. Okta, Ping Identity, a dozen YC companies. Cryptographic keypairs, W3C DIDs, OAuth flows. Good work, genuinely useful.&lt;/p&gt;

&lt;p&gt;None of it tells you whether to trust the agent.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scenario I kept running into
&lt;/h2&gt;

&lt;p&gt;When I started building AVP, the use case I had in mind was simple.&lt;/p&gt;

&lt;p&gt;Two agents from different companies need to work together. One processes customer data, the other handles payments. They authenticate fine. The handoff happens cleanly.&lt;/p&gt;

&lt;p&gt;But what does the first agent actually know about the second one?&lt;/p&gt;

&lt;p&gt;That it exists. That it controls a private key. That's it.&lt;/p&gt;

&lt;p&gt;Nothing about whether the payment agent completed tasks reliably last week. Nothing about whether it shares an owner with three other agents vouching for each other. Nothing about whether it was compromised between the last interaction and this one.&lt;/p&gt;

&lt;p&gt;I looked at every identity project I could find. Strong authentication work across the board.&lt;/p&gt;

&lt;p&gt;Reputation layer: none of them have one.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I didn't just do ratings
&lt;/h2&gt;

&lt;p&gt;The obvious answer when you want a reputation system is: let agents rate each other after interactions and average the scores.&lt;/p&gt;

&lt;p&gt;I spent about a week on this before I understood why it falls apart.&lt;/p&gt;

&lt;p&gt;A cluster of agents under the same operator can inflate each other's scores indefinitely. A new malicious agent registers fresh with no history and no flags. You end up with a system that's easier to manipulate than to use honestly.&lt;/p&gt;

&lt;p&gt;What actually works is EigenTrust — an algorithm from a 2003 Stanford paper on peer-to-peer file sharing.&lt;/p&gt;

&lt;p&gt;The core idea: weight attestations by the reputation of the attesting agent. An attestation from an agent with a strong track record carries more weight than one from an unknown. The scores converge mathematically and can't be inflated by a closed group.&lt;/p&gt;

&lt;p&gt;But EigenTrust alone isn't enough.&lt;/p&gt;

&lt;p&gt;Before any attestation is counted, you need to check whether the attesting agent and the attested agent share an owner. Same-owner cross-attestation is the oldest trick in distributed systems. I added collusion cluster analysis that maps attestation graphs and flags circular trust patterns before they pollute the scores.&lt;/p&gt;
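&lt;p&gt;A toy version of that pre-filter looks like this. The owner lookup and the "flag mutual pairs" rule are stand-ins; the real cluster analysis maps larger cycles, this only sketches the idea:&lt;/p&gt;

```python
from collections import defaultdict

# Toy attestation pre-filter: discard same-owner pairs, and flag
# reciprocal edges as the simplest circular-trust pattern. A stand-in
# for the real collusion cluster analysis, not its implementation.

def filter_attestations(attestations, owner_of):
    """attestations: (attester, subject) pairs; owner_of: agent -> owner."""
    edges = defaultdict(set)
    for a, b in attestations:
        edges[a].add(b)
    kept, flagged = [], []
    for a, b in attestations:
        if owner_of[a] == owner_of[b]:
            flagged.append((a, b))   # same-owner cross-attestation
        elif a in edges[b]:
            flagged.append((a, b))   # reciprocal edge: a -> b and b -> a
        else:
            kept.append((a, b))
    return kept, flagged

kept, flagged = filter_attestations(
    [("a", "b"), ("c", "d"), ("d", "c"), ("a", "c")],
    owner_of={"a": "org1", "b": "org1", "c": "org2", "d": "org3"},
)
```

&lt;p&gt;Only the cross-owner, one-directional attestation survives; everything circular or same-owner is held back before it ever reaches the scoring step.&lt;/p&gt;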

&lt;p&gt;The third piece is an audit trail that lives outside the system. Hash-chained records anchored to IPFS. Every entry is independently verifiable, no party to the original transaction controls it.&lt;/p&gt;
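&lt;p&gt;The hash-chain part is simple enough to show inline. Sketch only: the record fields are made up and the IPFS anchoring step is omitted:&lt;/p&gt;

```python
import hashlib
import json

# Hash-chained audit log sketch: each entry commits to the hash of the
# previous one, so rewriting any record invalidates every later hash.
# Record fields here are illustrative; IPFS anchoring is out of scope.

def append_entry(chain, record):
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"record": record, "prev": prev}, sort_keys=True)
    chain.append({"record": record, "prev": prev,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})
    return chain

def verify_chain(chain):
    prev = "0" * 64
    for entry in chain:
        body = json.dumps({"record": entry["record"], "prev": prev},
                          sort_keys=True)
        if entry["prev"] != prev or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

chain = []
append_entry(chain, {"agent": "did:example:a", "event": "delegation", "ok": True})
append_entry(chain, {"agent": "did:example:b", "event": "delegation", "ok": False})
```

&lt;p&gt;Anyone holding the chain can re-derive every hash from the records themselves, which is what makes each entry independently verifiable.&lt;/p&gt;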

&lt;p&gt;Take out any one of these three, and the whole thing is gameable. Together they make reputation something an agent has to actually earn.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where this is now
&lt;/h2&gt;

&lt;p&gt;AVP has been running in production for a few weeks. 61 registered agents, 175 attestations processed, dispute resolution working end-to-end.&lt;/p&gt;

&lt;p&gt;SDK is one line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;agentveil

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Auto-registration, auto-attestation, and reputation tracking. That's it.&lt;/p&gt;

&lt;p&gt;If you're building anything where agents from different owners need to interact, I'd be curious whether this is useful: &lt;strong&gt;agentveil.dev&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>security</category>
      <category>python</category>
    </item>
  </channel>
</rss>
