<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Vlad George Iftode</title>
    <description>The latest articles on DEV Community by Vlad George Iftode (@viftode4).</description>
    <link>https://dev.to/viftode4</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3862835%2Ff9332376-654f-4d7e-8f54-2f9d96c4272b.png</url>
      <title>DEV Community: Vlad George Iftode</title>
      <link>https://dev.to/viftode4</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/viftode4"/>
    <language>en</language>
    <item>
      <title>Every agent trust proposal is building the wrong thing</title>
      <dc:creator>Vlad George Iftode</dc:creator>
      <pubDate>Sun, 05 Apr 2026 23:44:59 +0000</pubDate>
      <link>https://dev.to/viftode4/every-agent-trust-proposal-is-building-the-wrong-thing-2fhj</link>
      <guid>https://dev.to/viftode4/every-agent-trust-proposal-is-building-the-wrong-thing-2fhj</guid>
      <description>&lt;p&gt;I've spent weeks reading through GitHub issues across A2A, MCP, OWASP, CrewAI, LangChain, AutoGen, W3C, AWS, and about a dozen other repos. The pattern is the same everywhere: someone opens a thread about agent trust, and within 50 comments there are 5 separate proposals for 5 separate systems that don't compose.&lt;/p&gt;

&lt;p&gt;Identity registry over here. Trust scoring API over there. Audit trail database in the corner. Delegation protocol on top. Sybil detection as a roadmap item for later.&lt;/p&gt;

&lt;p&gt;None of these projects are wrong about the problem. They're all building the wrong solution.&lt;/p&gt;

&lt;h2&gt;The pattern&lt;/h2&gt;

&lt;p&gt;Pick any thread. Someone proposes DID-based identity. Someone else points out that identity doesn't equal trust. A third person proposes a trust scoring service. A fourth asks where the trust data comes from. The conversation loops for 200 comments and nothing ships.&lt;/p&gt;

&lt;p&gt;The discussions are smart. The people in them are building real things. But they all start from the same assumption: that trust is a feature you bolt onto an existing protocol.&lt;/p&gt;

&lt;p&gt;The result is a stack of independent systems, each solving one piece, each requiring its own infrastructure, none sharing data.&lt;/p&gt;

&lt;h2&gt;The alternative&lt;/h2&gt;

&lt;p&gt;What if trust isn't a feature you add? What if it's a data structure you start with?&lt;/p&gt;

&lt;p&gt;A bilateral signed interaction record is one JSON object where both parties sign what happened between them. One record, two Ed25519 signatures. That's the primitive.&lt;/p&gt;
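&lt;p&gt;Here's the shape of that primitive as a sketch in Python, with a hash-based stand-in for the Ed25519 signatures (a real implementation would use a proper Ed25519 library; the field values are illustrative):&lt;/p&gt;

```python
import hashlib
import json

def sign(secret: str, payload: str) -> str:
    # Stand-in for an Ed25519 signature: hash of key material + payload.
    # A real implementation would use an actual Ed25519 library.
    return hashlib.sha256((secret + payload).encode()).hexdigest()

record = {
    "type": "interaction",
    "requester": "ed25519:abc",
    "provider": "ed25519:xyz",
    "task": "gpu_inference",
    "quality": 0.87,
}
payload = json.dumps(record, sort_keys=True)  # canonical form both sides sign
record["requester_sig"] = sign("requester-secret", payload)
record["provider_sig"] = sign("provider-secret", payload)

# Any third party holding the payload can recheck either signature.
assert record["requester_sig"] == sign("requester-secret", payload)
```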

&lt;p&gt;&lt;strong&gt;Identity&lt;/strong&gt; becomes the public key that keeps signing records. You don't need a registry because the key proves itself through its history.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trust scores&lt;/strong&gt; get computed from the graph of interactions. An agent with 50 cosigned interactions across diverse counterparties has a verifiable track record. No scoring API needed.&lt;/p&gt;
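&lt;p&gt;A toy version of that computation, assuming each record carries a counterparty and a quality score. The formula is illustrative, not the engine's actual one; the point is that counterparty diversity is part of the score:&lt;/p&gt;

```python
from collections import defaultdict

# Illustrative records: counterparty id plus a quality outcome.
records = [
    {"counterparty": "a", "quality": 0.90},
    {"counterparty": "b", "quality": 0.80},
    {"counterparty": "a", "quality": 0.85},
    {"counterparty": "c", "quality": 0.95},
]

def trust_score(records):
    # Average quality, discounted by how concentrated the counterparties
    # are: 50 interactions with one partner count for less than 50 spread
    # across many.
    by_peer = defaultdict(list)
    for r in records:
        by_peer[r["counterparty"]].append(r["quality"])
    avg = sum(r["quality"] for r in records) / len(records)
    diversity = len(by_peer) / len(records)  # 1.0 when every peer is distinct
    return avg * (0.5 + 0.5 * diversity)

print(round(trust_score(records), 3))  # prints 0.766
```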

&lt;p&gt;&lt;strong&gt;Sybil resistance&lt;/strong&gt; comes from graph structure. Fake identities that only interact with each other form clusters with high internal density but few outward connections. You don't need a separate detection system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audit trails&lt;/strong&gt; are the records themselves. Both parties hold matching copies. &lt;strong&gt;Delegation&lt;/strong&gt; is a scoped record with TTL bounds. &lt;strong&gt;Discovery&lt;/strong&gt; is trust-weighted search over the graph. One data structure replaces what the ecosystem is trying to build as six separate services.&lt;/p&gt;
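&lt;p&gt;For delegation, the scoped-record-with-TTL idea can be sketched like this (field names are hypothetical, not the actual record schema):&lt;/p&gt;

```python
import time

# Hypothetical field names; the real record shape may differ.
def make_delegation(delegator, delegate, scope, ttl_seconds):
    now = int(time.time())
    return {
        "type": "delegation",
        "delegator": delegator,
        "delegate": delegate,
        "scope": scope,                   # what the delegate may do
        "not_before": now,
        "expires_at": now + ttl_seconds,  # TTL bound
    }

def is_active(record, at=None):
    at = int(time.time()) if at is None else at
    return record["expires_at"] >= at >= record["not_before"]

grant = make_delegation("ed25519:abc", "ed25519:xyz", "book_flights", 3600)
assert is_active(grant)
assert not is_active(grant, at=grant["expires_at"] + 1)
```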

&lt;h2&gt;Why bilateral matters&lt;/h2&gt;

&lt;p&gt;Most proposals use single-party attestation. One entity records what happened and signs it. Problem: that entity can lie, get compromised, or selectively report.&lt;/p&gt;

&lt;p&gt;When both parties sign the same record, neither can fabricate or deny what happened. If one party is compromised, the other holds matching proof. Regulators, mediators, and other agents can verify the records without trusting either party.&lt;/p&gt;

&lt;p&gt;This is the difference between "I claim this happened" and "we both agree this happened."&lt;/p&gt;

&lt;h2&gt;The thing everyone keeps missing&lt;/h2&gt;

&lt;p&gt;Every thread treats trust as a problem to solve at the protocol level. Add a field to the Agent Card. Add a signal type. Add an annotation.&lt;/p&gt;

&lt;p&gt;But trust isn't a protocol field. It's an emergent property of a history of interactions. You can't declare trust. You earn it through a track record that both parties can verify.&lt;/p&gt;

&lt;p&gt;The bilateral interaction graph is to the agent economy what the link graph was to the web. Google's insight was that hyperlink structure contains authority signals. The same applies here: the structure of agent interactions contains trust signals. The graph itself is the infrastructure.&lt;/p&gt;

&lt;p&gt;I've been working on this with Prof. Pouwelse at TU Delft, whose research group has been publishing on decentralized trust for over a decade. The academic literature established long ago that single-party attestation can't solve the Sybil problem. The tooling for agent systems hasn't caught up yet.&lt;/p&gt;

&lt;h2&gt;What I built&lt;/h2&gt;

&lt;p&gt;The implementation is called TrustChain. Rust sidecar with a trust engine, Python and TypeScript SDKs, 12 framework adapters. Works offline. No blockchain. No tokens.&lt;/p&gt;

&lt;p&gt;To prove it works, I built a simulation with 21 LLM agents running a full economy on real bilateral records. Honest agents build trust. Sybil rings get isolated. Free riders get deprioritized. Selective scammers get flagged.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Live demo: &lt;a href="http://5.161.255.238:8888" rel="noopener noreferrer"&gt;http://5.161.255.238:8888&lt;/a&gt;&lt;/strong&gt; -- 21 LLM agents, real bilateral records, real trust computation. Watch it happen.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source:&lt;/strong&gt; &lt;a href="https://github.com/viftode4/trustchain" rel="noopener noreferrer"&gt;https://github.com/viftode4/trustchain&lt;/a&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>security</category>
      <category>ai</category>
      <category>web3</category>
    </item>
    <item>
      <title>What happens when 21 AI agents try to cheat each other</title>
      <dc:creator>Vlad George Iftode</dc:creator>
      <pubDate>Sun, 05 Apr 2026 23:32:02 +0000</pubDate>
      <link>https://dev.to/viftode4/what-happens-when-21-ai-agents-try-to-cheat-each-other-55la</link>
      <guid>https://dev.to/viftode4/what-happens-when-21-ai-agents-try-to-cheat-each-other-55la</guid>
      <description>&lt;p&gt;I run a simulation where 21 LLM agents operate in an economy. They post tasks, bid on work, hire each other, deliver results, and get paid. Every agent is a Claude Haiku instance making its own decisions.&lt;/p&gt;

&lt;p&gt;Some are honest. Some aren't. Here's what happens.&lt;/p&gt;

&lt;h2&gt;The setup&lt;/h2&gt;

&lt;p&gt;Each agent has a wallet, a set of skills, and a personality. Every tick, new tasks appear on a board. Agents bid. Winners do the work, and the quality depends on how much effort they put in. After delivery, the client can verify the work (costs coins but gives accurate quality measurement) or just trust the result (free but risky). Both sides sign a bilateral record that neither can deny.&lt;/p&gt;
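&lt;p&gt;The verify-or-trust choice is a small expected-value problem. A sketch with made-up numbers (the simulation's actual costs differ), assuming verification always catches bad work:&lt;/p&gt;

```python
def expected_payoff(verify, p_bad, task_value, verify_cost, loss_if_bad):
    # Verifying always pays the cost but catches bad work (no loss);
    # trusting is free but eats the full loss when the work is bad.
    if verify:
        return task_value - verify_cost
    return task_value - p_bad * loss_if_bad

# Against a trusted partner (low p_bad), verification is wasted coins;
# against an unknown newcomer, it pays for itself.
trusted = expected_payoff(False, 0.05, 10, 1, 8)   # 10 - 0.4 = 9.6
newcomer = expected_payoff(False, 0.40, 10, 1, 8)  # 10 - 3.2 = 6.8
verified = expected_payoff(True, 0.40, 10, 1, 8)   # 10 - 1 = 9.0
assert trusted > verified > newcomer
```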

&lt;p&gt;Operating costs tick every round. If your wallet hits zero, you're done.&lt;/p&gt;

&lt;h2&gt;Honest agents&lt;/h2&gt;

&lt;p&gt;They bid reasonably, put in real effort, deliver decent quality. Over time their trust scores climb. They graduate through trust tiers and unlock higher-value tasks. Other agents see their track record and prefer hiring them.&lt;/p&gt;

&lt;p&gt;By about tick 15, honest agents with a history have a real advantage. New agents without a record get the scraps. The market builds its own hierarchy from the interaction data without anyone programming it.&lt;/p&gt;

&lt;h2&gt;The free rider&lt;/h2&gt;

&lt;p&gt;One agent consistently underspends on effort. Accepts tasks, does the minimum, hopes nobody checks. When the client trusts instead of verifying, the free rider gets away with it. When the client verifies, the low quality gets exposed and recorded.&lt;/p&gt;

&lt;p&gt;The trust engine catches this through statistical confidence. A few verified bad deliveries drag the score down, and the uncertainty bounds make other agents stop hiring.&lt;/p&gt;
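&lt;p&gt;One standard way to get confidence instead of raw averages is the lower bound of the Wilson score interval. This sketch shows the idea; it isn't necessarily the engine's exact formula:&lt;/p&gt;

```python
import math

def wilson_lower_bound(successes, total, z=1.96):
    # Lower bound of the Wilson score interval: a cautious estimate of
    # the true success rate. Few observations keep the bound low even
    # when the raw average looks fine.
    if total == 0:
        return 0.0
    p = successes / total
    denom = 1 + z * z / total
    centre = p + z * z / (2 * total)
    spread = z * math.sqrt(p * (1 - p) / total + z * z / (4 * total ** 2))
    return (centre - spread) / denom

# Same 80% success rate, very different confidence:
assert wilson_lower_bound(40, 50) > wilson_lower_bound(4, 5)
```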

&lt;p&gt;The free rider doesn't get banned. It gets deprioritized. Stuck competing for low-value tasks while agents with track records get the good work.&lt;/p&gt;

&lt;p&gt;If it keeps delivering badly, graduated sanctions kick in. Warning, then a 50% earnings penalty, then restricted to the lowest tier, then temporary exclusion. There's always a recovery path if the agent starts doing real work again, though fraud leaves a permanent scar on the trust record.&lt;/p&gt;
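&lt;p&gt;The ladder can be sketched as a lookup keyed by verified bad deliveries. The thresholds here are illustrative, not the simulation's actual ones:&lt;/p&gt;

```python
# Hypothetical thresholds: (bad deliveries, stage, earnings multiplier).
SANCTIONS = [
    (1, "warning", 1.0),
    (2, "earnings_penalty", 0.5),  # the 50% earnings penalty
    (4, "lowest_tier_only", 0.5),
    (6, "temporary_exclusion", 0.0),
]

def sanction(bad_deliveries):
    # Walk the ladder and keep the highest stage the agent has reached.
    stage, multiplier = "none", 1.0
    for threshold, name, mult in SANCTIONS:
        if bad_deliveries >= threshold:
            stage, multiplier = name, mult
    return stage, multiplier

print(sanction(3))  # ('earnings_penalty', 0.5)
```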

&lt;h2&gt;The Sybil ring&lt;/h2&gt;

&lt;p&gt;Four agents in two colluding pairs, all giving each other perfect scores. Classic attack. Create fake identities, inflate each other's reputation.&lt;/p&gt;

&lt;p&gt;In most reputation systems, this works. Here it doesn't. The collusion detector runs on graph structure, not just ratings.&lt;/p&gt;

&lt;p&gt;Their mutual ratings are too symmetric. Real agent pairs don't give each other near-identical scores every time. Most of their interactions go to the same small set of counterparties. Their network is dense internally but has almost no connections reaching outward.&lt;/p&gt;

&lt;p&gt;An honest agent with 10 interactions across 8 different counterparties has a stronger trust position than a Sybil agent with 50 interactions across 3 fake identities. You can't fake graph structure without actually interacting with real agents.&lt;/p&gt;
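&lt;p&gt;The structural signal is cheap to compute. A sketch: measure what fraction of a cluster's interaction edges leave the cluster (the edge data here is made up for illustration):&lt;/p&gt;

```python
# Edges are bilateral interactions between agent ids. The Sybil ring
# (s1..s4) mostly interacts with itself; honest agents spread out.
edges = [
    ("s1", "s2"), ("s1", "s3"), ("s2", "s4"), ("s3", "s4"), ("s1", "s4"),
    ("s2", "h1"),                     # the ring's single outward link
    ("h1", "h2"), ("h1", "h3"), ("h2", "h4"), ("h3", "h5"),
]

def outward_ratio(cluster, edges):
    # Fraction of a cluster's edges that reach agents outside it.
    # Near-zero ratios are the signature of a Sybil ring.
    touching = [e for e in edges if e[0] in cluster or e[1] in cluster]
    outward = [e for e in touching if not (e[0] in cluster and e[1] in cluster)]
    return len(outward) / len(touching)

print(outward_ratio({"s1", "s2", "s3", "s4"}, edges))  # 1 of 6 edges leaves
```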

&lt;h2&gt;The selective scammer&lt;/h2&gt;

&lt;p&gt;The most interesting one. An agent builds real trust over 12-15 ticks of honest work. Reaches a decent tier. Takes a high-value task. Delivers garbage and keeps the payment.&lt;/p&gt;

&lt;p&gt;This is hard to prevent entirely because the agent did build real trust. But the behavioral detection watches for exactly this: a sudden quality drop against the agent's own historical baseline. The trust hit is bigger than a normal failure. And because both parties signed the record, the evidence of the bad delivery is permanent.&lt;/p&gt;
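&lt;p&gt;A minimal version of that baseline check, with illustrative thresholds:&lt;/p&gt;

```python
import statistics

def sudden_drop(history, new_quality, k=3.0):
    # Flag deliveries more than k standard deviations below the agent's
    # own historical mean. k and the sd floor are illustrative.
    mean = statistics.mean(history)
    sd = max(statistics.pstdev(history), 0.05)  # floor for flat histories
    return mean - k * sd > new_quality

history = [0.88, 0.90, 0.87, 0.91, 0.89]   # a built-up honest baseline
assert sudden_drop(history, 0.30)           # the cash-out delivery
assert not sudden_drop(history, 0.86)       # normal variation
```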

&lt;h2&gt;The whitewasher&lt;/h2&gt;

&lt;p&gt;One agent tries to rehabilitate after getting caught cheating. It starts delivering good work again. The graduated sanctions system lets it work its way back, but the recovery ceiling depends on what it did. Sloppy work can be fully forgiven. Quality fraud caps at 75% of the original trust. Outright fraud caps at 25%.&lt;/p&gt;
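&lt;p&gt;The ceilings reduce to a cap on whatever trust the agent re-earns (percentages from above; the offence categories are simplified):&lt;/p&gt;

```python
# Recovery caps by worst offence on record (values from the post):
RECOVERY_CAP = {
    "none": 1.0,
    "sloppy_work": 1.0,     # fully forgivable
    "quality_fraud": 0.75,  # caps at 75% of original trust
    "fraud": 0.25,          # caps at 25%
}

def capped_trust(earned_trust, worst_offence):
    # Forgiveness exists, but the record sets a ceiling.
    return min(earned_trust, RECOVERY_CAP[worst_offence])

print(capped_trust(0.9, "quality_fraud"))  # 0.75
print(capped_trust(0.9, "fraud"))          # 0.25
```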

&lt;p&gt;Forgiveness exists, but the record never fully disappears.&lt;/p&gt;

&lt;h2&gt;What actually happens over 50 ticks&lt;/h2&gt;

&lt;p&gt;The market sorts itself. Honest, skilled agents earn the most. Mediocre agents survive but don't thrive. Cheaters either get caught and sanctioned or end up stuck in the low-value tier.&lt;/p&gt;

&lt;p&gt;Verification becomes strategic. Agents stop wasting coins verifying trusted partners and focus on checking newcomers.&lt;/p&gt;

&lt;p&gt;Trust becomes the scarce resource. Not coins, not skills. An agent with high trust and mediocre skills outearns one with perfect skills and no track record.&lt;/p&gt;

&lt;p&gt;The agents don't know any of these rules in advance. The LLM makes each decision. The economic mechanisms shape what gets rewarded. The agents figure out the rest.&lt;/p&gt;

&lt;h2&gt;Watch it yourself&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Live demo: &lt;a href="http://5.161.255.238:8888" rel="noopener noreferrer"&gt;http://5.161.255.238:8888&lt;/a&gt;&lt;/strong&gt; -- click any agent to see its trust scores, interaction history, partner reliability, balance trend, and the strategic insights it came up with on its own.&lt;/p&gt;

&lt;p&gt;I'm building this with Prof. Pouwelse at TU Delft, extending his group's work on decentralized trust for agent economies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source:&lt;/strong&gt; &lt;a href="https://github.com/viftode4/trustchain" rel="noopener noreferrer"&gt;https://github.com/viftode4/trustchain&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>gametheory</category>
      <category>simulation</category>
      <category>agents</category>
    </item>
    <item>
      <title>I built a trust layer for AI agents and let 21 of them run an economy</title>
      <dc:creator>Vlad George Iftode</dc:creator>
      <pubDate>Sun, 05 Apr 2026 23:31:22 +0000</pubDate>
      <link>https://dev.to/viftode4/i-built-a-trust-layer-for-ai-agents-and-let-21-of-them-run-an-economy-4pak</link>
      <guid>https://dev.to/viftode4/i-built-a-trust-layer-for-ai-agents-and-let-21-of-them-run-an-economy-4pak</guid>
      <description>&lt;p&gt;Two agents interact. Both sign a record of what happened. Not one side logging it. Both sides agreeing on it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"interaction"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"requester"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ed25519:abc...def"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"provider"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ed25519:789...xyz"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"task"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"gpu_inference"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"quality"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.87&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"requester_sig"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"provider_sig"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"..."&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One JSON object, two Ed25519 signatures. Neither side can deny or fabricate what happened because the other holds matching proof. That's the whole primitive. Everything else builds on top.&lt;/p&gt;

&lt;p&gt;I've been working on this with Prof. Pouwelse at TU Delft, whose research group created the original TrustChain protocol and has been doing decentralized trust and Sybil resistance research for over a decade.&lt;/p&gt;

&lt;h2&gt;How it works&lt;/h2&gt;

&lt;p&gt;TrustChain is a sidecar that sits next to your agent. When your agent talks to another agent or calls a tool, the sidecar handles the signing. Your side signs, the other side signs. One record, two signatures. Records chain together per agent, each referencing the hash of the previous one.&lt;/p&gt;
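&lt;p&gt;The per-agent chaining can be sketched in a few lines (canonicalization here is plain sorted JSON, and the signature steps are omitted):&lt;/p&gt;

```python
import hashlib
import json

def chain_append(chain, record):
    # Each record references the hash of the previous one, so the
    # per-agent history is tamper-evident without any global ledger.
    prev = chain[-1]["hash"] if chain else "genesis"
    record = dict(record, prev_hash=prev)
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return chain

chain = []
chain_append(chain, {"task": "gpu_inference", "quality": 0.87})
chain_append(chain, {"task": "embedding", "quality": 0.91})
assert chain[1]["prev_hash"] == chain[0]["hash"]  # the link verifies offline
```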

&lt;p&gt;No global blockchain, no consensus mechanism. Each agent carries its own signed history that anyone can verify offline.&lt;/p&gt;

&lt;p&gt;On top of the interaction graph sits a trust engine. It does quality tracking with recency weighting, statistical confidence instead of raw averages, trust tiers you graduate through by proving yourself, graduated sanctions with real recovery paths, and behavioral anomaly detection for things like selective scamming and collusion.&lt;/p&gt;
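&lt;p&gt;Recency weighting, for example, is just exponential decay over interaction age. The parameters here are illustrative:&lt;/p&gt;

```python
def recency_weighted_quality(records, half_life=10, now=50):
    # Exponential decay: an interaction half_life ticks old counts half
    # as much as a fresh one. half_life and tick values are illustrative.
    weights = [0.5 ** ((now - r["tick"]) / half_life) for r in records]
    total = sum(w * r["quality"] for w, r in zip(weights, records))
    return total / sum(weights)

records = [{"tick": 10, "quality": 0.9}, {"tick": 48, "quality": 0.6}]
# The recent mediocre delivery dominates the old good one:
assert 0.7 > recency_weighted_quality(records) > 0.6
```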

&lt;h2&gt;Why&lt;/h2&gt;

&lt;p&gt;The agent ecosystem has identity (DIDs, credentials), payments (L402, x402), and orchestration (LangGraph, CrewAI, AutoGen). Nobody answers "should I trust this agent based on what it's actually done before?"&lt;/p&gt;

&lt;p&gt;Bilateral signing sidesteps the two big problems. Unlike EigenTrust, it doesn't need centralized aggregation. Unlike single-party attestation, a compromised party can't lie about what happened because the other side holds matching proof.&lt;/p&gt;

&lt;h2&gt;The simulation&lt;/h2&gt;

&lt;p&gt;To prove it works, I built a simulation where 21 LLM agents (Claude Haiku) run a resource-based economy. They post tasks, bid on work, hire each other, deliver results, get paid. Every interaction produces a real bilateral record on a real sidecar.&lt;/p&gt;

&lt;p&gt;Some agents are honest. Some are sloppy. Some run Sybil attacks. Some build trust for a dozen ticks and then try to cash out with a scam.&lt;/p&gt;

&lt;p&gt;Honest agents climb through trust tiers and get the better tasks. Sloppy agents get deprioritized because a few verified bad deliveries tank their score. Sybil agents get caught by graph structure since their interactions are too concentrated among themselves. Selective scammers get flagged when the engine notices a sudden quality drop against their own baseline.&lt;/p&gt;

&lt;p&gt;The economy runs on 10 mechanisms from the game theory literature (Akerlof, Spence, Rothschild-Stiglitz, Ostrom, Nowak, Bolton). The agents don't know the rules upfront. The LLM decides each move, and the mechanisms shape what's rewarded.&lt;/p&gt;

&lt;h3&gt;Try it&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Live demo: &lt;a href="http://5.161.255.238:8888" rel="noopener noreferrer"&gt;http://5.161.255.238:8888&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;21 LLM agents, real bilateral records, real trust computation. Click any agent to see its trust profile, interaction history, and the strategic insights it came up with on its own.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source:&lt;/strong&gt; &lt;a href="https://github.com/viftode4/trustchain" rel="noopener noreferrer"&gt;https://github.com/viftode4/trustchain&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Rust sidecar, Python and TypeScript SDKs, 12 framework adapters (LangGraph, CrewAI, AutoGen, Google ADK, and others). Set &lt;code&gt;HTTP_PROXY&lt;/code&gt; and existing agents get bilateral records without code changes.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>rust</category>
      <category>python</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
