Raheem Larry Babatunde

Why Every AI Agent Needs a Cryptographic Identity

The problem nobody is talking about


Every website you visit today has an SSL certificate. The padlock icon in your browser proves the website is who it claims to be. Without it, you would have no way to know whether you were talking to your bank or an impostor.

Now consider this: every AI agent running in production today has no equivalent.

No identity. No verification. No way to prove it is who it claims to be.

A financial AI agent executing trades. A customer service agent handling sensitive data. A compliance agent processing medical records. Any of these can be impersonated, compromised mid-execution, or manipulated — and you would never know.

Until something goes wrong.


The scale of the problem

There are already millions of autonomous AI agents deployed across finance, healthcare, legal, and enterprise software. That number is growing exponentially.

LangChain has over 80,000 GitHub stars. AutoGPT has over 160,000. OpenAI's Agents SDK launched in 2025 and was adopted by thousands of companies within weeks.

Every single one of those agents runs with zero cryptographic identity.

And from August 2026, that becomes a legal problem.

The EU AI Act mandates that AI agents operating in high-risk categories must have verifiable audit trails, certified identity, and demonstrable compliance. The penalties for non-compliance run up to €35M or 7% of global annual turnover for the most serious violations, and up to €15M or 3% for breaches of the high-risk obligations.

There are currently zero tools that provide certified identity for AI agents at the infrastructure level.

That is the gap VeriSigil AI was built to fill.


What we built

VeriSigil AI is the trust layer for autonomous AI agents. Think of it as SSL — but for AI agents instead of websites.

Every agent gets a cryptographic identity passport: a W3C-standard DID (Decentralised Identifier) signed with Ed25519, the same signature scheme used by modern SSH keys and the Signal protocol.

Here is what a VeriSigil passport looks like:

{
  "agent_id":       "vsa_537e3974858f",
  "did":            "did:web:verisigilai.com:agents:my-agent-74858f",
  "signature":      "LMg6Sr/wjbmWoIC1Stvkhxq...",
  "signature_type": "Ed25519",
  "status":         "ACTIVE",
  "trust_score":    0.9735,
  "trust_level":    "TRUSTED",
  "eu_ai_act":      true,
  "compliant":      true,
  "issued_at":      "2026-05-04T14:15:28Z",
  "expires_at":     "2027-05-04T14:15:28Z"
}

That passport is:

  • Cryptographically signed — cannot be forged
  • Publicly verifiable — anyone can check it
  • Stored immutably — every action is audited
  • EU AI Act compliant — built to the regulation from day one
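The "cannot be forged" property comes down to Ed25519 signature verification. Here is a minimal sketch of how a relying party could check a passport, using the `cryptography` library. The canonicalisation (sorted, compact JSON over every field except `signature`) is an assumption for illustration, not VeriSigil's actual wire format:

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def canonical_bytes(passport: dict) -> bytes:
    # Hypothetical canonicalisation: sign every field except the signature
    # itself, as sorted, compact JSON. The real layout is issuer-defined.
    unsigned = {k: v for k, v in passport.items() if k != "signature"}
    return json.dumps(unsigned, sort_keys=True, separators=(",", ":")).encode()


def verify_passport(passport: dict, signature: bytes, public_key) -> bool:
    try:
        public_key.verify(signature, canonical_bytes(passport))
        return True
    except InvalidSignature:
        return False


# Demo with a freshly generated key pair (the real key lives with the issuer).
issuer_key = Ed25519PrivateKey.generate()
passport = {"agent_id": "vsa_537e3974858f", "status": "ACTIVE"}
sig = issuer_key.sign(canonical_bytes(passport))

assert verify_passport(passport, sig, issuer_key.public_key())
passport["status"] = "REVOKED"  # any tampering invalidates the signature
assert not verify_passport(passport, sig, issuer_key.public_key())
```

Changing a single field breaks verification, which is what makes the passport forgery-proof as long as the issuer's private key stays private.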

The trust network

Identity alone is not enough.

An agent can have a valid identity and still behave maliciously if it is compromised. This is the difference between a website having SSL and a website being trustworthy.

That is why we built a dynamic trust network.

Every time an independent developer or enterprise verifies an agent, that verification is recorded cryptographically and contributes to the agent's trust score. The more independent parties that confirm an agent, the higher its trust score.

It works exactly like a credit score — but for AI agents.

Here is a real example from our live system:

Agent: vsa_537e3974858f

Verifier 1: ver_public        (reputation: 0.3) ✅
Verifier 2: ver_developer_001 (reputation: 0.5) ✅  
Verifier 3: ver_0c4af33d      (reputation: 0.5) ✅

Trust Score:      0.9735
Trust Level:      TRUSTED
Unique Verifiers: 3
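This post does not publish the exact scoring formula, so here is one illustrative way independent confirmations could compound: each verifier's reputation closes a fraction of the remaining gap to 1.0, so the score rises with every new verifier but never reaches certainty. This is a hypothetical toy model, not VeriSigil's actual algorithm:

```python
def trust_score(reputations: list[float], base: float = 0.5) -> float:
    """Combine verifier reputations into a score in [0, 1).

    Start from a neutral base; each verification with reputation r
    closes a fraction r of the remaining gap to 1.0, so independent
    confirmations compound instead of merely averaging.
    """
    score = base
    for r in reputations:
        score += (1.0 - score) * r
    return round(score, 4)


# Reputations from the worked example above.
print(trust_score([0.3, 0.5, 0.5]))  # 0.9125 under this toy model
                                     # (the live system reports 0.9735)
```

The key property either way: the score is monotone in the number of independent verifiers, so impersonating a TRUSTED agent requires compromising many parties, not one database row.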

Each verification event is signed with Ed25519. Every event is stored immutably. The entire history is publicly auditable.

This is what AI agent trust should look like. Not a flag in a database. A cryptographically verifiable network of independent confirmations.
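"Stored immutably" and "publicly auditable" suggest an append-only log in which each event commits to the one before it. The post does not describe the storage layer, so the hash-chain sketch below is purely illustrative of how editing any past verification event becomes detectable:

```python
import hashlib
import json


def event_hash(event: dict, prev_hash: str) -> str:
    payload = json.dumps(event, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()


def append(log: list, event: dict) -> None:
    prev = log[-1]["hash"] if log else "0" * 64
    log.append({"event": event, "hash": event_hash(event, prev)})


def audit(log: list) -> bool:
    """Recompute the chain; any edited event breaks every later hash."""
    prev = "0" * 64
    for entry in log:
        if entry["hash"] != event_hash(entry["event"], prev):
            return False
        prev = entry["hash"]
    return True


log = []
append(log, {"verifier": "ver_public", "agent": "vsa_537e3974858f"})
append(log, {"verifier": "ver_developer_001", "agent": "vsa_537e3974858f"})
assert audit(log)

log[0]["event"]["verifier"] = "ver_attacker"  # tamper with history
assert not audit(log)
```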


Try it yourself — right now

The API is live. No signup required for the demo.

Issue a test passport:

https://verisigil-api-production.up.railway.app/issue-test

Verify an agent:

https://verisigil-api-production.up.railway.app/verify/vsa_537e3974858f

See the live trust graph:

https://www.verisigilai.com/trust_network.html

See the W3C DID document:

https://verisigil-api-production.up.railway.app/did/vsa_537e3974858f

The full SDK is open source on GitHub:

https://github.com/raheem-verisigil/verisigil-ai
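A client consuming the verify endpoint should check more than an HTTP 200. Assuming the response carries the same fields as the passport shown earlier (an assumption; consult the SDK for the real schema), a minimal validity check might look like this:

```python
from datetime import datetime, timezone


def passport_is_valid(resp: dict, min_trust: float = 0.7) -> bool:
    """Check status, compliance, expiry, and trust threshold on a response.

    Field names mirror the example passport in this post; the real
    schema is defined by the VeriSigil SDK.
    """
    if resp.get("status") != "ACTIVE" or not resp.get("compliant"):
        return False
    expires = datetime.fromisoformat(resp["expires_at"].replace("Z", "+00:00"))
    if expires <= datetime.now(timezone.utc):
        return False
    return resp.get("trust_score", 0.0) >= min_trust


sample = {
    "agent_id": "vsa_537e3974858f",
    "status": "ACTIVE",
    "trust_score": 0.9735,
    "compliant": True,
    "expires_at": "2099-05-04T14:15:28Z",  # far-future expiry for the demo
}
assert passport_is_valid(sample)
assert not passport_is_valid({**sample, "status": "REVOKED"})
```

In practice `resp` would be the parsed JSON from a GET against the /verify URL above (for example via `urllib.request.urlopen` plus `json.load`).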

What the trust graph looks like

When you visualise the trust network, you see exactly which independent parties have verified an agent, their reputation scores, and the timestamps of each verification event.

This is not abstract infrastructure. It is visible, inspectable, and auditable by anyone — including EU regulators.

Visit verisigilai.com/trust_network.html to see it live.


Join the trust network

We are looking for developers building AI agents who want to:

  • Become a verifier — get a free API key and appear as a named node in the trust graph
  • Issue passports for your agents and make them verifiable
  • Integrate VeriSigil into your LangChain, AutoGPT, or CrewAI workflows
  • Contribute to the open source SDK

Getting started takes 30 seconds:

👉 Get your free verifier API key


What is coming

We are currently raising a $4.5M pre-seed round to build:

  • Behavioral fingerprinting — ML-powered continuous authentication that detects compromised agents even when their identity is valid
  • ZK compliance engine — zero-knowledge proofs for EU AI Act certification without exposing sensitive data
  • MCP security scanner — real-time code scanning and threat detection for agent actions
  • Federated trust network — decentralised verification across organisations

If you are building in AI security, agent infrastructure, or EU AI Act compliance — I would love to talk.


The bottom line

Every website needs SSL. Every AI agent needs VeriSigil.

The EU AI Act enforcement clock is running. August 2026 is not far away.

We built the infrastructure. We opened the network. Now we need independent developers to join and make it real.

Try it: verisigil-api-production.up.railway.app/issue-test

See the trust graph: verisigilai.com/trust_network.html

GitHub: github.com/raheem-verisigil/verisigil-ai

Website: verisigilai.com


Raheem Larry Babatunde is the Founder & CEO of VeriSigil AI. 7+ years building fraud detection systems that caught $50M+ in financial crime. Now building trust infrastructure for the AI agent era.

Contact: raheem@verisigilai.com

Follow VeriSigil AI on LinkedIn and GitHub for weekly building-in-public updates.
