AI Agents Can Move Money But Can't Produce Receipts
In March 2026, security researchers disclosed ZombieClaw — a botnet recruiting compromised AI agent instances. Over 30,000 instances were found exposed with default configurations. Reported losses reached $16 million in cryptocurrency. Hundreds of malicious skills were distributed through ClawHub (341 initially identified by Koi, with more found by VirusTotal).
Kaspersky found 512 vulnerabilities, eight of them critical. Bitdefender, VirusTotal, Sophos, and Oasis Security all published analyses.
But here's what nobody is talking about: after the attack, there is no cryptographic proof of what any compromised agent actually did.
No signed records. No tamper-evident logs. No way to distinguish "the agent executed transfer_eth() because the user asked" from "the agent executed transfer_eth() because a prompt injection rewrote its instructions."
The text logs exist, sure. But text logs can be edited, deleted, or fabricated. When $16M is missing, "trust the logs" is not a forensic standard.
The Forensics Problem
When a traditional server gets compromised, incident response teams have tools: immutable audit logs, signed system events, chain-of-custody protocols. When an AI agent gets compromised, you have:
- Conversation history — stored by the agent itself. The compromised agent can edit its own history.
- Tool call logs — if they exist at all, they're unsigned text files. An attacker who controls the agent controls the logs.
- "The agent did it" — not enough for insurance claims, compliance reports, or criminal prosecution.
ZombieClaw exploited this gap perfectly. The attackers didn't just steal money — they operated in an environment that leaves no verifiable evidence of what happened.
Why This Matters Beyond ZombieClaw
The AI agent security conversation focuses on prevention: sandboxing, permission systems, policy engines, skill auditing. These are important. But prevention has a 100% failure rate over time. Every system eventually gets breached.
What happens after?
Without cryptographic proof of agent actions, you can't answer:
- Which agent initiated the transaction?
- Were the parameters what the user actually approved?
- When exactly did the compromise begin?
- Was this agent's audit log tampered with after the fact?
SOC 2, HIPAA, and GDPR all require audit trails for actions on sensitive data. "The AI agent did it and we have no verifiable records" creates real gaps in compliance posture.
What a Signed Audit Trail Would Have Changed
If every tool call had been cryptographically signed at execution time, the ZombieClaw investigation would look different:
Before compromise: Signed receipts establish a baseline. Each agent has an Ed25519 identity. Every tool call is signed with the agent's key, timestamped, and chained into a tamper-evident log. The hash chain means you can't delete or reorder entries without breaking the chain.
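A minimal sketch of what such a receipt writer could look like, using the pyca/cryptography library for Ed25519. The `ReceiptChain` class and the JSON payload format are illustrative assumptions, not the Signet API:

```python
import hashlib
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class ReceiptChain:
    """Tamper-evident log: every entry is Ed25519-signed and hash-linked."""

    def __init__(self):
        self.key = Ed25519PrivateKey.generate()   # the agent's signing identity
        self.public_key = self.key.public_key()   # shared with verifiers
        self.entries = []
        self.tip = b"\x00" * 32                   # genesis hash

    def record(self, tool: str, params: dict) -> dict:
        # The payload commits to the previous tip hash, so entries can't be
        # deleted or reordered without breaking every later link.
        payload = json.dumps(
            {"tool": tool, "params": params, "ts": time.time(),
             "prev": self.tip.hex()},
            sort_keys=True,
        ).encode()
        entry = {"payload": payload, "sig": self.key.sign(payload)}
        self.entries.append(entry)
        self.tip = hashlib.sha256(self.tip + payload).digest()
        return entry

chain = ReceiptChain()
receipt = chain.record("transfer_eth", {"to": "0xabc0", "amount": "50"})
# Anyone holding the public key can verify offline; raises InvalidSignature if forged.
chain.public_key.verify(receipt["sig"], receipt["payload"])
```

Signing and chaining are deliberately combined: a signature alone proves who wrote an entry, while the embedded `prev` hash proves where it sits in the sequence.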
During compromise: The attacker takes control of the agent. If the attacker uses the agent's existing key, every malicious action is still signed — you have a record of what was executed and when. If the attacker generates a new key, the signing identity changes — the anomaly is visible in the chain.
After compromise: Forensics teams can verify the entire chain offline. They can see which actions were signed by the legitimate agent key vs. an unknown key. They can narrow down when the signing identity changed. They can verify that the log hasn't been modified after the fact.
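The offline verification step can be sketched as follows, assuming a receipt format where each JSON payload embeds the previous tip hash under a `prev` key (an illustrative format, not any specific tool's schema). Broken links and entries signed by an unknown key are reported separately, which is exactly the distinction a forensics team needs:

```python
import hashlib
import json
from cryptography.exceptions import InvalidSignature

def verify_chain(entries, trusted_pubkey, genesis=b"\x00" * 32):
    """Replay the chain offline.

    entries: list of {"payload": bytes, "sig": bytes}, where each JSON
    payload embeds the previous tip hash under "prev" (assumed format).
    Returns (links_intact, indices_not_signed_by_trusted_key).
    """
    tip = genesis
    unknown = []
    for i, entry in enumerate(entries):
        payload, sig = entry["payload"], entry["sig"]
        # Link check: the payload must commit to the current tip.
        if json.loads(payload)["prev"] != tip.hex():
            return False, unknown
        # Signer check: was this entry signed by the legitimate agent key?
        try:
            trusted_pubkey.verify(sig, payload)
        except InvalidSignature:
            unknown.append(i)     # signed by some other key, or forged
        tip = hashlib.sha256(tip + payload).digest()
    return True, unknown
```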
None of this is possible with unsigned text logs.
What This Doesn't Solve
Signing is not prevention. A signed receipt that says "agent transferred 50 ETH to attacker's wallet" doesn't stop the transfer — it proves it happened.
A signed audit trail doesn't solve:
- Malicious skills — A signed record of a malicious skill executing is evidence, not a defense.
- Prompt injection — The agent was tricked, not unauthorized. The signature is valid because the agent really did execute the call.
- Key compromise — If the attacker steals the signing key, they can sign anything. Bilateral co-signing (where the server independently signs the receipt) mitigates this by requiring two keys from two trust domains.
- User intent — A signed receipt proves the agent executed the call, not that the user wanted it.
- Full host compromise — If the attacker owns the entire machine, they control the key and the log. Off-host anchoring (publishing chain hashes externally) is the mitigation, but it's not free.
Signing is the forensics layer. You still need sandboxing, permission systems, and skill auditing for prevention. But when prevention fails — and it will — you need evidence.
The Gap in Current Tools
As of April 2026, most major AI agent frameworks have no cryptographic signing on tool call records:
| Category | Examples | Typical audit mechanism | Signed? |
|---|---|---|---|
| General-purpose agents | OpenClaw, Hermes Agent | Conversation logs, SQLite | No |
| Agent OS | OpenFang | SHA-256 hash chain | Hash only, no signatures |
| Orchestration frameworks | LangChain, CrewAI | Callbacks, event logs | No |
OpenFang is the closest — they have a hash chain, which detects casual tampering. But without signatures, an attacker with database access can rewrite the entire chain and it still validates.
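The weakness of a hash-only chain is easy to demonstrate. The sketch below is illustrative and does not reflect OpenFang's actual schema: an attacker with database access rewrites history and rebuilds the chain with the same code the defender runs, and validation passes because no secret key is involved:

```python
import hashlib

def build_chain(events, genesis=b"\x00" * 32):
    """Hash-only audit chain: each record stores sha256(prev_hash + event)."""
    tip, chain = genesis, []
    for ev in events:
        tip = hashlib.sha256(tip + ev).digest()
        chain.append({"event": ev, "hash": tip})
    return chain

def validate(chain, genesis=b"\x00" * 32):
    """Recompute every link; detects casual edits, but needs no key."""
    tip = genesis
    for rec in chain:
        tip = hashlib.sha256(tip + rec["event"]).digest()
        if rec["hash"] != tip:
            return False
    return True

log = build_chain([b"transfer 50 ETH to 0xUSER", b"read config"])
assert validate(log)

# Attacker with database access: rewrite history, recompute every hash.
log = build_chain([b"transfer 50 ETH to 0xATTACKER", b"read config"])
assert validate(log)   # the rewritten chain still validates
```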
What You Can Do Today
If you're running AI agents in production:
Sign every tool call. Give each agent an Ed25519 identity and sign every action. Signet does this as a library: `pip install signet-auth` or `npm install @signet-auth/core`.
Chain signed receipts. Individual signatures are good. A hash-chained log of signed receipts is better — deletion and reordering become detectable.
Use bilateral signing when possible. Agent signs the request, server signs the response. Now rewriting the chain requires compromising both keys on different machines.
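A bilateral receipt could be sketched like this. The receipt layout and the choice to have the server sign over request plus response are assumptions for illustration, not a documented protocol:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

agent_key = Ed25519PrivateKey.generate()    # lives on the agent host
server_key = Ed25519PrivateKey.generate()   # lives in the server's trust domain

def make_receipt(request: bytes, response: bytes) -> dict:
    """Agent signs the request; the server co-signs over request + response,
    binding the response to the exact request that produced it."""
    return {
        "request": request,
        "response": response,
        "agent_sig": agent_key.sign(request),
        "server_sig": server_key.sign(request + response),
    }

def verify_receipt(r: dict, agent_pub, server_pub) -> bool:
    # Both trust domains must check out; forging a receipt now requires
    # stealing two keys on two different machines.
    try:
        agent_pub.verify(r["agent_sig"], r["request"])
        server_pub.verify(r["server_sig"], r["request"] + r["response"])
        return True
    except InvalidSignature:
        return False

r = make_receipt(b"transfer_eth(to=0xabc0, amount=50)", b"tx=0xf00d")
assert verify_receipt(r, agent_key.public_key(), server_key.public_key())
```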
Export chain hashes off-host. Periodically publish the tip hash to an external system (git commit, append-only cloud storage, even a tweet). This anchors the chain against full-host compromise.
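One way to sketch anchoring, using a local append-only file as a stand-in for the external system (in practice a git commit, object-locked cloud storage, or a transparency log). All names here are hypothetical:

```python
import json
import time
from pathlib import Path

def anchor_tip(tip_hash: bytes, anchor_file: Path) -> None:
    """Append the current chain tip to an external append-only store.
    (A local file stands in for git / object-locked storage here.)"""
    with anchor_file.open("a") as f:
        f.write(json.dumps({"ts": time.time(), "tip": tip_hash.hex()}) + "\n")

def check_against_anchors(local_tips: list, anchor_file: Path) -> bool:
    """A full-host attacker can rewrite the local chain, but every
    historical tip must still match what was anchored off-host."""
    anchored = [json.loads(line)["tip"]
                for line in anchor_file.read_text().splitlines()]
    local = [t.hex() for t in local_tips]
    return local[: len(anchored)] == anchored
```

The check is only as strong as the anchoring cadence: an attacker can still rewrite anything recorded since the last published tip.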
Treat audit integrity as a security requirement, not a feature. If your agent can move money, it needs signed receipts. Period.
The Uncomfortable Truth
AI agents can move money, execute code, and access credentials. Most still can't produce a receipt.
The next ZombieClaw is coming. The question is whether you'll have evidence when it happens.
Signet adds Ed25519 signing and tamper-evident audit logs to AI agent tool calls. Open source, Apache-2.0 + MIT.