TL;DR: For the past 3 years, I’ve audited smart contracts. Recently, I’ve spent months analyzing drained Web3 AI agents. 87% of exploits don’t come from bad prompting; they happen because we let probabilistic LLMs touch deterministic EVMs directly. Today, I am open-sourcing Lirix v1.0.0—a zero-key, deterministic security gateway that strictly sandboxes agent intents before a single wei of gas is spent.
#Web3 #Blockchain #Security #Python #Open-Source #Ethereum
If you build Web3 AI agents, we need to have a very uncomfortable conversation about execution.
Last month, I watched a protocol's $500k treasury get wiped out in exactly 12 seconds. It wasn't a smart contract reentrancy bug. It was a single, elegantly crafted prompt injection that bypassed a multi-sig via a malicious tool call.
In another post-mortem I audited, an LLM simply hallucinated a non-checksummed blackhole address and sent an entire swap into the void.
The harsh reality of the current ecosystem is this: If your LLM has direct access to sendTransaction, you are running blind in a minefield. We are treating probabilistic reasoning engines as if they are deterministic execution environments.
It is time to build a physical boundary.
🚀 Enter Lirix v1.0.0
Today, I’m open-sourcing Lirix v1.0.0.
It is a deterministic, zero-key security gateway built specifically for Web3 AI agents. It acts as an uncompromising gatekeeper in your execution pipeline, silently killing rogue transactions before they are ever signed.
Only 3 lines of code stand between hallucination and safe execution:
from lirix import Lirix
guardian = Lirix(rpc_urls=["https://eth-mainnet..."])
safe_payload = guardian.validate_and_simulate(raw_llm_json, intent="swap")
Lirix introduces almost zero friction to your codebase, but under the hood, every transaction must survive a strict 5-layer defense gauntlet.
The 5-Layer Defense Architecture
🛡️ L1: Intent Auditing. We match every LLM output against a strict developer-defined whitelist. If an indirect prompt injection tries to pivot a legitimate swap into a rogue transfer, Lirix kills it in memory before it can propagate.
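To make the idea concrete, here is a minimal stdlib-only sketch of intent whitelisting. The names (`INTENT_WHITELIST`, `enforce_intent`) and the payload shape are my illustrative assumptions, not Lirix's actual API:

```python
# Illustrative sketch only -- names and payload shape are assumptions,
# not the real Lirix API.
INTENT_WHITELIST = {
    # intent -> contract methods the agent is allowed to emit for it
    "swap": {"exactInputSingle", "exactOutputSingle"},
    "approve": {"approve"},
}

def enforce_intent(llm_output: dict, intent: str) -> dict:
    """Reject any payload whose method falls outside the declared intent."""
    allowed = INTENT_WHITELIST.get(intent)
    if allowed is None:
        raise ValueError(f"unknown intent: {intent!r}")
    method = llm_output.get("method")
    if method not in allowed:
        # A prompt injection pivoting a swap into a raw transfer dies here,
        # in memory, before anything reaches a signer.
        raise PermissionError(f"method {method!r} not allowed for intent {intent!r}")
    return llm_output
```

With this gate, a payload like `{"method": "transfer"}` declared under the `"swap"` intent raises immediately instead of propagating downstream.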
🛡️ L2: Schema Boundaries. Using Pydantic v2 strict typing, Lirix enforces EIP-55 checksums and hard-blocks negative or NaN values. Mathematical hallucinations and logic-defying outputs die here.
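A rough sketch of the kind of checks this layer performs. Note the assumptions: Lirix uses Pydantic v2 with full EIP-55 keccak-256 checksum validation; this stdlib-only stand-in can only verify the hex shape of an address plus numeric sanity, since keccak-256 is not in the standard library:

```python
import math
import re

# Shape check only -- real EIP-55 validation requires a keccak-256
# checksum over the lowercase address, which needs a crypto dependency.
ADDRESS_RE = re.compile(r"^0x[0-9a-fA-F]{40}$")

def validate_fields(payload: dict) -> dict:
    """Stdlib-only sketch of schema boundaries: malformed addresses,
    negative amounts, and NaN/inf values are all hard-blocked."""
    addr = payload.get("to", "")
    if not isinstance(addr, str) or not ADDRESS_RE.fullmatch(addr):
        raise ValueError(f"malformed address: {addr!r}")
    amount = payload.get("amount")
    if isinstance(amount, bool) or not isinstance(amount, (int, float)):
        raise ValueError("amount must be numeric")
    if isinstance(amount, float) and (math.isnan(amount) or math.isinf(amount)):
        raise ValueError("amount is NaN or infinite")
    if amount < 0:
        raise ValueError("amount is negative")
    return payload
```

In the real library a Pydantic v2 model with `strict=True` would replace these hand-rolled checks and refuse type coercion entirely.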
🛡️ L3: Deep-ABI Decoding. Attackers love hiding malicious recipients deep inside nested Uniswap V3 Multicall calldata. Lirix recursively unpacks every single layer of the payload to expose and shut down supply-chain poisoning.
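The recursion itself is simple once the calldata has been ABI-decoded. This hedged sketch assumes the decoded payload is already a nested Python structure (dicts, lists, tuples) and shows only the walk that surfaces every address-shaped value, however deeply it is buried:

```python
import re

ADDRESS_RE = re.compile(r"^0x[0-9a-fA-F]{40}$")

def collect_addresses(decoded) -> list:
    """Recursively walk ABI-decoded calldata and return every
    address-shaped string, so no recipient can hide in a nested
    multicall layer. Sketch only; real decoding works on raw calldata."""
    found = []
    if isinstance(decoded, str):
        if ADDRESS_RE.fullmatch(decoded):
            found.append(decoded)
    elif isinstance(decoded, dict):
        for value in decoded.values():
            found.extend(collect_addresses(value))
    elif isinstance(decoded, (list, tuple)):
        for item in decoded:
            found.extend(collect_addresses(item))
    return found
```

Every address the walk surfaces can then be checked against the same whitelist as the top-level recipient, closing the nested-multicall loophole.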
🛡️ L4: Stateful RPC Arbitration. If your RPC node lags and returns stale data, your agent is an easy target for MEV sandwich bots. Lirix runs multi-node state diffing and trips a hard circuit breaker the moment it detects lag. Fail-closed by design.
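A network-free sketch of the fail-closed arbitration logic. The real layer diffs richer state (nonces, balances) across nodes; this simplification takes pre-fetched block heights as input so the decision rule is testable in isolation, and `max_lag` is an illustrative parameter, not a documented Lirix setting:

```python
def arbitrate_heads(block_heights: list, max_lag: int = 2) -> int:
    """Fail-closed arbitration sketch: given the latest block height
    reported by each RPC node, trip the circuit breaker if any node
    lags the best head by more than max_lag blocks."""
    if not block_heights:
        # No responses at all: never fail open.
        raise RuntimeError("no RPC responses: failing closed")
    best = max(block_heights)
    if best - min(block_heights) > max_lag:
        raise RuntimeError("stale RPC state detected: circuit breaker tripped")
    return best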
🛡️ L5: Zero-Gas Sandbox. The ultimate physical check. We embedded Anvil with state overrides. Lirix performs a local “void detonation” (dry run) and catches EVM reverts before a single signature is generated or any gas is burned.
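The contract of this layer can be sketched without a live fork. Here the embedded Anvil simulation is stubbed out as an injected `simulate` callable (an assumption for illustration, not Lirix's interface); the point is the fail-closed shape, where any revert or transport error blocks signing:

```python
from typing import Callable

class SimulationRevert(Exception):
    """Raised when the dry run fails; nothing gets signed."""

def dry_run_gate(payload: dict, simulate: Callable[[dict], dict]) -> dict:
    """Fail-closed sandbox sketch: `simulate` stands in for a local
    fork's dry-run call. Any revert, or any error reaching the fork,
    blocks the payload before a signature exists."""
    try:
        result = simulate(payload)
    except Exception as exc:
        raise SimulationRevert(f"dry run failed, refusing to sign: {exc}")
    if result.get("status") != "success":
        raise SimulationRevert(f"simulated revert: {result.get('reason')}")
    return payload
```

Because the gate sits before the signer, a reverting transaction costs zero gas and leaves no on-chain trace.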
⚖️ Engineering Trade-offs (No Fluff, Only Reality)
In security tooling, architectural purity matters more than feature bloat. Here are the hard trade-offs we made for v1.0.0:
Compile-Time Paranoia: We enforced 100% strict Mypy across the entire codebase. It added three extra weeks to our dev cycle, but it catches whole classes of type errors before they can ever reach runtime. The overhead is absolutely worth it.
Execution Isolation: Air-gapped by default. We stripped every single cloud dependency. Lirix is Zero-Key and Zero-Telemetry. It runs entirely inside your VPC, acts only as a payload sanitizer, and never touches your private keys.
🤝 Let's Build Deterministic AI
If you are building AI agents that handle real TVL, stop letting them run naked on-chain.
I am an auditor by trade. I didn't build this to ride an AI hype cycle; I built this because I was tired of writing post-mortems for preventable exploits.
💻 Full Code & Protocol: https://github.com/lokii-D/lirix
🏛 Deep-dive Architecture: @lokii
🤝 Dev Logs & Discussions: https://dev.to/lokii_ding | Medium | https://x.com/lokii_AuditAI
PRs, brutal code critiques, and hard security reviews are more than welcome. Let’s build the autonomous future, safely.
Author: lokii, Web3 × AI Agents Security Auditor. Open for security audits & B2D product collaborations. DM me on X/Twitter to connect.