## The problem nobody talks about
AI agents can act. They can call APIs, write files, execute code, make decisions.
But can they prove what they did?
Logs can be edited. Timestamps can be faked. Events can be reordered silently.
There's no equivalent of Git, TLS, or OAuth for agent behavior over time.
## What cryptographic memory looks like
Every action becomes a signed, hash-chained event:
- **Who acted** — deterministic Ed25519 agent identity
- **What was decided** — signed payload
- **When** — tamper-evident sequence
- **Nothing missing** — fork detection catches silent deletions
Verifiable offline. No server. No central authority.
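To make the idea concrete, here is a minimal stdlib-only sketch of a hash-chained, signed event log. The field names are hypothetical (not the piqrypt wire format), and HMAC-SHA256 stands in for Ed25519 signing, which would require a crypto library:

```python
import hashlib
import hmac
import json

SECRET = b"demo-key"  # stand-in for an Ed25519 private key


def append_event(chain, payload):
    """Append a signed event that commits to the previous event's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64  # genesis link
    body = {"seq": len(chain), "prev": prev_hash, "payload": payload}
    raw = json.dumps(body, sort_keys=True).encode()  # deterministic serialization
    event = {
        **body,
        "hash": hashlib.sha256(raw).hexdigest(),
        "sig": hmac.new(SECRET, raw, hashlib.sha256).hexdigest(),
    }
    chain.append(event)
    return event


chain = []
append_event(chain, {"action": "call_api", "target": "weather"})
append_event(chain, {"action": "write_file", "path": "out.txt"})
```

Because each event embeds its predecessor's hash, deleting or reordering any past event breaks every later `prev` link — which is what makes silent deletion detectable.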
## One line with LangChain
```python
from langchain_openai import ChatOpenAI
from piqrypt_langchain import PiQryptCallbackHandler

handler = PiQryptCallbackHandler(agent_name="my_agent")
llm = ChatOpenAI(model="gpt-4o", callbacks=[handler])

# Every action is now signed & hash-chained
handler.export_audit("audit.json")
```
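Offline verification of an exported audit log is, conceptually, a linear walk over the chain: recompute each event's hash and signature and check the `prev` links. This is a self-contained sketch of that idea, not piqrypt's actual verifier — the event shape is hypothetical and HMAC-SHA256 stands in for Ed25519:

```python
import hashlib
import hmac
import json

SECRET = b"demo-key"  # stand-in for the agent's Ed25519 key material


def make_event(seq, prev, payload):
    """Build one signed, hash-chained event (hypothetical shape)."""
    body = {"seq": seq, "prev": prev, "payload": payload}
    raw = json.dumps(body, sort_keys=True).encode()
    body["hash"] = hashlib.sha256(raw).hexdigest()
    body["sig"] = hmac.new(SECRET, raw, hashlib.sha256).hexdigest()
    return body


def verify(chain):
    """Walk the chain: check sequence, prev links, hashes, signatures."""
    prev = "0" * 64
    for i, ev in enumerate(chain):
        body = {"seq": ev["seq"], "prev": ev["prev"], "payload": ev["payload"]}
        raw = json.dumps(body, sort_keys=True).encode()
        ok = (
            ev["seq"] == i
            and ev["prev"] == prev
            and ev["hash"] == hashlib.sha256(raw).hexdigest()
            and hmac.compare_digest(
                ev["sig"], hmac.new(SECRET, raw, hashlib.sha256).hexdigest()
            )
        )
        if not ok:
            return False
        prev = ev["hash"]
    return True


chain = [make_event(0, "0" * 64, {"action": "plan"})]
chain.append(make_event(1, chain[0]["hash"], {"action": "act"}))
assert verify(chain)

chain[0]["payload"]["action"] = "tampered"  # edit a past event
assert not verify(chain)  # the chain no longer verifies
```

No server is consulted anywhere in the walk, which is the point: anyone holding the log and the public key can check it.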
## The standard behind it
This is built on AISS — Agent Identity and Signature Standard — an open protocol.
The RFC is public. The spec is implementable by anyone.
TCP/IP enabled communication. TLS secured it. OAuth made it delegable.
AISS is that primitive for autonomous agent trust.
## Try it
```bash
pip install piqrypt[langchain]
```
📦 PyPI : piqrypt
🔗 LangChain Hub : hub.langchain.com/piqrypt/piqrypt-audited-agent
📖 RFC : github.com/PiQrypt/piqrypt/blob/main/docs/RFC_AISS_v2.0_narrative.md
Happy to discuss the threat model, the crypto choices, or the RFC in the comments. 👇