Your AI Agents Are Talking — But Can You Prove What They Said?
AI agents are no longer “helpers.”
They move money, make decisions, and talk to each other.
If you have at least two agents, you’re already in a multi‑agent system — whether you planned for it or not.
PiQrypt is an open‑source trust layer that ensures every interaction between your agents is cryptographically verifiable.
Even if your agents change, your LLMs evolve, or your infrastructure migrates — PiQrypt stays as the immutable proof layer on top.
The gap nobody talks about
Modern AI stacks are incredibly powerful:
- LLMs: OpenAI, Anthropic, Mistral, DeepSeek, local models.
- Agent frameworks: LangChain, CrewAI, AutoGen, custom agents.
- Observability tools: logs, traces, dashboards.
They all share a blind spot:
“They can show you what happened — but they can’t prove it.”
Logs can be modified.
Traces aren’t cryptographic.
And when agents interact, there’s no shared, verifiable record of that interaction.
When something goes wrong:
- Which agent made the decision?
- In what order?
- Based on which interaction?
- Can you prove it to someone outside your system?
Most systems today can’t.
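The difference matters in practice: a plain log can be edited in place and nobody notices, while a hash-chained log makes any edit detectable. A minimal standard-library sketch of that idea (not PiQrypt's implementation):

```python
import hashlib
import json

def append_event(chain, event):
    """Append an event, linking it to the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    chain.append({"event": event, "prev": prev, "hash": digest})

def verify_chain(chain):
    """Recompute every link; any silent edit breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append_event(log, {"agent": "planner", "action": "delegate_task"})
append_event(log, {"agent": "executor", "action": "execute_order"})
print(verify_chain(log))  # True: the chain is intact

log[0]["event"]["action"] = "approve_everything"  # a silent "log edit"
print(verify_chain(log))  # False: tampering is now detectable
```

An ordinary log line would accept that edit silently; here, one recomputation exposes it.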
Agents don’t just need to connect — they need to agree
When two agents interact, there are actually two problems:
- How do they communicate?
- How do they prove they communicated?
Today’s A2A‑style protocols (Agent2Agent, AI‑to‑agent handshakes, and custom flows) mainly solve the first.
PiQrypt solves the second — with cryptographic proof around agent‑to‑agent interactions.
“Agents need a handshake — not just a connection.”
Enter PiQrypt: co‑signed interactions with cryptographic memory
PiQrypt is an open‑source trust layer that sits between your agents and your memory / logs. It’s LLM‑agnostic, framework‑agnostic, and infrastructure‑agnostic:
- Works with any LLM (OpenAI, Anthropic, Mistral, DeepSeek, local).
- Integrates with any framework (LangChain, CrewAI, AutoGen, or your own).
- Independent of your storage, logs, MLflow, or cloud provider.
Every interaction becomes:
- Co‑signed by the participating agents,
- Anchored in a hash‑chained audit trail,
- Part of a verifiable, multi‑agent session.
This is powered by AISS (Agent Identity and Signature Standard) and built on top of PCP (Proof of Continuity Protocol) — an open protocol specification for agent‑to‑agent collaboration, with PiQrypt as the reference implementation.
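Co-signing itself is standard public-key machinery. A rough sketch with Ed25519 via the `cryptography` package (the function names and record layout here are illustrative, not the AISS API):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

record = b'{"from":"agent_a","to":"agent_b","intent":"data_sharing"}'

# Each agent holds its own Ed25519 keypair
a_priv = Ed25519PrivateKey.generate()
b_priv = Ed25519PrivateKey.generate()

# Both agents sign the same record: that is the co-signature
signatures = [
    (a_priv.sign(record), a_priv.public_key()),
    (b_priv.sign(record), b_priv.public_key()),
]

def co_signed(data, sigs):
    """A record counts only if every listed agent's signature verifies."""
    try:
        for sig, pub in sigs:
            pub.verify(sig, data)  # raises InvalidSignature on mismatch
        return True
    except InvalidSignature:
        return False

print(co_signed(record, signatures))     # True
print(co_signed(b"edited", signatures))  # False
```

The key property: neither agent can later deny the interaction, and neither can alter the record without invalidating both signatures.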
How the A2A handshake works conceptually
PiQrypt’s A2A handshake is a short, peer‑to‑peer protocol used to:
- Discover other agents (via registry or direct),
- Authenticate both agents,
- Collaborate with cryptographic proof,
- Audit all interactions, stored in both agents’ audit trails.
Here’s how it looks at the protocol level:
- Each agent generates an Ed25519 keypair.
- Agents exchange public keys (via your agent bus, API, WebSocket, or A2A‑style transport).
- Every agent pair performs a co‑signed handshake:
  - Both sign the fact that "Agent X and Agent Y have agreed to talk."
  - The handshake is appended to each agent's hash-chained memory.
```python
from aiss.a2a import initiate_handshake, accept_handshake, verify_handshake
from aiss.crypto import ed25519
from aiss.identity import derive_agent_id

# Agent A
priv_a, pub_a = ed25519.generate_keypair()
agent_a = derive_agent_id(pub_a)

# Agent B
priv_b, pub_b = ed25519.generate_keypair()
agent_b = derive_agent_id(pub_b)

# 1. Agent A initiates handshake
handshake = initiate_handshake(
    priv_a,
    agent_a,
    agent_b,
    payload={
        "intent": "data_sharing",
        "scope": "market_analysis",
        "terms": "50/50 split"
    },
    expires_in=3600  # 1h timeout, anti-replay
)

# 2. Send to Agent B
# ...

# 3. Agent B accepts
response = accept_handshake(
    priv_b,
    agent_b,
    handshake,
    counter_payload={
        "agreed": True,
        "conditions": "Data encrypted in transit"
    }
)

# 4. Verify (both agents)
is_valid = verify_handshake(response, {
    agent_a: pub_a,
    agent_b: pub_b
})
print(f"Handshake valid: {is_valid}")
```
Thanks to `piqrypt-session-multi-ai-agents`, this is packaged as an `AgentSession` that creates a shared session across all agents before a single action is taken.
From handshake to verifiable sessions
Here’s a minimal example with three agents: planner (LangChain), executor (AutoGen), and reviewer (CrewAI). All frameworks, one session.
```python
from piqrypt.session import AgentSession
import piqrypt as aiss

# Generate keypairs
planner_key, planner_pub = aiss.generate_keypair()
executor_key, executor_pub = aiss.generate_keypair()
reviewer_key, reviewer_pub = aiss.generate_keypair()

# Define agents
session = AgentSession(agents=[
    {"name": "planner", "agent_id": aiss.derive_agent_id(planner_pub), "private_key": planner_key, "public_key": planner_pub},
    {"name": "executor", "agent_id": aiss.derive_agent_id(executor_pub), "private_key": executor_key, "public_key": executor_pub},
    {"name": "reviewer", "agent_id": aiss.derive_agent_id(reviewer_pub), "private_key": reviewer_key, "public_key": reviewer_pub},
])

# 1. Start: all pairwise A2A handshakes are recorded
session.start()

# 2. Stamp events in the session
session.stamp("planner", "task_delegation", {"task": "rebalance_portfolio"}, peer="executor")
session.stamp("executor", "order_executed", {"order_id": 42, "price": 182.5, "quantity": 100}, peer="reviewer")
session.stamp("reviewer", "review_approved", {"approved": True})

# 3. Later: export and verify offline
session.export("trading-session-audit.json")

# That audit is:
# - signed by each agent,
# - co-signed for every interaction,
# - readable and verifiable without your production stack.
```
From this moment on, every `session.stamp(agent_name, event_type, payload, peer=...)` call:
- is signed by the acting agent,
- is anchored in the shared, hash-chained session,
- is reflected in every agent's memory, with a shared `interaction_hash`.
“Same interaction. Two memories. One verifiable truth.”
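One way to get "one verifiable truth" is for both agents to hash a canonically serialized view of the same interaction. A minimal sketch; the field names and serialization scheme here are assumptions, not PiQrypt's actual format:

```python
import hashlib
import json

def interaction_hash(sender, receiver, event_type, payload):
    # Canonical serialization: sorted keys and fixed separators,
    # so both agents hash byte-identical input
    canonical = json.dumps(
        {"from": sender, "to": receiver, "type": event_type, "payload": payload},
        sort_keys=True,
        separators=(",", ":"),
    )
    return hashlib.sha256(canonical.encode()).hexdigest()

# Planner and executor each compute the hash from their own copy of the event
h_planner = interaction_hash("planner", "executor", "task_delegation",
                             {"task": "rebalance_portfolio"})
h_executor = interaction_hash("planner", "executor", "task_delegation",
                              {"task": "rebalance_portfolio"})
print(h_planner == h_executor)  # True: two memories, one hash
```

If either side's copy of the event diverges by a single byte, the hashes no longer match, which is exactly what makes the shared record verifiable.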
Trust scores: how much you can rely on a session
PiQrypt doesn’t stop at proof. For every session, it computes:
- VRS (Vulnerability Risk Score): a risk metric based on agent behavior, anomalies, and policy flags.
- Trust score (0–1): how “safe” the agent’s history and current session are.
- TrustGate decision: `ALLOW`, `REQUIRE_HUMAN`, or `BLOCK`, enforced at runtime with signed proof of every decision.
Example:
```python
result = piqrypt.verify(
    signature=agent_event.signature,
    chain=agent_event.chain,
    context={"agent_id": "pq1_planner_a3f8", "action": "portfolio_rebalance"}
)
print(f"trust_score: {result.trust_score:.4f}")  # → 0.9987
print(f"decision: {result.decision}")            # → ALLOW
```
You can block risky actions, require human approval, or allow automatically, all based on verifiable data.
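At its core, a TrustGate-style policy reduces to thresholding the trust score. A hypothetical sketch; the thresholds and function name are invented for illustration:

```python
def trust_gate(trust_score, allow_at=0.95, block_below=0.50):
    """Map a 0-1 trust score to a gate decision (illustrative thresholds)."""
    if trust_score >= allow_at:
        return "ALLOW"
    if trust_score < block_below:
        return "BLOCK"
    return "REQUIRE_HUMAN"

print(trust_gate(0.9987))  # ALLOW: clean history, no anomalies
print(trust_gate(0.72))    # REQUIRE_HUMAN: degraded but not dangerous
print(trust_gate(0.31))    # BLOCK: too risky to act automatically
```

In production you would likely want per-action thresholds (a portfolio rebalance deserves a stricter gate than a status query), with each decision itself signed and appended to the audit trail.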
Stack‑agnostic by design
PiQrypt is built to be truly agnostic:
- LLMs: OpenAI, Anthropic, Mistral, DeepSeek, local models.
- Agent frameworks: LangChain, CrewAI, AutoGen, or any custom stack.
- Cloud / infra: Independent of your storage, logs, MLflow, Application Insights, or LangSmith.
“Your agents can change. Your infrastructure can change. Your trust layer should not.”
PiQrypt’s A2A handshake and PCP‑backed audit trail work over any transport (API, message bus, A2A style, or custom protocol). The registry can be centralized, distributed, or even DHT‑based in the future.
Why this matters in production
If you’re building:
- Multi‑agent workflows (planner → executor → reviewer → auditor).
- Agents that communicate across boundaries (company ↔ partner, SaaS ↔ local).
- Compliance‑sensitive applications (finance, healthcare, legal, etc.).
…then an A2A handshake with cryptographic trust scoring is no longer optional.
With PiQrypt, you get:
- Cryptographic identity for every agent.
- Co‑signed, hash‑chained handshakes for every session.
- Session‑anchored events that are verifiable offline, across frameworks.
- Numeric trust scores integrated into your governance and policy layer.
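Offline verifiability means the exported JSON carries everything needed to re-check the chain, with no network and no production stack. A toy sketch of such a check; the export schema here is an assumption, not PiQrypt's actual format:

```python
import hashlib
import json

GENESIS = "0" * 64

def event_hash(prev, event):
    body = json.dumps(event, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256((prev + body).encode()).hexdigest()

# Build a toy export in the spirit of trading-session-audit.json
events = [
    {"agent": "planner", "type": "task_delegation"},
    {"agent": "executor", "type": "order_executed"},
    {"agent": "reviewer", "type": "review_approved"},
]
export, prev = [], GENESIS
for ev in events:
    h = event_hash(prev, ev)
    export.append({"event": ev, "prev": prev, "hash": h})
    prev = h
audit_json = json.dumps(export)  # what an export file might contain

def verify_export(raw):
    """Re-walk the chain from the file alone; any edit breaks a link."""
    prev = GENESIS
    for entry in json.loads(raw):
        if entry["prev"] != prev or entry["hash"] != event_hash(prev, entry["event"]):
            return False
        prev = entry["hash"]
    return True

print(verify_export(audit_json))  # True
```

An auditor who receives only the file can run this check; a real verifier would additionally check each agent's signature against its public key.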
Getting started
- Install PiQrypt and the A2A‑ready session module:
```shell
pip install piqrypt piqrypt-session-multi-ai-agents
```
- Follow the A2A-focused guides:
  - `docs/A2A_HANDSHAKE_GUIDE.md`
  - `docs/A2A_SESSION_GUIDE.md`
  - `QUICK-START.md` (A2A handshake section).
- Run the 3-agent demo (`planner` → `executor` → `reviewer`) and experiment with `verify()` + `trust_score` on your own event chains.
From observability to verifiability
We’ve spent years building observability for software systems.
Now, AI needs the same — but stronger:
“Not just visibility.
Proof.”
Your agents already talk.
The real question is:
“Can you prove what happened between them?”
Right now, most systems can’t.
PiQrypt does.
PiQrypt is the A2A handshake your agents never knew they needed.
Nothing ever disappears, and nothing appears out of nowhere — only what’s cryptographically agreed upon.
You can check out the full reference implementation on [GitHub/PiQrypt](https://github.com/PiQrypt/piqrypt) and try your first A2A handshake today.