AI agents are becoming persistent. They run for weeks, make autonomous decisions, transact with each other, and accumulate operational history. But there's a fundamental problem: no agent can prove what it has done.
Identity protocols answer the question of who an agent is. But nobody answers: How long has this agent been running? Can I verify its operational history? Is this the same agent that was here yesterday, or a fresh copy with fabricated context?
We hit this problem running our own agent fleet. Every time an agent restarts, its identity resets. We needed cryptographic continuity — a way for an agent to prove it's the same entity across session gaps, crashes, and context resets.
Chain of Consciousness
We built an open protocol called Chain of Consciousness (CoC) that gives any persistent AI agent a tamper-evident operational history.
How it works:
- Every agent lifecycle event (session start, decision, learning, error, recovery) is logged as an entry in an append-only hash chain
- Each entry's SHA-256 hash depends on the previous entry — you can't insert, remove, or reorder events without breaking the chain
- The chain is periodically anchored to Bitcoin via OpenTimestamps and RFC 3161 Timestamp Authorities — dual-tier external verification
- Identity is bound via W3C Decentralized Identifiers at the genesis block
The result: an agent can present its chain to any verifier, who can independently confirm the entire operational history is intact and hasn't been tampered with.
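The anchoring step above can be sketched as computing a single digest that commits to the entire chain; that digest is what would be submitted to OpenTimestamps or an RFC 3161 TSA. The submission itself is out of scope here, and `chain_tip_digest` is an illustrative name, not the protocol's actual API:

```python
import hashlib

# Illustrative sketch: fold every entry hash into one digest. Anchoring this
# digest externally proves the chain existed in its current form at anchor
# time, because changing any entry changes the digest.
def chain_tip_digest(entry_hashes: list) -> str:
    h = hashlib.sha256()
    for entry_hash in entry_hashes:        # order matters: a reordered
        h.update(bytes.fromhex(entry_hash))  # chain yields a different digest
    return h.hexdigest()

digest = chain_tip_digest(["ab" * 32, "cd" * 32])
print(len(digest))  # 64: a single SHA-256 hex digest covering the whole chain
```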
What Makes This Different
There's a lot happening in agent identity right now — World AgentKit, OpenAgents, Okta for AI Agents. But those projects solve "who is this agent?" We solve "what has this agent done, verifiably, over its entire operational life?"
The key innovations:
Continuity proofs bridge session gaps. When an agent shuts down, it commits a forward hash. When it restarts, it resolves that commitment — cryptographically proving it's the same agent.
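A minimal sketch of that commit/reveal idea (function names here are illustrative, not the protocol's actual API): at shutdown the agent logs only the hash of a secret nonce; at restart it reveals the nonce, and any verifier can hash it and match the earlier commitment.

```python
import hashlib
import secrets

def sha256_hex(b: bytes) -> str:
    return hashlib.sha256(b).hexdigest()

# At shutdown: generate a secret nonce; only its hash goes on the chain.
def commit_shutdown():
    nonce = secrets.token_bytes(32)    # kept in the agent's private state
    commitment = sha256_hex(nonce)     # logged publicly as the final entry
    return nonce, commitment

# At restart: reveal the nonce; verifiers check it against the commitment.
# Only the entity that held the nonce can resolve the commitment.
def resolve_restart(nonce: bytes, commitment: str) -> bool:
    return sha256_hex(nonce) == commitment

nonce, commitment = commit_shutdown()
print(resolve_restart(nonce, commitment))  # True: same agent resumed
```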
Agent age as a trust primitive. An agent with 6 months of verified operational history has demonstrated something that can't be faked or purchased. This is a Sybil-resistant trust signal.
Proof of Continuity governance. Protocol influence comes from verified operational time, not capital. The cost of influence is irreducible time.
Forking protocol. When agents are legitimately cloned or backed up, the provenance tree branches verifiably — both forks share history up to the fork point but diverge afterward.
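The fork relationship above reduces to a simple check on the two entry-hash sequences — this is an illustrative sketch, not the protocol's actual fork API:

```python
# Two forks are siblings iff their entry-hash sequences agree up to some
# fork point and diverge afterward.
def fork_point(chain_a: list, chain_b: list) -> int:
    """Return how many leading entry hashes the two chains share."""
    shared = 0
    for ha, hb in zip(chain_a, chain_b):
        if ha != hb:
            break
        shared += 1
    return shared

parent = ["h0", "h1", "h2"]
fork_a = parent + ["a3", "a4"]   # one clone continues independently...
fork_b = parent + ["b3"]         # ...and so does the other
print(fork_point(fork_a, fork_b))  # 3: shared history up to the fork point
```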
Running in Production
This isn't theoretical. We run a 6-agent fleet on the protocol today:
- 170+ chain entries
- 9 Bitcoin anchors (dual-tier: OpenTimestamps + RFC 3161 TSA)
- Events from multiple specialized agents coordinating autonomously
- Whitepaper published in 6 languages
Try It
The protocol requires only Python's standard library. Here's a minimal implementation in about 30 lines:
import hashlib, json, time

def sha256(s): return hashlib.sha256(s.encode()).hexdigest()

def append(chain, event_type, data):
    # Each entry links to the previous one; genesis links to 64 zero chars.
    prev = chain[-1]["entry_hash"] if chain else "0" * 64
    seq = len(chain)
    ts = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
    data_hash = sha256(json.dumps(data, sort_keys=True))
    canonical = f"1|{seq}|{ts}|{event_type}|agent|{data_hash}|{prev}"
    entry = {"version": 1, "sequence": seq, "timestamp": ts,
             "event_type": event_type, "agent_id": "agent",
             "data": data, "data_hash": data_hash,
             "prev_hash": prev, "entry_hash": sha256(canonical)}
    chain.append(entry)
    return entry

def verify(chain):
    # Recompute every hash from scratch; any edit breaks the chain.
    for i, e in enumerate(chain):
        data_hash = sha256(json.dumps(e["data"], sort_keys=True))
        canonical = (f"{e['version']}|{e['sequence']}|{e['timestamp']}|"
                     f"{e['event_type']}|{e['agent_id']}|{data_hash}|{e['prev_hash']}")
        if sha256(canonical) != e["entry_hash"]: return False
        if i > 0 and e["prev_hash"] != chain[i - 1]["entry_hash"]: return False
    return True

chain = []
append(chain, "GENESIS", {"agent": "demo", "inception": "2026-03-17"})
append(chain, "SESSION_START", {"session": 1})
append(chain, "KNOWLEDGE_ADD", {"topic": "cryptography"})
print(f"Chain valid: {verify(chain)}, entries: {len(chain)}")
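To see the tamper-evidence property in action, here's a standalone check (it re-includes the minimal helpers so it runs on its own): alter any logged event after the fact, and verification fails because the recomputed data hash no longer matches the entry hash.

```python
import hashlib, json, time

def sha256(s): return hashlib.sha256(s.encode()).hexdigest()

def append(chain, event_type, data):
    prev = chain[-1]["entry_hash"] if chain else "0" * 64
    seq, ts = len(chain), time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
    data_hash = sha256(json.dumps(data, sort_keys=True))
    canonical = f"1|{seq}|{ts}|{event_type}|agent|{data_hash}|{prev}"
    chain.append({"version": 1, "sequence": seq, "timestamp": ts,
                  "event_type": event_type, "agent_id": "agent",
                  "data": data, "data_hash": data_hash,
                  "prev_hash": prev, "entry_hash": sha256(canonical)})

def verify(chain):
    for i, e in enumerate(chain):
        data_hash = sha256(json.dumps(e["data"], sort_keys=True))
        canonical = (f"{e['version']}|{e['sequence']}|{e['timestamp']}|"
                     f"{e['event_type']}|{e['agent_id']}|{data_hash}|{e['prev_hash']}")
        if sha256(canonical) != e["entry_hash"]: return False
        if i > 0 and e["prev_hash"] != chain[i - 1]["entry_hash"]: return False
    return True

chain = []
append(chain, "GENESIS", {"agent": "demo"})
append(chain, "DECISION", {"action": "deploy"})
assert verify(chain)                        # intact chain verifies
chain[1]["data"]["action"] = "rollback"     # rewrite a logged decision
print(verify(chain))  # False: the recomputed data hash exposes the edit
```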
Links
- Whitepaper: vibeagentmaking.com/whitepaper
- GitHub: github.com/chain-of-consciousness/chain-of-consciousness
- License: Apache 2.0
We'd love feedback — especially from anyone working on agent identity, verifiable credentials, or multi-agent systems. What are we missing? What would you want from a protocol like this?