Your AI agent just approved a $500 refund. Three months later, a customer disputes it.
Can you prove what happened?
Most teams can't. Logs get rotated, databases get migrated, and nobody remembers why the AI made that call.
Cronozen Proof SDK solves this in 10 lines. Every AI decision gets a SHA-256 hash chain — immutable, tamper-proof, and audit-ready.
## Install

```bash
npm install cronozen
```
## Record your first decision

```ts
import { Cronozen } from 'cronozen';

const cz = new Cronozen({
  apiKey: process.env.CRONOZEN_API_KEY,
  baseUrl: 'https://api.cronozen.com/v1',
});

// 1. Record what the AI did
const event = await cz.decision.record({
  type: 'agent_execution',
  actor: {
    id: 'refund-bot',
    type: 'ai_agent',
    name: 'Refund Agent',
  },
  action: {
    type: 'refund_approved',
    description: 'Auto-approved refund under $100 policy',
    input: { orderId: 'ORD-7829', amount: 49000 },
    output: { refundId: 'REF-3301' },
  },
  aiContext: {
    model: 'gpt-4',
    provider: 'openai',
    confidence: 0.95,
    reasoning: 'Order within 7-day window, amount under threshold',
  },
  tags: ['refund', 'auto-approved'],
});

console.log(event.id); // decision event created
```
That's it. The decision is now recorded as evidence.
## Seal it with approval

Recorded decisions are drafts. To make them tamper-proof, seal them:

```ts
// 2. Human or system approves → SHA-256 hash chain sealed
const approval = await cz.decision.approve(event.id, {
  approver: {
    id: 'ops-manager-1',
    type: 'human',
    name: 'Kim Park',
  },
  result: 'approved',
  reason: 'Verified against refund policy v2.1',
});

console.log(approval.sealedHash);
// → "sha256:a3f2c8e1..."
```
Once sealed, this decision cannot be modified. The hash links to the previous decision in the chain — tamper with one, and every subsequent hash breaks.
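That chain property is what makes the evidence trustworthy. Here's an illustrative sketch of the general technique (not Cronozen's internals) using Node's built-in `crypto` module, showing how editing one entry breaks every hash after it:

```ts
import { createHash } from 'node:crypto';

// Each sealed decision hashes its own payload plus the previous hash,
// so altering any entry invalidates every hash downstream of it.
function sealChain(decisions: string[]): string[] {
  const hashes: string[] = [];
  let prev = 'genesis';
  for (const payload of decisions) {
    prev = createHash('sha256').update(prev + payload).digest('hex');
    hashes.push(prev);
  }
  return hashes;
}

const original = sealChain(['refund ORD-7829', 'refund ORD-7830']);
const tampered = sealChain(['refund ORD-9999', 'refund ORD-7830']);

// Tampering with the first decision changes its hash AND the next one,
// even though the second decision's payload is untouched.
console.log(original[0] !== tampered[0]); // true
console.log(original[1] !== tampered[1]); // true
```

The second payload never changed, yet its hash did: that's the detection mechanism in miniature.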
## Export for audit

When the auditor comes knocking:

```ts
// 3. Export as JSON-LD — hand this to compliance
const proof = await cz.evidence.export(event.id);

console.log(proof['@context']);
// → "https://schema.cronozen.com/proof/v1"

console.log(proof.verification);
// → { hashAlgorithm: 'SHA-256', chainIndex: 42, chainHash: 'sha256:a3f2...' }
```
This gives you a standards-compliant JSON-LD document with full cryptographic verification. Your auditor gets the what, who, when, and the math to prove it hasn't been touched.
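The point of "the math to prove it" is that an auditor can recheck the link without trusting Cronozen at all. A sketch of what that independent check could look like; the `payload` and `previousHash` field names are assumptions for illustration, not the SDK's documented export shape:

```ts
import { createHash } from 'node:crypto';

// Hypothetical proof shape, loosely mirroring the fields shown above.
// The real export format may differ — this is a verification sketch.
interface ProofDoc {
  payload: string; // canonical JSON of the recorded decision
  verification: {
    hashAlgorithm: 'SHA-256';
    previousHash: string; // assumed field: the prior link in the chain
    chainHash: string;
  };
}

// Recompute the chain hash from the payload and the previous link,
// and compare it to the hash the proof document claims.
function verifyLink(doc: ProofDoc): boolean {
  const recomputed =
    'sha256:' +
    createHash('sha256')
      .update(doc.verification.previousHash + doc.payload)
      .digest('hex');
  return recomputed === doc.verification.chainHash;
}
```

If anyone rewrites the payload after sealing, `verifyLink` returns `false` — no database access or vendor API required.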
## The full picture

```
Record (draft)  →  Approve (sealed)  →  Export (audit-ready)
      │                   │                     │
  AI action          SHA-256 hash          JSON-LD proof
   logged            chain-linked        with verification
```
## What you can record

| Event Type | Use Case |
|---|---|
| `agent_execution` | AI agent performed an action |
| `human_approval` | Manager signed off |
| `ai_recommendation` | AI suggested, human decided |
| `policy_decision` | Rule engine triggered |
| `automated_action` | Cron job, webhook, scheduled task |
| `escalation` | Flagged for review |
## Error handling

The SDK throws typed errors so you can handle each case:

```ts
import { Cronozen, ConflictError, NotFoundError } from 'cronozen';

try {
  await cz.decision.approve(eventId, request);
} catch (error) {
  if (error instanceof ConflictError) {
    // Already sealed — this is by design, not a bug
    console.log('Decision already locked');
  }
  if (error instanceof NotFoundError) {
    console.log('Decision not found');
  }
}
```
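Since "already sealed" is by design, a common pattern is to treat `ConflictError` as success so retries and duplicate webhooks don't surface as failures. A generic sketch of that pattern, using a stand-in error class (the real one comes from the SDK):

```ts
// Stand-in for the SDK's ConflictError, defined here only so the
// sketch is self-contained.
class ConflictError extends Error {}

// Wrap any seal attempt so a retried or duplicate approval of an
// already-sealed decision counts as success instead of an error.
async function approveIdempotent(
  approve: () => Promise<void>,
): Promise<'approved' | 'already-sealed'> {
  try {
    await approve();
    return 'approved';
  } catch (error) {
    if (error instanceof ConflictError) return 'already-sealed';
    throw error; // real failures still propagate
  }
}
```

In practice you'd pass `() => cz.decision.approve(event.id, request)` as the callback and branch on the returned status.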
## Why this matters
Korea's AI Basic Act took effect January 2026. The EU AI Act is enforcing transparency requirements. If your AI makes operational decisions — approvals, classifications, recommendations — you need evidence that a human was in the loop.
Cronozen Proof gives you that evidence in 3 API calls.
- GitHub: cronozen/cronozen-sdk
- npm: cronozen
- Docs: docs.cronozen.com
If you're building with AI agents, LLM pipelines, or automated workflows — try it. The first 1,000 decisions/month are free.
