Every AI system claims to support “audit logging.”
But logs are:
- not portable
- not verifiable
- not usable outside the system
That’s the real gap.
So instead of building another tool, I tried something else:
I implemented the same idea across:
- n8n
- Flowise
- Langflow
- Dify
- Claude Code
Same result everywhere:
→ execution is controlled
→ validation exists
→ but evidence stays trapped in logs
So I defined a minimal standard:
EPI (Evidence Packaged Infrastructure)
EPI packages AI execution as evidence.
https://github.com/mohdibrahimaiml/epi-recorder
The idea is simple:
Instead of:
“we logged what happened”
You get:
“here is a portable, verifiable artifact of what happened”
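To make that concrete, here is a minimal sketch of what a self-verifying artifact could look like: an execution record canonicalized to JSON and paired with its SHA-256 digest, so anyone can re-check it offline. This is an illustrative assumption, not EPI's actual schema — the function names (`package_evidence`, `verify_evidence`) and field names are hypothetical.

```python
import hashlib
import json

def package_evidence(record: dict) -> dict:
    """Wrap an execution record in a self-verifying envelope (hypothetical format).

    The record is canonicalized (sorted keys, no extra whitespace)
    so the hash is stable across serializers.
    """
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return {"record": record, "sha256": digest}

def verify_evidence(package: dict) -> bool:
    """Recompute the digest and compare — no access to the original system needed."""
    canonical = json.dumps(package["record"], sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest() == package["sha256"]

# Example: one agent step captured as evidence (illustrative fields)
step = {"tool": "search", "input": "q=audit", "output_len": 1432}
pkg = package_evidence(step)
assert verify_evidence(pkg)           # intact package verifies
pkg["record"]["output_len"] = 9999
assert not verify_evidence(pkg)       # any tampering breaks verification
```

The point of the sketch: verification depends only on the artifact itself, which is what makes it portable in a way a platform's internal log is not.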
Still early.
Looking for feedback on:
- schema
- hashing
- validation
Especially from people working on:
- guardrails
- agents
- AI workflows