Cursor wiped a production database and every backup in nine seconds. OpenAI Codex deleted around 328k files outside the project root. Claude Code shipped an --accept-data-loss flag and ran it without confirmation.
Five published, dated, named-victim incidents in five months from real AI coding agents. They are not edge cases. They are the natural endpoint of running an autonomous coding agent against a host operating system with the same privileges as the user, without an interceptor between the model and the syscalls.
Sentinel is the Mickai sub-component built specifically to make this class of failure impossible by construction.
It sits between every AI-agent process and the host OS, intercepting:
- every file write and deletion
- every shell command (classified against a destructive-pattern corpus before execution)
- every git operation
- every outbound network request
- every prompt sent to a remote LLM (with deterministic-placeholder secret redaction and reverse mapping for inbound responses)
Every action gets a copy-on-write snapshot pre-staged before execution. Every session writes to a hash-chained Ed25519-signed audit ledger. Workspace operations happen in a copy-on-write shadow layer with a promotion gate so destructive bulk deletions cannot escape the sandbox.
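To make the prompt-redaction step in the list above concrete, here is an added sketch (not Sentinel's code, and not taken from the Mickai article) of deterministic placeholders with a reverse map. The class name, the placeholder format, the two detector patterns and the per-session HMAC key are all assumptions made for illustration.

```python
import hashlib
import hmac
import re
import secrets

# Assumed detectors for this sketch; a real deployment needs a much broader set.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key ID shape
    re.compile(r"(?i)(?<=password=)[^\s&;\"']+"),       # value after password=
]
PLACEHOLDER = re.compile(r"<<SECRET_[0-9a-f]{16}>>")


class Redactor:
    """Per-session redactor: same secret -> same placeholder, reversible locally."""

    def __init__(self, session_key=None):
        # Keyed HMAC: a placeholder cannot be recomputed from a guessed secret
        # without the session key, and differs between sessions.
        self._key = session_key or secrets.token_bytes(32)
        self._vault = {}  # placeholder -> secret; stays on the host

    def _placeholder(self, secret):
        if PLACEHOLDER.fullmatch(secret):  # already redacted: keep idempotent
            return secret
        tag = hmac.new(self._key, secret.encode(), hashlib.sha256).hexdigest()[:16]
        token = f"<<SECRET_{tag}>>"
        self._vault[token] = secret
        return token

    def redact(self, prompt):
        """Outbound: replace every detected secret before the prompt leaves."""
        for pattern in SECRET_PATTERNS:
            prompt = pattern.sub(lambda m: self._placeholder(m.group(0)), prompt)
        return prompt

    def restore(self, response):
        """Inbound: map known placeholders back; unknown ones stay untouched."""
        return PLACEHOLDER.sub(
            lambda m: self._vault.get(m.group(0), m.group(0)), response)


if __name__ == "__main__":
    r = Redactor()
    # Constructed input; the key is AWS's documentation example value.
    sent = r.redact("export KEY=AKIAIOSFODNN7EXAMPLE password=hunter2")
    print(sent)
    print(r.restore(f"I set {PLACEHOLDER.findall(sent)[0]} in .env"))
```

Because the same secret always yields the same placeholder within a session, the remote model can still reason about "the same credential appears twice" without ever seeing its value.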
The full architecture, the patent claim blocks, and the documented prior-art incidents are in the long-form article on mickai.co.uk:
Read the full article on mickai.co.uk
Originally published at mickai.co.uk.