I have been using AI coding assistants daily for about a year now -- Claude Code mostly, with some Cursor mixed in. They are genuinely useful, but there is a problem I kept running into: after a long session, I had no clear idea what the AI actually did.
Not in a paranoid way. More like: I asked it to refactor a module, it made changes across several files, and then I realized I could not reconstruct the sequence of what happened. Which file changed first? What was the prompt that triggered that particular edit? Did it touch anything it was not supposed to?
I tried keeping notes. That lasted about two days.
Finding Mantra
A few weeks ago I came across Mantra. The tagline is "AI coding session time machine" -- replay, control, secure. I was a bit skeptical but tried it anyway since it is free and does not require an account.
The setup took maybe five minutes. Mantra works as an MCP (Model Context Protocol) gateway, sitting between your AI tool and the outside world. For Claude Code you just add it to your MCP config. Same for Cursor and Gemini CLI.
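For orientation, a Claude Code project can register MCP servers in a `.mcp.json` file with an `mcpServers` map. A hedged sketch of what a gateway entry might look like -- the server name, command, and arguments below are illustrative placeholders, not Mantra's actual values, so follow Mantra's own setup docs for the real entry:

```json
{
  "mcpServers": {
    "mantra-gateway": {
      "command": "mantra",
      "args": ["serve"]
    }
  }
}
```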
Session Replay in practice
The feature I use most is Session Replay. After a coding session, I can open the timeline and see every tool call the AI made -- file reads, writes, shell commands, everything -- in the order it happened, with the associated prompt context.
This sounds minor but it changed how I review AI work. Before, I would just look at the git diff and try to reason backwards. Now I can watch the session like a log:
- Prompt: "add input validation to the signup form"
- AI reads auth/signup.ts
- AI writes auth/signup.ts (adds zod schema)
- AI reads auth/tests/signup.test.ts
- AI writes auth/tests/signup.test.ts (adds test cases)
That is the happy path. What I actually found useful was catching the deviations. In one session the AI quietly read a config file that had nothing to do with the task. Not malicious, just scope creep. The replay made it visible immediately.
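Mantra surfaces this in its replay UI, but the underlying check is easy to picture. A minimal sketch -- my own illustration, not Mantra's implementation -- of flagging tool calls that touch paths outside the directories a task was expected to involve:

```python
# Hypothetical sketch: flag session tool calls that touch files
# outside the directories the task was expected to involve.
# Illustrative only -- not how Mantra actually works internally.

from pathlib import PurePosixPath

def out_of_scope(calls, allowed_dirs):
    """Return tool calls whose path is not under any allowed directory."""
    flagged = []
    for action, path in calls:
        p = PurePosixPath(path)
        if not any(p.is_relative_to(d) for d in allowed_dirs):
            flagged.append((action, path))
    return flagged

session = [
    ("read", "auth/signup.ts"),
    ("write", "auth/signup.ts"),
    ("read", "config/deploy.yaml"),  # scope creep: unrelated to the task
]

print(out_of_scope(session, ["auth"]))
```

The hard part in practice is not the check itself but having the complete, ordered list of tool calls to run it against -- which is exactly what the replay provides.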
Sensitive content detection
Mantra also flags potentially sensitive content passing through the session -- things like API keys, tokens, or credentials that end up in prompts or responses. This has caught me twice when I accidentally included an .env file in context. The AI never did anything bad with it, but I would rather not have that data flowing through at all.
The detection is not perfect -- it will not catch everything -- but it works as a useful second layer of defense.
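Most scanners of this kind are pattern-based at heart. A hypothetical sketch of the idea -- the patterns below are illustrative and far smaller than any real rule set, and this is not Mantra's code:

```python
# Hypothetical sketch of pattern-based secret detection, similar in
# spirit to what a gateway can run over prompts and responses.
# Patterns are illustrative; real scanners use much larger rule sets.

import re

SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token":   re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "generic_env":    re.compile(r"(?m)^[A-Z_]+_(?:KEY|TOKEN|SECRET)=\S+"),
}

def scan(text):
    """Return the names of secret patterns found in a blob of text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

prompt = "Here is my config:\nDATABASE_SECRET=s3cr3t\nplease fix the bug"
print(scan(prompt))  # -> ['generic_env']
```

Pattern matching is also why no such tool is complete: a credential with an unusual shape sails straight through, which is why I still treat it as a second layer rather than the only one.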
The MCP Unified Gateway
If you use multiple AI tools, the gateway aspect is worth noting. Instead of configuring MCP servers separately for Claude Code and Cursor, Mantra acts as a single proxy. You configure your MCP tools once in Mantra and both clients pick them up. Less duplication, and the audit log covers everything regardless of which tool you used.
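Conceptually, each client ends up with one entry pointing at the gateway, and the downstream MCP servers live in the gateway's own config. A purely hypothetical sketch of that gateway-side shape -- field names and server commands here are invented for illustration, not Mantra's real format:

```json
{
  "servers": {
    "filesystem": { "command": "mcp-server-filesystem", "args": ["./src"] },
    "github": { "command": "mcp-server-github" }
  }
}
```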
I mostly use Claude Code so this is not a huge deal for my workflow, but I can see it being valuable in team settings where different people prefer different editors.
What it does not do
To be fair: Mantra is not a code review tool. It does not tell you whether the AI changes were correct, only what it did. You still need to read the diff and test things. It also does not integrate with git directly -- the session log and your version history are separate things.
I also noticed the UI is still fairly early-stage. Functional, but not polished. Filtering through long sessions could be easier.
Worth trying
If you use Claude Code, Cursor, or similar tools heavily, the visibility gap is real. You are handing over significant editing power to a system that does not narrate its own actions very well. Mantra fills that gap in a straightforward way.
It is at mantra.gonewx.com, free, no signup required. The setup docs are clear enough that you will not need more than ten minutes to get it running.
I am not affiliated with them -- just found it useful and figured it was worth sharing.