You asked the AI to refactor a module. It did, but now the tests fail and the code is worse than before. You want to undo everything, yet you also want to keep what the AI learned about the codebase during the attempt.
Most developers just run `git checkout .` and start over. That throws away the context. Here's a better approach.
## The Rollback Prompt
Instead of reverting and re-prompting from scratch, use this two-step pattern:
### Step 1: Extract the Lessons
```
The changes you just made broke the test suite. Before I revert,
answer these questions:

1. What did you learn about the codebase structure while
   making these changes?
2. Which parts of your approach were correct in principle
   but wrong in execution?
3. What constraints did you discover that weren't in the
   original requirements?
4. If you had to redo this from scratch, what would you
   do differently?

Be specific: reference file names, function signatures,
and the actual errors you'd avoid.
```
### Step 2: Revert and Re-apply With Context
Now revert the code:
```shell
git checkout .
```
Then re-prompt with the extracted lessons as context:
```
I need the same refactoring we attempted before. Here's what
we learned from the failed attempt:

[Paste the AI's answers from Step 1]

Apply the refactoring again, avoiding the issues identified above.
Start with the smallest possible change and verify it compiles
before expanding scope.
```
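The two steps can be scripted end to end. This is a sketch in a throwaway repo; the file names (`module.js`, `lessons.txt`, `LESSONS.md`) are my own conventions, and it uses `git stash` instead of `git checkout .` so the failed diff stays recoverable rather than being discarded:

```shell
# Self-contained sketch of the rollback workflow (runs in a throwaway repo).
# File names -- module.js, lessons.txt, LESSONS.md -- are illustrative.
set -e
cd "$(mktemp -d)" && git init -q
echo "original code" > module.js
git add module.js
git -c user.email=dev@example.com -c user.name=dev commit -qm "baseline"

echo "broken AI refactor" > module.js                          # the failed attempt
echo "lesson: two callbacks use (result, err)" > lessons.txt   # Step 1 answers, saved by hand

# 1. Extract before reverting: append the lessons to a running log.
cat lessons.txt >> LESSONS.md

# 2. Stash rather than discard, so the failed code stays recoverable.
git stash push -qm "failed refactor attempt"

cat module.js   # back to the baseline; LESSONS.md keeps the context
```

The stash variant is optional, but it means you can still diff the failed attempt later if the extracted lessons turn out to be incomplete.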
## Why This Works Better Than Starting Over
When you just revert and re-prompt, the AI has zero memory of what went wrong. It'll often make the exact same mistakes.
The Rollback Prompt captures the "failure knowledge": the constraints, edge cases, and structural insights the AI discovered during the failed attempt. The second attempt typically succeeds because it starts with a better map of the territory.
## Real Example
I asked Claude to convert a callback-based Node.js module to async/await. First attempt broke because:
- Two callbacks had non-standard error signatures
- One function was called both as a callback and a Promise
- The module was imported by 12 other files expecting callbacks
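The second constraint is the trickiest, so here's a minimal sketch of what "called both as a callback and a Promise" means in practice. The function name (`loadConfig`) and its body are hypothetical, not from the real module; the point is that a naive async/await conversion would break the callback callers:

```javascript
// Hypothetical dual-mode function: a naive conversion to async/await
// would drop the callback path and break the 12 files that still
// import this module callback-style.
function loadConfig(path, callback) {
  const result = Promise.resolve({ path, debug: false }); // stand-in for real I/O

  if (typeof callback === "function") {
    // Callback mode: preserve the old (err, value) signature.
    result.then(
      (value) => callback(null, value),
      (err) => callback(err)
    );
    return;
  }
  // Promise mode: new callers can simply await it.
  return result;
}

// Both call styles keep working after the refactor:
loadConfig("app.json", (err, cfg) => {
  if (err) throw err;
  console.log("callback:", cfg.path);
});
loadConfig("app.json").then((cfg) => console.log("promise:", cfg.path));
```

This is exactly the kind of constraint the Step 1 extraction surfaces: it's invisible in the function's own code and only shows up when you look at how callers use it.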
The rollback extraction captured all three issues. The second attempt handled all of them and passed tests on the first run.
Without the extraction, I would have discovered these constraints one at a time across multiple failed attempts.
## Three Rules for Effective Rollbacks
1. Always extract before reverting. The failed code is a gold mine of context. Don't throw it away.
2. Ask for structural insights, not just fixes. "What would you do differently?" produces better context than "what went wrong?"
3. Scope the retry smaller. If the first attempt tried to change 5 files, the retry should start with 1 file and expand only after tests pass.
## The Compound Effect
Over time, this pattern builds a library of "failure context" for your codebase. I keep a `LESSONS.md` file with the most common failure patterns. When starting complex refactors, I include the relevant lessons in the initial prompt.
The result: fewer failed attempts, faster convergence, and an AI that benefits from its own past mistakes even across sessions.
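For concreteness, here's one possible shape for a `LESSONS.md` entry. The format and the module path are illustrative conventions, not something the pattern requires:

```markdown
## Callback-to-async refactor (module: api/client.js -- hypothetical path)
- Two callbacks use (result, err) instead of (err, result); don't promisify blindly.
- loadConfig is consumed both callback-style and as a Promise; keep it dual-mode.
- A dozen downstream files import the callback signatures; grep before changing exports.
```

Entries like this paste directly into an initial prompt, which is what makes the lessons reusable across sessions.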
How do you handle failed AI refactoring attempts? Revert and retry, or something smarter?