The Problem With AI and Version Control
Let an AI agent loose on your codebase and the first thing you worry about isn't whether it can write code. It's whether it'll destroy your work.
git push --force. git reset --hard. git checkout . (the discard-everything form). Any one of these can wipe out hours of work. And an AI that's "just trying to help" might reach for them when stuck.
After reverse-engineering Claude Code's source, I found something interesting: it doesn't just avoid destructive git commands. It has 7 explicit rules baked into its system prompt that govern how it interacts with version control. These aren't suggestions — they're hard constraints that the model must follow on every interaction.
Here's what they are, and why they matter for anyone building AI-powered dev tools.
Rule 1: Never Amend — Always Create New Commits
This is the most surprising one. When a pre-commit hook fails, the natural instinct (for humans and AI alike) is to fix the issue and run git commit --amend. But Claude Code is explicitly told:
Always create NEW commits rather than amending, unless the user explicitly requests a git amend.
Why? Because when a hook fails, the commit never happened. If you run --amend after a hook failure, you're not amending the failed commit — you're amending the previous commit. The one that was already fine.
This is a mistake that experienced developers make all the time. And it's exactly the kind an AI would make even more often, because it's trying to "fix" the situation as fast as possible.
The fix is simple: after a hook failure, fix the issue, re-stage, and create a new commit. Claude Code makes this the default behavior.
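A minimal sketch of that recovery path, run in a throwaway repo. The file names and commit messages here are invented for illustration, and identity is passed per-command with -c so nothing touches persistent git config:

```shell
#!/bin/sh
# Sketch: the safe recovery path after a pre-commit hook failure,
# demonstrated in a throwaway repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q

# Pass identity per command with -c so persistent config is untouched.
commit() { git -c user.name=dev -c user.email=dev@example.com commit -q "$@"; }

echo 'print("hello")' > app.py
git add app.py
commit -m "initial commit"            # this commit succeeded

# Suppose the NEXT commit attempt was rejected by a hook. Nothing was
# created, so the fix belongs in a brand-new commit:
echo 'print("hello, world")' > app.py
git add app.py
commit -m "fix: satisfy pre-commit check"
# git commit --amend here would have rewritten "initial commit" instead.

git log --oneline                     # shows two commits
```

Running git log --oneline at the end confirms the history has two commits: the original one and the fix, with "initial commit" left untouched.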
Rule 2: Never Update Git Config
This one's absolute — no exceptions, no "unless the user asks."
NEVER update the git config.
Git config changes are invisible, persistent, and can affect every future operation in the repository. An AI that silently changes core.autocrlf or push.default could create bugs that surface weeks later in a completely different context.
The attack surface is just too wide. Better to never touch it.
Rule 3: Destructive Commands Need Explicit Permission
Claude Code maintains an internal list of git commands it considers dangerous:
- git push --force
- git reset --hard
- git checkout -- (the discard-changes form)
- git clean -f
- git branch -D
None of these run unless the user explicitly asks for that specific operation. The AI won't escalate to a destructive command because a non-destructive approach failed.
This is important because AI agents love to solve problems. If git merge hits conflicts, the "fast" solution is git checkout --theirs . — which resolves every conflict by silently throwing away your side of the changes. An unconstrained AI might do exactly that.
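One way to enforce this pattern is a gate in front of the shell tool. Here is a minimal sketch: a crude prefix match (patterns paraphrased from the destructive commands above; real tooling would parse the command line properly rather than string-matching) that refuses destructive forms unless the caller carries explicit user approval:

```shell
#!/bin/sh
# Sketch: refuse destructive git forms unless an explicit approval
# flag is passed. Prefix patterns are paraphrased from the rules
# above; real tooling would parse arguments, not match strings.

is_destructive() {
  case "$1" in
    "git push --force"*|"git push -f"*) return 0 ;;
    "git reset --hard"*)                return 0 ;;
    "git checkout --"*)                 return 0 ;;
    "git clean -f"*)                    return 0 ;;
    "git branch -D"*)                   return 0 ;;
    *)                                  return 1 ;;
  esac
}

run_git() {  # usage: run_git [--user-approved] "<git command>"
  approved=no
  if [ "$1" = "--user-approved" ]; then approved=yes; shift; fi
  if is_destructive "$1" && [ "$approved" = no ]; then
    echo "refused: '$1' needs explicit user approval" >&2
    return 1
  fi
  echo "would run: $1"   # stand-in for actually executing the command
}

run_git "git status"                          # allowed
run_git "git reset --hard" || true            # refused with a message
run_git --user-approved "git reset --hard"    # allowed: explicit intent
```

The key property: the gate never escalates on its own. A failed non-destructive command stays failed until a human supplies the approval flag.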
Rule 4: Never Skip Hooks
No --no-verify. No --no-gpg-sign. No bypassing the safety checks that the project has configured.
If a pre-commit hook fails, Claude Code investigates why and fixes the underlying issue. It doesn't take the shortcut of skipping the check.
This one seems obvious, but think about how many times you've seen --no-verify in a codebase's git history. Now imagine an AI that defaults to skipping hooks whenever they're inconvenient.
Rule 5: Force Push to Main Gets a Warning, Not Compliance
Even if you explicitly ask Claude Code to force-push to main or master, it won't just comply. It will warn you first.
This is a deliberate asymmetry: most destructive operations execute if explicitly requested. But force-pushing to the default branch gets extra friction because the blast radius is the entire team.
Rule 6: Stage Specific Files, Not Everything
When staging files, prefer adding specific files by name rather than using git add -A or git add .
Why? Because git add -A might scoop up:
- .env files with secrets
- Credentials or private keys
- Large binary files
- Build artifacts
Claude Code stages files individually, which means it has to reason about what actually belongs in the commit. This is slower but safer — every staged file is a conscious decision.
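A quick demonstration of the difference, in a throwaway repo seeded with exactly the kinds of files git add -A would scoop up (file names invented for illustration):

```shell
#!/bin/sh
# Sketch: staging by name keeps secrets and junk out of the index,
# shown in a throwaway demo repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q

echo 'print("hi")'       > main.py    # belongs in the commit
echo 'API_KEY=sk-secret' > .env       # secrets: must never be committed
head -c 1024 /dev/zero   > build.bin  # build artifact

git add main.py          # stage only what belongs; NOT git add -A
git status --short       # .env and build.bin remain untracked
```

With git add -A in place of the targeted add, the secret and the binary would both land in the commit; staging by name makes each file a deliberate choice.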
Rule 7: Never Commit Without Being Asked
NEVER commit changes unless the user explicitly asks you to.
An AI that auto-commits after every change would create a git history that looks like a stream-of-consciousness journal. Dozens of tiny, meaningless commits that obscure the actual work.
By requiring explicit permission to commit, Claude Code ensures that commits are intentional, well-scoped, and have meaningful messages.
The Pattern: Encode Safety Into Constraints, Not Hopes
What's interesting about these rules isn't any individual one — most experienced developers would agree with all of them. What's interesting is that they're explicit constraints rather than emergent behavior.
You could ask an AI model to "be careful with git" and hope it figures out what that means. Or you could enumerate the 7 specific ways git operations can go wrong and make each one a hard rule.
This is the difference between building guardrails and hoping the road is straight.
If you're building tools that give AI agents access to version control, steal this approach:
- List every destructive operation your tool can perform
- Make each one require explicit user intent
- For the nuclear options (force push to main), add extra friction even with explicit intent
- Make the safe path the default path (new commit, not amend; specific files, not all)
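The checklist above can be sketched as a three-tier policy: safe commands pass, destructive ones require explicit intent, and the nuclear option warns even with it. The tier names and patterns here are illustrative assumptions, not Claude Code's actual implementation:

```shell
#!/bin/sh
# Sketch of the checklist as a three-tier classifier. Tier names and
# patterns are illustrative, not Claude Code's actual implementation.

classify() {
  case "$1" in
    "git push --force origin main"|"git push --force origin master")
        echo nuclear ;;      # warn even when explicitly requested
    "git push --force"*|"git push -f"*|"git reset --hard"*|\
    "git checkout --"*|"git clean -f"*|"git branch -D"*)
        echo destructive ;;  # needs explicit user intent
    *)  echo safe ;;         # default path
  esac
}

classify "git commit -m 'msg'"           # prints: safe
classify "git reset --hard HEAD~1"       # prints: destructive
classify "git push --force origin main"  # prints: nuclear
```

The ordering matters: the nuclear patterns are checked first, so force-pushing the default branch is never downgraded to merely "destructive".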
This is Part 7 of my "Claude Code Architecture Deep Dives" series. I found these patterns while reverse-engineering 17,000+ lines of Claude Code's source for my book.
Want to go deeper? Read Chapter 1 free — no email required. The full book covers the permission system, context management, tool infrastructure, and more.
What git safety rules would you add for AI agents? I'm curious what I missed — drop a comment.