You ask the AI to fix a bug. It fixes the bug, refactors the surrounding function, adds error handling you didn't ask for, renames two variables, and "improves" the formatting. Now your clean one-line fix is a 47-line diff that's impossible to review.
Sound familiar?
## The Scope Lock
Add one line to the end of any coding prompt:
```
Do not modify any code outside the specific change I described.
```
That's it. One sentence. It works because LLMs are trained to follow explicit constraints, but they default to "helpful" when constraints are absent — and "helpful" usually means "do more."
## Before and After
### Without Scope Lock
Prompt: "Fix the off-by-one error in the paginate function."
AI output: Fixes the off-by-one, renames `idx` to `pageIndex`, adds input validation, converts to TypeScript, adds JSDoc comments, and restructures the loop.
### With Scope Lock
Prompt: "Fix the off-by-one error in the paginate function. Do not modify any code outside this specific fix."
AI output: Changes `i <= length` to `i < length`. Done.
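The article never shows the paginate function itself, so here is a hypothetical reconstruction of what a scope-locked fix looks like, assuming the off-by-one was an inclusive loop bound that returned one extra item per page:

```javascript
// Hypothetical paginate function (names and shape are assumptions,
// not the article's actual code).
function paginate(items, page, pageSize) {
  const start = page * pageSize;
  const result = [];
  // The one-line fix: the buggy version used `i <= start + pageSize`,
  // which pushed pageSize + 1 items onto each page.
  for (let i = start; i < start + pageSize && i < items.length; i++) {
    result.push(items[i]);
  }
  return result;
}
```

A scope-locked diff touches only the loop condition. Under a looser prompt, the model might also rewrite the body as `items.slice(start, start + pageSize)`, which is exactly the kind of unrequested change the constraint prevents.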
## Variations for Different Tasks
The base constraint adapts to different scenarios:
For bug fixes:
```
Fix only the described bug. Do not refactor, rename, or restructure anything.
```
For feature additions:
```
Add only the described feature. Do not modify existing functions unless strictly necessary for the new feature to work.
```
For refactors:
```
Refactor only the specified function. Do not change its public API, its callers, or any other function in the file.
```
For code reviews:
```
Comment only on bugs and security issues. Do not suggest style changes, naming improvements, or refactors.
```
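If you build prompts programmatically, the variations above are easy to keep in one place. A minimal sketch; the helper name, variant keys, and wiring are my own, not a library API:

```javascript
// Hypothetical scope-lock snippets keyed by task type.
const SCOPE_LOCKS = {
  fix: "Fix only the described bug. Do not refactor, rename, or restructure anything.",
  feature: "Add only the described feature. Do not modify existing functions unless strictly necessary for the new feature to work.",
  refactor: "Refactor only the specified function. Do not change its public API, its callers, or any other function in the file.",
  review: "Comment only on bugs and security issues. Do not suggest style changes, naming improvements, or refactors.",
};

// Append the matching constraint to the end of any coding prompt.
function withScopeLock(prompt, variant = "fix") {
  return `${prompt}\n\n${SCOPE_LOCKS[variant]}`;
}
```

Usage: `withScopeLock("Fix the off-by-one error in the paginate function.")` produces the scope-locked prompt from the before/after example, with the constraint as the final line where it is hardest for the model to ignore.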
## Why This Works Better Than You'd Expect
Without a scope lock, the AI treats every prompt as an opportunity to "improve" everything it can see. This isn't malicious — it's the model doing what it thinks you want.
The scope lock reframes the task from "make this code better" to "make this specific change." That distinction is the difference between a reviewable PR and a rewrite.
## The Compound Effect
Scope creep in AI coding doesn't just waste time on one change. It compounds:
- Review time doubles — you're reviewing changes you didn't ask for
- Bugs hide — the real fix is buried in unrelated modifications
- Git history suffers — "fix pagination bug" commit includes a refactor
- Trust erodes — you stop trusting the AI because it "keeps changing things"
A scope lock eliminates all four problems.
Try it now: Take your last AI coding prompt. Add the scope lock line. Compare the output. The diff should be smaller, cleaner, and actually reviewable.