I tried using AI on a 15k+ line codebase. It failed badly. Wrong changes, broken logic, random imports — classic. Then I changed how I used it, and it started saving me weeks of work.
The Problem
AI doesn’t understand large codebases. You can’t just paste a repo and say:
“refactor this”
It will:
miss dependencies
break existing flows
hallucinate logic
touch things you didn’t ask for
I learned this the hard way.
The Setup
This was not a small project: 15k+ lines across multiple screens, services, and storage layers. Not something AI can "just understand".
What Actually Worked
- Stop dumping the whole codebase
Instead of:
“Here’s my project, fix X”
Do:
give only relevant files
explain relationships manually
AI is not context-aware at scale. You have to simulate context.
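One way to "simulate context" is to build the prompt yourself from only the relevant files, plus a hand-written note on how they relate. A minimal sketch; the file names and the relationships note are invented for illustration:

```python
# Hypothetical sketch: assemble a focused prompt from hand-picked files
# instead of dumping the repo. Names below are assumptions, not a real API.

def build_context(files: dict, relationships: str) -> str:
    """Concatenate selected files into one prompt block, relationships first."""
    parts = [f"# How these files relate:\n{relationships}\n"]
    for name, source in files.items():
        parts.append(f"--- {name} ---\n{source}\n")
    return "\n".join(parts)

context = build_context(
    {
        "auth_service.py": "def login(user): ...",
        "session_store.py": "def save_session(token): ...",
    },
    "auth_service.login() creates a token and passes it to session_store.save_session().",
)
print(context)
```

The point is that the relationships line does work the model cannot do on its own: you are telling it what depends on what, so it does not have to guess.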
- Give strict instructions (as you would to a junior dev)
Bad:
“Improve this”
Good:
“Modify this function only. Do not change API shape. Do not touch unrelated files.”
The difference is massive.
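Those constraints are easy to encode as a reusable checklist appended to every task, so the scope rules are explicit instead of implied. A sketch; the task text and constraint wording are illustrative assumptions:

```python
# Hypothetical sketch: attach hard scope constraints to every task prompt.

CONSTRAINTS = [
    "Modify only the function named in the task.",
    "Do not change the API shape (signatures, return types).",
    "Do not touch unrelated files.",
    "If a change requires breaking these rules, stop and explain why.",
]

def build_instruction(task: str) -> str:
    """Prepend the task, then list the non-negotiable rules."""
    rules = "\n".join(f"- {rule}" for rule in CONSTRAINTS)
    return f"Task: {task}\n\nHard constraints:\n{rules}"

print(build_instruction("Fix the off-by-one error in paginate()."))
```

The last constraint matters most: it gives the model an escape hatch, so it asks instead of silently breaking the rules.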
- One change at a time
Don’t do:
“Refactor this entire flow”
Do:
break into small steps
verify each change
then move forward
AI works best iteratively, not in one shot.
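The loop above can be sketched as: apply one change, verify it, and only then move on. A toy illustration; `apply_step`, `verify`, and the step names stand in for whatever your real edit-and-test cycle is:

```python
# Hypothetical sketch: apply small steps one at a time, stopping at the
# first step that fails verification. All steps here are stand-ins.

def run_steps(steps, verify):
    """Apply steps in order; stop at the first that fails verification."""
    completed = []
    for name, apply_step in steps:
        apply_step()
        if not verify():
            print(f"Step '{name}' failed verification; stopping here.")
            break
        completed.append(name)
    return completed

state = {"ok": True}
steps = [
    ("extract helper", lambda: None),
    ("rename field", lambda: None),
    ("break something", lambda: state.update(ok=False)),
    ("never reached", lambda: None),
]
done = run_steps(steps, verify=lambda: state["ok"])
print(done)
```

In practice `verify` is your test suite and `apply_step` is one AI-generated diff; the structure, not the code, is the point.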
- Expect it to fail, so always keep a backup
It will:
generate wrong logic
miss edge cases
introduce bugs
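A hypothetical illustration of the kind of subtle bug to review for: an AI "cleanup" that looks equivalent but silently drops an edge-case guard. Both functions are invented for this example:

```python
# Hypothetical example: an AI rewrite that changes behavior on an edge case.

def apply_discount(price, percent):
    """Original: caps the discount at 100% so prices never go negative."""
    percent = min(percent, 100)
    return price * (1 - percent / 100)

def apply_discount_ai(price, percent):
    """AI 'simplification': looks equivalent, but drops the cap."""
    return price * (1 - percent / 100)

print(apply_discount(50, 150))     # 0.0 (capped)
print(apply_discount_ai(50, 150))  # -25.0 (negative price bug)
```

A diff of these two functions looks harmless; only the edge case exposes it.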
This is where you review like a senior — not trust blindly.
The Real Insight
AI didn’t replace my work. It compressed months into days, but only because I guided it properly. If you treat AI like magic, it breaks. If you treat it like a junior dev, it becomes powerful.
Final Thought
AI is not bad at large codebases. Most people are just bad at using it correctly. If you're working on bigger projects, stop prompting lazily. That’s the real bottleneck.
