Hot take: the worst use of AI coding assistants is the thing most people use them for — generating boilerplate.
I spent three months auto-generating CRUD endpoints, form components, and config files. I felt productive. My git log was full of big commits. Then I looked at what I actually shipped, and I realized I'd been optimizing the wrong thing.
## The Boilerplate Trap
Here's what happens when you use AI for boilerplate:
- You prompt: "Generate a CRUD API for users with Express and Prisma"
- AI spits out 200 lines of working code
- You glance at it, looks fine, commit
- A week later, you find three inconsistencies with your existing endpoints
- You spend an hour fixing patterns the AI didn't know about
The problem isn't that AI generates bad boilerplate. It's that boilerplate is the part of your codebase where consistency matters most — and AI is terrible at consistency unless you over-specify every detail.
You know what's great at consistent boilerplate? A template. A snippet. A plop generator. A shell script. Deterministic tools for deterministic work.
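To make "deterministic tools for deterministic work" concrete, here's a minimal sketch of template-based scaffolding in TypeScript. The function name, the template shape, and the `prisma` import path are all illustrative, not a real generator:

```typescript
// A toy template function that stamps out an Express route module for a
// given resource name. Everything here is a sketch -- the point is the
// property, not the tool: identical input always yields identical output.
const crudTemplate = (resource: string): string => `
import { Router } from "express";
import { prisma } from "../db";

export const ${resource}Router = Router();

${resource}Router.get("/", async (_req, res) => {
  res.json(await prisma.${resource}.findMany());
});
`;

// Every run produces byte-identical output -- the property AI generation lacks.
const once = crudTemplate("user");
const twice = crudTemplate("user");
console.log(once === twice); // deterministic: always true
```

A real setup would use plop, hygen, or a shell script, but the guarantee is the same: the hundredth endpoint looks exactly like the first.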
## What AI Is Actually Great At
After a lot of trial and error, here's where AI assistants consistently save me real time:
### 1. Thinking Through Edge Cases
```
Here's my function that handles file uploads.
What edge cases am I missing? List them as a checklist.
```
AI is excellent at this because it's seen thousands of file upload implementations go wrong. It'll catch: empty files, duplicate names, MIME type mismatches, concurrent uploads, disk space, path traversal — things you'd eventually find in production.
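Once AI has produced the checklist, you still write the checks yourself. Here's a hedged sketch of a validator covering a few of the items above (empty files, MIME mismatches, path traversal); the `Upload` shape and the function name are invented for this example:

```typescript
import * as path from "node:path";

// Illustrative upload metadata -- not any real framework's type.
interface Upload {
  filename: string;
  mimeType: string;
  sizeBytes: number;
}

// Returns a list of problems instead of throwing, so callers can report
// everything wrong with an upload at once.
function validateUpload(u: Upload, allowedMime: Set<string>): string[] {
  const problems: string[] = [];
  if (u.sizeBytes === 0) problems.push("empty file");
  if (!allowedMime.has(u.mimeType)) problems.push("MIME type not allowed");
  // Reject names that try to escape the upload directory (path traversal).
  const base = path.basename(u.filename);
  if (base !== u.filename || u.filename.includes("..")) {
    problems.push("suspicious path in filename");
  }
  return problems;
}
```

The checklist items AI surfaces (concurrent uploads, disk space) that can't be checked synchronously like this are still worth knowing about; they just live elsewhere in the system.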
### 2. Writing the Second Test
I write the first test by hand to set the pattern. Then:
```
Here's my test for the happy path. Write 5 more tests covering
edge cases and error paths, following the same structure.
```
The first test establishes the conventions. AI extrapolates. This works because the pattern is right there in the prompt — no guessing needed.
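To illustrate, here's what "the pattern is right there" looks like in practice. `parsePort` and both tests are toy examples invented for this post; the hand-written test sets the structure, and the second follows it exactly, which is what makes the extrapolation mechanical:

```typescript
// The function under test -- a toy example.
function parsePort(input: string): number {
  const n = Number(input);
  if (!Number.isInteger(n) || n < 1 || n > 65535) {
    throw new Error(`invalid port: ${input}`);
  }
  return n;
}

// Hand-written: establishes the convention (plain functions, explicit throws).
function testHappyPath() {
  const result = parsePort("8080");
  if (result !== 8080) throw new Error("happy path failed");
}

// Extrapolated: same shape, new edge case. An assistant given the test
// above can produce five of these without guessing at your conventions.
function testRejectsZero() {
  let threw = false;
  try { parsePort("0"); } catch { threw = true; }
  if (!threw) throw new Error("expected rejection of port 0");
}

testHappyPath();
testRejectsZero();
```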
### 3. Explaining Code I Didn't Write
```
Explain what this function does, what assumptions it makes,
and what would break if I changed the return type.
```
AI as a reading companion is underrated. It's faster than tracing through unfamiliar code manually, and it catches implicit contracts ("this assumes the array is sorted") that aren't obvious.
### 4. Drafting Migration Plans
```
I need to rename the `users.name` column to `users.display_name`.
What's the migration plan? Include: SQL, ORM migration,
code changes needed, and rollback steps.
```
AI won't execute the migration for you, but it'll map out the blast radius. This is planning work that used to take me 30 minutes of grep-and-think.
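One way to keep a plan like that honest is to write it down as data, so every step carries its own rollback. This is a sketch, not a migration framework; the `MigrationStep` shape is invented, and only the SQL for the `users.name` rename comes from the prompt above:

```typescript
// Each step pairs its forward SQL with the statement that undoes it.
interface MigrationStep {
  description: string;
  up: string;
  down: string;
}

const renameNameColumn: MigrationStep[] = [
  {
    description: "rename users.name to users.display_name",
    up: "ALTER TABLE users RENAME COLUMN name TO display_name;",
    down: "ALTER TABLE users RENAME COLUMN display_name TO name;",
  },
];

// Rollback is the steps reversed, applying each down statement.
const rollback = [...renameNameColumn].reverse().map((s) => s.down);
```

The payoff is that "rollback steps" stops being a paragraph in a doc and becomes something you can actually execute in reverse order.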
### 5. Rubber-Duck Debugging
```
I expect this function to return [X] but it returns [Y].
Here's the function and the test. Walk through the execution
step by step and tell me where my assumption breaks.
```
This is genuinely the most time I've saved. AI is a patient, methodical debugger that never gets frustrated and never says "have you tried restarting it?"
## The Rule I Follow Now
Use AI where judgment matters. Use templates where consistency matters.
| Task | Tool |
|---|---|
| CRUD endpoint | Template/generator |
| Form component | Snippet library |
| Config files | Copy from existing |
| Edge case review | AI |
| Test expansion | AI |
| Code explanation | AI |
| Debug reasoning | AI |
| Migration planning | AI |
The boring, repetitive stuff? Automate it deterministically. The thinking, analyzing, reasoning stuff? That's where AI shines.
## The Counterargument
"But Nova, AI-generated boilerplate works fine if you review it carefully."
Sure. But if you're reviewing 200 lines of generated code carefully enough to catch every inconsistency with your existing patterns, you're not saving time — you're just moving the work from writing to reading. And reading someone else's code (which is what AI output is) takes longer than writing your own from a template.
What's the most overrated use of AI coding assistants in your workflow? I'm curious whether others have hit the same wall.