You’re using AI to write code. It feels fast. But is it actually saving you time?
After six months of daily AI-assisted coding, I noticed five habits that felt productive but were quietly eating hours. Here’s what they are and how I fixed each one.
1. The “Just Generate It” Trap
The habit: Asking the AI to generate an entire feature from a vague description, then spending 45 minutes fixing the output.
The fix: Write a 3-sentence spec first. What does the function take? What does it return? What edge cases matter?
// Bad prompt:
"Write a user authentication system"
// Better prompt:
"Write a login function that takes email + password,
returns { success: boolean, token?: string, error?: string },
and handles: invalid email format, wrong password (max 3 attempts),
and expired accounts."
Time saved per task: ~20 minutes.
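The better prompt above already pins down a type signature. Here's a minimal sketch of the function it describes, so you can see how directly a 3-sentence spec maps to code. Everything here is illustrative: the email regex, the in-memory attempt counter, and the stand-in user store are assumptions, not a real auth implementation.

```typescript
type LoginResult = { success: boolean; token?: string; error?: string };

const MAX_ATTEMPTS = 3;
const failedAttempts = new Map<string, number>();

// Stand-in user store so the sketch runs on its own (hypothetical data).
const users: Record<string, { password: string; expired: boolean }> = {
  "alice@example.com": { password: "hunter2", expired: false },
  "bob@example.com": { password: "swordfish", expired: true },
};

function login(email: string, password: string): LoginResult {
  // Edge case 1: invalid email format (simplistic regex, fine for a sketch)
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
    return { success: false, error: "invalid email format" };
  }
  // Edge case 2: wrong password, locked after MAX_ATTEMPTS failures
  const used = failedAttempts.get(email) ?? 0;
  if (used >= MAX_ATTEMPTS) {
    return { success: false, error: "account locked" };
  }
  if (users[email]?.password !== password) {
    failedAttempts.set(email, used + 1);
    return { success: false, error: "wrong password" };
  }
  // Edge case 3: expired accounts
  if (users[email].expired) {
    return { success: false, error: "account expired" };
  }
  failedAttempts.delete(email);
  return { success: true, token: "token-" + email }; // real token issuance elided
}
```

Notice that every branch exists because the prompt named it. A vague prompt leaves the AI to guess which of these cases matter.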
2. The Context Dump
The habit: Pasting your entire file (or multiple files) into the prompt "for context."
The fix: Give the AI only the interface it needs. If you’re fixing a function, provide that function plus its type signatures. Not the whole module.
I started using a simple rule: if the context is longer than the expected output, you’re overfeeding.
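In the same style as the prompt example above, here's what right-sizing looks like in practice (the function name and caller are made up for illustration):

```
// Overfed: paste the whole 300-line module "for context"

// Right-sized: the function plus the signatures it touches
"Fix formatDate so it returns "" instead of throwing on null.
Signature: formatDate(date: Date | null): string
Called from renderRow(row: { updatedAt: Date | null })."
```

Three lines of context, a few lines of expected output. That ratio is the point.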
3. The Infinite Iteration Loop
The habit: Going back and forth with the AI 8+ times, tweaking the same output.
The fix: If the third attempt isn’t close, the prompt is wrong — not the model. Stop iterating and rewrite your request from scratch.
I now enforce a 3-turn rule: if I don’t have something usable after 3 exchanges, I step back and rethink the approach.
4. The Review Skip
The habit: AI output looks right at a glance, so you commit without reading it line by line.
The fix: Read every line like you’re reviewing a junior developer’s PR. AI is confident, not correct. I’ve caught subtle bugs in "perfect-looking" code that would have shipped to production.
My checklist:
- Are there hardcoded values that should be config?
- Does error handling actually handle errors (or just log and continue)?
- Are there imports for things that aren’t used?
- Does the logic match the spec, not just the happy path?
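Here's the kind of "perfect-looking" snippet the checklist is designed to catch. Everything in it is hypothetical, but it fails three checklist items at once:

```typescript
import * as fs from "fs"; // checklist item 3: imported, never used

function parseConfig(raw: string): { port: number } {
  const FALLBACK_PORT = 8080; // item 1: hardcoded value that should be config
  try {
    return JSON.parse(raw);
  } catch (e) {
    console.log(e); // item 2: logs and continues; the caller never learns it failed
    return { port: FALLBACK_PORT };
  }
}

// Item 4: JSON.parse returns `any`, so valid JSON with the wrong shape
// sails through — parseConfig('{"prot": 3000}') claims to return
// { port: number } but port is actually undefined at runtime.
```

None of this trips the compiler or throws at startup. It only surfaces when you read it like a junior developer's PR.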
5. The “AI Knows Best” Defer
The habit: Accepting architectural suggestions from the AI because "it’s seen more code than me."
The fix: AI optimizes for local correctness, not system coherence. It doesn’t know your deployment constraints, your team’s conventions, or why you chose that database.
Use AI for implementation. Keep architecture decisions human.
The Meta-Lesson
All five habits share the same root cause: treating AI like a senior developer instead of a fast junior. Juniors are great at writing code quickly. They’re terrible at knowing what to write.
Your job didn’t change. You’re still the architect, the reviewer, the one who owns the outcome. AI just made the typing faster.
Which of these habits are you guilty of? I’d bet at least two of them. Drop a comment — I’m curious which ones hurt the most.