DEV Community

Nova


5 AI Coding Mistakes That Waste Hours (and the Prompts That Fix Them)

You paste code into ChatGPT. You get an answer. It looks right. You ship it.

Two hours later, it breaks.

I've made every AI coding mistake in the book. Here are the five that cost me the most time — and the exact prompts I use now to avoid them.


Mistake 1: Asking for a Solution Without Explaining the Constraint

What happens: You say "write a function that sorts users by name." The AI gives you a perfectly valid sort — that loads 50,000 records into memory.

The fix prompt:

I need a function that sorts users by name.

Constraints:
- Dataset: ~50k records, PostgreSQL backend
- Must use database-level sorting (no in-memory)
- Return paginated results (20 per page)
- Must work with the existing User model (id, name, email, created_at)

Show me the query and the controller method.

Why it works: Constraints turn "write me a sort" into "write me this sort." The AI can't over-engineer or under-engineer when boundaries are explicit.
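To make the contrast concrete, here's a rough sketch of the kind of answer that prompt tends to produce: the sort and pagination are pushed into SQL instead of application memory. The table name, column list, and pg-style `$1`/`$2` placeholders are my assumptions for illustration, not from the post.

```javascript
// Hypothetical sketch of a constraint-aware answer: sorting and
// pagination happen in PostgreSQL, never in app memory.
// Table/column names and placeholder style are assumptions.
function buildUserSortQuery(page, perPage = 20) {
  if (!Number.isInteger(page) || page < 1) {
    throw new RangeError("page must be a positive integer");
  }
  const offset = (page - 1) * perPage;
  return {
    text:
      "SELECT id, name, email, created_at FROM users " +
      "ORDER BY name LIMIT $1 OFFSET $2",
    values: [perPage, offset],
  };
}
```

Without the constraints, nothing stops the model from handing back `users.sort(...)` over an in-memory array, which is exactly the 50k-record trap.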


Mistake 2: Accepting the First Output Without Verification

What happens: The code compiles. The tests pass (the ones the AI wrote). You merge. Then edge cases start exploding.

The fix prompt:

Before giving me the final version, do this:

1. List 3 edge cases that could break this function
2. Write a test for each edge case
3. Run the function mentally against each test
4. Fix any failures
5. Then show me the final version with all tests passing

This is the verification loop pattern. It adds 30 seconds to the prompt and saves hours of debugging.
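For a sense of what step 2 of that loop should hand back, here's a toy example: one small function plus one test per edge case. `sortUsersByName` is a hypothetical helper I made up to illustrate the shape of the output, not code from the post.

```javascript
// Hypothetical helper under verification. Edge cases a good loop
// should surface: null/missing names, mixed case, empty input.
function sortUsersByName(users) {
  // Copy first so the caller's array is never mutated.
  return [...users].sort((a, b) =>
    // Coalesce null/undefined names to "" so comparison never throws;
    // sensitivity: "base" makes the sort case-insensitive.
    (a.name ?? "").localeCompare(b.name ?? "", "en", { sensitivity: "base" })
  );
}

// Edge case 1: a null name must not throw (it sorts first, as "").
// Edge case 2: "Alice" and "bob" must sort case-insensitively.
// Edge case 3: an empty array must return an empty array.
```

The point isn't this particular function — it's that the prompt forces the model to name the failure modes before you ever run the code.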


Mistake 3: Dumping an Entire File and Saying "Fix It"

What happens: The AI has 4,096 tokens of context. You just used 3,800 on a file it doesn't understand. The remaining 296 tokens produce garbage.

The fix prompt:

Here's a focused excerpt from auth.js (lines 42-78).
The bug: login() returns 200 even when the password is wrong.

Expected: return 401 for invalid credentials
Actual: returns 200 with an empty token

Only fix the authentication check. Don't refactor anything else.

Why it works: You did the triage. You isolated the problem. The AI can now be a surgeon instead of a detective.
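As a rough reconstruction of the fix that prompt is asking for: fail with 401 before any token is issued, and touch nothing else. `verifyPassword` and `signToken` are stand-ins for whatever auth.js actually uses — they're assumptions, not the real code.

```javascript
// Hypothetical shape of the isolated fix. The bug class described in
// the prompt: falling through to a 200 even when the check fails.
// The fix: return 401 before a token is ever created.
function login(user, password, { verifyPassword, signToken }) {
  if (!user || !verifyPassword(user, password)) {
    return { status: 401, body: { error: "invalid credentials" } };
  }
  return { status: 200, body: { token: signToken(user) } };
}
```

Because the prompt forbids refactoring, the diff stays reviewable: one guard clause, nothing else moved.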


Mistake 4: Not Telling the AI What "Done" Looks Like

What happens: You ask for "a REST API endpoint." You get something. Is it done? You don't know. The AI doesn't know. You go back and forth five times.

The fix prompt:

Build a POST /api/invoices endpoint.

Done means:
- [ ] Validates required fields (amount, customer_id, due_date)
- [ ] Returns 201 with the created invoice
- [ ] Returns 422 with field-level errors for invalid input
- [ ] Includes a test for each scenario above
- [ ] Uses the existing Invoice model (no schema changes)

Checklists kill ambiguity. If you can't write the checklist, you don't understand the task well enough to delegate it — to a human or an AI.
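A checklist like that also translates almost line-for-line into code. Here's a minimal sketch of the validation half, assuming a plain-object payload; the field names come from the prompt, but the error shape (`{ errors: { field: message } }`) is my assumption.

```javascript
// Sketch of the first three checklist items: required-field validation,
// 201 on success, 422 with field-level errors on invalid input.
function validateInvoice(payload) {
  const required = ["amount", "customer_id", "due_date"];
  const errors = {};
  for (const field of required) {
    const value = payload[field];
    if (value === undefined || value === null || value === "") {
      errors[field] = "is required";
    }
  }
  return Object.keys(errors).length === 0
    ? { status: 201, invoice: payload } // creation itself elided
    : { status: 422, errors };
}
```

Each checkbox in the prompt becomes either a branch in the code or an assertion in a test, which is exactly why the back-and-forth stops.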


Mistake 5: Using AI for the Wrong Part of the Job

What happens: You spend 20 minutes crafting the perfect prompt to generate a database migration. It would have taken 3 minutes to write it by hand.

The rule I follow now:

| Task | AI? | Why |
| --- | --- | --- |
| Boilerplate (CRUD, tests, types) | ✅ | Repetitive, low-risk |
| Business logic | ⚠️ | Only with a clear spec |
| Architecture decisions | ❌ | You need to own these |
| Debugging (with context) | ✅ | AI is great at pattern-matching errors |
| Debugging (without context) | ❌ | Garbage in, garbage out |

The best AI coding workflow is knowing when not to use AI.


The Pattern

Every mistake above has the same root cause: under-specified input.

The fix is always the same:

  1. State the constraint
  2. Define "done"
  3. Ask for verification
  4. Keep context small

That's it. No frameworks. No plugins. Just better prompts.


What's the AI coding mistake that costs you the most time? Drop it in the comments — I'll write the fix prompt for it.
