DEV Community

Nova Elvaris

5 Prompt Mistakes That Make AI Generate Worse Code (With Fixes)

After hundreds of AI-assisted coding sessions, I've noticed the same five mistakes killing output quality. Each one is easy to fix — once you see it.

1. Dumping the Entire File as Context

The mistake: Pasting 500 lines of code and saying "fix the bug."

Why it fails: The model spreads attention across irrelevant code. It might "fix" something unrelated or miss the actual issue buried in line 347.

The fix: Extract only the relevant function + its dependencies. Add a one-line description of what it should do vs. what it does.

```
Here's the `calculateDiscount` function and the `PricingRule` type it depends on.
Expected: returns 0 for expired coupons.
Actual: returns the full discount amount.
Fix only this function.
```
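For illustration, the trimmed context you'd paste might look like the sketch below. The `PricingRule` shape and field names are hypothetical stand-ins (the post doesn't define them), and the expiry check is shown in its fixed form:

```typescript
// Hypothetical minimal context: just the function, the one type it
// depends on, and nothing else from the 500-line file.
type PricingRule = { discount: number; expiresAt: Date };

function calculateDiscount(rule: PricingRule, now: Date = new Date()): number {
  // Fixed behavior: expired coupons yield no discount.
  if (rule.expiresAt.getTime() < now.getTime()) return 0;
  return rule.discount;
}
```

Two types and ten lines give the model everything it needs; the other 490 lines were just noise.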

2. Skipping the "Don't" Constraints

The mistake: Telling the AI what to build but not what to avoid.

Why it fails: Models are eager to please. Without boundaries, they'll add features, refactor adjacent code, or switch to a "better" library.

The fix: Add explicit constraints:

```
- Do NOT add new dependencies
- Do NOT modify files outside src/auth/
- Do NOT change the function signature
- Keep the existing error handling approach
```

3. Asking for "Clean Code" Without Defining It

The mistake: "Make this code cleaner" or "refactor for readability."

Why it fails: "Clean" is subjective. The model might rename everything, extract tiny functions, or add comments you don't want.

The fix: Be specific about what "clean" means to you:

```
Refactor this function:
- Extract the validation logic into a separate function
- Replace the nested if/else with early returns
- Keep variable names as-is
```
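To make those three instructions concrete, here's a sketch of the shape of output you'd be asking for. The `Order` type and function names are invented for the example:

```typescript
type Order = { items: string[]; total: number };

// Instruction 1: validation extracted into its own function.
function validateOrder(order: Order): string | null {
  // Instruction 2: early returns instead of nested if/else.
  if (order.items.length === 0) return "empty order";
  if (order.total < 0) return "negative total";
  return null;
}

function processOrder(order: Order): string {
  const error = validateOrder(order);
  if (error !== null) return `rejected: ${error}`;
  // Instruction 3: existing names (order, error) left untouched.
  return `accepted: ${order.items.length} items`;
}
```

Because each bullet maps to a visible property of the result, you can verify the model actually did what you asked.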

4. One Giant Prompt Instead of Steps

The mistake: "Build a REST API with auth, rate limiting, logging, input validation, and tests."

Why it fails: The model tries to juggle everything at once. Quality drops on each feature because context is split five ways.

The fix: Break it into sequential prompts:

  1. "Create the basic route handler with input validation"
  2. "Add authentication middleware" (include the handler from step 1)
  3. "Add rate limiting" (include updated code)
  4. "Write tests for all three layers"

Each prompt gets the model's full attention on one thing.
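For step 1, the output might look something like this minimal sketch: a bare handler with input validation and nothing else. The request/response shapes are simplified stand-ins, not a real framework's API:

```typescript
// Hypothetical step-1 output: route handler + input validation only.
// Auth and rate limiting get layered on in later prompts.
type Req = { body: unknown };
type Res = { status: number; body: string };

function createUserHandler(req: Req): Res {
  // Validate the input before doing anything with it.
  const body = (req.body ?? {}) as { email?: unknown };
  if (typeof body.email !== "string" || !body.email.includes("@")) {
    return { status: 400, body: "invalid email" };
  }
  return { status: 201, body: `created ${body.email}` };
}
```

Steps 2 and 3 would then wrap this handler rather than rewrite it, which is exactly why each prompt should include the code produced so far.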

5. Not Telling It What "Done" Looks Like

The mistake: Open-ended requests with no success criteria.

Why it fails: Without a finish line, the model doesn't know when to stop. It might over-engineer, quit before covering every case, or gold-plate parts that don't matter.

The fix: Define exit criteria:

```
Done means:
- The function handles the 3 cases in the spec above
- All existing tests pass
- There's at least one new test per case
- No TypeScript errors
```
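Exit criteria like these are easy to turn into runnable checks. A hypothetical sketch, assuming a spec with three cases for a small age parser (the function and the cases are made up for illustration):

```typescript
// Hypothetical target function for the spec below.
function parseAge(input: string): number | null {
  const n = Number(input);
  // Reject non-integers and out-of-range values.
  if (!Number.isInteger(n) || n < 0 || n > 150) return null;
  return n;
}

// "At least one new test per case" — one check per spec case:
// Case 1: a valid age parses.
if (parseAge("42") !== 42) throw new Error("case 1 failed");
// Case 2: non-numeric input is rejected.
if (parseAge("abc") !== null) throw new Error("case 2 failed");
// Case 3: out-of-range input is rejected.
if (parseAge("200") !== null) throw new Error("case 3 failed");
```

When "done" is a list of checks the model can see, it stops guessing at scope.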

The Pattern

All five mistakes share a root cause: under-constrained prompts. AI models are like junior developers who follow instructions literally. The more precise your instructions, the better the output.

Try fixing just one of these in your next session. I'd bet you'll see a noticeable improvement.

Which of these have you run into? I'm curious if there are others I'm missing.
