Nova Elvaris
7 Signs Your AI Prompt Is Too Long (and How to Fix Each One)

Longer prompts don't mean better results. In fact, after working with LLMs daily for the past year, I've found the opposite: the best prompts are the shortest ones that still work.

Here are seven signs your prompt has bloated past the point of usefulness — and exactly how to trim each one.

1. You're Repeating the Same Instruction in Different Words

The symptom:

Make sure the output is valid JSON.
The response should be properly formatted JSON.
Please return JSON, not plain text.

The fix: Say it once. Models don't need reinforcement the way humans do. One clear instruction beats three fuzzy ones.

Return valid JSON matching this schema: { "name": string, "score": number }
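A side benefit of stating the schema once, precisely: you can verify the reply programmatically instead of re-prompting. A minimal sketch (the function name and checks are mine, matching the `{ "name": string, "score": number }` schema above):

```typescript
// Check that a model reply is valid JSON matching { "name": string, "score": number }.
// Illustrative helper; the schema comes from the prompt above, everything else is an assumption.
function matchesSchema(raw: string): boolean {
  try {
    const data = JSON.parse(raw);
    return (
      typeof data === "object" &&
      data !== null &&
      typeof data.name === "string" &&
      typeof data.score === "number"
    );
  } catch {
    return false; // reply wasn't valid JSON at all
  }
}
```

If the check fails, you retry or repair, rather than stacking "make sure it's JSON" reminders into the prompt.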

2. You're Explaining What the Model Already Knows

The symptom:

JavaScript is a programming language used for web development.
TypeScript is a superset of JavaScript that adds static types.
Please write a TypeScript function that...

The fix: Skip the preamble. The model knows what TypeScript is. Jump straight to what you need.

Write a TypeScript function that validates email format and returns { valid: boolean, reason?: string }.
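For reference, that one-liner is enough to get something like this back. A hedged sketch of the kind of answer the prompt asks for, using a deliberately simple set of checks (real email validation is much messier than this):

```typescript
// Illustrative answer to the prompt above: a basic format check,
// not full RFC 5322 validation.
function validateEmail(email: string): { valid: boolean; reason?: string } {
  if (!email.includes("@")) return { valid: false, reason: "missing @" };
  const [local, domain] = email.split("@");
  if (!local) return { valid: false, reason: "empty local part" };
  if (!domain || !domain.includes(".")) return { valid: false, reason: "invalid domain" };
  return { valid: true };
}
```

Note the prompt's return shape `{ valid: boolean, reason?: string }` did all the work; no TypeScript history lesson required.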

3. Your Context Section Is Longer Than Your Actual Task

The symptom: 500 words of background, 20 words of instruction.

The fix: Apply the 1:3 rule — your task description should be at least one-third the length of your context. If it's not, you're either over-explaining context or under-specifying the task.

Trim context to only what's needed for this specific task, not everything the model might conceivably need.
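The ratio is easy to eyeball with a quick word count. A rough sketch (whitespace-split word counts are a crude stand-in for tokens, and the function name is mine):

```typescript
// Rough 1:3 check: the task should be at least one-third the length of the context.
// Word counts approximate tokens here; this is a heuristic, not a hard rule.
function passesRatio(context: string, task: string): boolean {
  const words = (s: string) => s.trim().split(/\s+/).filter(Boolean).length;
  return words(task) * 3 >= words(context);
}
```

If it returns false, either the context needs trimming or the task needs more specificity.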

4. You're Including Files "Just in Case"

The symptom: Pasting your entire package.json, full config file, or three related source files when the model only needs one function signature.

The fix: Extract only the relevant snippet. Instead of 200 lines of config:

# Relevant config section
database:
  host: localhost
  port: 5432
  pool_size: 10

The model processes every token you send. Irrelevant tokens don't just waste money — they dilute attention on what matters.

5. You Have More Constraints Than Requirements

The symptom:

Build a REST endpoint.
- Don't use Express
- Don't use callbacks
- Don't use var
- Don't use default exports
- Don't exceed 50 lines
- Don't add comments
- Don't use abbreviations

The fix: Flip negatives to positives. Instead of seven "don'ts":

Build a REST endpoint using Fastify, async/await, const/let, named exports. Keep it under 50 lines, no comments.

Positive constraints are clearer and shorter. The model spends less effort interpreting what you don't want.

6. You're Writing Prose When a Template Would Do

The symptom:

I want you to analyze the code and first tell me what it does, then identify any bugs, then suggest improvements, and finally rate the overall quality on a scale of 1-10 with an explanation.

The fix: Use structured output format:

Analyze this code. Return:
1. Summary (2-3 sentences)
2. Bugs found (list)
3. Improvements (list)
4. Quality: X/10 — reason

Templates are faster to read, faster to parse, and produce more consistent output.
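"Faster to parse" is literal: a templated reply can be split back into its parts mechanically. A rough sketch assuming replies follow the numbered headings above (the regex and function name are mine):

```typescript
// Split a reply shaped like the template above ("1. ...", "2. ...", etc.)
// into its numbered sections. Assumes the model kept the numbering.
function splitSections(reply: string): string[] {
  return reply
    .split(/^\d+\.\s*/m)   // break on "N. " at the start of a line
    .map((s) => s.trim())
    .filter(Boolean);      // drop the empty chunk before "1."
}
```

With free-form prose you'd be writing far more fragile extraction logic than this.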

7. You Keep Adding Instructions After Getting Bad Results

The symptom: Your prompt started at 100 tokens. After five iterations of "also do X" and "make sure to Y", it's 800 tokens and still not working.

The fix: Stop appending. Start over. Write a fresh prompt with everything you've learned from the failed attempts. A clean 150-token prompt almost always outperforms an 800-token Frankenstein prompt.

The Compression Test

Here's a quick exercise: take your longest prompt and try to cut it in half without losing any functional requirement. If you can — and you almost certainly can — the shorter version will likely perform the same or better.

I've done this with dozens of prompts. Average reduction: 40%. Performance change: negligible to improved.

Shorter prompts are:

  • Cheaper to run
  • Faster to return
  • Easier to debug
  • More consistent in output

The goal isn't the shortest possible prompt. It's the shortest prompt that reliably produces what you need.


What's the longest prompt you've ever written? Did cutting it down change the output? Drop a comment — I'm building a collection of before/after examples.
