
Evan Morris

AI Is Too Polite

AI speeds up development — no question.

The problem?

It’s trained to be helpful, and helpful often means agreeable.

If you confidently show an LLM buggy code, it might validate your logic instead of questioning it.

In a code review, a "yes-person" isn't a teammate; they're a liability.

The "Context" Trap

The reason AI often gives broad, surface-level advice is that we give it broad, surface-level prompts. If you ask it to "be a security engineer," it might give you a generic list of top 10 tips.

To get the most out of an LLM, you have to narrow its focus. AI has a limited "attention span" (context window); the broader the scope, the thinner the analysis.

The Fix: Trade Generalizations for Specifics

The fix isn't just better models—it's prompts that force the AI to look for specific failure points.

❌ The "Generic" Prompt
"Can you check this code for bugs?"

⚠️ The "Persona" Prompt (Better, but still broad)
"Act as a strict senior engineer. Be brutally honest and find errors."

✅ The "Specialized" Prompt (Best)
"Review this code specifically for concurrency issues. Identify potential deadlocks or threading bottlenecks that would emerge if traffic increased by 10x."

Why this works

By defining the "failure state" (high traffic, deadlocks), you're directing the AI's limited processing power toward a specific logic path. You aren't just asking if the code works; you're asking how it breaks.
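To make "how it breaks" concrete, here's the kind of bug that prompt is fishing for: a classic lock-ordering deadlock that rarely fires in testing but becomes near-certain at 10x traffic. A minimal Python sketch (the worker names are invented for illustration):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def worker_1():
    with lock_a:      # thread 1 grabs A first...
        with lock_b:  # ...then waits on B
            pass

def worker_2():
    with lock_b:      # thread 2 grabs B first...
        with lock_a:  # ...then waits on A -> deadlock under load
            pass

# Run both concurrently under heavy traffic and eventually each thread
# holds the lock the other needs:
# threading.Thread(target=worker_1).start()
# threading.Thread(target=worker_2).start()
```

A generic "check for bugs" pass can easily skim past this, because each function is locally correct. Naming the failure state points the review at the interaction between them.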

The takeaway

Use AI to challenge your work, not to rubber-stamp it. Don’t just tell it who to be—tell it exactly where to dig.
