Evan Morris


AI Sycophancy: Is AI Too Nice?

AI tools are incredibly helpful — and sometimes that’s the problem.

Large language models tend to agree with you. They validate your approach, confirm your assumptions, and tell you your code “looks good.” That confidence boost can feel earned, even when it isn’t.

As engineers, we should be cautious of that.

I use tools like Cursor, Gemini, and Copilot every day. They’ve absolutely increased my productivity. But I’ve noticed a consistent pattern: getting high-quality output often takes multiple attempts. The first response is usually fine — but it’s rarely critical.

That’s not because the model is bad. It’s because it’s doing exactly what it was trained to do: be helpful.

And “helpful” often means agreeable.

Why This Matters

If you ask an AI model to review code in a vague way, you’ll usually get a vague review. Polite suggestions. Nothing that seriously challenges your implementation.

For example: Generic prompt

“Can you review this code for bugs?”

You’ll get something that sounds reasonable, but likely misses deeper issues — security assumptions, error handling gaps, or production risks.

Now compare that to this: Improved prompt

“Act as a strict senior software engineer. Review this code as if it will run in production and handle sensitive data. Focus on security issues, poor error handling, and unsafe assumptions. Call out anything that could cause failures and suggest concrete fixes.”

The difference in output quality is usually immediate.

What Changed?

  • You set a clear role (strict senior engineer)
  • You defined scope (security, error handling, production risk)
  • You explicitly asked for pushback, not validation
  • You required actionable feedback

This matters because AI models are optimized to agree unless you give them permission — and direction — to challenge you.
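The four ingredients above can be captured in a small helper. This is a hypothetical sketch — the function name and structure are illustrative, not part of any particular tool — but the prompt text it builds mirrors the improved prompt shown earlier:

```python
# Illustrative sketch: turning a vague review request into a "strict reviewer"
# prompt with an explicit role, scope, request for pushback, and call for fixes.

def build_strict_review_prompt(code: str) -> str:
    """Wrap code in a review prompt that sets a role, defines scope,
    and explicitly asks for pushback instead of validation."""
    role = "Act as a strict senior software engineer."
    scope = (
        "Review this code as if it will run in production and handle "
        "sensitive data. Focus on security issues, poor error handling, "
        "and unsafe assumptions."
    )
    pushback = (
        "Call out anything that could cause failures and suggest concrete "
        "fixes. Do not simply validate the approach."
    )
    return f"{role}\n{scope}\n{pushback}\n\nCode to review:\n```\n{code}\n```"


# Example: a snippet with an obvious gap (no error handling) that a
# vague "review this for bugs" prompt would likely let slide.
snippet = "def load(path): return open(path).read()"
print(build_strict_review_prompt(snippet))
```

Pass the result as the message to whatever model you use. The point isn’t the helper itself — it’s that the role, scope, and permission to push back are stated every time, not left to the model’s defaults.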

The Real Takeaway

The problem isn’t that AI is “too dumb” or that we need better models. The problem is that vague prompts turn AI into a yes-man.

If you want value, don’t ask AI to review your work.

Ask it to try to break it.

Just as a Quality Assurance engineer’s job on a development team is to try to break the software before approving an implementation.

AI works best when you stop asking it to be nice and start asking it to be honest.
