DEV Community

riyaan amin

AI Didn't Write My Code. It Argued With Me. Here's the Difference.

I'm building an F1 prediction app. I asked Claude to implement the ML service — specifically, a prediction function that takes a driver, a team, and a race, and returns a predicted finishing position.

Claude wrote it. It looked right. Clean code, proper error handling, sensible fallbacks.

Then it added a comment I hadn't asked for:

```python
# Note: blending model output with heuristic baseline
# because with only 1 race in training data, pure model
# predictions will overfit to Australia-specific conditions
```

I hadn't thought about this. I was going to use the model output directly. Claude was pointing out, unprompted, that my approach was wrong.

We went back and forth for probably 20 minutes. I argued that the model should just learn from whatever data it has. It explained why that would make predictions confidently wrong rather than honestly uncertain. I pushed back. It held its position and gave me a better explanation.

Eventually I changed the implementation to match what it was suggesting.
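A minimal sketch of what that blended approach can look like. The function name, the trust threshold, and the numbers are my illustrative assumptions, not the app's actual code; the idea is just that the model's weight grows with the amount of training data.

```python
def blend_prediction(model_pred: float, heuristic_pred: float,
                     n_races: int, full_trust_at: int = 10) -> float:
    """Blend model output with a heuristic baseline.

    With few races in the training set, the heuristic dominates;
    as races accumulate, the blend shifts toward the model.
    """
    model_weight = min(n_races / full_trust_at, 1.0)
    return model_weight * model_pred + (1 - model_weight) * heuristic_pred

# With only 1 race of data, the prediction stays close to the heuristic:
print(blend_prediction(model_pred=3.0, heuristic_pred=8.0, n_races=1))   # 7.5
# With 10+ races, the model output is used directly:
print(blend_prediction(model_pred=3.0, heuristic_pred=8.0, n_races=10))  # 3.0
```

The linear ramp is the simplest possible choice; the point is that the blend makes early predictions honestly uncertain instead of confidently wrong.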

That's not code generation. That's a technical disagreement where the AI was right and I was wrong. And I only got the benefit of it because I was actually engaging with what it was saying rather than just copy-pasting output.


The thing nobody says about AI coding tools

Everyone focuses on output speed. How fast can it write a function? How many lines per minute?

The more useful question is: how often does it catch something you missed?

In my experience, this happens constantly — but only if you read the code. The number of times I've had Claude write something, skimmed it, pasted it in, and had it fail in a way that was totally predictable from the code... that's on me, not the AI.

The coding AI that makes you faster isn't the one that writes the most code. It's the one that writes code you actually understand, so you can argue with it when it's wrong and learn from it when it's right.


Two patterns I've noticed

When I describe a problem clearly, Claude usually finds a better solution than what I had in mind. Not always, but often enough that I've started treating my initial approach as a first-draft hypothesis rather than a spec.

When I describe a solution (e.g., "implement X using approach Y"), it just does that. No pushback, no alternatives, even if Y is a bad idea. The framing matters.

The implication: if you want an AI to actually help you write better code, ask about problems, not implementations. "Here's what I need to achieve" gets you better results than "here's what I want you to build."


I don't know if this is a universal experience or just how I use these tools. Would be curious if others find the same thing.

What's the most useful thing an AI coding tool has pushed back on for you?
