Most of us learned to prompt AI by guiding its thinking.
"Think step by step."
"Here's an example of how to solve this."
"First check A, then compare B, finally conclude with C."
These techniques made sense because we were working with models that needed a path. Without structure, they'd rush to a conclusion.
But reasoning models shift this premise.
## Thinking Comes Before the Answer
General conversational models excel at producing natural answers quickly. For tasks where the direction is clear — brief summaries, simple explanations — this is sufficient.
Reasoning models work differently. Rather than pushing problems straight toward conclusions, they're designed to compare conditions, trace possible paths, and hold problems longer before forming answers.
Models like Claude's extended thinking or OpenAI's o-series represent this direction — built to spend more computation on internal reasoning.
A reasoning model isn't one that writes longer answers. It's one built to grapple with harder problems for longer.
## When Your Old Methods Get in the Way
With general models, "think step by step" can be helpful. It forces intermediate steps rather than jumping to conclusions.
But with reasoning models, the same approach doesn't always work.
When you impose an arbitrary thinking sequence on a model that is already designed to break problems down, you narrow its space for finding a better path.
The same goes for examples. Good examples set the standard for an answer. But overly detailed examples can lock the model into a specific solution method — even when a superior approach exists.
This isn't about Chain of Thought being wrong. It's about using the same habits when your tool has fundamentally changed.
## Specify the Goal, Then Step Back
With reasoning models, sometimes saying less is better.
Rather than mapping out the process in detail, give:
- A clear goal
- The criteria for a good answer
- The output format you need
Then leave the middle steps to the model.
The model doesn't need you to design its thinking process. It needs to know what counts as a good answer.
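As a minimal sketch, the goal/criteria/format structure can be assembled programmatically before being sent to any model. The helper name, field names, and example task below are illustrative assumptions, not part of any particular SDK:

```python
# Illustrative sketch: assemble a prompt that states the goal, the
# success criteria, and the required output format, while leaving
# the intermediate reasoning steps entirely to the model.

def build_prompt(goal: str, criteria: list[str], output_format: str) -> str:
    """Build a goal-first prompt with no prescribed thinking steps."""
    criteria_lines = "\n".join(f"- {c}" for c in criteria)
    return (
        f"Goal: {goal}\n\n"
        f"A good answer:\n{criteria_lines}\n\n"
        f"Output format: {output_format}"
    )

# Hypothetical task, chosen only to show the structure.
prompt = build_prompt(
    goal="Recommend a database for a write-heavy event log",
    criteria=[
        "Justifies the choice against at least one alternative",
        "Notes operational trade-offs",
    ],
    output_format="A short recommendation, then a bullet list of trade-offs",
)
print(prompt)
```

Note what the prompt omits: there is no "first check A, then compare B" scaffolding. The criteria define what a good answer looks like; the reasoning path stays open.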
This is an excerpt. The full piece — including a side-by-side prompt comparison and when reasoning models are the wrong tool entirely — is at Dechive.
Dechive is a quiet library for the AI age — a place to read slowly, think deeply, and ask why.