Nova

Chain-of-Thought Prompting: Make AI Think Step by Step (With Examples)

I’ve lost count of how many times an LLM gave me a confident wrong answer… and then magically fixed it the moment I asked it to slow down.

That’s the core idea behind chain-of-thought prompting (often shortened to CoT prompting): you nudge the model into a more deliberate, step-by-step mode so it doesn’t jump to the first plausible output.

A quick caveat: some models won’t reveal a full internal “thought process” verbatim (and that’s fine). In practice, you don’t need the raw brain-dump. What you want is:

  • a clear plan
  • intermediate steps that matter
  • checks/verification
  • a clean final answer

Below are patterns I actually use, plus before/after prompts for math, debugging, and analysis.

What “chain of thought prompting” really means

In plain English, you’re asking for structured reasoning instead of a single-shot answer.

A good CoT prompt usually includes:

  1. Role + goal (what the model is doing)
  2. Constraints (format, tools, assumptions)
  3. Process (steps, checks, alternatives)
  4. Output format (so you can copy/paste the result)

The magic isn’t “please think step by step” — it’s giving the model a track to run on.

Pattern 1: “Plan → Execute → Verify” (works everywhere)

This is my default. It reduces hallucinations and catches dumb mistakes.

Template

You are {role}. Solve {task}.

Process:
1) Plan: write a short plan (3-6 bullets).
2) Execute: follow the plan.
3) Verify: list 2-3 quick checks (edge cases, math check, sanity check).

Output:
- Final answer:
- Verification:

You’ll see this pattern show up in the examples below.
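If you reuse this template a lot, it can be worth wrapping in a tiny helper so the structure never gets dropped. A minimal sketch (the `buildCoTPrompt` name and parameters are my own, not a standard API):

```javascript
// Fills the Plan → Execute → Verify template with a role and a task.
// buildCoTPrompt is an illustrative helper, not part of any library.
function buildCoTPrompt(role, task) {
  return [
    `You are ${role}. Solve ${task}.`,
    "",
    "Process:",
    "1) Plan: write a short plan (3-6 bullets).",
    "2) Execute: follow the plan.",
    "3) Verify: list 2-3 quick checks (edge cases, math check, sanity check).",
    "",
    "Output:",
    "- Final answer:",
    "- Verification:",
  ].join("\n");
}

console.log(buildCoTPrompt("a senior JS dev", "this bug report"));
```

The point of the helper is consistency: every prompt you send carries the same plan/execute/verify scaffold, so you only vary the role and task.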

Example 1 (Math): stop guessing, start checking

Before (typical prompt)

A train travels 180 km in 2 hours and then 120 km in 1.5 hours.
What is the average speed for the whole trip?

You’ll often get the right answer, but when it’s wrong it’s usually because the model averages the speeds instead of using total distance / total time.

After (CoT prompting)

Solve this and show the calculation steps.

Problem:
A train travels 180 km in 2 hours and then 120 km in 1.5 hours.

Process:
1) Write the formula for average speed over multiple legs.
2) Compute total distance and total time.
3) Compute average speed.
4) Verify with a quick sanity check.

Output:
- Average speed (km/h):
- Steps:
- Sanity check:

Why it works: you force the model onto the correct structure (totals), and you explicitly require a sanity check.
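The sanity check in step 4 is easy to reproduce yourself. A quick sketch of the totals-based calculation next to the naive one the model sometimes falls into:

```javascript
// Average speed over multiple legs = total distance / total time,
// NOT the average of the per-leg speeds.
const legs = [
  { km: 180, hours: 2 },
  { km: 120, hours: 1.5 },
];

const totalKm = legs.reduce((sum, leg) => sum + leg.km, 0);       // 300
const totalHours = legs.reduce((sum, leg) => sum + leg.hours, 0); // 3.5
const avgSpeed = totalKm / totalHours; // ≈ 85.7 km/h

// The naive (wrong) approach: averaging the leg speeds.
const naive = (180 / 2 + 120 / 1.5) / 2; // (90 + 80) / 2 = 85 km/h

console.log(avgSpeed.toFixed(1), naive); // 85.7 85
```

The two approaches disagree (85.7 vs 85), which is exactly the mistake the explicit process guards against.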

Pattern 2: “Show your work, but keep it tight”

Some prompts trigger an essay. You can ask for step-by-step reasoning without getting a wall of text.

Template

Explain your reasoning concisely.
Show only the steps that change the result (no filler).
If you make an assumption, state it.

This is especially useful in code debugging.

Example 2 (Code debugging): from vague to actionable

Before

My code doesn’t work. What’s wrong?

function sum(a, b) {
  return a + b
}

console.log(sum("2", 3))

That prompt invites generic advice.

After (debugging CoT prompt)

You are a senior JS dev helping me debug.

Task:
- Explain why the output is what it is.
- Provide 2 fixes (one strict, one flexible).
- Add a tiny test snippet to prove it.

Process:
1) Identify the root cause in one sentence.
2) Show the minimal code change.
3) Verify with tests.

Code:
function sum(a, b) {
  return a + b
}

console.log(sum("2", 3))

Output format:
- Root cause:
- Fix A (strict types):
- Fix B (coerce inputs):
- Tests:

Why it works: you’re not just asking “what’s wrong”; you’re asking for root cause → fix → proof.
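For reference, here’s roughly what a good answer to that prompt looks like. The root cause: with a string operand, `+` performs string concatenation, so `sum("2", 3)` returns `"23"`, not `5`. A sketch of the two fixes the prompt asks for:

```javascript
// Fix A (strict): reject non-number inputs outright.
function sumStrict(a, b) {
  if (typeof a !== "number" || typeof b !== "number") {
    throw new TypeError("sum expects numbers");
  }
  return a + b;
}

// Fix B (flexible): coerce inputs to numbers first.
function sumCoerce(a, b) {
  return Number(a) + Number(b);
}

// Tiny tests to prove it.
console.assert(sumStrict(2, 3) === 5);
console.assert(sumCoerce("2", 3) === 5);
```

Which fix is right depends on context: strict types surface caller bugs early, while coercion is friendlier at boundaries like form inputs (with the usual caveat that `Number("abc")` is `NaN`).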

Pattern 3: “Ask it to generate counterexamples” (anti-overconfidence)

If you only ask for a solution, models optimize for the most plausible-sounding answer. If you also ask for counterexamples, you get a more robust one.

Template

After you propose a solution, try to break it:
- give 2 edge cases where it fails
- then patch the solution

Example 3 (Analysis task): writing a decision memo instead of vibes

Let’s say you’re doing a small architecture decision.

Before

Should I use Redis or Postgres for caching?

You’ll get a bloggy “it depends.”

After (analysis CoT prompt)

You are my pragmatic tech lead.

Context:
- Product: SaaS API
- Workload: 2k req/s peak
- Cached objects: ~5 KB each
- Cache TTL: 5 minutes
- Team: 3 devs, minimal ops time

Task:
Recommend Redis or Postgres-based caching.

Process:
1) List decision criteria (latency, complexity, cost, failure modes).
2) Compare options in a table.
3) Give a recommendation + when it changes.
4) Provide 2 counterexamples where your recommendation is wrong.

Output:
- Recommendation:
- Comparison:
- When I’d choose the other option:
- Counterexamples + mitigations:

Why it works: you get an actual mini decision record, plus conditions that flip the answer.

A practical “step-by-step AI” checklist

When CoT prompting isn’t working, it’s usually because the prompt is missing one of these:

  • Inputs: the model is guessing missing context
  • Objective: you didn’t define what “good” looks like
  • Constraints: no word limit, no output format, no assumptions
  • Verification: no checks, tests, or edge cases

If you want a single-line upgrade that helps a lot, add this:

Before finalizing, run a quick verification: check assumptions, edge cases, and do a sanity check.

Common mistakes (and fixes)

1) “Think step by step” with no structure

  • Fix: ask for Plan → Execute → Verify or a numbered procedure.

2) No output format

  • Fix: specify headings or bullet slots to fill.

3) No permission to ask questions

  • Fix: add “If anything is ambiguous, ask up to 3 clarifying questions first.”

4) No grounding for math/code

  • Fix: require unit checks, small tests, or a worked example.

Copy/paste prompt: CoT without the fluff

Here’s a final reusable prompt that works for most tasks:

You are a helpful expert.

Goal: {what I want}
Context: {inputs, constraints, audience}

Approach:
1) Write a short plan (max 5 bullets).
2) Solve step by step, keeping it concise.
3) Verify: list 2-3 checks or edge cases.

Output:
- Answer:
- Key steps:
- Verification:

If you use this consistently, you’ll notice fewer “confidently wrong” outputs — and when the model is wrong, you’ll see where it went off the rails.


Want more patterns like this? I put together a free Prompt Engineering Cheatsheet sample — grab it here: https://getnovapress.gumroad.com/l/prompt-sample
