Pablo Rios

Cursor’s debug mode enforces what good debugging looks like

Debugging with AI usually means copying logs into chat. I paste, it guesses, I paste more. The AI never sees the full picture — just the fragments I decide to share.

Cursor's debug mode works differently. It sets up instrumentation, captures logs itself, and iterates until the fix is proven. The interesting part isn't the AI — it's the process it enforces.

The bug

Pagination on an external API integration. Code looked right. Tokens being sent, parsed, passed back. But every request returned the same first page.
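
Concretely, the fetch loop had the usual shape. A simplified sketch, with illustrative type and field names rather than the actual integration code:

// Hypothetical pagination loop: read the continuation token from each
// response and pass it back on the next request.
func fetchAll(ctx context.Context, client SearchClient, query string) ([]Item, error) {
	var results []Item
	token := ""
	for {
		resp, err := client.Search(ctx, SearchRequest{Query: query, PageToken: token})
		if err != nil {
			return nil, err
		}
		results = append(results, resp.Items...)
		if resp.NextPageToken == "" {
			break // no more pages
		}
		token = resp.NextPageToken // sent back on the next iteration
	}
	return results, nil
}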

Hypotheses before fixes

I described the bug. Instead of jumping to a fix, Cursor generated hypotheses:

  • The JSON field name might be incorrect
  • The query might be missing required parameters
  • The tokens might not be getting passed through correctly

Then it added instrumentation to test each one and asked me to reproduce the bug.

Instrumentation

Cursor added debug logs like this to capture what it needed to test each hypothesis:

// Append one JSON line per probe so the log can be read entry by entry.
f, _ := os.OpenFile(debugLogPath, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
f.WriteString(fmt.Sprintf(`{"location":"search.go:142","message":"request_body","data":%s}`+"\n", requestJSON))
f.Close()
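
After a reproduction run, the debug log ends up with one JSON entry per probe, along these lines (values illustrative):

{"location":"search.go:142","message":"request_body","data":{"query":"...","pageToken":"abc123"}}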

Then it asked me to reproduce the bug and click proceed when ready.

Narrowing down

Every hypothesis in the first round was rejected. So it generated new ones, added more instrumentation, and ran again. Still rejected.

This went on for a few rounds. Each rejection narrowed the search space. The logs kept exposing more of what was actually happening at runtime.

The actual issue

Based on what the logs revealed, Cursor suggested checking the documentation for how the API expects pagination state to be passed for this specific operation.

Turns out the API had two different pagination mechanisms — one for regular queries, another for aggregations. We were using the right tokens but passing them through the wrong channel. The aggregation subsystem had its own contract for how continuation state gets passed back in.

Same data, different subsystem, different expectations. The logs showed the tokens being sent. The docs explained why they were being ignored.

Once the mismatch was clear, the fix was straightforward.
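
In sketch form, reusing the illustrative request shape from earlier (field names hypothetical, not the real API), the mismatch looked roughly like this:

// Before: the continuation token rode on the top-level request, which only
// the regular query path reads. The aggregation subsystem ignored it.
req := SearchRequest{
	Aggregations: aggs,
	PageToken:    token,
}

// After: the aggregation carries its own continuation state, so the token
// travels inside the aggregation itself.
req = SearchRequest{
	Aggregations: aggs.WithContinuation(token), // WithContinuation is illustrative
}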

Wait for proof

Cursor didn't remove instrumentation until the fix was verified:

Before: page 1, page 1, page 1
After: page 1, page 2, page 3

No "it should work now." Actual proof.

What makes it different

The value isn't that AI is debugging for me — it's that the tool enforces discipline I already know I should follow:

  • Hypotheses before fixes
  • Instrumentation to capture evidence
  • Iteration until logs prove the fix
  • Documentation checks before assumptions

This is what good debugging looks like. Debug mode just makes it the default.
