Leena Malhotra

The Debugging Trick That Saves Me Every Sprint

I used to be the developer who fixed bugs by changing random things until the error went away.

Copy some code from Stack Overflow, tweak a few variables, restart the server, pray to the programming gods. When something miraculously worked, I'd commit it immediately before the universe noticed and broke it again.

This approach worked fine for toy projects and weekend hackathons. But once I started working on production code with real deadlines, my "guess and check" strategy became a liability. While other developers seemed to diagnose issues instantly, I was still flailing around in browser dev tools wondering why my perfectly reasonable code was being perfectly unreasonable.

Everything changed when I learned a technique that felt almost embarrassingly simple. So simple that I initially dismissed it as too basic to be useful.

I was wrong. This one practice has saved me countless hours, prevented production disasters, and made me the person my team comes to when they're stuck on impossible bugs.

The Rubber Duck Reality

Every developer has heard of rubber duck debugging—explaining your code to an inanimate object to find the flaw in your logic. It works because articulation forces clarity. When you have to verbalize your assumptions, you often discover they're wrong.

But here's what most developers miss: the rubber duck method works even better when you write it down instead of just talking.

The technique I'm about to share is essentially rubber duck debugging with a twist—you're not just explaining what your code does, you're documenting what you expected it to do, what it actually does, and the specific point where those two things diverge.

I call it "expectation mapping," and it's the closest thing to a debugging superpower I've ever found.

The Three-Column Debug Log

When I encounter a bug that isn't immediately obvious, I open a simple text file and create three columns:

Expected | Actual | Why?

Then I trace through my code line by line, filling in each column as I go.

Let me show you what this looks like with a real example from last month. I was working on a feature that aggregated user activity data, and the totals were consistently off by about 15%. Sometimes higher, sometimes lower, but never exactly right.

Here's how my debug log started:

Line 23: getUserData(userId)
Expected: Returns user object with activities array
Actual: Returns user object with activities array  
Why? ✓ This works fine

Line 27: activities.filter(a => a.date >= startDate)
Expected: Filters activities from last 30 days
Actual: Filters activities from last 30 days
Why? ✓ startDate is correct, filter works

Line 31: activities.reduce((sum, a) => sum + a.duration, 0)
Expected: Sums all duration values
Actual: Getting weird decimals like 145.39999999
Why? ? Duration should be in minutes, but something's off...

That "something's off" note made me dig deeper into how duration was being calculated. Turned out the database was storing duration in seconds, but the frontend was treating it as minutes. The 15% variance was actually a consistent unit conversion error that was being masked by rounding in the display layer.

Without the expectation map, I would have spent hours looking at the aggregation logic. With it, I found the bug in fifteen minutes.

The Assumption Audit

The most powerful part of expectation mapping isn't finding obvious errors—it's surfacing hidden assumptions.

When you force yourself to write down what you expect each line of code to do, you discover how many assumptions you're making without realizing it. Assumptions about data types, API responses, user behavior, browser compatibility, network conditions.

Most bugs aren't caused by typos or syntax errors. They're caused by the gap between what we assume and what actually happens in the real world.

A few weeks ago, I was debugging a form that worked perfectly in development but failed randomly in production. My expectation map looked like this:

Line 15: validateEmail(email)
Expected: Returns true for valid emails
Actual: Returns false for some valid emails
Why? ? Regex works fine with test data...

Line 18: email.trim().toLowerCase()
Expected: Normalizes email for comparison  
Actual: Sometimes throws "Cannot read property 'trim' of undefined"
Why? ! Email field is sometimes null/undefined in prod

That second entry revealed my assumption: I assumed the email field would always contain a string. In development, I was always filling out the form completely. But real users sometimes submitted the form with empty fields, triggering validation errors I'd never seen locally.
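The fix was a guard that treats a missing field as invalid input instead of letting .trim() throw. A minimal sketch, using a hypothetical normalizeEmail helper:

// Reject null/undefined/non-string values early instead of crashing on .trim().
function normalizeEmail(email) {
  if (typeof email !== 'string') return null;
  const normalized = email.trim().toLowerCase();
  return normalized.length > 0 ? normalized : null;
}

console.log(normalizeEmail('  Leena@Example.COM ')); // "leena@example.com"
console.log(normalizeEmail(undefined));              // null, instead of a TypeError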

I use Crompt's document summarizer when I'm debugging complex systems with extensive documentation. Instead of re-reading entire API specs, I can upload the docs and quickly identify the specific behaviors and edge cases I need to verify. This helps me build more accurate expectations when mapping through integration points.

The State Snapshot Method

Here's where expectation mapping gets really powerful: tracking state changes over time.

Instead of just mapping individual lines, I snapshot the application state at key points and compare those snapshots to what I expected. This is especially useful for debugging React components, async operations, or anything involving complex state management.
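By "snapshot" I just mean a labeled, timestamped copy of the state at the points I care about, so the Expected/Actual comparison can happen after the fact instead of from memory. A minimal sketch (the helper is mine, not from any library):

const snapshots = [];

function snapshot(label, state) {
  snapshots.push({
    label,
    at: Date.now(),
    // Deep copy so later mutations don't quietly rewrite history.
    state: JSON.parse(JSON.stringify(state)),
  });
}

// Usage around the suspicious transitions (store and addItem are placeholders):
// snapshot('before ADD_ITEM', store.getState());
// store.dispatch(addItem(newItem));
// snapshot('after ADD_ITEM', store.getState());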

Last sprint, I was debugging a shopping cart component where items would occasionally disappear after adding them. The UI would flash the item briefly, then revert to the previous state. Classic race condition symptoms, but I couldn't pinpoint where.

My state snapshot log looked like this:

Initial state:
Expected: { items: [], loading: false, error: null }
Actual: { items: [], loading: false, error: null }
Why? ✓ Clean start

After ADD_ITEM dispatch:
Expected: { items: [newItem], loading: false, error: null }
Actual: { items: [], loading: true, error: null }  
Why? ? Action dispatched but items not updated yet

After 200ms:
Expected: { items: [newItem], loading: false, error: null }
Actual: { items: [], loading: false, error: "Validation failed" }
Why? ! Server rejected item but UI didn't show error

The expectation map revealed that I wasn't handling server-side validation errors properly. The item would be optimistically added to the cart, then silently removed when the server rejected it. Users saw a flash of success followed by confusion when their item vanished.

Without mapping expectations at each state transition, I would have focused on the wrong part of the code—probably the Redux reducers or the component rendering logic. The real bug was in error handling middleware.
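The eventual fix was to surface the rejection instead of silently rolling back the optimistic update. Roughly this shape, with stand-in action types and a stubbed API call rather than our real ones:

const api = {
  // Stand-in for the real endpoint that validates the item server-side.
  validateAndAddItem: async (item) => {
    if (!item.inStock) throw new Error('Validation failed');
  },
};

async function addItemToCart(dispatch, item) {
  dispatch({ type: 'ADD_ITEM_OPTIMISTIC', item });
  try {
    await api.validateAndAddItem(item);
    dispatch({ type: 'ADD_ITEM_CONFIRMED', item });
  } catch (err) {
    // This branch was effectively missing: the state rolled back, but no error reached the UI.
    dispatch({ type: 'ADD_ITEM_REJECTED', item, error: err.message });
  }
}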

The Documentation Side Effect

Here's an unexpected benefit of expectation mapping: it creates excellent documentation for complex bugs.

When you write down your debugging process—what you expected, what you found, why things went wrong—you're creating a knowledge base for your future self and your teammates. Next time someone encounters a similar issue, they don't have to start from scratch.

I keep a shared debugging log for our team where I document particularly tricky bugs using this format. It's become one of our most valuable resources. Instead of Slack messages like "Hey, anyone seen this error before?", we search the debug log first.

The log also helps during code reviews. When I see patterns in our debugging docs—the same assumptions causing problems repeatedly, the same edge cases being missed—I can suggest architectural changes or better testing strategies.

The Prevention Protocol

The most valuable debugging happens before bugs exist.

After using expectation mapping for a few months, I started applying the same technique during development. When writing new features, I create expectation maps for the critical paths before I even run the code.

This sounds like extra work, but it actually saves time. When you explicitly document your expectations upfront, you catch logical errors before they become runtime errors. You also create better test cases because you've already thought through the edge cases and failure modes.
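Each row of an upfront expectation map translates almost directly into a test. A quick sketch in Jest-style syntax, reusing the hypothetical normalizeEmail helper from earlier in this post:

test('empty or missing email is rejected, not crashed on', () => {
  expect(normalizeEmail(undefined)).toBeNull();
  expect(normalizeEmail('   ')).toBeNull();
});

test('valid email is trimmed and lowercased', () => {
  expect(normalizeEmail('  Leena@Example.COM ')).toBe('leena@example.com');
});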

For complex features, I use Crompt's AI tutor to help me think through potential edge cases I might be missing. I describe the feature I'm building and ask it to suggest scenarios where my assumptions might break down. It's like having a junior developer asking good questions about your design.

The Time Paradox

Expectation mapping feels slow when you're under pressure to fix something quickly. Writing down every assumption, documenting each step, tracking state changes—it seems like overhead when you just want the bug to go away.

But here's the paradox: the more urgent the bug, the more valuable the process becomes.

When you're stressed and behind schedule, your brain wants to jump to conclusions. You see an error message and immediately assume you know what's wrong. You make changes based on hunches instead of evidence. You fix symptoms instead of root causes.

The expectation map forces you to slow down just enough to think clearly. It prevents you from going down rabbit holes or making changes that create new problems. Most importantly, it helps you build confidence in your fix before you deploy it.

Last month, we had a production issue where user uploads were failing intermittently. The initial impulse was to restart the server and hope for the best. Instead, I spent ten minutes mapping expectations around the upload flow.

The map revealed that the failures correlated with file sizes above 2MB, but our code had no explicit size limits. Digging deeper, I found that our proxy server had a default upload limit we'd never configured. Instead of a band-aid restart, we implemented proper size validation and user feedback.
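The user-facing part of that fix was a simple client-side check so people get a clear message instead of a generic proxy error. A sketch, with the limit and wording as placeholders:

// Keep this constant in sync with whatever limit the proxy is configured to enforce.
const MAX_UPLOAD_BYTES = 2 * 1024 * 1024;

function checkUploadSize(file) {
  if (file.size > MAX_UPLOAD_BYTES) {
    return { ok: false, message: 'This file is larger than 2 MB. Please choose a smaller one.' };
  }
  return { ok: true };
}

console.log(checkUploadSize({ size: 3_000_000 })); // rejected with the friendly message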

Without the expectation map, we would have deployed a temporary fix and hit the same issue again the next week.

The Collaboration Multiplier

Expectation mapping becomes even more powerful when you do it with teammates.

When multiple developers are debugging the same issue, everyone has different assumptions about how the system works. Instead of debugging in parallel and comparing notes afterward, we'll often do live expectation mapping in a shared document.

One person traces through the code while others add their expectations and observations. This catches blind spots that individual debugging would miss. It also ensures everyone understands the fix once we find it.

I use Crompt's AI debate bot sometimes to challenge my debugging assumptions. I'll describe my current hypothesis about what's causing a bug, and ask it to argue for alternative explanations. This helps me avoid confirmation bias and consider possibilities I might have dismissed too quickly.

The Meta-Learning Effect

The most valuable thing about expectation mapping isn't solving individual bugs—it's learning to debug better over time.

When you consistently document what you expected versus what actually happened, patterns emerge. You start noticing the types of assumptions you make repeatedly. You identify the blind spots in your mental model of how systems work.

Over the past year, my expectation maps have taught me that I consistently underestimate the complexity of browser differences, overestimate the reliability of network connections, and make too many assumptions about user input validation.

Knowing these patterns helps me write better code from the start. When I'm building a new feature, I automatically think "What are my usual blind spots here?" and design tests to catch those specific issues.

The Simple Truth

Great debugging isn't about knowing more obscure commands or having better tools. It's about thinking more clearly about what you expect to happen versus what actually happens.

Expectation mapping works because it externalizes your mental model and forces you to confront the gaps between your assumptions and reality. It's not magic—it's just a systematic way to think through problems instead of guessing your way through them.

The next time you're staring at a bug that doesn't make sense, try this: open a text file, make three columns, and start mapping what you expect versus what you observe.

Your rubber duck will thank you.

-Leena:)

Top comments (2)

里咯

I understand this is the idea of test-driven development

Rohit Gavali

Cool