DEV Community

Ben Halpern

What is your debugging approach?

Do you have a pattern you follow when tasked to track down a bug?

Top comments (51)

damcosset profile image
Damien Cosset

1- Panic
2- Question my ability to do shit as a developer
3- Calm down, start to look at the logs
4- Start spamming console.log()
5- Rage because there are too many console.log to keep track
6- If bug is complicated, get back to 1)
7- Making progress, adding a fix
8- Commit/push
9- Realize some other part is broken because of the fix
10- Back to 1)
11- Around 6PM, throw my hands in the air, yell "Fuck this shit", and leave.
12- Around 9AM the next morning, realize it was a typo.

rioma profile image

Start spamming console.log()


damcosset profile image
Damien Cosset

I feel you! I'm more of a:


kind of dev, but I see where you're coming from :D

codingmindfully profile image
Daragh Byrne

12, every bloody time!!

zoso1166 profile image

Dude, I read this and laughed for like 10 minutes!! Thanks for making my day 😁

momobarro profile image

your answer made my day :'‑) :'‑) :'‑)

kleene1 profile image
kleene1

haha πŸ˜„

mohanarpit profile image
Arpit Mohan
  1. Grep through the logs for any obvious issues or errors. With decent logging, 50% RCA happens here.
  2. Try to replicate the scenario in local environments and see the bug in action.
  3. Keep adding printf statements after each line of execution. Occasionally use the debugger as well on local to triage the issue.
  4. Forget to remove printf statements when committing fix.
  5. Create hotfix commit to remove debug logs πŸ˜„
  6. Actually deploy fix to production.
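Step 1 above can be sketched roughly like this; a hedged sketch, where the log format and the error keywords are assumptions, not Arpit's actual setup:

```python
import re

# Hypothetical log lines; real logs and error keywords will differ.
LOG_LINES = [
    "2023-01-01 10:00:01 INFO  request handled in 12ms",
    "2023-01-01 10:00:02 ERROR payment failed: card declined",
    "2023-01-01 10:00:03 WARN  retrying upstream call",
    "2023-01-01 10:00:04 ERROR payment failed: timeout",
]

# Equivalent of `grep -E 'ERROR|Exception' app.log`: keep only the
# lines matching the obvious error patterns.
pattern = re.compile(r"ERROR|Exception|Traceback")
errors = [line for line in LOG_LINES if pattern.search(line)]

for line in errors:
    print(line)
```

With decent logging, scanning the filtered lines alone often points at the failing component before any debugger is opened.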
byrro profile image
Renato Byrro

πŸ‘ claps for honesty! πŸ˜„

sarathsantoshdamaraju profile image
Krishna Damaraju
  1. First thought - "Oh shit" 🀭
  2. Check the git commit and understand the code changes went with that.
  3. Put on the headphones 🎧 with the debugging playlist
  4. Use debugger and console.log to debug
  5. Bang my head around it πŸ˜‚
  6. Realise the issue was small πŸ€¦β€β™‚οΈ
  7. Look at my self in the mirror 😏
  8. Fixing it πŸ’ͺ
  9. Writing test cases
  10. Make a hot fix with a typo
  11. Deploy and forget πŸ˜‚
srinivas33 profile image
Srinivasa Rao

Here is my approach

  1. Reproduce the bug
  2. Locate the bug ( I use bunch of print statements, pdb here)
  3. Fix the bug
  4. Test it carefully
  5. Deploy on to staging and test again
  6. Deploy on to production
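Step 2 is where most of the time goes. A minimal sketch of locating a bug with print statements; the buggy `average` function is a made-up example, and the `pdb` line is commented out so the script stays non-interactive:

```python
def average(values):
    # Bug: divides by the wrong length (off-by-one).
    return sum(values) / (len(values) - 1)  # should be len(values)

data = [2, 4, 6]

# Step 2: locate the bug with prints (or drop into pdb instead):
# import pdb; pdb.set_trace()
print("sum =", sum(data), "len =", len(data))
print("buggy result =", average(data))  # 6.0, expected 4.0

def average_fixed(values):
    return sum(values) / len(values)

print("fixed result =", average_fixed(data))
```

Printing the intermediate values (`sum`, `len`) right before the suspicious expression is usually enough to see which term is wrong.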
byrro profile image
Renato Byrro

I wouldn't say I have a pattern. Debugging is like an art. There are rarely two contexts I can address in the exact same way.

What I try to do all the time is to consider the context. I have lost too much time in the past by focusing too narrowly on a single line or function, not understanding why it was failing.

In many cases, bugs are a product of a combination of factors.

Considering the context, i.e. what else was going on when the bug was produced, usually allows me to understand the cause faster.

sdryds profile image
Stewart Smith • Edited

I develop in a space where you can't trust the hardware you're running on. With that in mind:

1) check the logs
2) replicate the failure
3) come up with a minimal repro and pray that it fails consistently
4) use debugger Foo
5) consult hardware manuals for expected behaviour and interface
6) start tracing register activity and traffic to the hardware unit
7) start bit banging registers
8a) complain about the problem to coworkers
8b) learn about something seemingly unrelated that is broken right now
8c) find out your problem is a corner case of that issue
9) file a driver or hardware bug
10) participate in long email threads followed by a meeting where the HW engs explain their change has no software impact and shouldn't break anything
11) HW engs end the meeting with "well in that case it does impact SW"

190245 profile image
  1. Tell the PM to discuss with QA, because Dev have no access to production.
  2. Kick the Jira report back to QA as it doesn't have reproduction data.
  3. Re-read and kick back again because it doesn't have logs attached.
  4. Raise a ticket with ops to enable remote debug in production.
  5. Connect to production debug port, add conditional breakpoints where I think the issue is.
  6. Observe the issue in production.
  7. Receive the updated Jira & confirm it reports the issue observed.
  8. Repeat 5&6 until 7 is completed.
  9. Write a test case that reproduces the bug & fails the build because of it.
  10. Confirm expected behaviour with a BA.
  11. Repeat 9&10 until happy.
  12. Assign Jira to a junior dev, with a comment of "can you fix the build for this please?"
  13. Coffee break.
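Step 9, a regression test that reproduces the bug and fails the build, might look like this in Python's unittest; the `parse_price` function and its empty-string bug are invented for illustration:

```python
import unittest

def parse_price(text):
    # Fixed version; the original bug was a crash on empty input.
    if not text.strip():
        return 0.0
    return float(text.replace("$", ""))

class TestParsePriceRegression(unittest.TestCase):
    def test_empty_string_does_not_crash(self):
        # This test reproduced the bug and failed the build
        # until the guard above was added.
        self.assertEqual(parse_price(""), 0.0)

    def test_normal_price(self):
        self.assertEqual(parse_price("$12.50"), 12.5)
```

Run with `python -m unittest` in CI so the failing test blocks the build until the fix lands.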
npras profile image
Prasanna Natarajan

Recent favorite is testing a fix directly in production by monkey-patching the existing code with the fix (not recommended (but totally recommended)).

The usual favorites are the ones mentioned in Aaron Patterson's puts debugging methods. I have a vim shortcut (space + D) to put out these puts statements:

puts "^^ #{?# * 90}" #DEBUG
p caller.join("\n") #DEBUG
puts "$$ #{?# * 90}" #DEBUG

For SQL, I have sample queries for quick reference that cover all sorts of queries I'd mostly require in my current project (CTEs, window functions, certain joined tables, etc.)

yucer profile image

If you write perfect patches, maybe.

But if your app uses shared state like a database, and your patch is wrong, you might end up with nightmares.

The wrong states resulting from your mistake are left in the database. And worse, the code is normally not written to handle inconsistent states.

If you notice the mistake in the patch, you can make a correction patch, but you also need an SQL query to correct the wrong states.

If you don't detect it soon, the code might generate more wrong states for the related entities, and the situation spreads.

By the time you notice, the database might be so inconsistent that you'd better drop it.

npras profile image
Prasanna Natarajan

Yes, I'm aware of the risks of this approach.

I mentioned it explicitly in the post too.

Even if I know how the patch works inside out, I don't ever try this when it involves database changes. Too risky.

gklijs profile image
Gerard Klijs

Not really a pattern. If it's in some new functionality, it's probably a typo somewhere. Logs might be useful, but it's also common for something expected not to happen, thus no logging. If possible, check by debugging or by looking into the database whether some of the assumptions made might be wrong. Maybe there's a clue in the git log. Has SRE changed anything? Time to maybe add some logging statements to check assumptions. When it was Clojure I could just add them on the fly and inspect the data directly... and then it turns out it was a typo after all, ouch.

cskotyan profile image
chandrahasa k
  1. Reproduce the issue (entrust it to others if parallelism is possible/needed)
  2. Check logs, analyze the stack/error traces
  3. If not enough detail is available in the existing logs, up the log level
  4. Analyze the code as written and see if there is an issue.
  5. If that step fails, run the code on a local machine and debug it
  6. Apply the fix, test and write automated tests to catch that in the future.
  7. Release
mikestaub profile image
Mike Staub • Edited
  1. Fully understand the bug by describing it as simply as I can. Imagine you are trying to report it on github.
  2. Ask myself what I expected to happen, what actually happened, and what assumptions I hold that make me feel entitled to the expected result. I list these assumptions out. ( rubber duck method )
  3. I go through the list of assumptions and order them by what I think are the least likely to be invalid first.
  4. I walk down the list and verify all the assumptions are actually true. Usually, it is here that I find the 'invalid assumption'. ( here is where we actually use the debugging tools )
  5. If all my assumptions hold, then it simply means I don't understand the system deeply enough and I need to go back to step 3.
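A rough way to mechanize steps 2 through 4 is to write each assumption down as an executable check. Everything below, from the assumption list to the toy `config` lookup, is invented to illustrate the idea:

```python
# Toy scenario: a settings lookup unexpectedly returns None.
config = {"timeout": 30, "retries": 3}

def get_setting(cfg, key):
    return cfg.get(key)  # suspect: no case normalization

# Assumptions, ordered roughly from "surely true" to "probably true".
assumptions = [
    ("config is a dict", lambda: isinstance(config, dict)),
    ("'timeout' is a key", lambda: "timeout" in config),
    ("lookup tolerates any casing", lambda: get_setting(config, "Timeout") is not None),
]

# Step 4: walk the list and find which assumption doesn't hold.
invalid = [name for name, check in assumptions if not check()]
print("invalid assumptions:", invalid)
```

The first assumption that fails its check is the "invalid assumption" from step 4; here it's the casing one, which points straight at the missing normalization in `get_setting`.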
dbredvick profile image

I just debugged some CI/CD issues this AM. For me, devops debugging looks a little different than my normal (code) debugging process. Here's what happened:

  1. Try to configure a new service based on an internal blog post
  2. Google error message
  3. Tweak settings
  4. Re-read blog post
  5. Reach out to the author of the blog post
  6. Figure out the issue
  7. !!MOST IMPORTANT!! - update the docs so others don't run into this issue πŸ™‚
bauripalash profile image
Palash Bauri πŸ‘»

Javascript : console.log();
Python : print()
C : printf();
Go : fmt.Println()

pinotattari profile image
Riccardo Bernardini

Not really, I go with the wind...

Actually, my debugging starts when I write my code.

I program in Ada and, as far as possible, I declare type invariants, contracts for my procedures/functions, define specific subtypes with constraints and spread the code with Assert and such. Armed with this array of "bug traps," as soon as something fishy happens I have an exception that (usually) points an accusing finger to the culprit. This shortens debugging times a lot.
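Ada's contracts don't translate directly to most languages, but here's a loose Python analogue of the same "bug traps" idea, using a constrained type plus assertions; the `Percentage` class and `blend` function are invented for illustration:

```python
class Percentage:
    """A constrained subtype: only values in 0..100 are representable."""

    def __init__(self, value):
        # Type invariant, checked at construction like an Ada range check.
        assert 0 <= value <= 100, f"invariant violated: {value!r} not in 0..100"
        self.value = value

def blend(a, b, weight):
    # Precondition, in the spirit of an Ada contract.
    assert 0.0 <= weight <= 1.0, "precondition: weight must be in [0, 1]"
    # The constructor re-checks the invariant, acting as a postcondition.
    return Percentage(a.value * (1 - weight) + b.value * weight)

p = blend(Percentage(20), Percentage(80), 0.5)
print(p.value)  # 50.0
```

As in the Ada case, a bad value trips an exception at the point where the invariant is first violated, rather than corrupting state and failing somewhere unrelated later.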

I still remember days of debugging for a dangling pointer in C that corrupted the heap and caused a segmentation fault in a totally unrelated point...

Besides that, I usually go with debugging prints. I use a debugger only in very unusual cases. I don't actually know why; I just like debug printing more.

igeligel profile image
Kevin Peters

Working in distributed systems with microservices, the problem is to find the causing service first. Normally I try to track where the error happens and follow the ID of the log message back to the original service where the event was emitted. Then I'll try to mock the data and debug the critical point, always going up the stack trace to find the root cause.
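Following one request's ID across service logs, as described above, can be sketched like this; the log entries and the `request_id` format are assumptions, not Kevin's actual stack:

```python
# Hypothetical aggregated logs from three services.
logs = [
    {"service": "gateway",  "request_id": "req-42", "msg": "received order"},
    {"service": "orders",   "request_id": "req-42", "msg": "validating"},
    {"service": "payments", "request_id": "req-99", "msg": "charging card"},
    {"service": "payments", "request_id": "req-42", "msg": "ERROR card declined"},
]

def trace(request_id):
    # Follow a single request's ID through every service it touched.
    return [f"{e['service']}: {e['msg']}" for e in logs if e["request_id"] == request_id]

for line in trace("req-42"):
    print(line)
```

Filtering everything by the correlation ID turns a pile of interleaved service logs into a single timeline, which is usually enough to spot the service where the failure originated.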

Since I work with Node.js, I always use ndb for debugging.
