Do you have a pattern you follow when tasked to track down a bug?
Discussion (55)
1- Panic
2- Question my ability to do shit as a developer
3- Calm down, start to look at the logs
4- Start spamming console.log()
5- Rage because there are too many console.log to keep track
6- If bug is complicated, get back to 1)
7- Making progress, adding a fix
8- Commit/push
9- Realize some other part is broken because of the fix
10- Back to 1)
11- Around 6PM, throw my hands in the air, yell "Fuck this shit", and leave.
12- Around 9AM the next morning, realize it was a typo.
console.log('here1');
console.log('here2');
console.log('here3');
console.log('blabla');
console.log('bananas');
console.log('hmmm');
I feel you! I'm more of a:
console.log('HI')
console.log('HELLO')
console.log('HHEEEEEEYYYYY')
kind of dev, but I see where you're coming from :D
12, every bloody time!!
Dude, I read this and left for like 10 minutes !! Thanks for making my day 😁
your answer made my day :'‑) :'‑) :'‑)
Yeh..lol haha 😄
printf statements after each line of execution. Occasionally use the debugger as well on local to triage the issue. printf statements when committing fix.
👏 claps for honesty! 😄
debugger and console.log to debug
Here is my approach
I wouldn't say I have a pattern. Debugging is like an art. There are rarely two contexts that I can address in exactly the same way.
What I try to do all the time is to consider the context. I have lost too much time in the past by focusing too hard on a single line or function, not understanding why it's failing.
In many cases, bugs are a product of a combination of factors.
Considering the context, what else is going on when the bug was produced, usually allows me to understand the cause faster.
I develop in a space where you can't trust the hardware you're running on. With that in mind:
1) check the logs
2) replicate the failure
3) come up with a minimal repro and pray that it fails consistently
4) use debugger Foo
5) consult hardware manuals for expected behaviour and interface
6) start tracing register activity and traffic to the hardware unit
7) start bit banging registers
8a) complain about the problem to coworkers
8b) learn about something seemingly unrelated that is broken right now
8c) find out your problem is a corner case of that issue
9) file a driver or hardware bug
10) participate in long email threads followed by a meeting where the HW engs explain their change has no software impact and shouldn't break anything
11) HW engs end the meeting with "well in that case it does impact SW"
Recent favorite is testing a fix directly in production by monkey-patching the existing code with the fix (not recommended (but totally recommended)). See the sketch at the end of this comment.
I'm a MonkeyPatch Debugger
Prasanna Natarajan ・ Sep 24 ・ 3 min read
The usual favorites are the ones mentioned in Aaron Patterson's puts debugging methods. I have a vim shortcut (space + D) to put out these puts statements:
For sql, I have sample queries for quick reference that does all sorts of queries that I'd mostly require in my current project (CTEs, window functions, certain joined tables etc)
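A minimal sketch of that kind of runtime monkey-patch in JavaScript. The paymentService object and the "fix" are invented here purely to illustrate the idea; they are not from the comment or the linked article:

// A stand-in for the live object whose behaviour you want to patch.
const paymentService = {
  charge(order) {
    return `charged ${order.total}`;
  },
};

// Keep a reference to the original implementation so it can be restored later.
const originalCharge = paymentService.charge;

// Swap in a patched version that applies the candidate fix, then delegates.
paymentService.charge = function (order) {
  const fixed = { ...order, total: Math.max(order.total, 0) }; // the fix being tested live
  return originalCharge.call(this, fixed);
};

console.log(paymentService.charge({ total: -5 })); // "charged 0"

The point (and the risk) is that nothing gets redeployed: the running process just starts behaving differently, which is exactly why the replies below warn about shared state.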
If you write perfect patches, maybe.
But if your app uses shared state like a database and your patch is wrong, you can create nightmares.
The wrong states resulting from your mistake are left in the database. And worse, the code is normally not written to handle inconsistent states.
If you notice the mistake in the patch you can make a correction patch, but you also need an SQL query to correct the wrong states.
If you don't detect it soon, the code might generate further wrong states for the related entities and the situation spreads.
By the time you notice, the database might be so inconsistent that you'd better drop it.
Yes, I'm aware of the risks of this approach.
I mentioned it explicitly in the post too.
Even if I know how the patch works inside out, I don't ever try this when it involves database changes. Too risky.
Not really a pattern. If it's in some new functionality, it's probably a typo somewhere. Logs might be useful, but it's also common for something expected not to happen, thus no logging. If possible, by debugging or looking into the database, check whether some of the assumptions made might be wrong. Maybe there's a clue in the git log. Has SRE changed anything? Time to maybe add some logging statements to check assumptions. When it was Clojure I could just add them on the fly and inspect the data directly... and then it turns out it was a typo after all, ouch.
Scream into the distance and pretend I know what I'm doing. Take a quick look at the problem and see what might come up. Ask people if they know what happened. Then I will look at the logs. Then finally I will write a test. It's around the logs that I find out something and can rule out what it is not. Then somehow manage to find the problem, whatever it was...
Oh, of course, System.out.println and chase the shit out of the bug like crazy ;)
True story tho: rather than adding breakpoints everywhere, I lean more towards analyzing the business logic and identifying the expected/unexpected results, unfortunately with a brutal print.
I developed this habit when working on a few production projects and live-debugging through the pages of logs.
In production, there is no debugger or any intuitive tools (not usually), but just layers of logs to dig into.
I practice my production debugging habit in daily coding tasks. It's not as effective as utilizing a debugger, but it keeps my brain running ;)
I just debugged some CI/CD issues this AM. For me, devops debugging looks a little different than my normal (code) debugging process. Here's what happened:
Javascript :
console.log();
Python :
print()
C :
printf();
Go :
fmt.Println()
Go into the logs, get all the data as they were at that point in time, throw them in a pot, boil them into a few unit tests... and see why those problems occurred (roughly like the sketch after this comment).
The problem is that the project I am currently working on, as weird as it may seem, has so many time-sensitive external dependencies that it doesn't even make sense to try to debug anything directly :-)
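That "boil the logged data into a few unit tests" step could look roughly like this with a Jest-style runner. calculateInvoice, the input and the expected value are all made up for the sketch, not taken from the comment:

const { calculateInvoice } = require('./invoice'); // the unit under test (hypothetical module)

// Values copied out of the production log entry for the failing request.
const loggedInput = { items: [{ price: 19.99, qty: 3 }], discountCode: 'SUMMER' };
const loggedOutput = 53.97; // what production actually returned

test('reproduces the invoice total seen in production', () => {
  // If this fails the same way production did, the bug is now reproducible locally.
  expect(calculateInvoice(loggedInput)).toBe(loggedOutput);
});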
I wrote an article about debugging Javascript.
I mostly talked about using breakpoints instead of console logging. Truth is, they both have their place, which is something I'd change in the article.
But I hope it can help!
Ideally, you make sure all the tests have passed, before jumping to debug.
Next, make sure you have covered all test cases.
After that, just place breakpoints. First on the view events, then the logic, then the data layer. So debugging and separation of concerns are somehow related.
console.log, sometimes also using the browser debugger, but mostly console.log 😅
find the wrong variable, fix, another issue, add console.log again, repeat :D
After too many written console.log()'s I recently began work on a kind of debug dashboard, which will hopefully give a better sense of the written logs (or rather, better visibility of the testing vars).
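Several comments here pair console.log with the browser's debugger statement. A minimal sketch of that combo, with an invented function and values:

function applyDiscount(price, rate) {
  debugger;                                        // pauses here whenever DevTools is open
  console.log('applyDiscount', { price, rate });   // quick state dump for when it isn't
  return price - price * rate;
}

applyDiscount(100, 0.2); // step through in DevTools, or just read the log line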
I think about it best by ruling out parts that couldn't be the problem. I start by trying to rule out as big of chunks as I can, which helps me narrow down to the subsystem, class, function, or even couple of lines where the problem lives. As soon as I can confirm that a piece works like I expect it should, I shrink my scope a little bit and look for the next piece to confirm works on its own. Once I find the spot that's working weird, it's super important that I understand why it's doing what it's actually doing so I can make sure the fix I use actually solves the problem.
Sometimes when I'm tired, I'll find myself randomly trying to change a piece of code to see if it works, but that's a sure sign that I'm not going to get anything else productive done and I need to take a walk, because it means I don't understand why my fixes should work.
It's nice because this thought process works really well for debugging mechanical assemblies that don't quite work too:
"OK, well we've verified that every dimension on this part is right, so take that out, set it aside, and look for the issue in the now-slightly-simpler assembly."
The most important part is that, the more confused and overwhelmed I get, the smaller, slower steps I take. :)
Working in distributed systems with microservices, the problem is to find the causing service first. Normally I try to track where the error happens and follow the ID of the log message from the original service where the event was emitted. Then I'll try to mock the data and debug the critical point. Always going up the stack trace to find the root cause.
Since I work with Node.js I always use ndb for debugging:
kevinpeters.net/how-to-debug-java-...
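Following one ID through the logs of several services, as described above, usually relies on every service logging a shared correlation ID. A rough Node.js/Express-style sketch of that idea; the header name and middleware are assumptions, not something from the comment or the linked article:

const { randomUUID } = require('node:crypto');

// Express-style middleware: reuse the incoming correlation ID or create a new one,
// so every log line for this request can be grepped across services.
function correlationId(req, res, next) {
  req.correlationId = req.headers['x-correlation-id'] || randomUUID();
  res.setHeader('x-correlation-id', req.correlationId);
  console.log(`[${req.correlationId}] ${req.method} ${req.url}`);
  next();
}

module.exports = correlationId;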
My debugging approach is still not as efficient as I would like it to be.
I am currently in the process of getting diagrams set up for all the business rules, to be able to understand what it should be versus what it is, and also to push back to the reporter when it actually functions as designed. And also to write more and more tests.
Not really, I go with the wind...
Actually, my debugging starts when I write my code.
I program in Ada and, as far as possible, I declare type invariants, contracts for my procedures/functions, define specific subtypes with constraints and spread the code with Assert and such. Armed with this array of "bug traps," as soon as something fishy happens I have an exception that (usually) points an accusing finger to the culprit. This shortens debugging times a lot.
Besides that, I usually go with debugging prints. I use a debugger only in very unusual cases. I do not actually know why, I just like debug printing more.
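That comment is about Ada, but the "bug trap" idea translates to most languages. A loose JavaScript analogue using Node's built-in assert module (the function and its constraints are invented, and this is only an illustration of the idea, not the commenter's code):

const assert = require('node:assert');

// A crude stand-in for an Ada subtype with a range constraint.
function setVolume(percent) {
  assert(Number.isInteger(percent), 'volume must be an integer');
  assert(percent >= 0 && percent <= 100, 'volume must be between 0 and 100');
  return { volume: percent };
}

setVolume(150); // blows up right here, pointing at the bad caller instead of failing later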
Yeah, if I can't reproduce, then it's a serious issue
This is my favorite, I call it the enterprise approach:
I use my polyglot skillz
Sprinkle console.log('<unique-prefix>', ...) until the bug has been fixed 🤪 (a tiny sketch of this is below).
Someone shared the six stages of debugging with me long ago (super funny!) and I haven't ever forgotten it!
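And for the '<unique-prefix>' trick a couple of comments up: the prefix makes the temporary logs easy to grep for and easy to delete once the bug is fixed. A tiny sketch with an invented prefix and data:

const TAG = 'BUG-1234';                       // any string you will never type by accident
const cart = { items: 3, coupon: 'SUMMER' };  // invented data, just to have something to log

console.log(TAG, 'entering checkout', cart);
console.log(TAG, 'total after coupons', 42);
// Later: grep the codebase for BUG-1234 and delete every line that mentions it.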
this is how I do it. breakpoints everywhere. lol
debug
I use 'alert' rather than console.log.
I use 'toast' rather than Log.d.
I use 'or die("Erro");' when I write PHP code.
50x print() 👌🏻😏
breakpoints everywhere * - *
See stacktrace.
Read logs and see what happened.
Try to reproduce.
Use debugger to see what happens in code.
Fix.
Test locally.
Code review.
Commit.
Test on test environment.
Basically turning my brain into an interpreter...
1 - Replicate the bug
2 - Narrow the location of the bug in the code with logs and debugger.
3 - Correct it (if it's a hard one, cry a little 😁 )
4 - Test it
5 - Deploy
Delete half of the code.... Bug still alive?
if yes then delete half of the remaining code and repeat......
But I try not to do this using ftp on a prod server of course
console.log('bacon');
Console.log all the things! You have to stick with what works.
Usually breakpoints. Sometimes logging. Sometimes semi-randomly commenting code out until the bug doesn't appear, which probably means the issue is in the commented out code. Sometimes git-bisect.