Do you have a pattern you follow when tasked to track down a bug?
Top comments (51)
1- Panic
2- Question my ability to do shit as a developer
3- Calm down, start to look at the logs
4- Start spamming console.log()
5- Rage because there are too many console.log to keep track
6- If the bug is complicated, go back to 1)
7- Make progress, add a fix
8- Commit/push
9- Realize some other part is broken because of the fix
10- Back to 1)
11- Around 6PM, throw my hands in the air, yell "Fuck this shit", and leave.
12- Around 9AM the next morning, realize it was a typo.
console.log('here1');
console.log('here2');
console.log('here3');
console.log('blabla');
console.log('bananas');
console.log('hmmm');
I feel you! I'm more of a:
console.log('HI')
console.log('HELLO')
console.log('HHEEEEEEYYYYY')
kind of dev, but I see where you're coming from :D
12, every bloody time!!
Dude, I read this and left for like 10 minutes!! Thanks for making my day.
Your answer made my day :') :') :')
Yeh.. lol haha
printf statements after each line of execution. Occasionally use the debugger as well on local to triage the issue. printf statements when committing fix.
Claps for honesty!
debugger and console.log to debug.
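For anyone who hasn't combined the two: a minimal sketch of how that looks in practice (the function and data below are made up purely for illustration):
// Hypothetical function, just to show where the statements go
function applyDiscount(cart, discount) {
  console.log('applyDiscount input:', cart, discount); // see what actually arrived
  debugger; // pauses here when DevTools is open, so you can inspect cart and discount
  const total = cart.total - discount;
  console.log('applyDiscount output:', total);
  return total;
}

applyDiscount({ total: 100 }, 15);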
Here is my approach:
I wouldn't say I have a pattern. Debugging is like an art. There are rarely two contexts that I can approach in exactly the same way.
What I try to do all the time is consider the context. I have lost too much time in the past by focusing too much on a single line or function, not understanding why it's failing.
In many cases, bugs are a product of a combination of factors.
Considering the context, i.e. what else is going on when the bug is produced, usually lets me understand the cause faster.
I develop in a space where you can't trust the hardware you're running on. With that in mind:
1) check the logs
2) replicate the failure
3) come up with a minimal repro and pray that it fails consistently
4) use debugger Foo
5) consult hardware manuals for expected behaviour and interface
6) start tracing register activity and traffic to the hardware unit
7) start bit banging registers
8a) complain about the problem to coworkers
8b) learn about something seemingly unrelated that is broken right now
8c) find out your problem is a corner case of that issue
9) file a driver or hardware bug
10) participate in long email threads followed by a meeting where the HW engs explain their change has no software impact and shouldn't break anything
11) HW engs end the meeting with "well in that case it does impact SW"
Recent favorite is testing a fix directly in production by monkey-patching the existing code with the fix (not recommended (but totally recommended)).
I'm a MonkeyPatch Debugger
Prasanna Natarajan · Sep 24 · 3 min read
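For anyone unfamiliar with the term: monkey-patching here means swapping a live function for a fixed version at runtime, without redeploying. A rough JavaScript sketch, with a made-up paymentService standing in for real production code:
// Made-up service standing in for real production code
const paymentService = {
  calculateTax(order) {
    return order.total * 0.2; // original behaviour; breaks when total is missing
  },
};

// Monkey-patch: replace the method in place, wrapping the original with the fix
const originalCalculateTax = paymentService.calculateTax;
paymentService.calculateTax = function (order) {
  const safeOrder = { total: 0, ...order }; // the "fix": default a missing total
  return originalCalculateTax.call(this, safeOrder);
};

console.log(paymentService.calculateTax({})); // 0 instead of NaN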
The usual favorites are the ones mentioned in Aaron Patterson's puts debugging methods. I have a vim shortcut (space + D) to put out these puts statements.
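Those statements are Ruby puts calls; purely as a hypothetical sketch of the same idea in the console.log world (a loud label, the value, and roughly where it was called from):
// Labelled debug print: a loud marker, the value, and a rough caller location
function debugPuts(label, value) {
  console.log('#################### ' + label);
  console.log(value);
  console.log(new Error().stack.split('\n')[2]); // line that called debugPuts (format varies by engine)
}

debugPuts('order before save', { id: 42, total: 99 });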
For SQL, I keep sample queries for quick reference that cover the sorts of queries I'd mostly need in my current project (CTEs, window functions, certain joined tables, etc.).
If you write perfect patches, maybe.
But if your app uses shared state like a database and your patch is wrong, you can end up with nightmares.
The wrong state resulting from your mistake is left in the database. And worse, the code normally isn't written to handle inconsistent states.
If you notice the mistake in the patch, you can make a correction patch, but you also need an SQL query to correct the wrong states.
If you don't detect it soon, the code might generate more wrong states for the related entities, and the situation spreads.
By the time you notice, the database might be so inconsistent that you'd be better off dropping it.
Yes, I'm aware of the risks of this approach.
I mentioned it explicitly in the post too.
Even if I know how the patch works inside out, I don't ever try this when it involves database changes. Too risky.
Not really a pattern. If it's in some new functionality, it's probably a typo somewhere. Logs might be useful, but it's also common for something expected not to happen, thus no logging. If possible, check by debugging or looking into the database whether some of the assumptions made might be wrong. Maybe there's a clue in the git log. Has SRE changed anything? Time to maybe add some logging statements to check assumptions. When it was Clojure I could just add them on the fly and inspect the data directly... and then it turns out it was a typo after all, ouch.