1- Panic
2- Question my ability to do shit as a developer
3- Calm down, start to look at the logs
4- Start spamming console.log()
5- Rage because there are too many console.log to keep track
6- If bug is complicated, get back to 1)
7- Making progress, adding a fix
8- Commit/push
9- Realize some other part is broken because of the fix
10- Back to 1)
11- Around 6PM, throw my hands in the air, yell "Fuck this shit", and leave.
12- Around 9AM the next morning, realize it was a typo.
I wouldn't say I have a pattern. Debugging is like an art. There are rarely two contexts that I can address in exactly the same way.
What I try to do all the time is to consider the context. I have lost too much time in the past by focusing too much on a single line or function, not understanding why it was failing.
In many cases, bugs are a product of a combination of factors.
Considering the context (what else was going on when the bug was produced) usually allows me to understand the cause faster.
I develop in a space where you can't trust the hardware you're running on. With that in mind:
1) check the logs
2) replicate the failure
3) come up with a minimal repro and pray that it fails consistently
4) use debugger Foo
5) consult hardware manuals for expected behaviour and interface
6) start tracing register activity and traffic to the hardware unit
7) start bit banging registers
8a) complain about the problem to coworkers
8b) learn about something seemingly unrelated that is broken right now
8c) find out your problem is a corner case of that issue
9) file a driver or hardware bug
10) participate in long email threads followed by a meeting where the HW engs explain their change has no software impact and shouldn't break anything
11) HW engs end the meeting with "well in that case it does impact SW"
The usual favorites are the ones mentioned in Aaron Patterson's puts debugging methods. I have a vim shortcut (space + D) to put out these puts statements:
For SQL, I have sample queries for quick reference that do all sorts of things I'd mostly require in my current project (CTEs, window functions, certain joined tables, etc.).
Not really a pattern. If it's in some new functionality, it's probably a typo somewhere. Logs might be useful, but it's also common for something expected not to happen, thus no logging. If possible, check by debugging or by looking into the database whether some of the assumptions made might be wrong. Maybe there's a clue in the git log. Has SRE changed anything? Time to maybe add some logging statements to check assumptions. When it was Clojure I could just add them on the fly and inspect the data directly... and then it turns out it was a typo after all, ouch.
Fully understand the bug by describing it as simply as I can. Imagine you are trying to report it on github.
Ask myself what I expected to happen, what actually happened, and what assumptions I hold that make me feel entitled to the expected result. I list these assumptions out (rubber duck method).
I go through the list of assumptions and order them by what I think are the least likely to be invalid first.
I walk down the list and verify all the assumptions are actually true. Usually, it is here that I find the 'invalid assumption' (here is where we actually use the debugging tools).
If all my assumptions hold, then it simply means I don't understand the system deeply enough and I need to go back to step 3.
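In practice, verifying an assumption often just means turning it into an executable check near the suspect code. A minimal JavaScript sketch of that step; the fetchOrder function and the order shape are hypothetical, purely for illustration:

// Assumption 1: the API actually returns an order for this ID. (hypothetical example)
const order = fetchOrder(orderId);
console.assert(order != null, 'assumption 1 broken: no order for', orderId);

// Assumption 2: every line item has a numeric price.
console.assert(
  order.items.every((item) => typeof item.price === 'number'),
  'assumption 2 broken: non-numeric price in', order.items
);

// Assumption 3: the total is the sum of the item prices.
const expectedTotal = order.items.reduce((sum, item) => sum + item.price, 0);
console.assert(order.total === expectedTotal, 'assumption 3 broken:', order.total, 'vs', expectedTotal);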
I just debugged some CI/CD issues this AM. For me, devops debugging looks a little different than my normal (code) debugging process. Here's what happened:
Try to configure a new service based on an internal blog post
Google error message
Tweak settings
Re-read blog post
Reach out to the author of the blog post
Figure out the issue
!!MOST IMPORTANT!! - update the docs so others don't run into this issue 🙂
I graduated in 1990 in Electrical Engineering and since then I have been in university, doing research in the field of DSP. To me programming is more a tool than a job.
Actually, my debugging starts when I write my code.
I program in Ada and, as far as possible, I declare type invariants, contracts for my procedures/functions, define specific subtypes with constraints and spread the code with Assert and such. Armed with this array of "bug traps," as soon as something fishy happens I have an exception that (usually) points an accusing finger to the culprit. This shortens debugging times a lot.
I still remember days of debugging for a dangling pointer in C that corrupted the heap and caused a segmentation fault in a totally unrelated point...
Besides that, I usually go with debugging prints. I use a debugger only in very unusual cases. I don't actually know why, I just like debug printing more.
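The same "bug trap" idea carries over to languages without built-in contracts. A rough JavaScript sketch under that assumption, using Node's built-in assert module and a made-up transfer function (not the commenter's Ada code, just the idea of failing loudly where an invariant breaks):

const assert = require('assert');

// Hypothetical example: a transfer between accounts with explicit pre- and postconditions.
function transfer(from, to, amount) {
  // Preconditions: the function's "contract" for its inputs.
  assert(Number.isInteger(amount) && amount > 0, 'amount must be a positive integer');
  assert(from.balance >= amount, 'insufficient funds');

  const totalBefore = from.balance + to.balance;
  from.balance -= amount;
  to.balance += amount;

  // Postcondition / invariant: money is neither created nor destroyed.
  assert.strictEqual(from.balance + to.balance, totalBefore, 'balance invariant violated');
}

// When something fishy happens, the failing assert points at the culprit
// instead of letting the bad state propagate and blow up somewhere unrelated.
transfer({ balance: 100 }, { balance: 50 }, 30);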
I think about it best by ruling out parts that couldn't be the problem. I start by trying to rule out as big of chunks as I can, which helps me narrow down to the subsystem, class, function, or even couple of lines where the problem lives. As soon as I can confirm that a piece works like I expect it should, I shrink my scope a little bit and look for the next piece to confirm works on its own. Once I find the spot that's working weird, it's super important that I understand why it's doing what it's actually doing so I can make sure the fix I use actually solves the problem.
Sometimes when I'm tired, I'll find myself randomly trying to change a piece of code to see if it works, but that's a sure sign that I'm not going to get anything else productive done and I need to take a walk, because it means I don't understand why my fixes should work.
It's nice because this thought process works really well for debugging mechanical assemblies that don't quite work too:
"OK, well we've verified that every dimension on this part is right, so take that out, set it aside, and look for the issue in the now-slightly-simpler assembly."
The most important part is that, the more confused and overwhelmed I get, the smaller, slower steps I take. :)
console.log, sometimes also using the browser debugger, but mostly console.log 😅
find the wrong variable, fix, another issue, add console.log again, repeat :D
After too many written console.log()'s I recently began work on a kind of debug-dashboard, which will hopefully help me get a better sense of the written logs (or rather, better visibility of the variables under test).
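A minimal sketch of what such a dashboard can start from, assuming a browser context; the debugLog helper and the #debug-panel element are made up for illustration:

// Collect tagged log entries instead of loose console.log calls,
// so they can be grouped and rendered somewhere visible.
const entries = [];

function debugLog(tag, value) {
  entries.push({ tag, value, at: new Date().toISOString() });
  render();
}

function render() {
  const panel = document.querySelector('#debug-panel'); // assumed to exist in the page
  if (!panel) return;
  panel.textContent = entries
    .map((e) => `${e.at} [${e.tag}] ${JSON.stringify(e.value)}`)
    .join('\n');
}

// Usage: debugLog('cart', cartState) instead of a bare console.log(cartState).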
Oh, of course, System.out.println and chase the shit out of the bug like crazy ;)
True story tho: rather than adding breakpoints everywhere, I lean more towards analyzing the business logic and identifying the expected/unexpected results, unfortunately with a brutal print.
I developed this habit when working on a few production projects and live-debugging through the pages of logs.
In production, there is no debugger or any intuitive tools (not usually), but just layers of logs to dig into.
I practice my production debugging habit in daily coding tasks. It's not as effective as using a debugger, but it keeps my brain running ;)
My debugging approach is still not as efficient as I would like it to be.
Panic
Open up the logs
Push the bug back to the reporter to explain it more and provide reproducible steps.
Run application locally and try to reproduce
Swear
Put log.info("what the fork is going on?!") everywhere in the code
Look into the database
Start to rule things out
Decide I need more documentation of all the business rules there are.
I am currently in the process of getting diagrams set up for all the business rules, to be able to understand what it should be and what it is. Also to push back to the reporter that it functions as designed. And also to make more and more tests.
Scream into the distance and pretend I know what I'm doing. Take a quick look and see what might come up, look at the problem, and come up with something. Ask people if they know what happened. Then I will look at the logs. Then finally I will make a test. It's around the logs that I find something and can rule out what it is not. Then somehow manage to find the problem, whatever it was...
Go into the logs, get all the data as they were at that point in time, throw them in a pot, boil them into a few unit tests... and see why the problem occurred.
The problem is that the project I am currently working on, as weird as it may seem, has so many time-sensitive external dependencies that it doesn't even make sense to try to debug anything directly :-)
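A rough JavaScript sketch of the "boil the logs into a test" step, assuming Node's built-in test runner; the logged payload, the module path, and the calculateInvoice function are hypothetical stand-ins for whatever the logs captured:

const test = require('node:test');
const assert = require('assert');
const { calculateInvoice } = require('../src/invoice'); // hypothetical module under suspicion

// Input copied from the production logs at the moment the bug occurred (made-up values).
const loggedInput = { customerId: 42, items: [{ sku: 'A-1', qty: 3, price: 9.99 }], currency: 'EUR' };

test('reproduces the invoice bug seen in production', () => {
  const invoice = calculateInvoice(loggedInput);
  // What the logs say the result should have been.
  assert.strictEqual(invoice.total, 29.97);
});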
Usually breakpoints. Sometimes logging. Sometimes semi-randomly commenting code out until the bug doesn't appear, which probably means the issue is in the commented out code. Sometimes git-bisect.
See stacktrace.
Read logs and see what happened.
Try to reproduce.
Use debugger to see what happens in code.
Fix.
Test locally.
Code review.
Commit.
Test on test environment.
1 - Replicate the bug
2 - Narrow the location of the bug in the code with logs and debugger.
3 - Correct it (if it's a hard one, cry a little 😁 )
4 - Test it
5 - Deploy
console.log('here1');
console.log('here2');
console.log('here3');
console.log('blabla');
console.log('bananas');
console.log('hmmm');
I feel you! I'm more of a:
console.log('HI')
console.log('HELLO')
console.log('HHEEEEEEYYYYY')
kind of dev, but I see where you're coming from :D
12, every bloody time!!
Dude, I read this and left for like 10 minutes !! Thanks for making my day 😁
your answer made my day :'‑) :'‑) :'‑)
Yeh..lol haha 😄
printf statements after each line of execution. Occasionally use the debugger as well on local to triage the issue. printf statements still there when committing the fix.
👏 claps for honesty! 😄
debugger and console.log to debug
Recent favorite is testing a fix directly in production by monkey-patching the existing code with the fix (not recommended (but totally recommended)).
I'm a MonkeyPatch Debugger, by Prasanna Natarajan
If you write perfect patches, maybe.
But if your app uses shared state like a database and your patch is wrong, it can turn into a nightmare.
The bad states caused by your mistake are left in the database. And worse, the code is normally not written to handle inconsistent states.
If you notice the mistake in the patch, you can make a correction patch, but you also need an SQL query to correct the bad states.
If you don't detect it soon, the code keeps generating wrong states for the related entities and the situation spreads.
By the time you notice, the database might be so inconsistent that you're better off dropping it.
Yes, I'm aware of the risks of this approach.
I mentioned it explicitly in the post too.
Even if I know how the patch works inside out, I don't ever try this when it involves database changes. Too risky.
Javascript :
console.log();
Python :
print()
C :
printf();
Go :
fmt.Println()
Not really, I go with the wind...
Working in distributed systems with microservices, the problem is to find the causing service first. Normally I try to track where the error happens and follow the ID of the log message back to the original service where the event was emitted. Then I'll try to mock the data and debug the critical point, always going up the stack trace to find the root cause.
Since I work with Node.js I always use ndb for debugging:
kevinpeters.net/how-to-debug-java-...
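A minimal sketch of the "follow the ID" part, assuming an Express-style Node service; the x-correlation-id header name and the log helper are illustrative choices, not a specific library's API:

const crypto = require('crypto');

// Attach a correlation ID to every request and include it in every log line,
// so the same event can be followed across service boundaries.
function correlationMiddleware(req, res, next) {
  req.correlationId = req.headers['x-correlation-id'] || crypto.randomUUID();
  res.setHeader('x-correlation-id', req.correlationId);
  next();
}

function log(req, message, extra = {}) {
  console.log(JSON.stringify({ correlationId: req.correlationId, message, ...extra }));
}

// Downstream calls forward the same ID, e.g.
// fetch(url, { headers: { 'x-correlation-id': req.correlationId } });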
I wrote an article about debugging Javascript.
I mostly talked about using breakpoints instead of console logging. Truth is they both have their place, which is something I'd change.
But I hope it can help!
I use my polyglot skillz
Sprinkle console.log('<unique-prefix>', ...) until the bug has been fixed 🤪
Yeah, if I can't reproduce, then it's a serious issue
console.log('bacon');
50x print() 👌🏻😏
Someone shared the six stages of debugging with me long ago (super funny!) and I haven't ever forgotten it!
I use 'alert' rather than console.log.
I use 'toast' rather than Log.d.
I use 'or die("Erro");' when I write PHP code.
Basically turning my brain into an interpreter...
Delete half of the code.... Bug still alive?
If yes, then delete half of the remaining code and repeat...
But I try not to do this using FTP on a prod server, of course
breakpoints everywhere * - *
Console.log all the things! You have to stick with what works.
this is how I do it. breakpoints everywhere. lol
debug