How to Write an Effective Bug Report (incl. template)

Savvina Drougouti on February 20, 2019

Why is an effective bug report necessary? It's crucially important to write an accurate error report, as this way you will increase the chanc...
 

It's good to see someone else advocating a two-metric system for basic bug classification. Quantified Task Management uses 6-point Priority and Gravity scales; the latter name makes it applicable to non-bug issues as well. On the six-point scale, p0 is "Wishlist", while p5 is "something is on fire". I usually advocate p4 as the everyday "top", leaving one spot above it for actual emergencies.
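The six-point scale described above could be sketched like this in Python. The level names (other than "Wishlist" and the p4/p5 split) and the triage rule are my own illustration, not QTM's actual definitions:

```python
from enum import IntEnum

class Level(IntEnum):
    """Six-point scale: p0 is "Wishlist", p5 is "something is on fire"."""
    WISHLIST = 0   # p0: nice to have someday
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    TOP = 4        # p4: the everyday "top" slot
    EMERGENCY = 5  # p5: reserved for actual emergencies

def triage(priority: Level, gravity: Level) -> str:
    """Combine the two metrics into a work queue (hypothetical rule)."""
    worst = max(priority, gravity)
    if worst == Level.EMERGENCY:
        return "page-on-call"
    if worst >= Level.TOP:
        return "next-sprint"
    return "backlog"
```

Using two independent metrics means a low-priority but high-gravity issue (say, a rare data-loss bug) still floats to the top of the queue.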

Of course, you should always know which classification system is used by the project you're filing a bug report against.

 

I'm happy you noticed this point! I really liked QTM's metric system; it's actually quite similar to the basic one mentioned in the post. Thanks for the link, very interesting material!

 

A couple of additions to the list that don't always apply, but when they do, they're critical to a good bug report.

  1. Required artifacts
    Sometimes an issue can be reproduced in a build from scratch with just some user interactions. In that case, written steps, screenshots, and/or videos are enough (how I ❤️ that videos have become a normal part of bug reports). But there are plenty of times when that isn't enough to reproduce. Maybe the issue only occurs on a specific data set, after uploading a specific file, or after specific configuration changes. Depending on the nature/size of the artifact, it can either be attached or put on Dropbox/SharePoint/a network share with a link. I've had to deal with way too many bugs along the lines of "this report shows an error when it used to show data", where I have to ask for the report itself before I can do anything...

  2. Logs, logs and more logs
    There are plenty of issues where logs are not relevant. But when they matter, they REALLY matter. If an error occurred, there's probably a stack trace logged somewhere; it should be provided. And if it's a client-side error, do some digging server-side to see if there were any errors there that could be the root cause. If you're thinking "well, when the dev reproduces the issue they'll see the errors, so why should I go digging": plenty of issues are hard to reproduce, and those errors on your instance may be the only copy in existence. Treat them as such.
    Full logs, not just errors, are particularly useful for those extra-thorny performance issues. If a 1000-user stress test is taking twice as long as the baseline, I'm going to want every log and performance counter available, since there's no way a human can debug that test case unaided.
    Depending on your setup, these logs may need to be found and provided like the required artifacts, or they could be pushed automatically to a central logger (Graylog, Elastic, etc.). Even in that case, you should still provide the query information needed to find the relevant logs: instance ID/machine name, time range, user, etc.
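The query information in that last point could be bundled as a small, copy-pasteable structure attached to the bug report. A minimal sketch in Python; the field names and values are hypothetical, not any particular logger's schema:

```python
from datetime import datetime, timezone

def log_pointer(instance_id: str, machine: str, user: str,
                start: datetime, end: datetime) -> dict:
    """Bundle the query info a developer needs to locate the
    relevant entries in a central logger (hypothetical fields)."""
    return {
        "instance_id": instance_id,
        "machine_name": machine,
        "user": user,
        # ISO-8601 range so it pastes directly into most log UIs
        "time_range": [start.isoformat(), end.isoformat()],
    }

# Example: a pointer for a 15-minute window around the failure
pointer = log_pointer(
    "prod-eu-7", "app-03", "jdoe",
    datetime(2019, 2, 20, 9, 30, tzinfo=timezone.utc),
    datetime(2019, 2, 20, 9, 45, tzinfo=timezone.utc),
)
```

Even pasting something like this as plain text into the report saves the developer a round trip of "which instance, and when?".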

 

Thanks for the detailed and helpful comment! I completely agree with your two points and think that everyone involved in testing should keep an eye on them too.
