Tudor Brad

Originally published at betterqa.co

How to write bug reports that developers actually fix

We write hundreds of bug reports per week at BetterQA. Fifty-plus engineers, dozens of client projects, every industry from healthcare SaaS to car auction platforms. And the single most demoralizing thing that happens in QA isn't finding a nasty bug. It's writing a careful, detailed report and getting "Cannot Reproduce" three days later.

The bug still exists. The developer closed the ticket. Your twenty minutes of documentation disappeared into the backlog. Next sprint, the same bug ships to production.

This cycle breaks something worse than software. It breaks trust. Developers stop reading your reports carefully because they expect noise. Testers stop investing effort because they expect rejection. The codebase suffers while both teams point fingers.

I want to talk about what actually fixes this, based on patterns we've seen across hundreds of client engagements.

The Christie story

A few years ago, one of our QA engineers, Christie, found a legitimate bug on a client project. The PM on the client side told her to close it. His reason: "It makes the development team look bad." Christie refused. Three weeks later, the product owner found the exact same bug unfixed in production.

That PM wasn't malicious. He was protecting his team's metrics. But this is what happens when bug reports become political instead of technical. The developer never got a chance to evaluate the bug on its merits because it was killed at triage by someone who had an incentive to keep the count low.

Good bug reports can't fix organizational dysfunction. But they can make bugs harder to dismiss. A report with precise reproduction steps, console logs, and a video recording is much harder to close without investigation than a vague ticket that says "checkout broken."

Why reports get closed

When we audit rejection patterns across client projects, the same reasons come up repeatedly.

No starting state. The report says "go to settings and click Save." Which settings page? Logged in as who? With what data already in the system? The developer opens a fresh account, clicks Save, nothing breaks, closes the ticket. Your bug was specific to admin users with more than 50 saved items, but you didn't mention that because it felt obvious.

Steps from memory. You found the bug at 4 PM, wrote the report at 5:30 PM, and reconstructed the steps from memory. You forgot step 4 where you switched tabs, which is the step that actually triggers the race condition. The developer follows your nine steps, skips the one you omitted, and can't reproduce.

Environment gaps. A bug that shows up on Chrome 121 with a slow 3G connection may not appear on the developer's local machine with sub-millisecond latency. A bug specific to macOS doesn't exist on the developer's Linux workstation. Without exact environment details, the developer can't tell whether they're looking at a genuine "can't reproduce" or a configuration mismatch.

Editorialized severity. If every bug you file is marked Critical, developers learn to ignore your severity ratings. When you actually find a production blocker, it sits in the same queue as your "Critical" report about a tooltip that's two pixels off. Reserve high severity for genuine blockers. Be honest about the rest. Developers will trust your escalations when they're rare.

What developers need from you

Forget templates for a minute. There are five questions a developer asks when they pick up a bug ticket, and if your report doesn't answer all five, you're creating work for both of you.

What broke? Not what the user experienced. What technical behavior deviated from spec. "The authentication endpoint returns 500 instead of 401 for expired tokens" is something a developer can grep for. "Something went wrong when I tried to log in" is not.

How do I see it? The exact sequence from a known state. "Known state" is the key phrase. Starting from a fresh browser session is different from a session that's been open for two hours. Starting from a newly created user is different from the test account you've reused for 18 months. Specify where the developer should begin, then number every click and keystroke.

What should happen instead? If you can't articulate the expected behavior, you might not have a bug. You might have a feature request, a misunderstanding of the spec, or an assumption about how things should work. Reference the PRD, the acceptance criteria, or at minimum describe what the same action produces in a working state.

How often does it happen? A bug that reproduces 100% of the time is a different investigation than one that appears in 1 out of 20 attempts. If it's intermittent, say so. Tell the developer what you tried, how many times, and whether you noticed any pattern in when it fails versus when it doesn't.

What else did you see? Browser console errors. Network tab responses. Server logs if you have access. The exact timestamp so the developer can correlate with their own logging. This ancillary evidence often contains the actual clue, even when the visible symptoms are vague.

The anatomy of a report that survives

Here's the structure we train our engineers to use. It's not revolutionary. It just works.

Title: [Component] [Action] fails with [Symptom] under [Condition]. "Checkout - Submit times out when cart contains 50+ items" tells a developer everything they need for initial triage. "Checkout broken" tells them nothing.

Environment: Browser and version. OS and version. Device type. Network conditions if relevant. Auth state. Feature flags. Timestamp. If the developer needs to match their setup to yours, give them everything they need to do it.

Preconditions: What has to be true before step 1. Account type, subscription tier, existing data, any setup you performed. This is the section that most reports skip, and it's the section most responsible for "Cannot Reproduce."

Steps: Numbered. Click-by-click. If timing matters, say so. "Click Submit before the loading spinner disappears" is a different bug than "Click Submit after the page has fully loaded." Don't assume the developer will interact with the UI the same way you did.

Expected result: What should happen, with a reference to why you believe that. A spec section, an acceptance criterion, or "this worked correctly on the previous build" all count.

Actual result: What happened, with evidence. Screenshot for visual bugs. Network trace for API failures. Console log for JavaScript errors. Video for timing-dependent issues. Attach everything. Developers will ignore what they don't need, but they can't use evidence you didn't include.

Frequency and severity: How often you saw it, how many times you tested, and your honest assessment of impact. Not every bug is a P1.
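Put together, a minimal report following this structure might look like the sketch below. Every specific in it is invented for illustration: the account email, feature flag name, acceptance criterion ID, endpoint, and timestamps are placeholders, not real project data.

```
Title: [Checkout] Submit times out when cart contains 50+ items

Environment:
- Chrome 121.0.6167.85, macOS 14.2, MacBook Pro (M1)
- Network throttled to "Slow 3G" via DevTools
- Logged in as admin (test-admin@example.com), feature flag new_cart ON
- Observed 2024-01-15 around 14:32 UTC

Preconditions:
- Admin account with an existing cart containing 52 items

Steps:
1. Log in and open the checkout page
2. Click "Submit" before the loading spinner disappears

Expected result:
Order confirmation within 5 seconds (per acceptance criterion AC-12)

Actual result:
POST /api/orders hangs, then fails after 30 seconds with a 504
(console log, HAR file, and screen recording attached)

Frequency and severity:
Reproduced in 4 of 5 attempts; High - blocks checkout for large carts
```

Note how the title alone carries the component, action, symptom, and condition, and how the preconditions name the exact data state (52 items, admin account) that a fresh test account would not have.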

Mistakes that waste everyone's time

Combining bugs. Your report describes a broken submit button, a CSS overflow issue, and a missing validation message. That's three bugs. The developer fixes the submit button, and now the ticket is in limbo: partially done, impossible to close. One report per bug. Always.

Assuming shared context. You know that "the main CTA" means the purple button in the hero section. The developer who joined last week doesn't. Use explicit labels. Better yet, annotate screenshots. A red circle around the element in question removes all ambiguity.

Skipping negative tests. If the bug appears when logged in as admin but not as a regular user, that difference is critical diagnostic information. If you only tested admin, say so. Otherwise the developer will spend hours testing scenarios you already covered.

Writing steps from memory instead of re-testing. Before you submit, reproduce the bug one more time following your own steps exactly as written. If you can't reproduce it from your own report, the developer definitely won't.

Why we built BugBoard

We built BugBoard because the reporting workflow itself was the bottleneck. Our engineers were spending ten to fifteen minutes per bug report on manual assembly: capturing screenshots, annotating them, copying console logs, formatting steps, filling out fields. That overhead actively discourages thorough documentation. When you're running behind on test execution, you cut corners on reporting. Cut corners become rejected tickets.

BugBoard takes a different approach. You capture a screenshot, and the AI converts it into a structured bug report with steps, severity, and component tags. Browser details and console logs attach automatically. Instead of writing the report from scratch, you review and refine what the system drafted.

The result is that our engineers produce reports with consistent structure in under five minutes. Developers who receive BugBoard reports learn to trust the format because it's the same every time: same sections, same level of detail, same evidence types. That consistency builds the trust that makes the whole system work.

The trust problem underneath

Bug reporting is ultimately a communication problem between two groups of people who think differently about software. Developers think in terms of code paths and state. Testers think in terms of user flows and edge cases. A good bug report translates tester observations into developer-actionable information.

The teams that do this well have a few things in common. They review a sample of bug reports regularly and look for patterns in rejections. They track "Cannot Reproduce" rates and treat high rates as a process signal, not a blame metric. They make it easy for developers to ask clarifying questions quickly, because a five-minute conversation resolves ambiguity faster than three rounds of ticket comments.

And they treat bug report quality as a team discipline, not an individual skill. Templates help. Training helps. But what helps most is developers and testers actually talking to each other about what information matters and what doesn't.

The bugs are going to keep coming. With AI-assisted development pushing code faster than ever, the volume is only going up. The question is whether your reports will survive first contact with the developer who has to fix them.

More on how we approach QA at betterqa.co/blog.
