Benji Darby
What to Include in a Bug Report: Console Errors, Network Logs, and Why Templates Aren't Enough

It's standup. Someone picks up a ticket. The title says "Login broken." The description says "See screenshot" but there's no screenshot. Priority is marked "High" — which on this project means nothing, because everything is marked High.

The developer spends 20 minutes trying to reproduce, gives up, pings the reporter, waits four hours for a reply: "Oh I think it was in Firefox? Maybe Chrome." The ticket gets deprioritized. The user who reported it loses faith. The cycle repeats.

This isn't a process problem. It's a tooling problem.

Why most bug reports fail

The environment context is missing. "It doesn't work" with no browser, OS, viewport size, or URL. The developer has to guess, and their guess is wrong half the time.

The console had a red error that would have explained everything. The user didn't know to open DevTools. Even if they did, they wouldn't know which error was relevant.

The description is vague. "The page is weird" could mean a CSS misalignment, a broken layout, missing data, or a full crash. Without a screenshot, both sides waste time describing what they're seeing.

By the time the developer reads the ticket, the state that caused the bug is gone. The user's session expired, the cache cleared, the deployment rolled forward. Stale reproduction steps are worse than no steps at all because they send you down the wrong path.

And the bug was probably caused by a failed API call that returned a 403. The user saw "something went wrong." The developer sees nothing in the logs because the client never reported which endpoint failed.

What a good bug report actually contains

Every bug report that doesn't waste time has these:

  • Summary — one sentence describing the symptom, not the cause
  • URL — exact page where it happened
  • Browser and OS — including version
  • Screenshot — ideally annotated to highlight what's wrong
  • Console errors — from the session, not just the moment of reporting
  • Network failures — any 4xx/5xx responses from the page
  • Steps to reproduce — what the user was doing when it happened
  • Expected vs actual behavior — what they thought should happen

That's a lot to ask a human to collect manually. Most won't, and you can't blame them.

Why templates don't work

The common answer is "add a bug report template." Jira templates, GitHub issue templates, Google Forms with required fields.

Templates help with structure, but they fail at context. A user filling out "Browser: Chrome" doesn't know their version. They won't open DevTools to copy console errors. They won't check the Network tab. And they'll skip "Steps to reproduce" with "see above" nine times out of ten.

The information that matters most — console errors, network failures, browser version, exact URL — is already available in the browser at the moment the user encounters the bug. The problem is that nobody captures it.

The tooling approach

Instead of asking users to collect technical context, collect it for them.

Capture console errors from page load onward — not just from the moment the report form opens. The error that caused the bug happened before the user decided to report it. Patch fetch and XMLHttpRequest to log response status codes, so failed requests are recorded too. A 500 from /api/checkout is worth more than any written description.
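The capture idea above can be sketched in a few lines. This is a minimal illustration, not IssueCapture's implementation: the `consoleErrors` and `networkFailures` buffers, the `instrumentCapture` name, and the 50-entry cap are all invented here, and a real version would also wrap `XMLHttpRequest` and listen for window `error` / `unhandledrejection` events.

```javascript
// Minimal sketch: buffer console errors and failed fetches from page load,
// so a later bug report can attach them. Names and the cap are illustrative.
const consoleErrors = [];
const networkFailures = [];

function instrumentCapture() {
  // Wrap console.error so errors are buffered but still printed as usual.
  const originalError = console.error;
  console.error = function (...args) {
    consoleErrors.push({
      message: args.map(String).join(" "),
      at: new Date().toISOString(),
    });
    if (consoleErrors.length > 50) consoleErrors.shift(); // bound memory
    originalError.apply(console, args);
  };

  // Wrap fetch so any 4xx/5xx response is remembered with its endpoint.
  const originalFetch = globalThis.fetch;
  globalThis.fetch = async function (...args) {
    const response = await originalFetch.apply(this, args);
    if (response.status >= 400) {
      networkFailures.push({
        url: String(args[0]),
        status: response.status,
        at: new Date().toISOString(),
      });
      if (networkFailures.length > 50) networkFailures.shift();
    }
    return response;
  };
}
```

Call `instrumentCapture()` as early as possible — in a script tag in the document head — so nothing fails before the wrappers are installed.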

Browser metadata (user agent, viewport, URL, timezone) is already available via JavaScript without any user input. Screenshot annotation lets the user capture and draw on their screen — a red circle around "this button does nothing" eliminates most of the back-and-forth.

And if the report goes straight to the issue tracker with all context attached, nobody has to copy-paste from a form into a ticket. The ticket is the report.

The checklist

If you're evaluating how to improve bug report quality on your team, here's what to audit:

  • [ ] Are console errors captured automatically, without the user having to copy them?
  • [ ] Are failed network requests included in reports?
  • [ ] Is the browser/OS/URL captured without user input?
  • [ ] Can users annotate screenshots within the reporting flow?
  • [ ] Do reports land directly in your issue tracker, with no manual copy-paste step?
  • [ ] Is the report captured at the moment of the bug, before the user navigates away?

If you answered "no" to more than two of these, your bug reports are probably costing your team hours per week in triage and back-and-forth.

Privacy matters

One objection I hear: "We can't capture console errors and network requests — that's sensitive data."

This is valid and important. Any system that captures this data needs:

  • Domain exclusions — never log requests to payment processors, auth providers
  • URL sanitization — strip query parameters and path segments that contain IDs or tokens
  • User consent — the user initiates the report and can see what's being sent
  • Admin control — the team admin decides what categories of data are captured
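The first two bullets fit in a few lines. A sketch only: the exclusion list, the `:id` placeholder convention, and the ID-detection regex are assumptions for illustration — a real deployment would make all three configurable by the team admin.

```javascript
// Hosts whose requests should never appear in a report (hypothetical list).
const EXCLUDED_HOSTS = ["pay.example.com", "auth.example.com"];

// Sketch of URL sanitization: drop the query string entirely (tokens often
// hide there) and mask path segments that look like IDs.
function sanitizeUrl(rawUrl) {
  const url = new URL(rawUrl);
  if (EXCLUDED_HOSTS.includes(url.hostname)) return null; // never log these
  url.search = ""; // strip query parameters wholesale
  const cleaned = url.pathname
    .split("/")
    // Numbers, UUIDs, and long hex hashes become a ":id" placeholder.
    .map((seg) => (/^(\d+|[0-9a-f-]{16,})$/i.test(seg) ? ":id" : seg))
    .join("/");
  return url.origin + cleaned;
}
```

Applied to a captured failure, `sanitizeUrl("https://app.example.com/users/12345/orders?token=abc")` keeps the shape of the endpoint (`/users/:id/orders`) while dropping the token and the user ID.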

Capturing everything and filtering nothing is a privacy violation. Capturing nothing and asking the user to collect it manually is a productivity violation. The middle ground is automated capture with explicit filtering and user visibility.


This is the problem I built IssueCapture to solve — a widget that captures all of this automatically when a user reports a bug, and creates the Jira ticket with full context. But the principles apply regardless of what tool you use. Better bug reports come from better tooling, not better templates.

The teams I've talked to who stopped fighting bug report quality all did the same thing: they stopped relying on the reporter and started capturing the data automatically.
