Tudor Brad

Posted on • Originally published at betterqa.co

Your team is confusing bug severity with priority, and it's costing you sprints

I've sat through hundreds of sprint planning sessions where someone says "this is a P1" and someone else says "no, it's a sev-3" and then the whole room argues for fifteen minutes about a tooltip that renders wrong on Firefox. Nobody ships anything. The meeting runs long. Half the team checks out mentally because they've had this exact argument before.

The root problem is simple: most teams use "severity" and "priority" interchangeably, and that confusion creates real damage. Bugs get fixed in the wrong order. Critical issues sit in backlog while someone polishes a cosmetic fix that a stakeholder complained about in Slack.

At BetterQA, we triage thousands of bugs across dozens of client projects every month. This confusion shows up constantly, and it's one of the first things we fix when onboarding a new team.

Severity is about impact, priority is about urgency

That's the whole distinction. Once you internalize it, triage gets dramatically faster.

Severity answers: how broken is this? How much damage does the bug cause to the system, the data, or the user's ability to do their job?

Priority answers: how soon do we need to fix it? Given everything else on our plate, where does this land in the queue?

These two axes are independent. They correlate sometimes, but treating them as the same thing is where teams lose sprint capacity.

The examples that make it click

I use two examples when explaining this to new QA engineers, and they tend to stick.

Low severity, high priority: the CEO's bio typo. Someone misspelled the CEO's name on the company About page. The system works perfectly fine. No functionality is broken. No data is corrupted. Severity? Low. But the CEO noticed it, sent a message to the VP of Product, and now three people are asking when it will be fixed. Priority? High. Fix it today.

High severity, low priority: the edge case crash. There's a bug where the app crashes if a user enters exactly 47 special characters into a phone number field during registration. The app completely dies. Severity? High, it's a full crash. But it affects roughly 0.1% of users in a flow that has a validation fallback anyway. Nobody has actually reported it in production. Priority? Low. Log it, schedule it for a future sprint, move on.

If your bug tracker doesn't let you set these independently, you'll default to whatever field you have and lose the nuance. This is exactly why we built BugBoard with separate severity and priority fields. The distinction matters for triage, and collapsing them into one dimension forces bad decisions.

What severity levels actually look like

I've seen teams use three levels, five levels, even seven. The number matters less than consistency. Here's a practical five-level scale that works across most projects:

Critical (sev-1)

The system is down, data is being lost or corrupted, or a core workflow is completely blocked for all users. Payment processing fails. Login is broken. The database is returning errors. There is no workaround.

If you're debating whether something is sev-1, ask: "Can users do the primary thing they came here to do?" If the answer is no, it's sev-1.

Major (sev-2)

A significant feature is broken or behaving incorrectly, but the system is still usable. Users can work around it, but the workaround is painful or non-obvious. Think: search returns wrong results, file uploads fail intermittently, or a key report generates incorrect numbers.

Moderate (sev-3)

Something is clearly wrong but the impact is contained. A secondary feature misbehaves. A form doesn't validate one edge case properly. Sorting works on most columns but breaks on date fields. Users notice it but can still get their work done.

Minor (sev-4)

Cosmetic issues, UI inconsistencies, or small deviations from the spec that don't affect functionality. A button is slightly misaligned. A success message uses the wrong shade of green. Text truncates awkwardly at one specific viewport width.

Trivial (sev-5)

Issues so minor that most users would never notice them. A tooltip appears 200ms late. There's extra whitespace at the bottom of a page that only shows on one browser. The "about" link in the footer points to a slightly outdated version of the page.

What priority levels actually look like

Priority is a business decision, not a technical one. That's why product managers, project leads, or client stakeholders typically set priority, while QA engineers set severity. The people closest to the technical impact assess severity. The people closest to the business impact assess priority.

Immediate (P1)

Drop what you're doing and fix this now. The fix goes into the current sprint, possibly as a hotfix outside the normal release cycle. Reserved for situations where the bug is actively causing business damage: lost revenue, broken SLAs, security vulnerabilities being exploited.

High (P2)

Fix this in the current sprint. It's important enough to bump something else out of the sprint if needed. Stakeholders are watching. Customers have noticed.

Medium (P3)

Schedule this for the next sprint or two. It needs to get done, but it's not urgent enough to disrupt current work. Most bugs land here, and that's fine.

Low (P4)

Fix it when you have time. Put it in the backlog and revisit during grooming. If it never gets fixed because higher-priority work keeps coming in, that might be acceptable.

Won't fix / defer (P5)

The team acknowledges the bug exists but has decided not to fix it, at least not in the foreseeable future. Maybe the feature is being deprecated. Maybe the cost of fixing it outweighs the impact. Document the decision and move on.
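The two scales above are concrete enough to pin down in code. Here's a minimal sketch in Python: the names and the 1-to-5 numbering come straight from the scales above, but the enum structure itself is just one way to encode them, not a standard.

```python
from enum import IntEnum

class Severity(IntEnum):
    """How broken is it? Set by QA based on technical impact."""
    CRITICAL = 1  # system down, data loss, core workflow blocked
    MAJOR = 2     # significant feature broken, painful workaround
    MODERATE = 3  # contained impact, secondary feature misbehaves
    MINOR = 4     # cosmetic, no functional effect
    TRIVIAL = 5   # most users would never notice

class Priority(IntEnum):
    """How soon do we fix it? Set by product based on business context."""
    IMMEDIATE = 1  # drop everything, possibly hotfix
    HIGH = 2       # this sprint, bump something if needed
    MEDIUM = 3     # next sprint or two
    LOW = 4        # backlog, fix when there's time
    WONT_FIX = 5   # documented decision not to fix

# The two axes are independent: the CEO-typo example is low
# severity but top priority.
ceo_typo = (Severity.MINOR, Priority.IMMEDIATE)
```

Because the axes are separate values rather than one field, a bug like `ceo_typo` can hold a "low" on one scale and a "high" on the other without anyone overloading a single dropdown.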

The four quadrants that matter for triage

When you separate severity and priority into two independent fields, you get a 2x2 matrix that makes triage decisions almost mechanical:

High severity + high priority: Fix immediately. System crash affecting many users, critical security hole, data corruption in a production workflow. This is your "all hands on deck" category.

High severity + low priority: Schedule carefully. The bug is technically severe but the real-world impact is low because of how rarely it occurs or because a workaround exists. Don't ignore it, but don't let it hijack your sprint either.

Low severity + high priority: Fix fast, but keep perspective. The CEO's typo. The client's logo rendered in the wrong color. A cosmetic issue on a landing page right before a big marketing push. Quick fix, high visibility, low technical risk.

Low severity + low priority: Backlog it. Minor UI polish, edge case behaviors that almost nobody encounters, small inconsistencies that don't affect usability. Groom these periodically and close the ones that are no longer relevant.
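The quadrant logic really is mechanical, which means you can sketch it in a few lines. In this illustrative Python version, both scales run 1 (highest) to 5 (lowest), and the cutoff of 2 for "high" on either axis is my assumption; tune it to your own scale.

```python
def triage_action(severity: int, priority: int) -> str:
    """Map a (severity, priority) pair to one of the four quadrants.

    Both scales run 1 (highest) to 5 (lowest). The threshold of 2
    for "high" is an assumption, not a standard.
    """
    high_sev = severity <= 2
    high_pri = priority <= 2
    if high_sev and high_pri:
        return "fix immediately"     # all hands on deck
    if high_sev:
        return "schedule carefully"  # severe, but low real-world urgency
    if high_pri:
        return "fix fast"            # visible, low technical risk
    return "backlog"                 # groom periodically

# The CEO's typo: low severity (4), high priority (1).
print(triage_action(4, 1))  # → fix fast
```

The point isn't to automate triage away; it's that once the two fields exist, the decision for most bugs falls out of the grid instead of out of a debate.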

Where teams actually lose time

The damage isn't theoretical. I've watched it happen across projects.

Scenario 1: Everything is P1. A product owner marks every bug as high priority because they want everything fixed. The dev team has thirty P1 tickets and no way to distinguish between a broken payment flow and a misaligned icon. So they pick based on what seems easiest, or what they were already working near. The truly critical bugs get fixed by accident, not by design.

Scenario 2: Severity drives priority by default. The team uses a single field, or treats them as synonyms. A sev-1 crash that happens once a month in an internal admin tool gets treated with the same urgency as a sev-1 crash in the customer-facing checkout flow. One affects three people who already know the workaround. The other loses revenue every hour it's live.

Scenario 3: Nobody updates priority after initial triage. A bug was P3 when it was filed two months ago. Since then, the feature it affects has become the primary onboarding flow for a new enterprise client. It's now effectively P1, but nobody re-triaged it. The new client hits it on day one.

How we handle this at BetterQA

When we onboard a new client's QA process, one of the first things we audit is how they categorize bugs. More often than not, we find a single "priority" dropdown doing double duty, or severity levels that nobody on the team can define consistently.

We standardize on two separate fields with clear definitions that the whole team agrees on. QA sets severity based on technical impact. Product sets priority based on business context. When the two conflict, that conflict is the conversation worth having in triage, not "is this a P1 or a P2?"

In BugBoard, we enforce this separation at the tool level. Every bug has both fields. Reports can be filtered and sorted by either dimension independently. When you look at your backlog and filter for "high severity, low priority," you get a clear view of the technical debt that's accumulating quietly. When you filter for "low severity, high priority," you see the political fires that need quick attention.
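Filtering by either dimension independently is the payoff of keeping two fields. Here's a generic sketch of those two backlog views; the record shape and field names are illustrative, not BugBoard's actual schema.

```python
# Hypothetical bug records; field names are illustrative only.
# Both scales run 1 (highest) to 5 (lowest).
bugs = [
    {"id": 101, "title": "Checkout crash",      "severity": 1, "priority": 1},
    {"id": 102, "title": "47-char phone crash", "severity": 1, "priority": 4},
    {"id": 103, "title": "CEO bio typo",        "severity": 4, "priority": 1},
    {"id": 104, "title": "Footer whitespace",   "severity": 5, "priority": 4},
]

# Quiet technical debt: severe bugs nobody is pushing to fix.
tech_debt = [b for b in bugs if b["severity"] <= 2 and b["priority"] >= 3]

# Political fires: cosmetic issues with stakeholders watching.
fires = [b for b in bugs if b["severity"] >= 3 and b["priority"] <= 2]

print([b["id"] for b in tech_debt])  # → [102]
print([b["id"] for b in fires])      # → [103]
```

With a single collapsed field, neither of these views exists: the edge-case crash and the typo would sit next to each other at the same "priority" with no way to tell them apart.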

Practical steps to fix this on your team

If your team is currently mixing these up, here's what I'd do:

1. Add both fields to your bug tracker. If your tool only has one, add a custom field. Every bug gets both a severity and a priority rating.

2. Define who owns each field. QA owns severity. Product or project management owns priority. If someone wants to change the other team's rating, that's a conversation, not a unilateral edit.

3. Write down your definitions. Put your severity scale and priority scale somewhere the whole team can reference. One page, plain language, with examples. Revisit it quarterly.

4. Use the 2x2 in triage. When reviewing new bugs, plot them mentally on the severity/priority grid. The quadrant tells you what to do. Stop debating feelings and start making decisions based on two clear dimensions.

5. Re-triage periodically. Priorities change. A P4 bug in January might be a P2 by March because the product roadmap shifted. Build re-triage into your grooming cadence.

It's a small distinction with a big payoff

Getting severity and priority right won't make your bugs disappear. But it will make your triage meetings shorter, your sprint planning more accurate, and your team less frustrated. When everyone agrees on what "this is critical" actually means, you stop arguing about vocabulary and start fixing the right things in the right order.

That's the whole point.

For more on QA practices, bug reporting, and how independent testing teams handle triage at scale, check out the BetterQA blog.
