In Scrum, there is something called the Definition of Done. It's basically a set of conditions used to objectively determine whether a user story is done (in a sense that is understood by all parties).
Let's say that I've implemented a story, but upon testing, the QA team finds one or more critical bugs (i.e. blockers).
Obviously those blockers will have to be fixed ASAP.
But can/should you consider that story done?
More specifically, should the absence of blockers be part of the definition of done?
Also, what if it's not a blocker (just a P2 or lower)?
All answers are welcome, but I'd especially appreciate those coming from people who actually have experience with Scrum.
A very simple approach you can have when it comes to consolidating what "Done" means to your team is:
Is the original issue/struggle solved with what we're delivering?
With that in mind, it becomes clear that no, it is not done.
Also, remember that a blocker blocks: in a development process that moves from "To Do" to "Done", a story with a blocker never reaches done, because it's blocked.
I like your suggested approach/question.
An alternative could be "Is our product acceptably shippable even with this issue?".
Thanks for the input.
That makes sense. It's a different context than the one I'm in (we ship on demand, not product oriented), but I guess the principle remains.
Glad I could help :)
There'll be a variety of opinions on how to construct a definition of done (after many years I've never seen two definitions that looked the same). So take what I have to say with a truckload of salt.
On teams that I work with, these types of blockers (or more specifically the absence of critical issues) are always included in the definition of done.
For context, it's common that the first thing on our definition of done checklist is "It is running in production, without a feature flag". We don't consider work on a story to be "done" until it's been running in the real world for some time, and cannot be toggled off.
Applying that to your scenario: If a story introduces a critical bug prior to making it to production, it's not even close to done. It goes right back into the development column, with new acceptance criteria surrounding the issue that was discovered.
Now, there's always a grey area when dealing with non-critical issues. Ultimately it's up to the team, the product owner and the business stakeholders to decide if it's ok to ship code that contains issues that will be fixed some time in the near future.
Typically when we discover any issue within a story, we'll sit down with our product owner and replicate the issue for them. From there we have two options, add new acceptance criteria to address the issue or create a separate (but dependent) bug card to correct the issue.
As I described above, in the case of breaking changes there's no debate: the card goes back into development. But in the case of a smaller-impact issue it's a coin flip, and ultimately up to the product owner's and business' best judgement.
I think the only applicable answer you're going to find to this question is going to come from within your team and business context. For example: If you're working on a product that hasn't been released to the public yet, then introducing breaking changes is a far smaller issue than if you were out in the wild with millions of users. That's only one of many variables that will influence how you define when work is "done" in your team.
Yes, having the PO or stakeholders decide seemed like the best option to me as well.
I wanted to see if others share this view (or if they have reasons to be against it).
Thanks for your input.
P.S. Regarding what you said about having the implementation in production for a while, I just want to say: That's awesome! And badass!
User stories should only be closed after QA proves that the solution works and covers all the closing criteria. If you are closing tickets without QA's signoff, then you are doing it way too early. QA signoff should be part of closing a ticket.
Yes, that makes sense.