Have you ever worked on a bigger change for an app and struggled with the release schedule? Have you ever wanted to get feedback from users early but weren't sure exactly when the right time was? Here's a simple but effective prioritization technique that can help you slim down your scope and gain confidence in it, with stages that map naturally to Alpha, Beta & Release.
There are lots of prioritization techniques that aim to solve different problems. You've probably already used some form of value-vs-effort prioritization, such as RICE. Maybe you've even asked your target audience with a purposefully designed survey to learn from them, e.g. using the Kano model. Every prioritization technique has its use cases, and maybe they have already helped you make a lot of useful decisions.
But these strategies are designed for a higher-level kind of prioritization, as in deciding whether you should implement feature A or feature B first, or whether feature C is even needed in the next version at all. They don't scale down to tasks or even sub-tasks of your features, though, so it's quite possible to do too much within a specific feature. They also don't help answer when you can start putting the feature in users' hands for early feedback, as user-focused approaches such as the Lean Startup methodology suggest. One could of course opt for a method that is independent of scale, like the MoSCoW method, but its categories wouldn't be easy to rate because they're so abstract that different people would have different expectations for each of them.
The goal of the Laser Focus prioritization strategy I suggest in this article is to provide clear rating categories, help with task scoping and offer an easy-to-apply method for interpreting the results. All three aspects together help you stay laser-focused.
## Laser Focus categories
There are three goals we want to reach with our categories:
- Decide which tasks are in the scope of the currently planned release.
- Prioritize tasks needed for an Alpha or Beta version higher than the others.
- The category names should have an actionable, self-contained meaning.
We suggest the following categories, which fulfill all three requirements:
### Vital
Absolute minimum needed for the first round of testing. Can be ugly.
This allows shipping a product with only the "Vital" features or tasks implemented to a small group of testers to get feedback early. Of course, the scope of this Alpha test should be made clear, stating which basic features or tasks are still missing so testers don't report them unnecessarily. But the vitals of the product or feature can already be tested, and we get a first round of feedback on whether we're headed in the right direction.
### Essential
Core aspects required for basic functionality. Can have rough edges.
A second and bigger round of testing can be started as soon as all "Essential" features or tasks are implemented. At this level, no specific testing scope needs to be communicated; it should be enough to call it a "Beta" version, where the base features are available but a lot of things are still missing or incomplete.
### Completing
Ironing out rough edges and completing aspects of functionality.
The "Completing" level defines the scope where the final product is ready to be released. In some situations, e.g. if a new version was announced for a specific date, the product can also be released while still, some "Completing" tasks are open, but then it should be publicly marked as "Beta". Typically this level includes all kinds of features or tasks that are important for a bigger customer base but are not relevant to evaluate the core of the product.
### Optional
Nice-to-haves that can be delayed to later versions or skipped entirely.
The "Optional" level has the notion that the features or tasks rated as such are wanted things, but that they are in no way necessary to release a finalized version of a product, even long term. Hence they can also be easily delayed or scrapped if needed as per the resources of the team.
### Retracting
Nice-to-haves at first sight that can potentially cause more harm than good.
Unlike "Optional", features or tasks rated as "Retracting" should be actively avoided. That means it can make sense to document or keep them somewhere including the rationale why they should be avoided for long-term decision making. This saves time when the same idea comes up again sometime in the future. Also, if multiple people are involved in the rating, it can help identify the tasks where discussion might be necessary to clarify the effect of a task on the product.
## Laser Focus matrix
The second pillar of the Laser Focus strategy is its multi-dimensional scalability. To explain what this means and why it is important, let's apply the categories we have so far to an example: a stopwatch app to track the time spent on different things throughout a day. This is the initial list of feature ideas, rated using the Laser Focus categories:
1. Create projects → Essential (essential to app, but pre-filled projects enough for first test)
2. Edit projects → Completing (not a necessity for testing purposes, but for final release)
3. Delete projects → Completing (cleanup task, not needed for testing purposes, but for final)
4. Start/Stop a timer → Vital (core idea of app, vital part of the app)
5. Select a project for the timer → Vital (without selecting project, app idea not fulfilled)
6. Edit past tracked times → Retracting (V2 with competitive feature, risk of cheating)
7. Delete past tracked times → Optional (nice to have, no risk of cheating as no added time)
8. Show historical time tracked on a selected project → Essential (core use case for app)
9. Show projects with the most tracked time → Essential (core use case for app)
Thanks to the categorization, we can already exclude two features from the first release and have even recognized a feature we should probably never implement (6), which should be permanently documented. But more importantly, we now know that 4 and 5 are the "Vital" features to implement first. Let's start working on their sub-tasks:
4. Start/Stop a timer → Vital
    - 4a. Design Start/Stop button layout (low fidelity)
    - 4b. Design Start/Stop button coloring & icons (high fidelity)
    - 4c. Design Start/Stop button pulsating shadow effect (animations)
    - 4d. Implement Start/Stop button layout (low fidelity)
    - 4e. Implement Start/Stop button coloring & icons (high fidelity)
    - 4f. Implement Start/Stop button pulsating shadow effect (animations)
    - 4g. Setup basic tracked time database models
    - 4h. Persist Start/Stop actions into the database
5. Select a project for the timer → Vital
    - 5a. Design project selector navigation & layout (low fidelity)
    - 5b. Design project selector shapes, colors & icons (high fidelity)
    - 5c. Implement project selector navigation & layout (low fidelity)
    - 5d. Implement project selector shapes, colors & icons (high fidelity)
    - 5e. Persist selected project into the tracked time database model
All clear, let's get started, right? Right?
No. I'm sure you noticed it already while reading (or skimming) through them: there's a problem. We prioritized the features by thinking about what's really necessary to be testable, to put the app into users' hands. But now we have the same problem again, just on a different level. These tasks (and potentially also their sub-tasks) aren't all "Vital" for our very first version to put in users' hands. How can we fix this? Should we apply another rating for the tasks, too?
Yes, absolutely! This is actually a requirement of the Laser Focus strategy: apply the rating on all levels downwards! Not necessarily on the levels above, where you are free to choose any alternative prioritization technique. But all the levels below wherever you start from should be rated like this. Let's assign the Laser Focus categories to the tasks, too, and then see what this means for the overall priority:
4. Start/Stop a timer → Vital
    - 4a. Design Start/Stop button layout (low fidelity) → Vital
    - 4b. Design Start/Stop button coloring & icons (high fidelity) → Completing
    - 4c. Design Start/Stop button pulsating shadow effect (animations) → Optional
    - 4d. Implement Start/Stop button layout (low fidelity) → Vital
    - 4e. Implement Start/Stop button coloring & icons (high fidelity) → Completing
    - 4f. Implement Start/Stop button pulsating shadow effect (animations) → Optional
    - 4g. Setup basic tracked time database models → Essential
    - 4h. Persist Start/Stop actions into the database → Essential
5. Select a project for the timer → Vital
    - 5a. Design project selector navigation & layout (low fidelity) → Vital
    - 5b. Design project selector shapes, colors & icons (high fidelity) → Completing
    - 5c. Implement project selector navigation & layout (low fidelity) → Vital
    - 5d. Implement project selector shapes, colors & icons (high fidelity) → Completing
    - 5e. Persist selected project into the tracked time database model → Essential
It's important to note that the reference point for the task ratings was the feature, because it's the direct parent. This means I asked myself "Is persisting Start/Stop actions into the database vital or essential to the feature Start/Stop a timer?", not to the app or anything else. This makes the questions much easier to answer.
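To make this parent relationship concrete, here's one hypothetical way to model the hierarchy in code, continuing the enum sketch from the categories section (again, the names are illustrative, not part of the strategy):

```swift
/// A rated element of the hierarchy: a feature, a task or a sub-task.
/// Each element knows its own rating and its direct parent (if any).
final class RatedItem {
    let name: String
    let category: LaserFocusCategory
    let parent: RatedItem?

    init(name: String, category: LaserFocusCategory, parent: RatedItem? = nil) {
        self.name = name
        self.category = category
        self.parent = parent
    }
}

// Feature 4 and one of its tasks from the list above:
let startStopTimer = RatedItem(name: "Start/Stop a timer", category: .vital)
let persistActions = RatedItem(
    name: "Persist Start/Stop actions into the database",
    category: .essential,  // essential *to the feature*, its direct parent
    parent: startStopTimer
)
```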
Let's visualize these two different levels of rating with a simple matrix. On the X-axis we put the ratings of the features. On the Y-axis the ratings of the tasks. The circles represent the tasks:
As you can see, tasks 4a, 4d, 5a, and 5c are in the bottom-left field, the "Vital-Vital" field, or "VV" for short. The field's background is tinted red, and it contains all the tasks the focus should be on first. Once they're all implemented, the very first testing round can begin: the Alpha phase starts.
The tasks 4g, 4h, and 5e in the yellow-tinted "Vital-Essential" field, or "VE" for short, should be tackled next. Once all tasks in all three yellow-tinted fields are completed, the Beta phase starts.
The "VC" field with its "Completing" tasks for the "Vital" features should be tackled last among the tasks we defined so far. Once all tasks in all green-tinted fields are done, it's Release time.
In the above example, we skipped the tasks for all non-Vital features. If we had rated them as well, the full matrix could look something like this, now also including the "Retracting" rating:
We can see how the Alpha, Beta, and Release tasks are layered in rings around the origin (the bottom-left corner), visually giving each task a priority based on its distance to the origin. This easily scales to a third axis if, for example, sub-tasks were added to each task. Formally speaking, it scales to any number of dimensions. To calculate the overall category of any given element, look up all of its ancestors and select the lowest priority among them as the overall category of the "atomic" (lowest-level) element. For example, imagine a sub-task rated "Essential", its parent task rated "Vital" and its parent feature rated "Completing". The lowest priority among these is "Completing", so this is the overall category of the sub-task.
Calculating the overall category alone can lead to many tasks ending up on the same level, especially at "Completing", where we have 5 different fields. A way of prioritizing features or tasks within the same category is to calculate the average of the element's own category and all of its ancestors' categories. To do this, let's assign each category a number (from 1 for "Vital" to 5 for "Retracting"); the lowest-level element (e.g. a sub-task) can then be represented by a tuple, e.g. `(2, 1, 3)` in the above example. The average of these numbers is simply `(2 + 1 + 3) / 3 = 2.0`. Another task with more ancestors and the same overall "Completing" category might be rated as `(3, 2, 3, 1)` and therefore have an average of `(3 + 2 + 3 + 1) / 4 = 2.25`, so it should be prioritized lower. The higher the average, the lower the priority. That makes a lot of sense, as the average roughly resembles the distance to the origin, the point of highest possible priority.
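Continuing the code sketch from above, both the overall category and the average fall out of the chain of ancestors; this is just one possible way to express the rule, not a prescribed implementation:

```swift
extension RatedItem {
    /// The categories of this element and all of its ancestors, e.g. (2, 1, 3).
    var categoryChain: [LaserFocusCategory] {
        var chain: [LaserFocusCategory] = []
        var current: RatedItem? = self
        while let item = current {
            chain.append(item.category)
            current = item.parent
        }
        return chain
    }

    /// The lowest priority (= highest number) found anywhere in the chain.
    var overallCategory: LaserFocusCategory {
        categoryChain.max()!  // never empty: the chain always contains `self`
    }

    /// Tie-breaker within the same overall category: a lower average means a higher priority.
    var averageRating: Double {
        Double(categoryChain.map(\.rawValue).reduce(0, +)) / Double(categoryChain.count)
    }
}

// The example from the text: sub-task "Essential" (2), task "Vital" (1), feature "Completing" (3).
let someFeature = RatedItem(name: "Some feature", category: .completing)
let someTask = RatedItem(name: "Some task", category: .vital, parent: someFeature)
let someSubTask = RatedItem(name: "Some sub-task", category: .essential, parent: someTask)
print(someSubTask.overallCategory)  // completing
print(someSubTask.averageRating)    // (2 + 1 + 3) / 3 = 2.0
```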
But don't worry, you don't actually have to calculate these averages. There's a simpler way based on the matrix we've seen above that is precise enough:
The above diagram shows the order in which the fields should be tackled, based on their distance to the origin. Note that there are two fields placed 2nd, 4th, and 5th each. For these fields, there's a choice to be made that can differ depending on the situation: should we focus on adding more features, or on improving the features we've already started? To expand the feature set first, continue in the direction of the feature category, e.g. "EV" before "VE". To improve existing features first, it should be the other way around.
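If you keep the ratings in a script like the sketches above, this field order falls out of a simple sort: overall category first, then the average as the tie-breaker. How the remaining ties (e.g. "EV" vs. "VE") are resolved is exactly the feature-expansion vs. feature-improvement choice just described, so this hypothetical helper deliberately leaves them open:

```swift
/// Sorts all lowest-level items into a single work queue:
/// overall category first (Alpha → Beta → Release scope), then the
/// average rating within the same category. Items that still tie
/// (e.g. tasks in the "EV" and "VE" fields) are not ordered further here.
func workQueue(for items: [RatedItem]) -> [RatedItem] {
    items.sorted { lhs, rhs in
        if lhs.overallCategory != rhs.overallCategory {
            return lhs.overallCategory < rhs.overallCategory
        }
        return lhs.averageRating < rhs.averageRating
    }
}
```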
## Laser Focus breakdown
In the above section, we learned that categorization on multiple levels is key to the Laser Focus concept. If you try to apply this to your project right away, though, you may realize that many or even all of your features or tasks seem "Vital" or "Essential" to you. If that's the case, it's a sign that you probably haven't split up your tasks efficiently yet.
That's why it's important to break down your tasks the right way before categorizing them. The guiding question you ask yourself while splitting features into tasks or tasks into sub-tasks should not be restricted to "which steps do I need to take to finish it?". You should also think about the effort of each step, and if that effort isn't negligibly small, consider splitting it into its own task. Sometimes that seems hard, but more often than not it's a good idea to follow the approach "make it work, then make it better" while splitting up tasks.
For example, we could have split the above feature "Start/Stop a timer" into 3 tasks: "Design the Start/Stop buttons", "Implement the Start/Stop buttons" and "Persist data". The problem with this is that there are no different levels of completion. It's better to break it down even further. Of course, we could do that as sub-tasks under these tasks, but to make the priority calculation easier, it is recommended to use fewer levels. So instead we opted for "Design the Start/Stop button layout", "Implement Start/Stop button layout" and the same two tasks for "... coloring & icons" and "... pulsating shadow effect".
Ask yourself which parts come with their own effort and split them so that each task is worth prioritizing on its own. Don't split away micro-tasks, though; it's not worth prioritizing such small tasks, so just keep them as part of another task.
A proper breakdown is very important for the Laser Focus strategy to be effective.
## Summary
Let's sum up the Laser Focus prioritization strategy:
- Break down your features and tasks into smaller steps of different completion levels
- Rate them on each level with "Vital", "Essential", "Completing", "Optional" or "Retracting"
- Visualize or calculate the overall priority for the lowest level by considering all ancestors
Apply these steps at any time in your project, and they will help you keep focusing on the important things and confidently put your work-in-progress versions into users' hands early.
I hope this helps!
_This article was written by Cihat Gündüz._