Olga R

Boost Your Testing Strategy: The Coolest Methods to Prioritize A/B Tests Like a Pro! 🎲📊😎

In today's fast-paced business world, continuous product improvement and innovation are key to staying competitive.

This is where A/B testing comes in as a powerful tool with both advantages and drawbacks. While A/B testing can yield valuable insights, it can also be expensive and time-consuming. Therefore, it is crucial to focus on running only the most valuable experiments.

There are several methods available for prioritizing hypotheses, including ICE, RICE, and PIE. We will walk through each of them to see which one works best.


ICE. Impact. Confidence. Ease.

To use the method effectively, the team should assign a score to each of the three factors listed below and multiply them together. The smaller the score range, the better: a narrow scale keeps the process simple and makes the parameters easier to assess.

  1. Impact. First, consider the potential impact of the test configuration on the key metric being optimized.

  2. Confidence. Next, assess how confident the team is in the effectiveness of this experiment based on their experience.

  3. Ease. Finally, determine how easy it is to implement the idea by assessing the number of team members required and the amount of time it will take. The easier it is to implement, the higher the score.

Once you have scored each factor, use the following formula to calculate the overall score:

ICE Score = Impact x Confidence x Ease
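
As a quick illustration, here is a minimal Python sketch of the calculation, assuming each factor is scored on a 1-5 scale (the function name and example values are my own):

```python
def ice_score(impact: int, confidence: int, ease: int) -> int:
    """ICE Score = Impact x Confidence x Ease."""
    return impact * confidence * ease

# Hypothetical idea: high impact and easy to ship, but unproven.
print(ice_score(impact=5, confidence=2, ease=4))  # 40
```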


RICE. Reach. Impact. Confidence. Effort.

This method is quite similar to ICE, but it also takes into account the potential reach of the test.

  1. Reach. This is an estimate of how many users the idea will affect. For example, if the idea will affect all users, assign a value of 5. If it will impact 50% of users, assign a value of 4. If it will affect about 20% of users, assign a value of 3. If it will affect about 5% of users, assign a value of 2. Finally, if it will affect less than 1% of users, assign a value of 1.

  2. Impact. Think about how much the idea will influence user activity, such as a user's willingness to become a paying customer, and how it affects the conversion rate. If you expect a 100% boost in conversion rate, assign a value of 5. If you expect a 50% increase, assign a value of 4, and so on. Alternatively, you can use a scale where 3 indicates a massive impact, 2 a high impact, 1 a medium impact, 0.5 a low impact, and 0.25 a minimal impact. The scale is flexible and can be adjusted to suit your needs.

  3. Confidence. How certain are you about the impact and reach scores you assigned to each idea? If you are completely confident, assign a score of 5. If you are not at all confident, assign a score of 1. Alternatively, you can use a percent score to indicate your confidence level, where 100% represents high confidence, 80% represents medium confidence, 50% represents low confidence, and so on.

  4. Effort. Last but not least is the amount of time and resources required to implement a particular feature. For example, if an improvement requires three team members to work on it for one week, the effort score would be 3 person-weeks. If it needs one team member and three weeks of work, it would also have an effort score of 3 person-weeks. If you don't want to calculate an exact number, just do a mental tally and assign an Effort score of 5 if it takes more than 10 workdays, 4 if it takes more than 5 workdays, and 1 if it takes less than a day.

After assigning all scores, you need to calculate the final score.

RICE Score = (Reach x Impact x Confidence) ÷ Effort

For example, (5 Reach x 5 Impact x 1 Confidence) ÷ 4 Effort = 6.25, or roughly 6.
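
In code, the same calculation is a one-liner (a minimal Python sketch; the function name is mine):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE Score = (Reach x Impact x Confidence) / Effort."""
    return (reach * impact * confidence) / effort

# The worked example above: Reach 5, Impact 5, Confidence 1, Effort 4.
print(rice_score(5, 5, 1, 4))  # 6.25, roughly 6
```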

When applying RICE, it is essential not to get bogged down in complex scoring and calculations in pursuit of perfect accuracy. Scoring provides a solid foundation for rational discussions about which idea to prioritize and why.


PIE. Potential. Importance. Ease.

Here you should answer the following questions:

  1. Potential. How much improvement can be made to this part of your product as a result of a specific idea?

  2. Importance. Is this part of your product important? How often do users interact with it?

  3. Ease. How challenging will it be to implement the test? The easier it is, the higher the score.

To simplify the process, you can assign a score between 1 and 10 to each variable and calculate the average of the three scores. The result gives you an overall sense of the potential of the idea being tested.
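
As a minimal Python sketch, assuming 1-10 scores (the example values are hypothetical):

```python
def pie_score(potential: float, importance: float, ease: float) -> float:
    """PIE Score = average of Potential, Importance, and Ease (each 1-10)."""
    return (potential + importance + ease) / 3

# Hypothetical idea: solid potential on a moderately important page.
print(pie_score(potential=8, importance=6, ease=7))  # 7.0
```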


While these methods may seem impressive in theory, it is crucial to assess their true value in practice. To effectively prioritize your A/B tests, it is important to consider your specific goals, resources, and product type. In my opinion, it is better to create a custom method tailored to your unique business processes, one that incorporates variables specific to your company. The scores of each idea should serve as a basis for discussion rather than a final judgment.

Regardless of the method used, proper prioritization of A/B test ideas is crucial to ensure that no single idea is given undue preference over others. Prioritization should also not be the responsibility of just one person; it should be a team activity performed regularly - for instance, once every two weeks - where team members can vote and share their opinions.

The backlog of ideas should be regularly reprioritized, as business priorities, strategies, and developer availability can all change. By staying on top of your ideas and regularly reevaluating them, you can ensure that you are always focusing on the best ideas and staying ahead of the curve.
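
In practice, reprioritizing can be as mechanical as recomputing scores and re-sorting the backlog. Here is a minimal Python sketch using RICE; the idea names and factor values are hypothetical:

```python
# Each backlog entry holds the latest factor estimates for an idea.
backlog = [
    {"idea": "New onboarding flow", "reach": 5, "impact": 3, "confidence": 4, "effort": 4},
    {"idea": "Checkout button copy", "reach": 3, "impact": 2, "confidence": 5, "effort": 1},
    {"idea": "Dark mode", "reach": 4, "impact": 1, "confidence": 3, "effort": 5},
]

# Recompute RICE and sort the highest-scoring ideas first.
for item in backlog:
    item["score"] = item["reach"] * item["impact"] * item["confidence"] / item["effort"]
backlog.sort(key=lambda item: item["score"], reverse=True)

for item in backlog:
    print(f'{item["idea"]}: {item["score"]:.1f}')
```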
