Fabricio Teixeira for UX Collective

Originally published at uxdesign.cc

All them A/B tests that never happened

When A/B tests become less about user validation and more about escape hatches.

Photo by Robert Anasch

5:01pm. Design review session. The designer pulls up their first design and talks through how this updated version of the homepage will help the business, the user, and democracy. They then pull up a second version of the same homepage, and list a few other reasons why this one is even better.

“There’s certainly pros and cons for each of these options you are presenting…”, says one of the coworkers.

What follows is a series of comments from the people in the room in defense of one of the two options. Opinions start out shy and generally positive, then gradually grow more passionate until people start to pick sides.

Several minutes of passionate debate go by.

It’s now 6:02pm and someone is waiting outside to use the meeting room.

“We should definitely A/B test them.”

The A/B test as an escape hatch

It’s fine to be indecisive.

Designers who are curious and constantly looking to learn are —in my opinion— more valuable to companies than designers-who-know-it-all-based-on-zero-facts. That’s the beauty of digital design: using data to make informed decisions on what an interface will look like. Art and science combined into a single discipline.

The problem with proposing A/B tests as a shortcut to end that kind of debate is that the discussion is usually not about which version will perform best. The difference between the two options has more to do with competing business priorities than with different UI approaches to the same priority. The two versions are trying to achieve different things, and both are important for the business.

Incapable of prioritizing one goal over another, teams appeal to A/B tests as the solution to end all problems.

In many cases the team ends up running the tests and, unsurprisingly, the results point to:

  • Version A gets more clicks on our sign-up button
  • Version B gets longer session times exploring our content
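
To make that mismatch concrete, here is a minimal sketch of what such a test boils down to, written in Python with fabricated data (the experiment name, user IDs, and session logs are all hypothetical, not from any real product): users are deterministically bucketed into a variant, and each variant "wins" on a metric with a different unit, so the numbers alone can't settle the debate.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "homepage-v2") -> str:
    """Hash the user into a stable 50/50 bucket so they always see the same version."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Fabricated session logs: (user_id, clicked_signup, session_seconds)
sessions = [
    ("u1", True, 40), ("u2", False, 310), ("u3", True, 55),
    ("u4", False, 280), ("u5", False, 120), ("u6", True, 30),
]

by_variant = {"A": [], "B": []}
for user_id, clicked, seconds in sessions:
    by_variant[assign_variant(user_id)].append((clicked, seconds))

for variant, rows in by_variant.items():
    if not rows:
        continue
    signup_rate = sum(clicked for clicked, _ in rows) / len(rows)
    avg_session = sum(seconds for _, seconds in rows) / len(rows)
    # One variant "wins" on sign-up rate, the other on session time.
    # The test can't say which metric matters more; that's a strategy decision.
    print(f"{variant}: sign-up rate {signup_rate:.0%}, avg session {avg_session:.0f}s")
```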

Finally, the team goes back to the question that should have been answered before testing: do we want more sign-ups, or do we want longer engagement time? What is our priority in the first place?

A/B tests are powerful tools, but designers and product managers need to understand more clearly what these tests are good for. Otherwise, these tests become an escape hatch for lack of consensus.

For lack of a solid product strategy point of view.

Most of those A/B tests never happen.

This article is part of Journey: lessons from the amazing journey of being a designer.

