I'd love to hear from the community what you all do to ensure that changes you make will actually improve the user experience.
I'm especially curious about:
- if you don't do A/B testing, why not?
- is your development a data-driven process?
- do you use a server-side or client-side A/B testing tool?
- what are the metrics you look at to decide whether the experiment was a success?
- how does A/B testing help you understand the cause and effect of changes to your website?
I plan to write another post about web performance and how A/B testing helps tie improvements in UX metrics back to specific optimizations. Your responses would be helpful context for me!
Top comments (10)
At PayPal, we run experiments all the time. Any change to UI or content, or any change that impacts users, has to be run through an A/B test to determine whether it works.
If things are done without A/B testing, then it's basically the HiPPO (highest paid person's opinion) syndrome.
I agree in many cases, but I think in other cases one can hypothesize that the outcome isn't going to be meaningfully different, and eyeballing the problem or working from first principles is fine.
We A/B test in order to avoid arguing over a series of potentially horizontal changes. We don't want to make a change that nobody can agree is an actual improvement. In other scenarios, the outcomes are too chaotic to justify the time it would take to set up a truly scientific test. It's a case-by-case basis, but hopefully the conversation can happen among people who all understand the tradeoffs.
Respect the A/B test, understand that it isn't a universal solution, and watch out for local-maxima problems. 😄
Sometimes it's hard to convince clients of A/B testing, especially when you need at least a month to gather significant data you can act on.
I never ignore your comments, Neil. It's prime DEV content.
If anyone doesn't do it, they might be wondering what A/B testing is. Super high level: you did some work to improve your users' experience, AKA make users push the money button (in theory). Now you use techniques to find out if it worked, or if your users can't seem to find the money button anymore. If the money button was clicked less, it's back to the drawing board and we do it all again.
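To make that concrete, here's a minimal sketch of the core mechanics: deterministically bucketing each user into a variant and tallying who clicked the "money button". The function names, the experiment name, and the 50/50 split are illustrative assumptions, not any particular tool's API.

```python
import hashlib

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user into 'A' or 'B' (hypothetical 50/50 split).

    Hashing user_id plus the experiment name keeps the assignment stable
    across visits without storing any state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Illustrative tallies: how many users saw each variant, and how many clicked.
results = {"A": {"users": 0, "clicks": 0}, "B": {"users": 0, "clicks": 0}}

def record_exposure(user_id: str) -> None:
    results[assign_variant(user_id, "money-button")]["users"] += 1

def record_click(user_id: str) -> None:
    results[assign_variant(user_id, "money-button")]["clicks"] += 1

# Compare click-through per variant once enough traffic has come in.
for variant, r in results.items():
    rate = r["clicks"] / r["users"] if r["users"] else 0.0
    print(f"{variant}: {rate:.2%} click-through")
```

The hashing trick is the important design choice: the same user always lands in the same variant, so you don't contaminate the comparison by showing one person both versions.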
We don't do A/B tests because we're a company of consultants and clients usually think they're a waste of money. "Just give us the best that you can".
Any attempt to bring in the argument is usually perceived as: "Wait, so you don't have a clear idea of what to do? We thought you were professionals." 🤷‍♂️
A/B tests could possibly be considered if the client disagrees on something, but in the end, they pay so we have to comply even if it sounds dumb.
On the lines of:
"If you're still unconvinced, we could make both versions and see what's better for the users."
"Free of charge, I presume?"
"Uh..."
Finally, it's also because clients usually call us to replace awfully outdated software, so it's fairly easy to do better.
Sadly this relies on having enough users in the test group for the relationship to be statistically significant.
I know what it is and know its benefits, but unless you're hitting enough users, it's not useful.
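To put a rough number on "enough users", here's a sketch of a two-proportion z-test using only the standard library. The sample counts are made up for illustration; in practice an off-the-shelf stats package is a safer bet than hand-rolling this.

```python
from math import erf, sqrt

def two_proportion_z_test(clicks_a: int, users_a: int,
                          clicks_b: int, users_b: int) -> float:
    """Return the two-sided p-value for the difference in click rates."""
    p_a = clicks_a / users_a
    p_b = clicks_b / users_b
    pooled = (clicks_a + clicks_b) / (users_a + users_b)  # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / users_a + 1 / users_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Made-up numbers: 120/2400 clicks for A vs 150/2400 for B.
p = two_proportion_z_test(120, 2400, 150, 2400)
print(f"p-value: {p:.3f}")  # ~0.06 here, just above the usual 0.05 cutoff
```

Note that even a 5.0% vs 6.25% difference across 4,800 users doesn't quite clear the conventional significance bar, which is exactly the low-traffic problem being described.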
We A/B test, but screen recordings of all mouse movements and clicks provide much more actionable information.
A/B testing is very important, also when it comes to building websites. Thanks to it, we can build a website with an optimal structure. I also recommend this post on the topic: gamerseo.com/blog/ab-test-case-stu...
I don’t do A/B testing because doing just A is already enough of a challenge for me (making the site look and behave the way I want).