This article was first published on my blog, smartpuffin.com.
As I plowed my way through A/B testing of more than 300 features, I was wondering...
Nice post, Elena. I'd add this one:
I see a lot of time wasted on bending over backwards to measure something when you really just need to trust your gut, or trust the expertise involved. You're only testing because it should deliver a good return on investment in the first place. If the cost of measuring the thing is too great, you might want to take a different approach.
Depending on your implementation, split testing can also have pretty terrible consequences for performance and software complexity. In general, I think we need to proceed with humility in this area.
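To make the complexity point concrete, here's a minimal sketch in Python (the experiment name and the checkout stubs are made up) of how even a single flag-based split doubles the code paths behind one page:

```python
import hashlib


def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user into 'control' or 'treatment'.

    Hashing the user id together with the experiment name keeps the
    assignment stable across sessions without storing any state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 == 0 else "control"


def render_checkout(user_id: str) -> str:
    # Every running experiment adds a branch like this one; a few
    # overlapping tests quickly multiply the code paths you have to
    # ship, test, and eventually clean up.
    if assign_variant(user_id, "new-checkout-flow") == "treatment":
        return "new checkout page"
    return "old checkout page"


print(render_checkout("user-42"))
```

And that's the cheap part: once several of these branches interact, you're effectively maintaining one codebase per combination of variants.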
Hi Ben, thank you! You raise a great point.
Actually, two points: hard to measure, and complexity. I should've thought of these myself; I've seen the consequences of both!
Thank you again!
I don't get it. Why shouldn't I use A/B testing when releasing a bugfix, for example? The success metric in that case would be the error rate, which I want to drive down. A/B testing doesn't have to be coupled to conversion; you can use any success metric.
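For example (a minimal sketch in Python; the counts are invented), you could treat error rate as the success metric and compare the variants with a two-proportion z-test:

```python
from math import sqrt

# Hypothetical tallies: requests served and errors observed per variant.
requests = {"control": 10_000, "treatment": 10_000}
errors = {"control": 120, "treatment": 45}

p_control = errors["control"] / requests["control"]
p_treatment = errors["treatment"] / requests["treatment"]

# Two-proportion z-test: did the bugfix significantly lower the error rate?
p_pool = (errors["control"] + errors["treatment"]) / (
    requests["control"] + requests["treatment"]
)
se = sqrt(
    p_pool * (1 - p_pool)
    * (1 / requests["control"] + 1 / requests["treatment"])
)
z = (p_treatment - p_control) / se

print(f"control error rate:   {p_control:.2%}")
print(f"treatment error rate: {p_treatment:.2%}")
print(f"z = {z:.2f}")  # |z| > 1.96 -> significant at the 5% level
```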
Thanks a lot for the article and for listening to the vox populi.