The Case Against A/B Testing at Early-Stage Startups

A/B tests have become so ubiquitous that founders who don't frequently employ them have been subtly made to feel bad about themselves for having such a weak data-driven culture.
By Andrew Cohen Edited by Dan Bova
If you've been listening to the tech startup illuminati over the past few years, you'd be forgiven for thinking that the cult of A/B testing has become permanently ingrained in the "lean startup" mentality. Indeed, whenever there is a product question that cannot be answered by mere intuition, today's rock star startups seem to unquestioningly advocate the so-called "split test" as the arbiter of all crowd wisdom.
A/B tests have become so ubiquitous that founders who don't frequently employ them have been subtly made to feel bad about themselves for having such a weak data-driven culture. This has gotten a little out of hand. Yes, it is true that A/B tests can provide tremendous insight into online user behavior, which can -- with a lot of data -- help you improve your conversion rates. But there is a problem.
The problem at early-stage companies is twofold: early optimization distracts management from much higher-yield activities, and the tests themselves often lack a large enough sample size to yield meaningful results.
Performing split tests is simply a management-intensive task, no matter how cheap and efficient A/B testing tools such as Optimizely have become. Someone needs to devote time to deciding what to test, setting up the experiment, and then verifying and implementing the results. Even if this technically does not require much "time," it consumes management's mental bandwidth, which is the scarcest resource at any company (especially an early-stage startup).
Do you really want your co-founder distracted by chasing a 5 percent better conversion rate (if he or she is lucky) when you haven't yet proved that there is a burning need for your product or that you can scale your marketing effectively?
Similarly, how can you be sure that those A/B test "results" are statistically significant in the first place? Young companies typically have little web traffic, which may not produce usable A/B test conclusions for weeks or months, if at all. And because the composition of your traffic changes frequently, it can be very hard to draw meaningful conclusions from any one test. If 80 percent of your traffic this month came from a single TechCrunch article or Facebook ad blitz, is that really representative of how your core user base will behave on that target web page in the future?
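To make the sample-size problem concrete, here is a minimal back-of-the-envelope sketch (my own illustration, not something from the article) using the standard two-proportion sample-size approximation. The 3 percent baseline conversion rate, the 5 percent relative lift and the 1,000 visitors per day are purely illustrative assumptions.

```python
# Rough sketch: how many visitors per variant a two-proportion z-test needs
# to detect a given relative lift. Baseline rate, lift and daily traffic below
# are illustrative assumptions, not figures from the article.
from math import ceil, sqrt
from statistics import NormalDist

def required_sample_per_variant(baseline_rate, relative_lift,
                                alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect the given lift."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Example: detecting a 5% relative lift on a 3% baseline conversion rate.
n = required_sample_per_variant(0.03, 0.05)
print(n)                        # roughly 208,000 visitors per variant
print(2 * n / 1000, "days")     # over a year at 1,000 total visitors per day
```

Even under these generous assumptions, each variant needs on the order of 200,000 visitors, which at 1,000 visitors a day is more than a year of traffic before the test can be called.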
Early-stage startups should therefore be careful not to get bogged down perfecting their marketing copy or the color of their call-to-action buttons before they have nailed the fundamentals. Building a metrics-driven culture at your startup means always focusing on the right key performance indicators and not optimizing them too early. Sometimes a quick focus group, a hallway usability test or plain intuition is enough to make product decisions that are "good enough" for your company's current size.
Have you had positive or negative experiences with A/B testing at your startup? Sound off in the comments.