Effective A/B testing often doubles sales. But most people who try it never see meaningful results.
The problem isn’t that they don’t follow all the usual best practices. The problem is exactly that they follow the usual “best practices.”
The top conversion rate optimization (CRO) experts do A/B testing very differently from what the average marketing guru talks about.
Let’s go through the difference, so you don’t waste your time and money on A/B tests that have bad odds of succeeding.
Common A/B testing “best practices”
The most common “best practice” is quite simple: “Test one thing at a time, so you know what caused the difference.”
It sounds like solid advice.
Imagine a medical test where the study groups didn’t just take different medicines, but also changed their diet, exercise routine, sleep rhythm, and social behavior. You couldn’t tell if the winning group won thanks to a better medicine or some other change.
You often (but not always) want scientists to A/B test things one change at a time.
But the goal of A/B testing marketing isn’t publishing a scientific paper. The goal is to make more sales, get more leads, etc.
The problem with “one change at a time” is that it rarely creates a statistically significant difference. Even when it does produce a winner, the improvement is usually meaninglessly small.
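To put rough numbers on that (the baseline conversion rates, lift sizes, and 80% power below are purely illustrative, and the formula is the standard two-proportion sample-size approximation, not anything tied to a specific testing tool): detecting a tiny lift takes orders of magnitude more traffic than detecting a big one.

```python
# A rough sample-size sketch (illustrative numbers, not from any real test):
# standard two-proportion formula, normal approximation, 80% power.
from scipy.stats import norm

def visitors_per_variant(p_control, p_variant, alpha=0.05, power=0.80):
    """Visitors needed in each variant to detect the given difference."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)           # desired statistical power
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    return (z_alpha + z_beta) ** 2 * variance / (p_control - p_variant) ** 2

# A small tweak: 2.0% -> 2.1% conversion (a 5% relative lift)
print(f"{visitors_per_variant(0.020, 0.021):,.0f}")  # ~315,000 per variant
# A big change: 2.0% -> 3.0% conversion (a 50% relative lift)
print(f"{visitors_per_variant(0.020, 0.030):,.0f}")  # ~3,800 per variant
```

If your site doesn’t get hundreds of thousands of visitors per variant, that first test will simply never reach a conclusion.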
What about all the amazing case studies?
If you’ve followed A/B tests and other CRO stuff from the sidelines, you’ve likely seen hundreds of tests where a tiny change made a huge difference.
Maybe a different button color, a repositioned form, or a slightly reworded headline increased conversion by a factor of two or eight.
Those results are either flukes, or cases where the starting point (called the “control”) was inexcusably bad.
Flukes are a part of all A/B testing. If you’re happy with only 95% confidence in the result (which is usually considered good enough), roughly one in 20 tests of changes that actually do nothing will still come out looking like winners. I aim for 97+% confidence to cut down on flukes a bit more.
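As a back-of-the-envelope check (the 100-test count is made up for illustration): among tests of changes that genuinely do nothing, the confidence threshold alone determines how many will still come out as “winners” by chance.

```python
# Expected flukes among tests of changes that have no real effect
# (the 100-test count is illustrative).
no_effect_tests = 100
for confidence in (0.95, 0.97):
    alpha = 1 - confidence  # chance a no-effect test still looks significant
    print(f"{confidence:.0%} confidence -> ~{alpha * no_effect_tests:.0f} "
          f"flukes per {no_effect_tests} no-effect tests")
```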
However, inexcusably bad starting points are far more common than flukes. The more a web development company is paid, the more likely it is that they make the site “look good.” That typically translates to ghost buttons (transparent buttons with only an outline), colors that don’t draw attention to the right things, forms that prioritize looks over usability and persuasiveness, and so on.
When there are millions of A/B tests running all the time, there will be a lot of flukes and inexcusably bad control versions. A lot of those end up being published as case studies.
Almost no one publishes an article about the 578 button-color tests that didn’t create a change. But when one color change increased results by 142%, plenty of people share that story without checking the data (a lot of those results are based on low confidence levels, which means a high likelihood the result is just a fluke) or considering whether the control was even decent.
As an aside, tons of diet studies compare a specific diet to the standard American diet. If you know what that means, it’s not hard to see why any new diet is “healthy.” The control is so miserably bad that any diet that forces a person to think about what they eat is healthier.
Why do so many marketing experts get it wrong?
Because they aren’t experts in conversion rate optimization.
Let’s exclude all the wannabe experts from this. There’s a “marketing expert” on every corner of the internet. Including them seems pointless. Let’s narrow this down to people with at least a few years of experience and $100,000+ in fees behind them.
Take a random sample of them. Very few focus on conversion rate optimization, even though many do some A/B testing in their own business or with clients.
CRO is a tricky topic. I did it for a few years as my primary focus. But it’s been a couple of years since I really dedicated myself to it. I know more than most marketing people (over half of my clients were and still are marketing experts), but there are people/companies that are clearly ahead of me now in CRO, so I only sell basic-level help with it.
But if you’ve never been fully in that world, you don’t know the difference between knowing the basics, being great, and being truly one of the best. It easily looks like knowing some basics and doing a lot of A/B tests is enough for great results.
That’s why many non-CRO-expert marketing experts share the common best practices, which include the “test one small change at a time” idea. They genuinely think it’s good advice because it makes sense in other contexts (i.e., scientific inquiry).
Why do so many “conversion rate optimization” companies get it wrong?
Because they aren’t CRO companies. They’re web development companies that portray themselves as CRO companies.
There’s a huge over-supply of web development. There’s a meaningful under-supply of CRO help. To a layperson, the two look damn similar. So, lots of mediocre web development companies claim to be doing CRO when they really only recite some basics about “user experience” and “conversion best practices” and then sell a new website based on whatever template they use.
I talk with people almost every week who have spent tens of thousands on multiple website redesigns that all failed to create a meaningful improvement (decreases in sales, on the other hand, are surprisingly common). Usually at least some of those web development companies called themselves “conversion experts.”
So, they share the same “best practices” because they’re just as oblivious to how ineffective those practices are as most others.
How to do effective A/B testing
Here’s the $1mil secret:
- Start with completely different versions of whatever you’re testing. Don’t just move a form to the other edge of the screen. Don’t just reword the headline. Don’t just use different images. Instead, have completely different pages, ads, emails, or whatever else you’re working on. Especially test the message—what you’re saying with the piece of marketing—since that makes the biggest difference to your results. (A minimal sketch of evaluating such a head-to-head test follows this list.)
- Once you’ve found the best starting point, move on to slightly less drastic changes. A/B test the focus of the page, change the image in an ad, or start the email with what you previously had as the ending.
- Keep moving gradually to smaller and smaller changes. When you no longer see meaningful improvements often enough to justify the effort, move on to A/B testing something else.
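For the first step, here’s a minimal sketch of how a head-to-head test between two completely different pages could be evaluated. The page labels, visitor counts, and conversion numbers are made up, and the plain two-proportion z-test from statsmodels stands in for whatever your testing tool reports.

```python
# Minimal head-to-head evaluation of two completely different pages
# (hypothetical numbers; a real test should run its planned duration first).
from statsmodels.stats.proportion import proportions_ztest

conversions = [120, 188]    # signups on page A vs. page B
visitors = [5000, 5000]     # traffic split evenly between the two pages

z_stat, p_value = proportions_ztest(conversions, visitors)
confidence = 1 - p_value    # how many CRO tools report "confidence"

print(f"Page A: {conversions[0] / visitors[0]:.1%} conversion")
print(f"Page B: {conversions[1] / visitors[1]:.1%} conversion")
print(f"Confidence the difference is real: {confidence:.2%}")
# Act on the winner only if confidence clears your bar (I use 97%+).
```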
That’s it. Sort of.
What complicates things is that you need to figure out what big changes are likely to create meaningful differences, what other things need to change when you make one change to keep things aligned, and how to manage the whole process.
Doing CRO at the highest level isn’t easy. But copying the basic rules of A/B testing from the people whose clients consistently see 50-500% increases in profit within a year makes a lot more sense than relying on the flawed “best practices.”
It’s better to make one big change in a week that’s likely to create a big difference than 100 small changes that won’t create nearly the same difference. Prioritize quality over quantity.
If you only have resources (time, money, energy) for small quick tests, you’re likely to get better results by using those resources on other things.
As a final note, there’s a lot more to high-level CRO than starting with big changes. But it’s really the only big secret behind effective A/B testing.