However much they espouse the power and value of A/B testing, many business owners still get twisted up in opinions. And it’s understandable: when a website is your baby, it’s difficult not to.
But, as with a handful of other ailments, admitting you have a problem is the first step to recovery. You test (rather than decree) in order to nail down the most comfortable user experience for prospective customers, to make sure you’re not missing a big opportunity, and ultimately to create an environment most ripe for conversion.
I’ve seen fundamentally excellent A/B tests go awry in a handful of ways, and I’m going to outline some of them for you:
Screw-Up #1: Testing minutiae instead of concepts
So you’ve got a brand new homepage. Don’t launch it and test whether the green button converts better than the blue one. You’ll get there – but first – think in broad strokes.
Test big-picture concepts. Test the Yahoo-style homepage (lots of information with a portal feel) against the Google-style homepage (you can do one thing, and one thing only). Test the big pretty picture against an all-text benefits statement. Then, once you’ve optimized the big stuff, work your way down to colors, fonts and photos.
Screw-Up #2: Getting caught up in opinions
Remember: if you feel strongly that A will outperform B – that’s just, like, your opinion, man. And opinions are like belly buttons. Everybody has one.
Your opinion doesn’t matter, so don’t spend time debating which version will perform better.
What matters is that you thoughtfully choose a few concepts that are substantially different from one another, test methodically and let the numbers speak for themselves.
It’s not about the highest paid person’s opinion. That’s what April Fool’s is for.
Screw-Up #3: Not verifying that your results are statistically conclusive
It might look like one version is clearly winning, but don’t stop your test until you’ve verified that the difference is statistically significant. Put away that TI-83 and try this handy tool.
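If you’re curious what those significance calculators are doing under the hood, here’s a minimal sketch of a two-proportion z-test in Python. The visitor and conversion counts are made up for illustration:

```python
from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Convert z to a two-sided p-value via the normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# 120/2400 conversions vs. 95/2400: looks like a win, but is it conclusive?
p = z_test(120, 2400, 95, 2400)
print(round(p, 3))  # stop the test only once p drops below your threshold (commonly 0.05)
```

In this made-up example the p-value comes out above 0.05, so the apparent winner isn’t conclusive yet; keep the test running.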
Screw-Up #4: Comparing one week against the next
Countless times, I’ve heard CEOs and VPs at various startups suggest this: “We changed up the site on May 1, and the numbers over the last week are better than what we saw in the week leading up to the change.”
This is not an A/B test.
It might be meaningful, but it’s not methodical. If you want to feel good about the validity of your test results, don’t compare one week against the next. Use Google Website Optimizer or a similar tool — there are plenty — and alternate your new version against the control.
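The core of what those tools do for you is simple: each visitor is randomly but consistently assigned to a variant, so both versions run over the same time period. Here’s a rough sketch of one common bucketing approach (hash-based assignment — an illustrative assumption on my part, not any particular tool’s API):

```python
import hashlib

def assign_variant(visitor_id: str, variants=("control", "new_version")) -> str:
    """Deterministically bucket a visitor so they see the same variant on every visit."""
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Same visitor, same variant, every time -- and traffic splits roughly 50/50
print(assign_variant("visitor-42"))
```

Because the assignment is a pure function of the visitor ID, you don’t even need to store who saw what — the hash answers that question on every request.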
Screw-Up #5: Not measuring the entire funnel
It’s true that your A/B test should have a single “goal” outcome: the user clicks to see the next page, submits a registration form, downloads a white paper, etc.
However: you would be wise to measure the impact of your test on the entire conversion funnel. The experience your users have with the page content you’re testing could very well influence them further down Conversion Drive.
Let’s take a very simple example: you’re testing a page that uses the word “free” against one that steers clear of it. Your landing page might get more clicks, suggesting the “free” test is successful — yet your users suddenly appear less likely to convert to a purchase. By using the word “free” you may have created, for the test group, a different expectation than that of the control group.
If you don’t measure the behavior of each user group, test and control, all the way down the conversion funnel, you’re running blind.
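Measuring the whole funnel per group is just a matter of counting how many users from each variant reach each step, not just the first. A minimal sketch (the funnel steps and sample data are invented for illustration):

```python
from collections import defaultdict

FUNNEL = ["landing_click", "signup", "purchase"]

# (variant, user_id, furthest_step_reached) -- fabricated sample data
users = [
    ("control", "u1", "purchase"), ("control", "u2", "landing_click"),
    ("free",    "u3", "signup"),   ("free",    "u4", "landing_click"),
    ("free",    "u5", "landing_click"),
]

def funnel_report(users):
    """For each variant, count users who reached at least each funnel step."""
    report = defaultdict(lambda: [0] * len(FUNNEL))
    for variant, _uid, furthest in users:
        for i in range(FUNNEL.index(furthest) + 1):
            report[variant][i] += 1
    return dict(report)

for variant, counts in funnel_report(users).items():
    print(variant, dict(zip(FUNNEL, counts)))
```

In this toy data, “free” wins on landing-page clicks but produces zero purchases — exactly the kind of downstream effect you’d miss by measuring only the first click.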
Smashing Magazine published a pretty good “Ultimate Guide to A/B Testing” last June with some tool suggestions and a handful of surprising test results. If you’re really tickled by that sort of thing, be sure to follow Which Test Won.