5 Ways You’re Screwing Up Your A/B Testing (& what to do about it)

Conversion hangover? Don’t book a Vegas trip unless you’re absolutely sure your A/B test results are real. (Image Source: thecampuscompanion.com)

Cognac, champagne, whiskey – you had it all after what you thought was an awesome A/B test win. But the excitement began to fade after a few days, when you noticed that your revenue didn’t increase the way you expected. Suddenly everything seemed gloomy, and you felt cheated.

“The tool declared a winner,” you argued in your head.

That’s right. It did.

But unless you understand the intricacies of A/B testing, you will only end up blaming the tool and finding yourself in situations where your revenue doesn’t match your conversion wins.

Want to know where you went wrong, and how to run A/B tests that actually boost your bottom line? The points below will help you understand the mistakes and how to fix them:

1. You set up a test at the beginning of the funnel while additional traffic enters in the middle of the funnel

Take the example of an eCommerce site. Here’s your typical conversion path: Homepage > Product category page > Product page > Checkout > Sale.

Cool?

Now, let’s assume you made a change at the product category page that pushed more people down to the checkout. This increased your revenue by 30% with a statistical confidence of 99.7%. But this lift was specific to the traffic that passed through the product category page.

You forgot you had additional traffic coming directly to your product page. This traffic was unaffected by the change you made on the product category page. As a result, your overall revenue increase will be lower than the 30% lift reported in your A/B test.

How can you avoid or fix this?

Check additional traffic streams that might affect your test results. Exclude them from your calculations and manage your expectations according to the traffic that will actually undergo the test.
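For instance, here is a rough, back-of-the-envelope way to set that expectation. The figures below are purely hypothetical; plug in your own traffic split:

```python
# Hypothetical figures: the test reported a 30% revenue lift, but only
# 60% of your revenue-driving traffic actually passes through the
# product category page where the change was made.
reported_lift = 0.30   # lift measured on the tested traffic
tested_share = 0.60    # share of revenue-driving traffic exposed to the change

# Traffic that skips the tested page contributes no lift,
# so the blended, site-wide lift is diluted accordingly.
blended_lift = reported_lift * tested_share
print(f"Expected site-wide revenue lift: {blended_lift:.0%}")  # 18%
```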

Access ‘Custom Variables’ in your Google Analytics (GA) account. If your A/B testing software is integrated with GA (like Visual Website Optimizer), you can easily see the conversion goals tracked for the visitors who actually became part of the test. This is how it looks in Google Analytics:

(Image: A/B test conversion goals in Google Analytics)

2. Waiting for statistical confidence is not your thing

Often, I see people stop tests arbitrarily and happily declare a winner at a statistical significance of 80%, 90% or even less.

And later they complain that the results were off and they didn’t see any improvement in their business.

If you stop your test before it reaches statistical confidence, you will have unreliable data. As an industry standard, you should run the test until it reaches a confidence level of at least 95%.

How can you avoid or fix this?

Declare the winner only when a minimum of 95% statistical significance has been achieved. This means there is only a 5% chance of seeing a difference this large if the variation actually performed no better than the control.

Thanks to the A/B testing tools available nowadays, this confidence level is calculated for you, so you don’t have to do the math yourself.
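If you ever want to sanity-check the number your tool reports, here is a minimal sketch of one common way to calculate it, a pooled two-proportion z-test. The visitor and conversion counts below are made up, and your tool may use a different statistical model:

```python
from math import erf, sqrt

def confidence_level(visitors_a, conversions_a, visitors_b, conversions_b):
    """Two-sided confidence that variation B converts differently from
    control A, using a pooled two-proportion z-test."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the assumption of no real difference
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_err = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / std_err
    return erf(abs(z) / sqrt(2))  # equals 1 minus the two-sided p-value

# Hypothetical numbers: 5,000 visitors per variation, converting at 5% vs. 6%
print(f"{confidence_level(5000, 250, 5000, 300):.1%}")  # roughly 97%
```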

3. Sample size doesn’t mean much to you

You waited for statistical confidence. Great! It’s 99% now. The percentage improvement looks good at 125.5%. This means you can rely on this data, right? Well, not always.

In an A/B test, you essentially draw conclusions about your entire target audience based on the behavior of a small sample of your customers. But you cannot take the data from 50 visitors and draw conclusions about the behavior of 50,000 website visitors.

If you implement changes on your website based on a test with an insufficient sample size, you are signing up for a surprise. Most likely, a 60% jump in revenue reported on the basis of 50 test participants will not match the results in the real world.

Here’s an example of a test that reached the statistical confidence but still has insufficient sample size:

(Image: an A/B test that reached statistical confidence with an insufficient sample size)

How can you avoid or fix this?

Go to our free split duration calculator. Plug in your site’s values and calculate the sample size, that is, the number of visitors you’d need to achieve conclusive test results.

Even if your A/B testing tool declares a winner at a 95% confidence level or above, let the test run until it reaches the required sample size.
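If you’d rather estimate the number yourself, below is a minimal sketch of the standard sample-size approximation for comparing two conversion rates at 95% confidence and 80% power. The baseline rate and the lift you want to detect are hypothetical, and a dedicated calculator may give slightly different figures:

```python
from math import ceil

def visitors_per_variation(baseline_rate, relative_lift,
                           z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variation to detect a given relative
    lift at 95% confidence (z_alpha, two-sided) with 80% power (z_beta)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Hypothetical: a 5% baseline conversion rate, aiming to detect a 20%
# relative improvement (5% -> 6%)
print(visitors_per_variation(0.05, 0.20))  # about 8,150 visitors per variation
```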

4. You focus on psychological tactics more than customers’ needs

The primary aim of conversion rate optimization is to eliminate unsupervised thinking and help consumers make the right decision by providing them value.

While psychological tactics like changing a button color do influence behavior and improve conversions, such wins usually have minimal impact on your revenue goals.

How can you avoid or fix this?

Focusing on psychological tactics is important, as they do help push visitors further down the conversion funnel, even if they don’t drive a direct sale. But understanding your customers’ needs and concerns, and addressing them on your website, will usually give you a better revenue lift than the psychological tactics you test.

5. Tracking the wrong conversion goal

Optimizing your website for only one goal is okay when it has a narrow focus. For example, the sale goal trumps all other conversion goals in the case of eCommerce sites.

But if your website has multiple conversion goals, not tracking all associated goals can sometimes make you draw false conclusions from your A/B tests.

For example, a SaaS (Software as a Service) website usually has multiple conversion goals, such as eBook downloads, free sign-ups, and paid sign-ups, along with metrics like bounce rate.

So if they change the call-to-action text for a ‘Free eBook download,’ it might increase the number of eBook download conversions. But it might also reduce their paid sign-ups, as more prospects now get distracted by the free eBook.

Tracking only the eBook downloads as your conversion goal here can give you a false sense of satisfaction, because you might discover later that your revenue is going down.

How can you avoid or fix this?

Make sure you’re tracking all important KPIs (key performance indicators). Sometimes you might see that the primary goal for the test had a positive result, but that it didn’t reflect favorably on another important metric of your website. In that case, you must decide which metric has a bigger impact on your business’s bottom line and proceed with the changes accordingly.
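As a simple illustration of this point (all goal names and numbers below are made up), you can lay out every goal side by side for the control and the variation, and spot a “win” on the primary goal that hurts another metric:

```python
# Hypothetical results for a SaaS site tracking two goals in the same test
control   = {"visitors": 10_000, "ebook_downloads": 400, "paid_signups": 120}
variation = {"visitors": 10_000, "ebook_downloads": 520, "paid_signups": 95}

for goal in ("ebook_downloads", "paid_signups"):
    rate_control = control[goal] / control["visitors"]
    rate_variation = variation[goal] / variation["visitors"]
    lift = (rate_variation - rate_control) / rate_control
    print(f"{goal}: {lift:+.0%}")

# ebook_downloads: +30%  <- the primary goal looks like a clear winner
# paid_signups: -21%     <- but the goal that actually drives revenue dropped
```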

In some cases, choosing the wrong A/B testing tool can also leave you stuck in a situation where you see inflated results in your test that don’t match your actual revenue figures (as Neil Patel explains in this post).

It is best to choose a reputable A/B testing tool. You can also run an A/A test to check whether your software is reporting accurate results.

— Smriti Chawla


About Smriti Chawla
Smriti Chawla is a Content Marketer with Visual Website Optimizer, the A/B testing software with built-in heatmaps. She writes about A/B testing and Conversion Optimization on their split-testing blog. For more updates from them, you can follow them on Twitter and connect with them on Google+.