How to Formulate A Smart A/B Test Hypothesis (and Why They’re Crucial)

By Michael Aagaard, January 20th, 2014 in A/B Testing

Image by Olivier Gunn via The Noun Project

The more targeted and strategic an A/B test is, the more likely it’ll be to have a positive impact on conversions.

A solid test hypothesis goes a long way in keeping you on the right track and ensuring that you’re conducting valuable marketing experiments that generate lifts as well as learning.

In this article I’ll give you a quick and easy method for formulating a solid test hypothesis.

But let’s start by making sure we know what a test hypothesis is.

What is an A/B test hypothesis?

Here’s the dictionary definition of a hypothesis:

“A tentative assumption made in order to draw out and test its logical or empirical consequences.”

In landing page optimization, the test hypothesis is the basic assumption that you base your optimized test variant on. It encapsulates both what you want to change on the landing page and what impact you expect to see from making that change.

“I think that changing this into that will have this impact.”

By performing an A/B test, you can examine to what extent your assumptions were correct, whether they had the expected impact, and ultimately gain insight into the behavior of your target audience.

Formulating a hypothesis will help you scrutinize your assumptions and evaluate how likely they are to have an actual impact on the decisions and actions of your prospects.

In the long run this can save you a lot of time and money and help you achieve better results.

Why are you running a test?

In order to formulate a test hypothesis, you need to know what your conversion goal is and what problem you want to solve by running the test.

So before you start working on your test hypothesis, you first have to do two things:

  1. Determine your conversion goal
  2. Identify a problem and formulate a problem statement

Once you know what your goal is and what you presume is keeping your visitors from realizing that goal, you can move on to formulating your test hypothesis.

The 2 essential elements of an A/B test hypothesis

A test hypothesis consists of two things:

  1. A proposed solution
  2. The anticipated results the solution will facilitate
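If you want to keep your hypotheses consistent – and easy to log and share with a team – it can help to capture both elements, together with the conversion goal and problem statement behind them, in a simple structured template. Here’s a minimal sketch in Python; the field names are just an illustration, not a standard format:

    from dataclasses import dataclass

    @dataclass
    class TestHypothesis:
        """One record per experiment: the two essential elements plus context."""
        conversion_goal: str     # what you're optimizing for
        problem_statement: str   # what you presume is keeping visitors from converting
        proposed_solution: str   # element 1: the change you'll make
        anticipated_result: str  # element 2: the impact you expect that change to have

        def statement(self) -> str:
            # Render the hypothesis in the "By doing X, I can achieve Y" form.
            return f"By {self.proposed_solution}, I can {self.anticipated_result}."

Filling in the four fields for a concrete test gives you a one-line hypothesis you can drop straight into your test plan.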

Let’s look at a real-world example of how to put it all together.

[Image: the ebook landing page]

I have a free ebook that I offer to the readers on my blog. Being a test junkie, I’m always running experiments on the ebook landing page in order to gain insight and crank up conversions.

Data from surveys and customer interviews suggested that I have a busy target audience who don’t have a lot of spare time to read ebooks, and that the time it takes to read the thing could be a barrier that keeps them from downloading it.

  • Conversion goal: ebook downloads
  • Problem statement: “My target audience is busy, and the time it takes to read the ebook is a barrier that keeps them from downloading it.”

With both the conversion goal and the problem statement in place, it was time to form a hypothesis on how to solve the issue.

The fact is that you can read the book in just 25 minutes, and I hypothesized that I could motivate more visitors to download the book by explicitly stating that it’s a quick read.

In addition to this, data from eye-tracking software suggested that the first words in the first bullet point on the page would attract immediate attention from visitors. I further hypothesized that the first bullet point would be the best place to address the time issue.

Putting the proposed solution and expected results together led to the following test hypothesis:

“By tweaking the copy in the first bullet point to directly address the ‘time issue’, I can motivate more visitors to download the ebook.”
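Plugged into the template sketched earlier, the same example looks like this (a hypothetical instance, using the sketch’s field names):

    ebook = TestHypothesis(
        conversion_goal="ebook downloads",
        problem_statement="My target audience is busy, and the time it takes "
                          "to read the ebook is a barrier.",
        proposed_solution="tweaking the copy in the first bullet point to "
                          "directly address the 'time issue'",
        anticipated_result="motivate more visitors to download the ebook",
    )
    print(ebook.statement())
    # By tweaking the copy in the first bullet point to directly address
    # the 'time issue', I can motivate more visitors to download the ebook.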

With all this in place, I moved on to writing the actual treatment copy for the bullet point.

[Image: the treatment copy for the first bullet point]

It’s important to remember that until you test your hypotheses, they will never be more than hypotheses. You need actual reliable data in order to prove or disprove the validity of your hypotheses.

To find out whether my hypothesis would hold water, I set up an A/B test with the bullet copy as the only variable.

[Image: the A/B test results]

As the test data clearly shows, my hypothesis held water and I could conclude that addressing the time issue in the first bullet point had the anticipated effect of getting more visitors to download the ebook.
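How do you know whether a lift like this is real or just random noise? A standard approach is a two-proportion z-test on the control and variant conversion counts. Here’s a minimal sketch using only the Python standard library; the numbers below are placeholders, not the actual data from this test:

    from math import erf, sqrt

    def ab_test_p_value(conv_a, n_a, conv_b, n_b):
        """Two-sided two-proportion z-test (normal approximation)."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        # Two-sided p-value from the standard normal distribution
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return p_a, p_b, p_value

    # Placeholder numbers, not the real results from this experiment:
    control, variant, p = ab_test_p_value(conv_a=120, n_a=2000, conv_b=160, n_b=2000)
    print(f"control: {control:.1%}, variant: {variant:.1%}, p = {p:.3f}")

If the p-value comes out below your significance threshold (0.05 is a common choice), chance becomes an unlikely explanation for the difference, and you can treat the result as support for your hypothesis.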

Better Data = Better Test Hypothesis

Working with test hypotheses provides you with a much more solid optimization framework than simply running with guesses and ideas that come about on a whim.

But remember that a solid test hypothesis is an informed solution to a real problem – not an arbitrary guess. The more research and data you have to base your hypothesis on, the better it will be.

Google Analytics, customer interviews, surveys, heat maps and user testing are just a handful of the valuable data sources that will help you gain insight into your target audience, how they interact with your landing page, and what makes them tick.

What are some of your most or least successful A/B test hypotheses?

– Michael Aagaard

About The Author


Michael Aagaard is the Senior Conversion Optimization Consultant at ContentVerve. When he’s not preaching the CRO gospel as a popular international speaker, he spends his time helping clients improve conversion rates in wonderful Copenhagen. Follow him on Google+, Twitter or get his new free ebook: 7 Universal Conversion Optimization Principles.

Comments

  1. Megan Bush says:

    I agree that it is important to test based on informed solutions to real problems vs. arbitrary guesses. However, a challenge I face with well-defined hypotheses is that newer testing clients start the conversation with a “win or loss” approach. In reality, every test tells you something about your customer and is a win if you use the information to drive future business decisions. Conveying that in a test hypothesis is difficult. Have you experienced this as well?

    • Dan Levy says:

      You bring up a great point, Megan. What Michael alluded to in the post (but didn’t quite have space to go into – I know, because I edited the piece!) is that even “failed” A/B tests can give you valuable insights about your audience. And these insights can be used to form future test hypotheses that ultimately drive conversions. I guess it all comes down to the old formula: Data + Insights = Success.

    • Hi Megan – as Dan mentioned, this is a really good point. I wrote an article for Unbounce.com a while back about how negative test results can help you increase conversions in the long term. Here’s the link: http://unbounce.com/a-b-testing/failed-ab-test-results/

      In my experience, it’s important to “train” your clients to understand that optimization is an ongoing process – not a hit or miss situation.
      And, as you mention, gaining insight is just as important as getting lifts. When you get a negative test result, it’s important to tell the client about the insight the test provided, and not just leave it at “well, this didn’t work…” ;-) Walk them through the test data and the underlying hypothesis, and let them know that the test wasn’t a waste of time but a valuable step on the road to greater customer insight and higher conversions.

      Basing your tests on solid, easy-to-understand hypotheses that can be proved or disproved makes this whole process much easier for the client to understand.

      - Michael

      • Megan Bush says:

        Thanks for the reply, Michael and Dan! If you find yourself running a test with multiple variations, do you always choose which one you believe to be a winner? If so, how do you justify even adding the second variation? For example, if you’re testing a red vs. green vs. blue button, would you identify in your hypothesis “The green version will increase conversion rate”?

  2. Andy Kuiper says:

    Fundamentals are important – thanks Michael :-)

  3. Steve says:

    I find that approximately 75% of the time I get it “right” the first time and that subsequent split testing results decrease my conversion rates. That said, I still continue to test, because even if I’m hitting 75% of the time I’m still missing 25% of the time and often missing badly at that.

  4. Sean Ellis says:

    Great quick read… This is the second article that I’ve read since yesterday that encourages formulating A/B test hypotheses. Here’s the other one that suggests it’s one of the most common A/B testing mistakes: http://www.growthhackers.com/12-ab-split-testing-mistakes-i-see-businesses-make-all-the-time/

    While I haven’t formally referred to it as hypothesis testing, in essence I’ve been doing it. I only run a test if I have a reason to think it will perform better. But admittedly, I could be more disciplined about documenting my assumptions. As the article suggests, every test is an opportunity to learn. Learning starts with documenting your assumption about what you think will happen and then recording whether it actually does. If you don’t keep a log of these assumptions and results, you won’t learn as much as you could, and you’ll slow your ability to improve results over time.

    My biggest takeaway from the article is a re-emphasis that conversion rate optimization takes a lot of rigor. It’s not something that companies should just dabble in. I’ve long recognized that the impact of effectively executed conversion rate optimization on a business is game-changing. It starts with little things like documenting your assumptions and the research that supports them. Testing is simply an extension of that research that happens to also directly move the needle in the business via successful tests.

  5. Hi Sean – thanks for the excellent comment! Great points.

    As one becomes more experienced with testing, forming hypotheses becomes like second nature and the formal part of writing down the hypothesis becomes less important. However, when you work with a team or for clients, it’s really important that the hypothesis is formulated so everyone can understand it. Moreover, formulating the hypothesis “officially” helps facilitate knowledge sharing.
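
    A shared log doesn’t have to be fancy, either – even an append-only CSV with the date, the hypothesis, the prediction and the observed outcome will do. Here’s a minimal sketch in Python; the file name and columns are just one possible scheme:

        import csv
        from datetime import date

        def log_test(path, hypothesis, predicted, observed):
            """Append one test record to a shared CSV log."""
            with open(path, "a", newline="") as f:
                csv.writer(f).writerow(
                    [date.today().isoformat(), hypothesis, predicted, observed]
                )

        # Hypothetical entry for the ebook test described in the post:
        log_test(
            "ab_test_log.csv",
            hypothesis="Addressing the 'time issue' in the first bullet lifts downloads",
            predicted="more visitors download the ebook",
            observed="variant beat the control",
        )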

    I agree 100% that CRO has to be an ongoing and structured process. When I started out, I used to run tests on a whim based on what felt right at the moment. Sure, I got some results here and there, but I didn’t really start to move the needle until I understood the fundamentals ;-)

    - Michael

  6. Tech Bead says:

    Nice formulation of hypothesis :)
