The more targeted and strategic an A/B test is, the more likely it is to have a positive impact on conversions.
A solid test hypothesis goes a long way in keeping you on the right track and ensuring that you’re conducting valuable marketing experiments that generate lifts as well as learning.
In this article I’ll give you a quick and easy method for formulating a solid test hypothesis.
But let’s start by making sure we know what a test hypothesis is.
Here’s the dictionary definition of a hypothesis:
“A tentative assumption made in order to draw out and test its logical or empirical consequences.”
In landing page optimization, the test hypothesis is the basic assumption that you base your optimized test variant on. It encapsulates both what you want to change on the landing page and what impact you expect to see from making that change.
By performing an A/B test, you can examine to what extent your assumptions were correct, whether they had the expected impact, and ultimately gain insight into the behavior of your target audience.
Formulating a hypothesis will help you scrutinize your assumptions and evaluate how likely they are to have an actual impact on the decisions and actions of your prospects.
In the long run this can save you a lot of time and money and help you achieve better results.
In order to formulate a test hypothesis, you need to know what your conversion goal is and what problem you want to solve by running the test.
So before you start working on your test hypothesis, you first have to do two things: define your conversion goal, and write a problem statement describing what you presume is keeping visitors from reaching that goal.
Once you know what your goal is and what you presume is keeping your visitors from realizing it, you can move on to formulating your test hypothesis.
A test hypothesis consists of two things: a proposed solution to the problem, and the result you expect that solution to produce.
Let’s look at a real world example of how to put it all together.
I have a free ebook that I offer to the readers on my blog. Being a test junkie, I’m always running experiments on the ebook landing page in order to gain insight and crank up conversions.
Data from surveys and customer interviews suggested that my target audience is busy and doesn't have a lot of spare time to read ebooks, and that the time it takes to read one could be a barrier that keeps them from downloading it.
With both the conversion goal and problem statement in place it was time to move on to forming a hypothesis on how to solve the issue set forth in the problem statement.
The fact is that you can read the book in just 25 minutes, and I hypothesized that I could motivate more visitors to download the book by explicitly stating that it’s a quick read.
In addition to this, data from eye tracking software suggested that the first words in the first bullet point on the page would attract immediate attention from visitors. And I further hypothesized that the first bullet point would be the best place to address the time issue.
Putting the proposed solution and expected result together led to the following test hypothesis:
“By tweaking the copy in the first bullet point to directly address the ‘time issue’, I can motivate more visitors to download the ebook.”
With all this in place I moved on to working on the actual treatment copy for the bullet point.
It's important to remember that until you test your hypotheses, they will never be more than hypotheses. You need actual, reliable data in order to support or refute them.
In order to find out whether my hypothesis would hold water I set up an A/B test with the bullet copy as the only variable.
As the test data clearly shows, my hypothesis held water and I could conclude that addressing the time issue in the first bullet point had the anticipated effect of getting more visitors to download the ebook.
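If you want to sanity-check a result like this yourself, a two-proportion z-test is a standard way to judge whether an observed lift reflects a real difference rather than random noise. The sketch below is a minimal illustration with hypothetical visitor and conversion counts (not the actual numbers from this test); dedicated testing tools run an equivalent calculation under the hood.

```python
from math import sqrt, erfc

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se                               # lift of B over A, in standard errors
    p_value = erfc(abs(z) / sqrt(2))                   # two-sided p-value
    return z, p_value

# Hypothetical data: 1,000 visitors per variant, 10.0% vs 13.0% conversion
z, p = two_proportion_z_test(100, 1000, 130, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A p-value below the conventional 0.05 threshold is usually treated as statistically significant; below that, you have reasonable grounds to conclude the variant genuinely outperformed the control.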
Working with test hypotheses provides you with a much more solid optimization framework than simply running with guesses and ideas that come about on a whim.
But remember that a solid test hypothesis is an informed solution to a real problem – not an arbitrary guess. The more research and data you have to base your hypothesis on, the better it will be.
Google Analytics, customer interviews, surveys, heat maps and user testing are just a handful of examples of valuable data sources that will help you gain insight into your target audience, how they interact with your landing page, and what makes them tick.
What are some of your most or least successful A/B test hypotheses?