In my last post I talked about 3 landing page optimization mistakes I’ve made that other people should avoid. One point in particular sparked quite a bit of interest, so I decided to dedicate a whole post to expanding on it.
Today we’re going to take a closer look at what you can learn from case studies and why you shouldn’t always take them 100% literally. Moreover, I’m going to give you a number of tips on how to get the maximum value from the myriad of LPO, CRO and A/B testing case studies available on the web today.
When you read a case study in which someone got a conversion lift by, say, changing their CTA button color from red to green, it means that the person who performed the test found out that a green button performed better on their landing page.
It does not mean that green buttons will always perform better than red ones on all landing pages forever.
There are a million things that influence what works on a given landing page, and a specific change might not work when implemented in a different context.
My best advice is to take what you can from other people’s experience and use it as inspiration for your own landing page experiments. One of the best ways of doing that is by trying to understand the underlying principle that made the change work and then find ways of transferring that principle to your own landing page.
Let’s look at an example from the real world that revolves around CTA placement.
Here’s an old case study where a landing page with the CTA placed under the fold performed significantly better than a variation with the CTA above the fold.
You could conclude that CTAs should always be placed under the fold. But a more nuanced interpretation might be that you should place the CTA where it best complements the prospect’s decision-making process.
The test results in this case study inspired me to conduct further research and more tests. I found that there is a correlation between the complexity of your offer and how much information prospects need in order to make an informed decision. This correlation influences how far down on the page you should ask for the conversion.
Generally, I see a tendency that the higher the level of complexity and scrutiny involved, the further down the page you should place your CTA.
We can also use the classic red vs. green button test as an example. While a case study might show an example of a green button performing better than a red one, the transferable principle has little to do with red vs. green – it has to do with visual hierarchy.
It’s really all about using contrasting colors that make mission-critical elements like buttons stand out and assume a higher rank in the visual hierarchy so it’s easy for visitors to identify them.
And as this case study from HubSpot clearly shows, green isn’t always better than red.
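One rough way to reason about "standing out" is contrast: the WCAG contrast-ratio formula quantifies how much a foreground color pops against its background. The sketch below is illustrative only – the hex colors are my own picks, not taken from the HubSpot test – but it shows why "green vs. red" is the wrong question: on a white page, pure red actually has a higher contrast ratio than pure green.

```python
# Illustrative sketch: WCAG 2.x relative luminance and contrast ratio.
# The colors below are hypothetical examples, not from any specific case study.

def luminance(hex_color):
    """Relative luminance of a color, per the WCAG 2.x definition."""
    rgb = [int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4)]
    # Linearize each sRGB channel before weighting
    lin = [c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4 for c in rgb]
    return 0.2126 * lin[0] + 0.7152 * lin[1] + 0.0722 * lin[2]

def contrast(fg, bg):
    """Contrast ratio between two colors, from 1:1 (identical) to 21:1 (black on white)."""
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# On a white background, red out-contrasts green:
print(contrast("#FF0000", "#FFFFFF"))  # red button on white page
print(contrast("#00FF00", "#FFFFFF"))  # green button on white page
```

In other words, which button color "wins" depends on everything around it – which is exactly the visual-hierarchy principle, not a color rule.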
So if you want to get the most out of the case studies you read, think of them as a source of transferable principles that you can tweak and adapt to fit your particular conversion scenario and landing page.
As illustrated in the examples above, taking test results literally and jumping to conclusions can backfire and limit both the insight and results you can gain in the long run.
So when you read a case study, make sure to take the time to interpret the data and consider what’s really going on. Once you have a clear idea of why the changes worked in the case study example, it’s crucial that you carefully consider how similar changes could potentially increase conversions on your own landing page.
In my experience, the best way of doing that is to develop an A/B test hypothesis stating what you want to change, how the change will affect the behavior of your prospects and what results you expect to see.
It may sound overly complicated, but formulating a test hypothesis can be as simple as filling in the blanks in this template: "By changing ___ into ___, I can get more prospects to ___ and thus increase ___."
Here’s an example of how the red vs. green button test gave me inspiration for a test hypothesis that had nothing to do with buttons.
Remember the transferable principle? In this case, the button test taught me that mission-critical elements need to stand out and assume a higher rank in the visual hierarchy.
This turned out to be directly applicable to a landing page I was working on that offered a free YouTube MP3 converter.
A simple squint test indicated that the orange font color made the headline blend in with the rest of the landing page design. I hypothesized that changing the font color to black would make the headline copy stand out, increase readability and boost conversions.
Here was my test hypothesis:
By changing the font color of the headline from orange to black I can get more prospects to read and understand the headline value proposition and thus increase conversions.
When I tested the hypothesis live on the landing page, it did indeed increase downloads by 6.64%.
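Before celebrating a lift like that, it's worth checking whether it could be statistical noise. The sketch below uses made-up visitor and conversion counts (the case study above doesn't report sample sizes) and a standard two-proportion z-test, using only Python's standard library:

```python
# Hypothetical numbers for illustration only – the actual test's sample
# sizes are not given in the post. Two-proportion z-test, stdlib only.
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (relative lift of B over A, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return (p_b - p_a) / p_a, p_value

# Assumed: 10,000 visitors per variation, 12.0% vs. 12.8% conversion
lift, p = two_proportion_z_test(conv_a=1200, n_a=10000, conv_b=1280, n_b=10000)
print(f"lift: {lift:.2%}, p-value: {p:.3f}")
```

With these assumed numbers the lift is roughly 6.7%, yet the p-value lands above the conventional 0.05 threshold – a good reminder that a headline lift percentage on its own, in your test or in a case study, tells you nothing about whether the result is reliable.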
A solid optimization hypothesis goes a long way in keeping you on the right track. Moreover, it forces you to scrutinize your test ideas and helps you keep your eyes on the goal. If you want to learn more about hypotheses, this article will give you a more in-depth guide.
Learn what you can from other people’s results, but don’t just assume that you can simply replicate those results on your own landing page.
You can get a lot of inspiration and ideas from A/B testing case studies, but always take the time to build a solid test hypothesis on how and why this particular test idea will help you increase conversions.
And most importantly, before you draw any conclusions, test, test, test whether your optimization hypothesis actually works in the real world – on your specific landing page and target audience.
Any questions? Let me know in the comments!