Small Changes That Have a BIG Impact on Increasing Conversion Rates

By Michael Aagaard, November 20th, 2013 in Conversion | 21 comments
[Image: small changes can make a big impact on increasing conversions. Via our friends at Sparksheet.]

The dream scenario for most online businesses is to get the highest possible lift with the smallest possible investment in optimization.

The obvious way of making this dream come true is to implement small, simple changes that have a significant effect on conversions. But I know from experience that identifying such optimization opportunities can be really, really difficult.

In this article we’ll take a closer look at when, how and why small changes can have a major impact on increasing conversion rates.

Let’s start out with an example from the real world…

How a simple landing page copy tweak generated an 18.59% conversion lift

This is a test I recently performed on the landing page for my free ebook 7 Universal Conversion Optimization Principles. I was able to generate an 18.59% increase in downloads simply by tweaking one bullet point.

[Image: the conversion principles landing page test.]
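To be clear about what a figure like 18.59% means: lift is the relative change in conversion rate from control to treatment. Here's a minimal sketch in Python, with made-up visitor and download counts standing in for the real data:

```python
# Relative conversion lift between a control and a treatment page.
# The visitor/download counts are made up for illustration only.

def conversion_rate(conversions, visitors):
    return conversions / visitors

control_cr = conversion_rate(270, 2000)    # 13.5% of visitors download
treatment_cr = conversion_rate(320, 2000)  # 16.0% of visitors download

lift = (treatment_cr - control_cr) / control_cr
print(f"Relative lift: {lift:.2%}")  # ~18.52% with these made-up counts
```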

At first glance you might be thinking, “Come on, man – changing a few words in one bullet point can’t have that much of an impact!”

But if we dig a little bit deeper and look at the underlying test hypothesis, you’ll see that it really does make a lot of sense.

First off, the first bullet point holds a prominent position on the page, and an analysis with EyeQuant (a clever new tool that predicts how visitors will view your landing pages) suggested that the first few words of this bullet point would attract initial attention from visitors.

[Image: zeroing in on the first bullet point to increase conversions.]

Secondly, I know from my own behavior that time is a barrier that impacts my decision every time I’m about to download a free ebook; I simply don’t have time to read all the stuff on my iPad.

I figured that I couldn’t be the only person with this problem and that I might be able to increase conversions by emphasizing the fact that my ebook only takes 25 minutes to read.

This sequence of thoughts resulted in the following hypothesis:

By tweaking the copy in the first bullet point to directly address the ‘time barrier’, I can motivate more visitors to download the ebook and increase the conversion rate of the landing page.

This test hypothesis led to tweaking the bullet copy, which in turn led to a significant increase in downloads.

Why and when small changes have a major impact on conversion

In order for any change to have an impact on conversion, it must first have an impact in the mind of the prospect. As the above example demonstrates, small changes can have a major impact when they:

  1. Address a pain point or mental barrier that stands in the way of the prospect making the “right” decision on the landing page
  2. Are made strategically to a prominent element on the landing page
  3. Are made strategically to a mission-critical element on the landing page

Let’s look at some more real-world examples:

Making a big impact by addressing a mental barrier

When it comes to free ebooks, there’s another mental barrier that I recognized from my own behavior: Is the book worth reading?

I’ve read my fair share of free ebooks that totally under-delivered on their value proposition. To address this question on my ebook landing page, I’ve added social proof in the form of expert testimonials from industry thought leaders who’ve read the book.

On the control landing page, I placed the four testimonials at the bottom of the page just below the CTA. But I hypothesized that the testimonials would have more impact if two of them were placed higher on the page so they’d be easy to spot as soon as a visitor lands on the page.

The test data confirmed my hypothesis, and the treatment increased the number of downloads by 64.53% (!).

[Image: demonstrating social proof to increase conversions.]

Again, this may seem like a crazy result, but with a little more insight it makes perfect sense. The first testimonial is by Marcus Sheridan of The Sales Lion and it says, “A book like this could easily be sold for a lot of money. But Michael has elected to give it away. All I can say is WOW…. and get it today.”

The second testimonial is from Unbounce’s very own Oli Gardner and it says, “Some books only talk theory, but this one backs up each idea with real test results.”

This means that I have two testimonials from credible sources that directly address the question (mental barrier) of whether the ebook is worth the read. It only makes sense to showcase them prominently on the page.
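If you want to sanity-check a lift like this yourself, a two-proportion z-test is the standard way to ask whether the difference between two conversion rates could be down to chance alone. Here's a minimal sketch in Python; the counts are hypothetical, not the real data behind the 64.53% figure:

```python
# Two-proportion z-test: is the gap between two conversion rates
# larger than chance alone would plausibly produce?
# All counts below are hypothetical.
from math import sqrt
from statistics import NormalDist

def z_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))      # two-tailed
    return z, p_value

z, p = z_test(conv_a=150, n_a=1500, conv_b=240, n_b=1500)
print(f"z = {z:.2f}, p = {p:.4f}")  # small p -> unlikely to be chance
```

A common rule of thumb is to treat p < 0.05 as statistically significant, though (as the comments below discuss) significance alone says nothing about whether an effect will persist over time.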

Making a big impact by tweaking a prominent element

Here’s an example of a painfully simple change I made on a YouTube MP3 converter landing page. In this case, simply changing the font color of the headline from orange to black was enough to increase downloads by 6.64%.

[Image: increasing conversions by changing the headline font color.]

A simple squint test (squint your eyes, stand back and see which elements stand out) revealed that the orange font color made the headline blend in with the rest of the landing page design. I hypothesized that changing the font color to black would make the headline copy stand out and increase readability.

I put my hypothesis to the test and was pleased to see that it held water: the change in font color increased readability and made it easier for prospects to understand the value of the offer right away.

The headline is one of the most prominent elements on your landing page. It’s also one of the few copy elements that you can be 99% sure your prospects will read, and it therefore has a significant impact on the decision-making process of your potential customers.

More often than not, small headline tweaks will have a measurable impact on conversion.

Making a big impact by tweaking a mission-critical element on your landing page

Mission-critical elements are parts of your landing page that your prospects have to interact with in order to get to the next step in the conversion funnel. Call-to-action buttons are a great example.

While CTA buttons may seem like an insignificant design element, they play a decisive part in the conversion sequence and have a direct impact on the decisions and actions of your prospects – regardless of whether you’re asking them to download a PDF, fill out a form, buy a product, or even just click through to another page.

This means that your CTA buttons represent the tipping point between bounce and conversion. They tie every step in the conversion sequence together and make it possible to move from one “micro yes” to the next, all the way to the final “macro yes”.

[Image: increasing conversion rates by tweaking the CTA.]

Here’s an example from a test I conducted on MatchOffice.com – an international commercial real estate portal through which businesses can find offices for rent.

Once a prospect finds a relevant office, they have to click through the main CTA in order to get more information on the office via e-mail. This means that clicking the CTA is the main conversion goal, and every extra click potentially means money in the bank.

By changing the button copy from “Order Information and Prices” to “Get Information and Prices”, we increased conversions by 14.79%.

In case you think this test was a fluke, here’s an example from a Danish sister website where exactly the same exercise resulted in a conversion lift of 38.26%!

[Image: increasing conversions by changing one word.]

It’s a subtle change, but it completely changes the perceived value of clicking the button: “Order” emphasizes what I have to do when I click the button, whereas “Get” emphasizes what I’ll gain by clicking the button.

The more value to the user you can convey via your CTA copy, the more clicks the button will get.

I’ve conducted a vast number of tests with CTA buttons, and it’s quite astounding how much you can move the needle just by optimizing your buttons.

In my experience, optimizing CTAs represents the ultimate low-hanging fruit in landing page optimization (LPO).

Sign-up forms are another good example of mission-critical elements. You find them on tons of landing pages and they often represent the main conversion goal.

Here’s one more example from my ebook landing page where a simple variation of the privacy policy generated a 24% drop in downloads.

[Image: tweaking the privacy policy copy.]

I’ve conducted similar experiments on other websites, and my research suggests that the word “spam”, when used in the privacy policy, has a demotivating effect on prospects.

There are many plausible explanations, but it seems that using the “S word” – even if the intention is to guarantee against it – plants the idea in the minds of the prospects and makes them more concerned about signing up.

And the moral of the article is…

“It is not the magnitude of change on the ‘page’ that impacts conversion; it is the magnitude of change in the ‘mind’ of the prospect.”
–Dr. Flint McGlaughlin

In other words, landing page optimization isn’t actually about optimizing landing pages, it’s about optimizing decisions. And increasing conversions isn’t about making radical changes – it’s about making the right changes.

– Michael Aagaard

About The Author

[Photo of Michael Aagaard]

Michael Aagaard is the Senior Conversion Optimization Consultant at ContentVerve. When he’s not preaching the CRO gospel as a popular international speaker, he spends his time helping clients improve conversion rates in wonderful Copenhagen. Follow him on Google+, Twitter or get his new free ebook: 7 Universal Conversion Optimization Principles.
» More blog posts by Michael Aagaard

Comments

  1. I’ve read the quote from Dr. Flint before reading your article; the first paragraph gave me a hint that it was coming. Conversion is based on various factors, and as you mention here, the right measures should be taken to drive people to become customers. Nice job!

  2. Erik says:

    While I agree that even the smallest changes may lead to higher conversions, I must call out the fact that sitting and waiting for statistical confidence is a complete waste of time. That being said, traditional A/B testing results are pretty much useless. If you do not implement technology that can react in real time to visitor behavior/preferences, then in the time it takes to implement your “statistically significant results”, visitor behavior has most likely already shifted and you are missing out on valuable conversions. Just saying…

    • Oli Gardner says:

      I don’t see how you can say that waiting for statistical significance is a waste of time. I’ve run countless tests where early results indicate a movement in one direction, but then, after a week, once it’s had a chance to level out and rule out chance, the results are completely different.

      This isn’t about usability testing to uncover behavioural patterns, it’s about measuring the success of your conversion goal.

      Watching (recording) behaviour is an invaluable resource for generating test hypotheses and UX improvements. That way you use behaviour to influence change and remove unnecessary interactions.

      • Tim Paskowski says:

        The problem lies in the subtle implication that outcomes generated in the test phase are (a) relative to a consistent baseline, when in fact behavior is heavily moderated by a large number of time-dependent and third-party factors; and (b) veridical in that they will represent future outcomes, and are therefore somewhat loosely predictive of actual lift.

        Both implications are misleading because statistical confidence states that the outcomes observed during the test phase are due to actual differences between the creatives (and therefore are unlikely to be due to some outside factor, or “chance”) but makes no assumption about future outcomes (e.g., how your visitors will prefer a certain experience at a point in time after a result is implemented). This notion is well understood by the scientific community, but skewed somewhat by marketing tests.

        Here is a likely outcome of an A/B test: you split the traffic (either in a meaningful way or not, I’m not convinced it matters) between two or more experiences, wait for statistical significance, compare the outcomes, and (finally) implement the results. Essentially, you’ve now devoted a certain proportion of your traffic to an under-performer and a complementary proportion to a higher-performance experience. By doing so, you’re gambling that the number of outcomes (goals) achieved by the high-performance experience outweighs the loss assumed by sending traffic to the “loser”; this represents a somewhat significant (and conveniently glazed-over) risk of loss during the test phase. One argument could be that the insights gained about visitor bias towards one creative or another make up that loss, but again, the insights are anecdotal because of the high frequency and magnitude of change in visitor preferences over time.

        At any rate, even if the lift from the winner outweighs the losses from the loser during the test phase, implementing the results represents the second fallacy (‘b’, above), wherein similar outcomes are expected simply because that’s what happened during the test phase. If you’re fortunate enough to have gained anything during the test phase, you’re still unlikely to observe the results you expected in the long run. Iterative testing doesn’t help much either, since the iterations must be very closely spaced (far too closely spaced for most companies to reach a significant sample of outcomes). The only answer is to find a way to predict trends favoring one experience or another, then throttle the display of those creatives with preference towards high-performance experiences. The tradeoff is that as soon as you stop doing it, you’ve essentially fallen back to A/B testing.

        • Oli Gardner says:

          That’s a super interesting response.

          One thing I would point out though, is that if you are seeing a winning page variant you are *not* losing any conversions on your control page. It’s the same as it always was, not worse.

          You’re simply gaining a relative lift on one variant – soon to be the sole variant when the test is over.

          • Tim Paskowski says:

            Oli,

            That point highlights the problem:

            1. Baseline (control) performance is not steady state, and neither is the alternate (variant) — the performance of each experience changes over time based on a number of potentially confounding variables (e.g., seasonality, offers, competitors’ marketing, etc.). Both the magnitude (how much it changes) and the frequency (how often performance swings) are significant. If the average conversion rate of PAGE A over a 30-day timeframe is 5.0%, and PAGE B is 10.0%, you’d conclude that PAGE B is somehow better, right? What if you were to change the time window to view how each page performed yesterday? You’re likely to find that each conversion rate is significantly different from its 30-day average. In fact, you might find that the situation is now inverted because of some outside factor (perhaps an offer you e-mailed your customers somehow moderates the performance of your conversion action).

            In this likely — actually, almost certain — scenario, you’d lose conversions because you sent most (or all) of your traffic to PAGE B simply because it was converting better, on average, over a larger time window. By contrast, if you were able to intelligently allocate more traffic to the winning variant in real time (e.g., as visitors are interacting on your site), you’d be able to identify and prune low-performers.

            Now on to your second statement. If you were to end a test at a given point, where you have statistically significant results, and divert 100.0% of your traffic to that experience, how soon do you follow up and check performance? Do you iterate (thereby losing the original baseline, so you really have no idea how you performed relative to the original experience) or do you leave it be (thereby ignoring the significance of time-dependent outcomes)?

            This is the very same reason you don’t see researchers draw heavy conclusions in pharma studies. These papers describe what happened during the experiment (test phase) and how a variant (the med.) performed relative to a placebo for a certain audience. Show me one medical study that inferred that a treatment should be applied in all future cases as a guarantor of success. In a hundred years of literature review, you’d be hard-pressed to find such a study, yet the original post here draws similar conclusions.

            Rather than focus on lift as a benefit of A/B testing, the author should focus on the insights gained. I’m still unconvinced that any benefit is to be had in terms of raw performance gain, but customer insight is always valuable.
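(Aside: the real-time traffic allocation Tim describes is essentially a multi-armed bandit. Here is a minimal epsilon-greedy sketch in Python of what that allocation loop looks like; the page names and “true” conversion rates are made up for illustration:)

```python
# Epsilon-greedy bandit: route most traffic to the current best page,
# while reserving a small share for exploration. Page names and
# "true" conversion rates are made up; production systems use richer
# approaches (e.g., Thompson sampling) and track drift over time.
import random

pages = {"A": {"shows": 0, "conversions": 0},
         "B": {"shows": 0, "conversions": 0}}
TRUE_RATES = {"A": 0.05, "B": 0.10}  # unknown in a real test
EPSILON = 0.1                        # share of traffic kept exploring

def choose_page():
    if random.random() < EPSILON:    # explore occasionally
        return random.choice(list(pages))
    # otherwise exploit the page with the best observed rate so far
    return max(pages, key=lambda p: pages[p]["conversions"] / max(pages[p]["shows"], 1))

for _ in range(10_000):              # each iteration = one visitor
    page = choose_page()
    pages[page]["shows"] += 1
    if random.random() < TRUE_RATES[page]:  # simulated conversion
        pages[page]["conversions"] += 1

for name, stats in pages.items():
    rate = stats["conversions"] / max(stats["shows"], 1)
    print(f"Page {name}: {stats['shows']} visitors, {rate:.2%} conversion")
```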

    • Hi Erik – thanks for your feedback and insight!

      Of course split testing has its shortcomings, just like I’m sure automated predictive algorithm-based software has its shortcomings.

      Proper segmentation, personalization and understanding of different visitor types and intent are all very important factors in any optimization effort. Just because you use split testing doesn’t mean that you simply treat every single visitor the same and assume that they will eternally display uniform behavior.

      I follow my tests very closely and look at the development several times a day in order to stay on top of things. I would be concerned about letting an algorithm run my experiments for me on “auto-pilot”.

      I think there’s a place for both approaches, and I’d certainly like to learn more about your tool. From what you write on the website it seems very interesting.

      – Michael

  3. Momoko Price says:

    Sending this to one of my clients now. Hoping to make them see the light when it comes to the power of simple fixes like this :)

    Also, my personal fave variant of Flint McGlaughlin’s mantra:

    “We are not optimizing a web page. We are optimizing a *thought sequence*.”

  4. Suzanne says:

    Thanks so much, Michael, for the awesome tips… I realized I’ve been making a few of the mistakes you mentioned and am going to make the changes!

  5. Ian Rhodes says:

    Great read Michael, thanks for sharing these insights. Loved your statement ‘landing page optimization isn’t actually about optimizing landing pages, it’s about optimizing decisions’.

  6. Hey Michael, I love this stuff. It’s amazing that such subtle changes can drive massive impacts – especially when they go against conventional thinking.

    We’re hoping to gear up to run some subtle tests across our widget soon, just testing small things to see the impact across 1M impressions/month. Looking forward to hopefully sharing some data like you’ve done above.

  7. Sunday says:

    No doubt every Internet marketer would ultimately appreciate a significant increase in conversion rate from little or no effort. The most important thing is identifying where and when to make the needed changes. The ideas and examples provided in this post are really helpful.

    This valuable post was shared in kingged.com – the social bookmarking and content syndication website where the above comment was also left.

    Sunday – kingged.com contributor

    http://www.kingged.com/small-changes-that-have-a-big-impact-on-increasing-conversion-rates/

  8. Sarmista Aun says:

    Hi Michael,
    Thanks for sharing your experience after making certain tweaks in your landing page. I will soon apply the tricks for my own landing page.

  9. Yassin says:

    Hey Oli, Thanks for the post. Great as always
    I just want to say that there is a book called Pitch Anything that will make it clear why these changes get results. The book has a paragraph about attention and how to trigger it.

  10. Most small business owners I know often draw mistaken conclusions from their traffic and conversions. Testing is only part of the equation. Without a clearly defined target customer, a worthwhile traffic source, a funnel system, a call-to-action, and follow-up, they’re just guessing. So testing is just one part, albeit an important part, of a larger cohesive strategy.

  11. AbhiM says:

    Very informative. I will try these on my website. Thanks for the article.

  12. Dan Carter says:

    Thanks, Michael, for the great info and research, and for sharing these insights.
