[HOW TO] Implement a Conversion Rate Optimization (CRO) Process

This is a guest post. The author’s opinions are entirely his or her own and may not always reflect the views of Unbounce. Today’s post is from Garry Lee, Director of Analytics at RedEye.com.

Conversion rate optimization (#cro) is a popular phrase – just look at the number of people mentioning it on Twitter in the last 24 hours (sure, some are from Croatia). But what exactly does it mean? Is it another buzz phrase that will take 5 years to evolve into anything? (CRM anyone?) Or is it a real phenomenon that will become a key online tool?

Before we get into what CRO is, let’s start by defining what it’s not…
“A single exercise or a one off project to assess how well the site is performing at a given time”

Unlike some online marketing activities, where you can check off a box having done something like a user testing day, an annual survey or an analytics implementation, CRO is not a one-off exercise.

For me it's an overall process: a way of combining different elements into a continual cycle of testing and analysis, where you're constantly benchmarking your performance.

The key with this process is that it doesn’t have to rely on continual spending on expensive tools or consultants; it’s about following basic principles and most importantly setting up the benchmarks correctly from the start.

The Beginning – getting it right at the start

To begin optimizing your website you need to define what the goals of your site are. Pretty obvious, I agree, but you do need to be more specific than "to make money". Remember, everything on the site is geared towards your goals, so everything on the site must have its own micro goals that contribute towards your main objectives. As an example, in a case study we did with William Hill, an expensive-to-maintain piece of content was almost dropped until we proved how important it was in contributing to sales and improving conversion, rather than just being a nice-to-have piece of content.

This post is an overview of key stages of the process. More detail will appear in subsequent posts.

STEP 1 – Process Introduction

(Review goals; get web analytics and survey tools in place)

Efficient optimization begins with ensuring the right tools are in place. You need analytics tracking and reporting from the outset, because poor data and an inability to measure and test changes mean all optimization is guesswork.

You need to invest in a survey tool. These are free or cheap (depending on the one you choose) and very simple to operate. Why do you need one? Because however much web analytics can tell you 'what' the problems are on your site, it will never tell you 'why' the problems are there or suggest solutions. A well-planned and executed survey strategy, however, can help explain this.

Finally and most importantly, you need to agree what the key goals of the website are and set up core tasks on the website to drive these goals. You should:

  • Decide what your main site conversion goals are
  • Understand what drives these goals (assess all aspects of the site and understand how they aid the goals)
  • Use this understanding to build metrics that will allow you to measure success or failure

As an example, let’s look at a typical travel site:

  1. It has a clear set of goals around getting bookings and selling extra ancillaries like parking or travel insurance.
  2. Reviews are a key driver of those bookings, therefore you need to measure the reviews.
  3. By understanding the role of the review (to make people happy with the holiday they are researching) you can see that you need to compare conversions of people reading/using reviews to those that don’t.

Repeat this process for each of the goals you want to benchmark for future testing.
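
To make that review comparison concrete, here is a minimal sketch in Python of the calculation involved. The segment figures are invented placeholders; in practice they would come from a segmented report in your analytics tool.

```python
# Minimal sketch: compare the conversion rate of visitors who read reviews
# against those who did not. All numbers are hypothetical placeholders.

def conversion_rate(conversions, visitors):
    """Return conversions as a fraction of visitors."""
    return conversions / visitors if visitors else 0.0

# Hypothetical segment totals pulled from a segmented analytics report
read_reviews = {"visitors": 12_400, "bookings": 520}
no_reviews = {"visitors": 48_100, "bookings": 1_250}

rate_with = conversion_rate(read_reviews["bookings"], read_reviews["visitors"])
rate_without = conversion_rate(no_reviews["bookings"], no_reviews["visitors"])

print(f"Converted after reading reviews:   {rate_with:.2%}")
print(f"Converted without reading reviews: {rate_without:.2%}")
print(f"Relative uplift from reviews:      {rate_with / rate_without - 1:+.1%}")
```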

STEP 2 – Measure

(Web Analytics / Surveys / User Testing)

To effectively measure your website you need to combine three core elements: web analytics, surveys and user testing. What does each of these bring to conversion rate optimization?

  • Analytics – Doesn't require inference and gives direct facts on what is happening on-site. When used with segmentation it can produce data for different sets of people. You can highlight pain points and target where changes are required to improve conversion.
  • Surveys – Voice of the customer. No need to guess opinion based on movement; you are told what you need to know. Run these regularly and you get a baseline of customer opinion, and over time they will tell you what areas need change and in many cases ideas of how to change, leading to increased conversion.
  • User Testing – Directly interact with the user to see how they are using the site. Issues are clearly laid out and it's also possible to test potential changes and get controlled feedback to alter the site and improve conversion.

The reason we combine all three of these tools into the measure stage is to build a baseline of measurement that all future optimization can be measured against. While it's no simple task, I have tried to distill this process into five points:

  1. Understand the key goals of the site (you should have achieved this already in step 1)
  2. Ignore all other web analytics data and just focus on building metrics against these goals
  3. Create a series of questions that ask the user how satisfied they are with each of the key goals – place into a monthly survey
  4. Run a lab session with 5-7 users around these key goals and get scores against success of the goals
  5. You now have a benchmark of how your site is performing, so whenever you change anything on the site moving forward, you can do more than simply check if sales are up or down: you check your web metrics (which you have daily); you check your survey results (which you have monthly); you check your usability results (which you have quarterly)
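
If it helps, here is a rough sketch of how that benchmark could be recorded so every future change is compared against the same baseline. The metrics, cadences and values are purely illustrative.

```python
# Illustrative sketch of a benchmark record combining the three measurement
# sources at their different cadences. Metric names and values are made up.

from dataclasses import dataclass

@dataclass
class Benchmark:
    metric: str      # e.g. "booking conversion rate"
    source: str      # "analytics", "survey" or "user testing"
    cadence: str     # how often a fresh reading is available
    baseline: float  # the value measured before any changes were made

baselines = [
    Benchmark("booking conversion rate", "analytics", "daily", 0.031),
    Benchmark("checkout funnel completion", "analytics", "daily", 0.140),
    Benchmark("satisfaction with booking process (1-5)", "survey", "monthly", 3.6),
    Benchmark("task success: add travel insurance (1-5)", "user testing", "quarterly", 2.8),
]

def movement(benchmark: Benchmark, latest: float) -> str:
    """Report how the latest reading compares with the baseline."""
    delta = latest - benchmark.baseline
    return (f"{benchmark.metric} [{benchmark.cadence}]: "
            f"{benchmark.baseline} -> {latest} ({delta:+.3f})")

# After a site change, feed in the newest readings and review the movement
print(movement(baselines[0], 0.034))
```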

STEP 3 – Analyse & Investigate

(Web analytics / UX consultancy)

“If you have $1,000 to spend on an analytics solution, spend $100 on the tool and $900 on the people”
– Avinash Kaushik

Without people to interpret it, your data is meaningless. This step is all about people: dedicating time and common sense to the analysis so that your data works hardest for you.

The aim here is to take the benchmarks we identified in step 2 and uncover the biggest problems. My advice here is that your analysis should be all about process and focus. Remember what your goal is, don't get distracted, and follow through to a logical conclusion.

For example…

  • One of your benchmarks shows that your bounce rate has been rising for the last few weeks
  • Looking at a page report in your analytics tool, focus just on the bounce rates for the pages that represent over 60% of the site traffic. This analysis might show that 2 pages in particular are seeing a rise in bounce rates
  • You could now run some lab tests to highlight what issues people have with these pages or maybe get your UX people to run an expert review of the pages (your analytics tool will not tell you the problem, just that one exists!)
  • One other option would be to run a survey targeted specifically at users who see these pages and ask for their direct feedback
  • Take all this information and the UX experts can then come up with some alternative versions of the pages to be tested – which will nicely lead us into step 4

Now, I have simplified this on purpose, but in reality it's not much harder than this: find the problem areas, investigate the detail, get the user perspective and come up with potential tests.
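
To illustrate the 'focus' part of that example, here is a short Python/pandas sketch that keeps only the pages carrying the bulk of traffic and flags those whose bounce rate has risen above its benchmark. The column names and figures are assumptions about an analytics export, not a standard.

```python
# Sketch: from a page-level analytics export, keep the smallest set of pages
# that together account for at least ~60% of traffic, then flag the ones whose
# bounce rate now sits above the benchmark. All data here is invented.

import pandas as pd

pages = pd.DataFrame({
    "page": ["/home", "/search", "/deals", "/reviews", "/contact"],
    "visits": [50_000, 32_000, 15_000, 8_000, 3_000],
    "bounce_rate": [0.52, 0.38, 0.61, 0.35, 0.70],
    "bounce_benchmark": [0.45, 0.37, 0.48, 0.36, 0.68],
})

pages = pages.sort_values("visits", ascending=False)
cumulative_share = pages["visits"].cumsum() / pages["visits"].sum()

# Keep pages until the running total of traffic passes 60%
top_pages = pages[cumulative_share.shift(1, fill_value=0) < 0.60]

rising = top_pages[top_pages["bounce_rate"] > top_pages["bounce_benchmark"]]
print(rising[["page", "visits", "bounce_rate", "bounce_benchmark"]])
```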

You need to come out of this step with some informed hypotheses and solutions that can be pushed forward for testing and optimization.

STEP 4 – Implementation

This step is all about taking the analysis and investigation results from step 3 and implementing the test strategy that you run in step 5. The key here is detail: if you don't set up the tests exactly as you plan, the results you get will be skewed and the decisions you ultimately make will not be in the best interests of your conversion rate. Close attention to detail is critical.

What can you test?

This part of the process is often missed, and you end up planning a test around things that you cannot actually change! To give you a prime example I saw 2 years ago, a company spent a long time coming up with 4 alternative forms with different questions (including a version with minimal questions), only to discover on the day it was due to go live that the removed questions were legally required.

So you've got information on the areas that need improving and you also understand what you can improve – now the question is: "What do I test first?" Hint: the answer is not "This one looks like the most fun".

It should be a combination of the potential to impact conversion most and the ease with which you can make the changes and get results.

The classic example is when looking at a funnel; let's say it's a 4-step funnel…

Add to basket – Enter personal details – Enter payment – Confirm

Whilst it's always tempting to test and improve the end of the funnel because that's closest to the payment, traffic through the top of the funnel will be so much higher that you'll get significant results to prove your tests much quicker, so that might be the better option. Plus, don't be afraid to model results and see what impact they would have: for example, a 10% increase in one part of the process leads to X increase in revenue, but reducing the bounce rate on a landing page by 5% has a different effect on revenue – choose the one with the greatest effect on revenue.
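
Here is a rough sketch of that modelling step. Every number in it is invented for illustration – plug in your own traffic, funnel step rates and average order value.

```python
# Sketch: compare the revenue effect of two candidate tests on a 4-step funnel
# (add to basket -> personal details -> payment -> confirm). All figures are
# hypothetical; replace them with your own analytics data.

monthly_landing_visits = 100_000
landing_bounce_rate = 0.40                 # share of visitors who leave immediately
step_rates = [0.30, 0.60, 0.80, 0.95]      # basket, details, payment, confirm
average_order_value = 120.0

def monthly_revenue(visits, bounce, steps, aov):
    """Visitors who don't bounce flow through each funnel step in turn."""
    customers = visits * (1 - bounce)
    for rate in steps:
        customers *= rate
    return customers * aov

baseline = monthly_revenue(monthly_landing_visits, landing_bounce_rate,
                           step_rates, average_order_value)

# Option A: a 10% relative lift in the first funnel step ("add to basket")
option_a = monthly_revenue(monthly_landing_visits, landing_bounce_rate,
                           [step_rates[0] * 1.10] + step_rates[1:],
                           average_order_value)

# Option B: a 5% relative reduction in the landing page bounce rate
option_b = monthly_revenue(monthly_landing_visits, landing_bounce_rate * 0.95,
                           step_rates, average_order_value)

print(f"Baseline monthly revenue:    {baseline:,.0f}")
print(f"Option A (funnel step +10%): {option_a:,.0f} ({option_a - baseline:+,.0f})")
print(f"Option B (bounce rate -5%):  {option_b:,.0f} ({option_b - baseline:+,.0f})")
```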

One final thought on prioritization – don't be afraid to seek out benchmarks, because something that looks like an issue might well be good by your sector's standards and may not be the priority you think it is.

Tracking and measurement in place

The final thing you must not forget is to ensure you have sufficient tracking in place for your tests to be able to prove results. For this we are not just talking about the output that you get from your testing tool (e.g. Google Optimizer), but being able to look at the overall effect of the test across the site. This might mean looking at multiple goals (actually possible with Google Optimizer) or maybe creating a segment for each of the different tests and looking at the overall behaviour of people exposed to them.
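
A small sketch of what that wider view might look like: overall behaviour split by test variant, rather than just the single test goal. The segment data and metric names are made up for illustration.

```python
# Sketch: segment site behaviour by test variant and compare more than one
# metric, so a 'winning' variation is judged across the site, not just on
# the primary goal. All figures are invented.

variants = {
    "control":   {"visitors": 8_000, "orders": 248, "revenue": 29_800, "support_contacts": 96},
    "variation": {"visitors": 8_100, "orders": 279, "revenue": 30_100, "support_contacts": 143},
}

for name, seg in variants.items():
    v = seg["visitors"]
    print(f"{name:10s} conversion {seg['orders'] / v:.2%}  "
          f"revenue/visitor {seg['revenue'] / v:.2f}  "
          f"support contacts per 1,000 visitors {1000 * seg['support_contacts'] / v:.1f}")

# The variation converts better, but revenue per visitor and support contacts
# show whether the win holds up across the wider site experience.
```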

Obviously the entire subject of exactly how to come up with alternatives to test, what exactly should be in them and so on is an entire blog (book!) in itself, but here are a few important notes:

  • DON’T change too much at once; try and make just one change each time. If you change 15 things at a time you will never genuinely know what’s had the impact, positive or negative
  • DO get creative; if people are not clicking on the 'buy now' button, the solution is not always to test a new colour or a larger size!
  • DON'T assume you'll remember the change; keep detailed records of all changes
  • DO get input on alternatives from more than just the person that owns the page – at the end of the day, that is probably the person that came up with the problem on the page originally, so they will not be objective

STEP 5 – Test (A/B testing / MVT)

Anyone reading a blog on this fine site hopefully understands the power and benefits of A/B and multivariate testing, but the point I'd like to really hammer home is that these tests need to be part of a continuous process, based on analysis and research; they cannot be standalone one-off activities. Testing, just like conversion rate optimization, is a continual process that should be built into everything that you do.

On the choice between A/B testing and multivariate testing I can say only one thing: never do multivariate testing until you have first sunk your teeth firmly into A/B testing, because the results you can achieve with A/B testing will come at a far quicker rate.
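
While we are on the subject of tests, a quick word on reading the results: below is a minimal sketch of the kind of significance check most testing tools run for you – a plain two-proportion z-test on two hypothetical variations.

```python
# Sketch: two-proportion z-test for an A/B test, using only the standard
# library. Your testing tool normally does this for you; the inputs here
# are hypothetical.

from math import erf, sqrt

def ab_test_p_value(conv_a, visitors_a, conv_b, visitors_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a = conv_a / visitors_a
    p_b = conv_b / visitors_b
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the normal approximation
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical results: control vs. one variation
p = ab_test_p_value(conv_a=310, visitors_a=10_000, conv_b=365, visitors_b=10_000)
print(f"p-value: {p:.3f} -> " + ("significant at the 5% level" if p < 0.05 else "keep the test running"))
```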

Your takeaway from this step is to ensure you base your tests on what you learned in stage 3 (analysis) and check success against what you did in stage 2 (measure). In terms of how to do the testing, you are already in the right place – look around the site you are on!

Closing the test and looping back round

As long as you have completed step 4 correctly, you will be able to look at a full collection of results beyond a simple 'did more people click on the button'. Critically, we are talking about looking back at the benchmarks you put in place and seeing how these have been affected by the tests you have run. By combining these results with the data from the testing tool, you have all the information you need to conclude the outcome of the tests and either implement that change permanently or remove it – remember, a negative result is as important as a positive one.

If you do have a negative result, that doesn't mean you stay with what's in place. You tested a change because there was a problem, and just because the first test did not work doesn't mean you stop testing that area; at this point you loop back to step 4 and re-test.

If however you get a positive result, then you need to consider three options:

  1. Continue testing further on the same area, refining to get the maximum results
  2. Move onto the next issue on the priority list you drew up in step 4
  3. If you have completed the initial priority list, then return to step 2 and start the loop again finding a new series of things to test and continue improving conversion

Continual Improvement

The most important thing I'd like you to take away from this whole process is that conversion rate optimization is NOT a one-off exercise; it is NOT a single expensive tool. Instead it is a continual process of improvement, combining a number of basic weapons available to everyone:

  • Benchmark the site using analytics, surveys and user feedback
  • Analyse the information to highlight areas for improvement
  • Test ways to improve these areas and see what effect this has
  • Repeat ad infinitum…

— Garry Lee

About Garry Lee
Garry has worked in online analytics for over 12 years, with the last 10 years at RedEye, where he is Director of Analytics & Usability, working with many leading industry names such as Marks & Spencer, HSBC and Hotels.com, as well as the British Government. He launched behavioural email in the UK as well as new media attribution systems, and is currently focused on improving websites through conversion rate optimization.