
Grow Your Conversion Rates Consistently with an A/B Testing Calendar

[Image: “Smarch” calendar]
An A/B testing calendar will help you get more conversions – and beat those lousy Smarch weather blues. Image source.

For many marketers, the biggest barrier to running A/B tests is time.

As a fellow busy marketer, I get it. We’ve all got a bottomless inbox, an ever-growing day-to-day task list and bigger projects to tackle.

But I also know that 60 minutes of planning each month is all it takes to get you out of your habit of A/B testing procrastination.

An A/B testing calendar is a plan for your team detailing which tests you’re going to run each week or month, what they aim to optimize and when they start and finish.

They motivate you to keep lining up tests, but they also help you become more efficient so you can generate more tests (and more wins). Check out the graph below, which shows the number of tests per month I was able to run before and after implementing an A/B testing calendar:

[Graph: tests run per month, before vs. after adopting an A/B testing calendar]

Needless to say, the extra conversion rate uplift you can gain from running twice as many tests can be enormous.

If you’re ready to get out of your procrastination funk, then read on. This post will help you implement a solid A/B testing calendar so you can finally become an A/B testing veteran who sees consistent conversion uplifts month after month.

Step 1: Know your tests

The first step is to choose what you’d like to optimize.

There are many places you can start, from your landing pages to your user acquisition and onboarding flows.

The trick is to just pick one and get started. For example:

  • Test your homepage using tools like Optimizely or VWO.
  • Test your campaign landing pages using Unbounce.
  • Test your onboarding flow (and onboarding emails) with services like SparkPage or Vero.

Listing these areas out is an important first step. Once you’ve chosen a larger area to test, you can get a little more granular and formulate your first hypothesis.

The guys at ConversionXL have a great guide to help you formulate and prioritize killer hypotheses. Here’s a quick summary:

  1. Identify problem areas. Use your analytics to find high-traffic pages with high drop-off rates.
  2. Interview real users. Use insight and data together to come up with better test hypotheses.
  3. Prioritize your tests. Rank them based on three criteria: the potential impact on a certain metric, the importance of that metric and the cost to implement. (A simple scoring sketch follows this list.)
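
If you want to make that ranking concrete, a quick score per hypothesis works well. Here’s a minimal sketch in Python; the 1–5 scales, the example hypotheses and the impact × importance ÷ cost formula are illustrative assumptions of mine, not ConversionXL’s exact method:

```python
# Rank A/B test hypotheses by a rough expected-value score:
# higher impact and metric importance raise it, higher cost lowers it.
hypotheses = [
    # (name, impact 1-5, metric importance 1-5, implementation cost 1-5)
    ("Shorter signup form on homepage", 4, 5, 2),
    ("New hero image on ebook landing page", 3, 3, 1),
    ("Rewritten onboarding email sequence", 5, 4, 4),
]

def priority(impact, importance, cost):
    return impact * importance / cost

for name, impact, importance, cost in sorted(
        hypotheses, key=lambda h: priority(*h[1:]), reverse=True):
    print(f"{priority(impact, importance, cost):5.1f}  {name}")
```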

Step 2: Count your conversions per month

Next, you’ll need to take a look at past tests and estimate how many conversions per month you can get in each area.

Your homepage, for example, likely gets the most traffic and therefore gets the most conversions per month.

Additionally, we’ll start with a rule of thumb that you’ll want about 250 conversions per variant to reach statistical significance. This is a good figure to get you started, but check out Optimizely’s sample size calculator to work out a more exact figure.
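
If you’d rather work the figure out in code than in a calculator, a standard two-proportion power calculation gets you the same kind of answer. Here’s a sketch using Python’s statsmodels library; the 15% baseline rate and the 20% relative lift you hope to detect are made-up inputs, so substitute your own:

```python
# Visitors (and conversions) needed per variant to detect a given lift,
# via a standard two-proportion power calculation.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline = 0.15                 # current conversion rate (assumed)
expected = baseline * 1.20      # hoping to detect a 20% relative lift

effect = proportion_effectsize(expected, baseline)
visitors = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided")
print(f"~{visitors:.0f} visitors per variant")
print(f"~{visitors * expected:.0f} conversions per variant")
```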

With these two numbers (average conversions per month and the number of conversions needed to reach statistical significance), you can calculate the number of variants you can test that month.

Calculating number of variants per month

For example, let’s imagine you have one landing page for your latest ebook, which gets 5,000 visitors per month and converts at 15%. That means 750 ebook downloads (conversions) per month.

So using our 250 conversions “rule of thumb,” you could test 3 variants per month (750/250).
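
In code, that’s a single line of arithmetic. A quick sketch using the hypothetical numbers from above:

```python
# Variants testable per month = monthly conversions // conversions
# needed per variant (numbers from the hypothetical ebook page above).
visitors_per_month = 5_000
conversion_rate = 0.15
needed_per_variant = 250   # the rule-of-thumb starting point

monthly_conversions = visitors_per_month * conversion_rate  # 750
max_variants = int(monthly_conversions // needed_per_variant)
print(f"{monthly_conversions:.0f} conversions/month -> {max_variants} variants")
```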

Step 3: Standardize to one time period

After you do that simple calculation, you’ll know how many variants you can run per month in each testing area.

This is where the real value of this method comes in.

When you run tests in a non-standardized way, your testing calendar might look something like this:

[Image: a calendar of tests with varying lengths, starting and ending at different times]

Because no test has a fixed duration, they all start and finish at different times, which is incredibly difficult to plan around.

Now imagine you plan each test so that it runs for exactly a month, incorporating the max number of variants that the conversions allow. Here’s what that’d look like:

[Image: the same tests standardized to month-long slots, each starting and ending on the same day]

This is much, much easier to plan your testing calendar around, as every test starts and ends on the same day.

If you shift to planning tests that run for a specific time period, your monthly team meetings will start to look like this:

  1. All the tests running last month have finished and you have the results ready for the team to discuss and share lessons learned.
  2. The tests for this month are already underway.
  3. You spend some time brainstorming and planning the tests for next month.

Your team now finishes each test at the end of the month, and on the same day they can start the next one.

There is no time wasted between the end of one test and the start of another. Every possible hour that you could be testing, you are testing.

You are now an unstoppable optimization machine. :)

Step 4: Take the template, make it work for you

Here is the template we use each month. It’s yours to take, download and modify.

[Image: the monthly A/B testing calendar template spreadsheet]

There is space to identify each test, set the hypothesis and define the goal. (If you haven’t done this before, you might want to read “How to Formulate A Smart A/B Test Hypothesis.”)

We have a separate tab for each of our test areas, then a month-by-month plan for each channel. If you have enough resources and traffic to run tests weekly or bi-weekly, then just change “Month 1” to “Week 1.”
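
If you’d rather keep the plan in code (or generate the spreadsheet programmatically), the underlying structure is simple: one tab per test area, one entry per period. A minimal sketch; the area names, hypotheses and fields are invented for illustration:

```python
# One dict per test area ("tab"), one entry per period ("Month 1", ...).
calendar = {
    "Homepage": {
        "Month 1": {"hypothesis": "A shorter signup form lifts signups",
                    "goal": "signup rate", "variants": 2},
    },
    "Ebook landing page": {
        "Month 1": {"hypothesis": "A benefit-led headline lifts downloads",
                    "goal": "download rate", "variants": 3},
    },
}

for area, months in calendar.items():
    for month, test in months.items():
        print(f"{area} / {month}: {test['variants']} variants -> {test['goal']}")
```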

And if you want to dig even deeper into A/B test objective and goal setting, check out this webinar from Optimizely and KISSmetrics. They discuss A/B test planning at 32:00.

Your next action: Take 15 minutes

It doesn’t take much to get started on this. In fact, you can get started with your A/B testing calendar in 15 minutes flat.

  1. First, map the key areas in your user acquisition journey. For example, your homepage, campaign landing pages, onboarding flow and welcome emails.
  2. Next, download our template and create a new tab for each area you’ve identified. Change the timeframe in each (month, week, etc.) to match your testing schedule.
  3. List any tests you currently have running and suggest a test in each area for next month.
  4. Between now and the start of next month, flesh out the details of each test. Determine the variants you want to test and ready any assets you’ll need, like copy and images.

And lastly, if you have any other advice to share on how your team manages your A/B testing calendar, please leave a comment and let us know!

— Peter Tanham


About Peter Tanham
Peter Tanham is the CEO and co-founder of SparkPage, a marketing platform that lets marketers test and optimize their lifecycle messaging. He blogs regularly on SparkPage's Lifetime Value marketing blog. You can follow him on Twitter @PeterTanham.


  1. james brown

    A/B testing not only helps you improve your conversion rate, but also helps you decide which technique is best suited for your business’s growth.

  2. Paul Koks

    Good post Peter!

    A few remarks:
    – In addition, it might be a good idea to visualize the ROI uplift as well. What was the effect on your overall conversion rate after you started planning your tests?
    – It’s great to run as many tests as possible to maximize ROI, but you really have to be careful with the impact of test A on the outcome of test B (interaction effect)
    – In your example, you take one month as the test period. It’s a good idea to work with a fixed length. But what is your experience with cookie deletion rates and their effect on a user ending up seeing the other variant?
    – 250 conversions per variant might be on the lower end (a great tool to use: http://abtestguide.com/calc/, it includes power and significance numbers)

    Cheers, Paul

    • Peter Tanham

      Thanks Paul, great points.

      – Yeah, I tried to get into that in an earlier draft of this post, but I wanted to keep the post at a readable length. In reality you can’t calculate an actual ROI uplift, as you don’t have a control version of your team running.

      You could calculate the extra number of tests you were performing before/after switching from ad-hoc to calendar, work out the average uplift per test and then total the ROI increase from that. It’s not a calculation we’ve done as we’re happy the calendar approach is better, but in theory you could work it out that way.

      – Watching for interactions is great advice. We see it often when A/B testing ad copy, landing page copy and email copy – often the best performing headline/subject line is the one that reinforces whatever message the subject saw on the previous step(s).

      – We haven’t seen much of a problem with cookie deletion. Maybe you could expand on this one? We mostly focus A/B testing on acquisition and onboarding, so the number of repeat users going through the flow is minimal. A/B testing with repeat users has its own whole host of caveats, not least that an “optimal” variant for new users can underperform the control with repeat users, simply because they were used to the “sub-optimal” original.

      – Thanks, I haven’t come across that tool before. Yeah, this article is intended as an introduction to calendars. If you have previous tests to look at, use them to figure out your conversion numbers, but I included 250 because I think it’s more important that people get started with a calendar, rather than spending hours trying to get the perfect number and never starting.

      The 250 is a figure to start with and adjust as your first month goes.

      Thanks for the comments Paul!

  3. Peep Laja

    The “250 per variation” rule of thumb is misleading out of context. It’s not a stopping rule, but saying that you should not stop before you have AT LEAST that many – the exact amount depends on the needed sample size, and might be much, much higher.

    Here’s how many conversions you need before stopping the a/b test:

    • Peter Tanham

      That’s very true. I often find that a big problem a lot of teams face with A/B testing is getting overwhelmed by the need for 100% statistical significance from day 1.

      The goal here was to say that if you don’t have any past figures, you’re better off picking something approximate and refining it over time.

      A lot of the thrust of this article is that, often, the best A/B testing doesn’t get done, not because of sample sizes or significance rates (which are incredibly important, of course), but because tests are difficult to get organised, or because the work needed to get started is off-putting.

      • Jeff Schultz

        I think you’re spot on, Peter. I like to give clients the idea of a crawl-walk-run evolution in their optimization journey. Your calendar idea fits very well into getting someone to crawl, in order to help them walk and run in the future.

        I’d rather be “pretty sure” I am making a positive impact today, than “100% certain” that I will… someday.

        Thanks for the great article!

        • Peter Tanham

          Thanks Jeff,

          That’s a great analogy – you could spend forever debating perfect running technique before you ever start walking!

          Great is the enemy of good, as they say.

  4. Eduardo Vaisman

    I like the emphasis on getting started quickly and refining over time. We use the same strategy with our customers at Optimove.

  5. Paul Francois

    Great post, Paul, thank you for the info!

