What is A/B testing?

A/B testing (sometimes called “split testing”) is a type of experiment in which you create two or more variants of a piece of content—like a landing page, an email, or an ad—and show ’em to different segments of your audience to see which one performs the best.

Essentially, A/B testing lets you play scientist—and make decisions based on data about how people actually behave when they hit your page.

A/B testing in marketing

Marketing budgets keep gettin’ tighter. Paid clicks have never been more expensive. Proving that you’re making the most of your advertising dollars—to your boss, or to your clients—is absolutely essential.

That’s where A/B testing can help. By determining the highest-performing version of a piece of content—your “champion” variant—you can maximize the impact of your marketing campaigns.

Imagine, for instance, that you want to test whether one landing page headline will get you more leads than another. Sure, you could just make the change and cross your fingers. But what if you’re wrong? When you’re gambling with your marketing budget, mistakes can get costly.

A/B testing is a way to mitigate risk and figure out (with some measure of certainty) how to convert the highest percentage of your audience. By sending half your traffic to one version of the landing page and half to another, you can gather evidence about which one works best—before you commit to making the change broadly.

A/B testing terminology

Before we get into how you run an A/B test, it’s important to learn some fundamental testing terminology:

What is a “variant”?

“Variant” is the term for any new versions of a landing page, ad, or email you include in your A/B test. It’s the version where you apply the change you’re experimenting with—your “variable.” Although you’ll have at least two variants in your A/B test, you can conduct these experiments with as many different variants as you like. (But note that it’ll increase the time your test takes to achieve statistical significance.)

What is a “control”?

In the context of A/B testing, the “control” variant refers to the original or existing version of a webpage, email, or other marketing material that you are testing. This is the version that is currently in use before any changes are made. It serves as a benchmark against which the “challenger” or “variant B”—the modified version where one or more elements have been changed—is compared.

At the beginning of any A/B test, your control variant is also your “champion.”

What is a “champion”?

You can think about A/B testing like gladiatorial combat. Two (or more) variants enter, but only one variant leaves. This winner (the version with the best conversion performance, typically) is crowned the “champion” variant.

When you start an A/B test, your original version is your champion by default, since it’s the only version for which you already have performance data. Once the test concludes, you might find that one of your “challenger” variants has performed better than the original—which makes it your new champion.

What is a “challenger”?

When starting an A/B test, you create new variants to challenge your existing champion page. These are called “challenger” variants. If a challenger outperforms all other variants, it becomes your new champion. If it doesn’t, you can throw it in the scrap heap of failed marketing ideas.

How does A/B testing work?

In a typical A/B test, traffic is randomly assigned to each page variant based upon a predetermined weighting. For example, if you are running a test with two landing page variants, you might split the traffic 50/50 or 60/40. To maintain the integrity of the test, visitors will always see the same variant, even if they return later.
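
Under the hood, testing tools typically make assignments "sticky" by hashing a stable visitor identifier, such as a cookie value. Here's a minimal Python sketch of the idea (the function and variant names are illustrative, not any particular tool's API):

```python
import hashlib

def assign_variant(visitor_id, weights):
    """Deterministically assign a visitor to a variant.

    Hashing a stable visitor ID (e.g. a cookie value) means the same
    visitor always sees the same variant, even on a return visit.
    `weights` maps variant names to traffic shares that sum to 1.0.
    """
    # Map the visitor ID to a stable number in the range [0, 1].
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    # Walk the cumulative weights until the bucket falls inside one.
    cumulative = 0.0
    for variant, share in weights.items():
        cumulative += share
        if bucket < cumulative:
            return variant
    return variant  # guard against floating-point rounding at the top end

# A 60/40 split between the champion ("A") and a challenger ("B"):
chosen = assign_variant("visitor-123", {"A": 0.6, "B": 0.4})
```

Because the assignment depends only on the visitor ID and the weights, no per-visitor state needs to be stored to keep the experience consistent.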

The main factor in deciding how much weight to assign each page variant is timing: whether you’re launching the test with multiple variants at the same time or testing new ideas against an established page.

PRO TIP. Keep in mind you need to drive a certain amount of traffic through test pages before the results are statistically significant. You can find calculators online, or use tools like Unbounce’s landing page builder to help you run tests.

If you’re starting a new campaign and have several ideas about which direction to take, you can create a variant for each idea.

In this scenario, you’d most likely assign equal weight to each variant you wanna test. For two variants, that’d be 50/50. For three, it’d be 33/33/34. And so on. You want to treat them equally and pick a champion as soon as possible. As you have no conversion data on any of the pages, begin your experiment from a position of equality.

If you already have a campaign that you want to try some new ideas out on, it’s usually best to give your new variants a smaller percentage of traffic than the existing champion to mitigate the risk inherent in introducing new ideas.

Admittedly, this will be slower. It’s not recommended that you try to accelerate an A/B test by favoring new variants though, as they’re not guaranteed to perform well. (Remember, A/B testing is all about mitigating risk. Test wisely!)

What can you A/B test?

Most marketing departments rely on a mixture of experience, gut instinct, and personal opinion when it comes to deciding what will work better for their customers. It sometimes works out, but often doesn’t. When you start A/B testing, you should be prepared to throw all the boardroom conjecture out the window: the data (properly interpreted, anyway) doesn’t lie. It’s worth telling your boss this.

There are a number of elements that you can focus on in your testing. The different variations and content that go into the test are up to you, but which one works best (whether you like it or not) is up to your customers.

Some of the elements you should consider split testing are:

Headline

Your main headline is usually a succinct rendering of your core value proposition. In other words, it sums up why anyone would want your product or service.

When it comes to testing, consider playing around with the emotional resonance of the wording. You might try a headline that evokes urgency, or one that fosters curiosity. Similarly, experimenting with the length of the headline can impact performance—while shorter headlines are generally punchier, a longer headline can convey more information and potentially draw readers in more effectively. And don’t overlook the potential impact of font style and size—sometimes a change in typography can refresh the entire feel of a page.

Here are some other approaches you can try when testing your headline:

  • Try a longer versus shorter headline
  • Express negative or positive emotions
  • Ask a question in your headline
  • Make a testimonial part of your headline
  • Try different value propositions

Call to action (CTA)

On a landing page or web page, your call to action is a button that represents your page’s conversion goal. You can test the CTA copy, the design of the button, and its color to see what works best. Try making the button bigger, for example, or changing its color: green for “go,” blue to read as a link, or orange or red for an emotional reaction.

You can also explore different verb usages to incite action. (For instance, “Join” might have a different impact compared to “Discover.”) Remember, though, the copy should speak to the value of your offer—the benefit someone will get from clicking.

Hero image

A hero shot is the main photo or image that appears above the fold on a landing page or web page. Ideally, it shows your product or service being used in a real-life context, but how do you know which hero shot will convert for which landing page? Do you go with the smiling couple? Or maybe a close-up of the product itself? Experiment and find out.

You might test different imagery styles—such as photographic or illustration—to see which one resonates more with your audience. Similarly, experimenting with the size and orientation of the image can help shape visitors’ focus. Play around with the color schemes to evoke different emotions and set a specific tone.

PRO TIP. Just like your headline and supporting copy, the hero shot is subject to message match. If your ad mentions mattresses, but your landing page’s hero shot shows a rocking chair, then you’ve likely got a mismatch.

Lead forms

Depending on your business, you might need more than just a first name and an email—but the number of fields can be a decisive factor in user engagement.

You might test a form with only essential fields against one with additional, optional fields to gauge your visitors’ willingness to provide more information. Additionally, experimenting with different types of fields—such as dropdowns or open fields—can offer insights into user preferences and potentially increase form submissions.

If you have a particularly strong need for data, try running a test with different form lengths. This way, you can make an informed decision about what abandonment rate is acceptable when weighed against the extra data produced.


Copy

For the copy of your campaign (whether on a landing page or in an email), you might consider testing different writing styles. For example, a conversational tone might resonate better with your audience than a formal tone. It could also be beneficial to experiment with the inclusion of bullet points or numbered lists to enhance readability and engagement.

Often the biggest factor is long copy versus short copy. Shorter is usually better, but for certain products and markets, detail is important in the decision-making process. You can also try reordering features and benefits, or making your language more or less literal.

There are lots of opinions on what works and what doesn’t, but why not test it and see for yourself?


Page layout

The layout of your landing page or email can completely change the visitor experience. You might try a layout that emphasizes visual elements over text—or vice versa—to see which is more effective.

Will a CTA on the left outperform one placed on the right? And does that testimonial video do better if you put it at the bottom of the page or the top? Good question. Sometimes changing the layout of a page can have major effects on your conversions.

Experimenting with navigation can also impact performance. Perhaps a sticky navigation bar works better, or maybe a sidebar navigation is more user-friendly. The goal should be to create a layout that is both aesthetically pleasing and facilitates a seamless user journey.

PRO TIP. If you want to experiment with layout, move one thing at a time and keep all other elements on the page the same. Otherwise, it’ll be difficult to isolate the changes that work.

How do you run an A/B test?

Cool, so now you know the basics of A/B testing. But how exactly do you go about setting up and running an A/B test to improve your campaign performance?

Here’s the step-by-step process of running an A/B test, from the initial stages of identifying your goals and formulating hypotheses, to creating variants and analyzing the results.

Step 1: Identify your goal

Before you start A/B testing your campaign, you should get super clear on the outcome you’re hoping to achieve. For example, you might wanna increase your ad clickthrough rate or reduce your landing page bounce rate. (Whatever metric you wanna influence, though, remember that the ultimate aim of A/B testing is to increase your campaign conversion rate.)

A clearly-defined goal will help you shape the hypothesis of your A/B test. Say you’re getting lots of traffic to your landing page, but visitors aren’t clicking on your CTA—and you wanna change that. Already, you’ve narrowed down the number of variables you might test. Could you improve CTA clicks by making the button bigger, or increasing the color contrast? Could you make the CTA copy more engaging? 

Once you’ve got your testing goal, forming a hypothesis is a whole lot easier. 

Step 2: Form your hypothesis

The next step is to formulate a hypothesis for you to test. Your hypothesis should be a clear statement that predicts a potential outcome related to a single variable. It’s essential that you only change one element at a time so that any differences in performance can be clearly attributed to that specific variable. 

For example, if you wanna improve the clickthrough rate on your landing page CTA, your test hypothesis might be: “Increasing the color contrast of my CTA button will help catch visitors’ attention and improve my landing page clickthrough rate.” The hypothesis identifies just one variable to test, and it makes a prediction that we can definitively answer through experimentation.

Make sure that your hypothesis is based on some preliminary research or data analysis so that it’s grounded in reality. (We already know high-contrast CTA buttons get more clicks, for instance.) Whatever you test, you still wanna be reasonably confident that it’ll be effective for your audience. 

Step 3: Create your variants

Creating variants means developing at least one new version of the content or element you wanna test, alongside your control version. In a standard A/B test, you’ll have two variants: variant A and variant B. 

“Variant A” is typically your control variant—the original version of whatever you’re testing. Since you already know how this version is performing, it becomes your baseline for any results. This is your “champion” by default. It’s the one to beat.

“Variant B” should incorporate whatever changes to your variable that you’ve hypothesized will improve performance. If your hypothesis is that a different color CTA button will get more clicks, this is the variant where you’ll make that change.

Although most A/B tests have just two variants, you can test additional variants (variant C, variant D) simultaneously. But be aware that more variants mean it’ll take longer to achieve statistical significance—and if you introduce any additional variables to the test (like a different page headline), it can become almost impossible to say why one version is outperforming another. 

Step 4: Run your test

Once you’ve got your variants, you’re ready to run your test. 

During this phase, you’ll divide your audience into two groups (or more, if you’ve got more than two variants) and expose one half to variant A, the other to variant B. (Ideally, the groups should be totally random to avoid any bias that might influence the results.)

It’s essential that you run your test for long enough to reach statistical significance. (There’s that term again.) Essentially, you need to make sure you’ve exposed each variant to enough people to be confident that the results are valid.

The duration of your test can depend on things like your type of business, the size of your audience, and the specific element being tested. Be sure to calculate your A/B test size and duration to ensure your findings are accurate.
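
If you’d rather see the math behind those calculators, most use the standard two-proportion sample-size formula. Here’s a rough Python version (assuming 95% confidence and 80% power by default; real calculators may differ slightly in their assumptions):

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline, relative_lift, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant (two-proportion z-test).

    baseline: current conversion rate, e.g. 0.05 for 5%.
    relative_lift: smallest relative improvement you care to detect,
    e.g. 0.20 to detect a 20% lift (5% -> 6%).
    """
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided 95% -> 1.96
    z_power = NormalDist().inv_cdf(power)          # 80% power -> 0.84
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return math.ceil(n)

# Detecting a 20% lift on a 5% baseline takes thousands of visitors per variant.
n = sample_size_per_variant(0.05, 0.20)
```

Notice how quickly the required sample grows as the lift you want to detect shrinks: halving the detectable lift roughly quadruples the traffic you need.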

Step 5: Analyze your results

After you’ve got a large enough sample size, it’s time to analyze the data you’ve gathered. This means scrutinizing the metrics relevant to your variable—clickthrough rate, bounce rate, conversion rate—to determine which variant performed better. The winner becomes your new “champion” variant.

Say, for example, you’re testing a new CTA button color on your landing page to see if it gets more clicks. You’d wanna compare the clickthrough rate on the button of your page variants and see which is getting more visitor engagement. 

Depending on what you’re testing, you might need to use analytical tools to dig into the data and extract actionable insights. This step is critical—it not only helps you identify the winning variant, but can also provide valuable information you can leverage in future marketing campaigns.

Step 6: Implement the winning version

The final step of your A/B test is to implement your learnings across your campaign. With these new insights, you can confidently roll out your “champion” variant and expect higher overall performance. Nice. 

But the process doesn’t stop here. You should keep monitoring the performance of your changes to make sure they’re getting you the expected results. You also should already be starting to think about what you might test next, looking for new ways to improve your performance.

Optimization is a mindset. Never stop testing. 

Bonus: A/B testing mistakes to avoid

Marketers often make mistakes when A/B testing—they’ll stop the test too soon, jumping to conclusions before they’ve got the necessary data to make an informed decision. When you run your own test, make sure to avoid these common pitfalls (originally highlighted by CRO expert Michael Aagaard for the Unbounce blog).

A/B testing mistake: Declaring a “champion” too early

It can be tempting to roll out a winning variation as soon as you start to see a lift in conversions, but it’s crucial that you don’t jump to conclusions before you see the bigger picture. In Michael’s words:

You need to include enough visitors and run the test long enough to ensure that your data is representative of regular behavior across weekdays and business cycles. The most common pitfall is to use 95% confidence as a stopping rule. Confidence alone is no guarantee that you’ve collected a big enough sample of representative data. Sample size and business cycles are absolutely crucial in judging whether your test is cooked.

Michael himself runs tests for four full weeks, with a minimum of 100 conversions (preferably closer to 200) on each variant and a 95% confidence level being prerequisites for declaring a champion. He then uses an A/B testing calculator to check the statistical significance of his results.
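
Those calculators generally run a two-proportion z-test under the hood. Here’s a minimal sketch of the check (illustrative only, not the exact tool Michael uses):

```python
from statistics import NormalDist

def ab_significance(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided p-value for the difference between two conversion
    rates, using a pooled two-proportion z-test."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = (p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# 120 conversions vs. 165 conversions on 2,000 visitors each:
p = ab_significance(120, 2000, 165, 2000)
# p lands well below 0.05 here, so this lift would count as significant
```

As Michael warns, though, a low p-value alone isn’t a stopping rule—you still need a big enough sample collected over full business cycles.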

Despite his own methodology, Michael stresses that there’s no one-size-fits-all rule for declaring a champion, as there are many contextual factors that make each test unique. Focus on covering both a large enough sample size and a long enough duration of time to ensure that you’re getting a complete view of the page’s performance before calling it.

A/B testing mistake: Focusing only on your conversion rate

Conversion rates are fickle things. They can fluctuate due to anything from something as minor as the time of day to major shifts in your competitive landscape. Ultimately, it’s important to remember that your goal isn’t just a higher conversion rate—it’s also whatever benefit those extra conversions provide for your business. As Michael put it:

If you run a business, it’s not really about improving conversion rates, it’s about making money. So instead of asking yourself “Is my conversion rate good?” you should ask yourself “Is my business good?” and “Is my business getting better?”

The purpose of improving your conversion rate is to impact other, more tangible metrics in your business. Michael reminds us to look past the conversion rate and focus more on things like lead quality, profit, and revenue. If an increased conversion rate doesn’t translate to increased business success, it isn’t a win.

A/B testing mistake: Assuming A/B tests are the only option

You might be surprised to learn that running A/B tests on low-traffic pages can actually be dangerous. That’s because small sample sizes are easily impacted by changes in the dataset, which can dramatically shift the outcome of a test. If you’ve only got a few hundred visitors, just one conversion can change the outcome and give you the wrong impression.

And sure, you could just wait until the test gets enough traffic—but you might be waiting for a while.

Let’s say you want to run a test with two variations. Using a duration calculator, we can see that if the current conversion rate is 3% with 100 daily visitors, and you want to detect a minimum improvement of 10%, you’ll need to run the test for… 1035 days. Ouch!
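
That figure is easy to sanity-check with the standard two-proportion formula (a ballpark only—different calculators make slightly different assumptions, which is why this lands near, rather than exactly on, 1,035 days):

```python
import math
from statistics import NormalDist

# Baseline 3% conversion rate, detecting a 10% relative lift (3% -> 3.3%),
# at 95% confidence and 80% power.
p1, p2 = 0.03, 0.033
z_alpha = NormalDist().inv_cdf(0.975)
z_power = NormalDist().inv_cdf(0.80)
p_bar = (p1 + p2) / 2
n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
      + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
     / (p2 - p1) ** 2)
days = math.ceil(2 * n / 100)  # two variants, 100 visitors per day
# days comes out on the order of 1,000 -- roughly three years of waiting
```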

Instead, Michael suggests using other forms of research to figure out how to improve your conversion rates. Customer interviews, case studies, and surveys can provide qualitative data that reveals opportunities you might not have even considered testing—and without all the traffic. As he puts it:

When you can’t A/B test properly, it’s even more important to spend time doing qualitative research and validating your hypotheses before you implement treatments on the website. The more homework you do, the better the results will be in the end.

I’ve been involved in several optimization projects where customer interviews revealed that the core value proposition was fundamentally flawed. Moreover, the answers I got from these interviews got me much closer to the winning optimization hypothesis.

Problems with A/B testing: Is it worth it?

A/B testing your campaigns can be a powerful way to squeeze more conversions (sometimes many more conversions) out of your marketing budget, increasing your overall return on investment. It’s possible to make mistakes if you’re not careful in setting it up—most commonly, changing more than one element at a time—but with a little prep and a great hypothesis, you can set yourself up for success.

That said, for smaller teams and businesses especially, there are a few hurdles that can make A/B testing your pages more challenging:

Challenge 1: You need to wait for statistical significance

Imagine you flip a coin in the air. It comes up heads. You flip it a second time. Heads wins again. That’s strange, you think, as you give the coin a final flip. It lands heads up once more.

After three flips, are you ready to conclude that any flipped coin has a 100% chance of landing heads up? (Breaking News: Local Marketer Declares Laws of Probability Are A Sham.)

Probably not. Imagine heading to Vegas thinking a coin flip always comes up heads.

A similar thing happens when you A/B test a landing page. Until you’ve tested your variants with enough visitors to achieve statistical significance, you really shouldn’t apply your learnings. Instead, you need to eliminate as much uncertainty as possible before you decide on a champion variant. How many visitors you need can vary depending on your goals, but it’s typically a high number.
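
The math on that coin streak is quick to check—three heads in a row with a fair coin happens one time in every eight:

```python
# Probability that a fair coin lands heads three times in a row.
p = 0.5 ** 3
print(p)  # 0.125 -- a 1-in-8 fluke, not proof of a two-headed coin
```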

Challenge 2: You need quite a bit of traffic (and time)

The need for statistical significance poses another problem for small teams. If you don’t get enough traffic to be confident in your results, you can’t (or shouldn’t) end the A/B test. For smaller businesses, landing pages can take months to achieve the necessary results to draw a single conclusion. And sometimes that conclusion will be that the change you made (changing a button from blue to red, for instance) hasn’t impacted your conversion rates at all.

If you’re running a timely marketing campaign, or just want to see results quickly, A/B testing without much traffic can be too slow to be useful. Waiting a year for a 5% conversion lift on a single landing page is hard to get excited about, and harder to defend. Factor in the manual hassle of setting up the test, and it may not be worth your time.

Challenge 3: It’s a “one-size-fits-all” approach to optimization

This one’s a drawback baked into A/B testing itself: When you crown a champion variant, you’re choosing the version of your page that’s most likely to convert a majority of your visitors. This doesn’t mean that there weren’t other types of visitors who would’ve been more likely to convert on the losing variant. (It’s even possible these neglected visitors are more valuable to your business than the people for whom you’ve optimized.)

By design, A/B testing takes a blunt, “one-size-fits-all” approach to optimizing that’s likely not ideal for anyone. Sure, it can boost raw conversion rates in dramatic ways. But it sometimes lacks the nuance that growth-minded marketers obsessed with segmentation, personalization, and targeting might expect.

A/B testing alternatives: Using Smart Traffic

Let’s say you love the idea of optimizing your landing pages for more conversions, but can’t overcome one of the hurdles we’ve just discussed. How do you proceed?

Artificial intelligence, thankfully, can help you improve your conversion rates without the high bar to entry of A/B testing. Using a tool like Unbounce’s Smart Traffic, for instance, lets marketers optimize their landing pages automatically (or, as computer scientists like to say, automagically) by having AI do the kind of work that a human marketer can’t.

By running contextual bandit testing instead of A/B testing, Smart Traffic allows you to start seeing results in as few as 50 visitors, with an average conversion lift around 30%. There’s never any need to crown a champion because the AI routes each and every visitor to the landing page variant that’s most likely to convert them—based on their own unique context. No more “one-size-fits-all.”
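
Unbounce hasn’t published Smart Traffic’s internals, but the general bandit idea can be sketched. Here’s a deliberately simplified Thompson-sampling router in Python (no per-visitor context, so it’s a plain bandit rather than a contextual one, and every name here is illustrative):

```python
import random

class BanditRouter:
    """Minimal Thompson-sampling router. Illustrative only -- not how
    Smart Traffic actually works, and missing the visitor context a
    contextual bandit would use."""

    def __init__(self, variants):
        # Beta(1, 1) prior: one imaginary success and failure each.
        self.stats = {v: {"wins": 1, "losses": 1} for v in variants}

    def choose(self):
        # Sample a plausible conversion rate for each variant, then
        # send this visitor to the variant with the highest draw.
        draws = {v: random.betavariate(s["wins"], s["losses"])
                 for v, s in self.stats.items()}
        return max(draws, key=draws.get)

    def record(self, variant, converted):
        key = "wins" if converted else "losses"
        self.stats[variant][key] += 1

router = BanditRouter(["A", "B"])
variant = router.choose()              # pick a page for this visitor
router.record(variant, converted=True) # feed the outcome back in
```

The key contrast with A/B testing: instead of splitting traffic on fixed weights until a champion is crowned, the router keeps shifting traffic toward whatever’s winning, so under-performing variants naturally starve.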

Here’s how it works:

  • You create one or more variants, changing whatever you’d like. Unlike A/B testing, you’re not limited to just one change at a time—and adding more than one variant doesn’t significantly slow down your time to optimizing. (Here’s a resource about creating landing page variants for Smart Traffic to get you started.)
  • Set a conversion goal and turn it on. You decide what counts as a conversion in the Unbounce builder, then turn on Smart Traffic as your preferred optimization method. It starts working right away.
  • Smart Traffic optimizes automatically. The beauty of this approach is that it’s relatively hands-off. Once Smart Traffic is enabled, it keeps learning and optimizing throughout the life of your campaign.

Because of how easy they make optimizing, AI-powered tools should become a bigger part of your marketing stack. There are still plenty of reasons to choose A/B testing, but Smart Traffic enables even the little guys—or those of us who’re chronically short on time—to take advantage of optimization technology once affordable only to big enterprises.