A/B testing sample size calculator…and much more

Use our calculator to execute A/B tests with confidence. You’ll also get all the
resources you need to run more A/B tests, more efficiently.

Get started

Get more than just calculations

After you calculate your optimal sample size and test duration, we’ll provide a handy list of our extensive A/B testing resources, including a webinar recording, a step-by-step guide, an ebook, a template, product documentation, and our landing page builder.

A/B testing calculator: Sample size and test duration

Enter your test parameters and contact info below to get the optimal sample size and test duration, so you can better plan your A/B tests.
Current conversion rate
Desired lift
Number of variations
Number of daily visitors

Your results

Here are the calculations based on the numbers you entered:
Total sample of users
Test duration (days)

Your A/B testing resources

Webinar recording: Get tips from the experts! Watch the recording on how to master A/B testing for higher conversions.

A/B testing guide: Get all the info you need to conduct successful A/B testing, including a step-by-step guide, case study, examples, and expert tips.

Ebook: Download our ebook and see how you can achieve A/B testing success without the stress and reach peak performance.

ROAS template: This template can help you boost your return on ad spend (ROAS) by showing you how to run more experiments, more efficiently.

Documentation: If you’re already building your pages with Unbounce, these documentation pages will guide you through how to run an A/B test and interpret the results.

Landing page builder: Quickly build and test landing pages on your own with easy-to-use, no-code tools so you can experiment efficiently and maximize your results. Start building and testing for free.

Ready to run the numbers? Just follow these easy steps:

1. Enter your current conversion rate (%)
Input the percentage of visitors currently converting on your site. If 5 out of every 200 visitors convert, your conversion rate is 2.5% (5 ÷ 200 × 100 = 2.5%).

2. Input your desired lift (%)
This is the increase in conversion rate you hope to achieve with your new variation. For instance, if you want a 10% increase from your current conversion rate, enter 10.

3. Specify the number of variations
Enter the number of different versions (including the original) you are testing. If you’re testing one new version against the original, input 2.

4. Enter your average daily visitors
Provide the average number of visitors your site receives each day. This helps the calculator estimate how long the test will need to run.

After you’ve entered those numbers, the calculator will generate:

Total sample of users
This is the total number of visitors required across all variations to reach statistical significance.

Test duration (days)
Based on your daily visitor count and the required sample size, the calculator will estimate how many days you should run your test to achieve reliable results.
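
The page doesn’t publish the math behind the calculator, but a standard way to produce these two numbers is a power calculation for a two-proportion z-test. Here’s a minimal sketch in Python, assuming 95% confidence and 80% power (common defaults); the function name sample_size_and_duration and all inputs are illustrative, not the calculator’s actual code, and multiplying by the number of variations is a simplification that skips multiple-comparison corrections.

```python
from math import ceil
from scipy.stats import norm

def sample_size_and_duration(baseline_rate, desired_lift, variations, daily_visitors,
                             alpha=0.05, power=0.80):
    """Rough sample size for a two-proportion z-test, plus estimated duration."""
    p1 = baseline_rate                  # e.g. 0.025 for a 2.5% conversion rate
    p2 = p1 * (1 + desired_lift)        # e.g. desired_lift=0.10 for a 10% relative lift
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided critical value (~1.96 at 95% confidence)
    z_power = norm.ppf(power)           # power term (~0.84 at 80% power)

    # Visitors needed per variation to detect the lift at the chosen alpha and power
    n_per_variant = ((z_alpha + z_power) ** 2
                     * (p1 * (1 - p1) + p2 * (1 - p2))) / (p2 - p1) ** 2

    total = ceil(n_per_variant) * variations   # total sample of users
    days = ceil(total / daily_visitors)        # test duration (days)
    return total, days

# Example: 2.5% baseline, 10% desired lift, 2 variations, 1,000 daily visitors
print(sample_size_and_duration(0.025, 0.10, 2, 1000))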

Frequently asked questions

How long should I run my A/B test?
Plug in your numbers and our A/B testing calculator can help you estimate the ideal test duration based on your specific situation. In general, though, we recommend running your test for at least one to two weeks to gather enough data and account for any variations in visitor behavior over different days of the week.

What if my website has low traffic?
No problem. If your website has low traffic, you can still run meaningful A/B tests. It might take a bit longer to reach statistical significance, but it’s definitely doable. To get quicker results, consider testing elements with a higher impact on conversion rates or using a more significant variation between your test versions, as the sketch below illustrates.
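
To see why bolder variations pay off, here’s the hypothetical sample_size_and_duration sketch from above run with two different lifts on a low-traffic site; the numbers are illustrative:

```python
# Bolder changes (larger expected lifts) need far fewer visitors to detect:
# at the same baseline, confidence, and power, a 25% relative lift needs
# roughly a sixth of the traffic a 10% lift does.
print(sample_size_and_duration(0.025, 0.10, 2, 200))  # small lift: very long test
print(sample_size_and_duration(0.025, 0.25, 2, 200))  # bigger lift: far shorter test
```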

What should I do with my test results?
Once you have your A/B test results, it’s time to act. If one version significantly outperforms the other, implement the winning variant on your site. If the results are inconclusive, analyze your data for insights and consider running another test with different changes. Remember, continuous testing and optimization are key to improving your conversion rates over time.

What is statistical significance?
Statistical significance is a fancy way of saying that your test results are not just due to random chance. It means that the difference you’re seeing between your A and B versions is likely real and can be trusted to guide your decisions. Our calculator helps you determine if your results meet this critical threshold.

How do I make sure my results are statistically significant?
To ensure statistical significance, you need a sufficient sample size and test duration. Use our A/B testing calculator to determine both before you start your test. Also, make sure to run the test for the recommended duration without making changes midway, as this can affect your results. Once the test ends, you can check significance yourself, as in the sketch below.
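
One way to run that check, assuming a two-proportion z-test (the calculator’s own method isn’t published), is statsmodels’ proportions_ztest; the conversion counts below are made up for illustration.

```python
from statsmodels.stats.proportion import proportions_ztest

conversions = [130, 165]   # conversions for control (A) and variant (B)
visitors = [6400, 6400]    # visitors who saw each version
stat, p_value = proportions_ztest(conversions, visitors)

# At a 95% confidence level, p < 0.05 means the observed difference is
# unlikely to be due to random chance alone.
print(f"z = {stat:.2f}, p = {p_value:.4f}, significant: {p_value < 0.05}")
```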

What do confidence level and statistical power mean?
Confidence level: This is the probability that your test results are not due to random chance. A 95% confidence level means you can be 95% sure that the results are reliable.

Statistical power: This measures the likelihood that your test will detect a real difference between your variants if one exists. Higher statistical power means a higher chance of detecting a true effect.
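
Both knobs feed directly into the required sample size. A short sketch with statsmodels, using an illustrative 2.5% vs. 2.75% comparison, shows how tightening either one raises the visitor count:

```python
# Stricter confidence (lower alpha) or higher power both demand more visitors.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.0275, 0.025)  # standardized effect for the two rates
for alpha, power in [(0.05, 0.80), (0.05, 0.90), (0.01, 0.80)]:
    n = NormalIndPower().solve_power(effect_size=effect, alpha=alpha, power=power)
    print(f"alpha={alpha}, power={power}: {n:,.0f} visitors per variation")
```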

What if I don’t know my baseline conversion rate?
Don’t sweat it. If you don’t know your baseline conversion rate, our A/B testing calculator can still help. Start by running a small preliminary test to gather initial data. Once you have an idea of your baseline, you can input this into the calculator to refine your estimates for future tests.
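
If you want a sense of how trustworthy that preliminary number is, a confidence interval around it helps; here’s a sketch using statsmodels’ proportion_confint with made-up counts:

```python
# Estimate a baseline rate from a small preliminary run, with a Wilson
# 95% confidence interval to show how uncertain the estimate still is.
from statsmodels.stats.proportion import proportion_confint

conversions, visitors = 18, 700
rate = conversions / visitors
low, high = proportion_confint(conversions, visitors, alpha=0.05, method="wilson")
print(f"baseline ~ {rate:.1%}, 95% CI [{low:.1%}, {high:.1%}]")
```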

Can I run an A/B test with unequal sample sizes?
Yes, you can, though it’s not ideal. Unequal sample sizes can complicate the analysis and interpretation of your results. It’s usually better to aim for equal sample sizes to ensure more reliable and straightforward results.

Do I have to split traffic 50/50?
Not necessarily. While a 50/50 split is common and helps in quickly reaching statistical significance, you can use other ratios like 60/40 or 70/30 depending on your testing goals and traffic. Just remember that the more even the split, the quicker you’ll gather reliable data.

Why are equal sample sizes preferable?
From a statistical standpoint, equal sample sizes are preferable. They make it easier to detect differences between variants and ensure that your test results are robust and reliable. The sketch below puts a number on the cost of an uneven split.
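
Here’s a short statsmodels sketch (same illustrative 2.5% vs. 2.75% comparison and assumptions as above) showing how uneven splits inflate the total sample needed for the same power:

```python
# An uneven split forces a larger total sample at fixed alpha and power:
# here a 70/30 split needs roughly 19% more visitors overall than 50/50.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.0275, 0.025)
for label, ratio in [("50/50", 1.0), ("60/40", 40 / 60), ("70/30", 30 / 70)]:
    n1 = NormalIndPower().solve_power(effect_size=effect, alpha=0.05,
                                      power=0.80, ratio=ratio)  # ratio = n2 / n1
    total = n1 * (1 + ratio)
    print(f"{label} split: {total:,.0f} total visitors")
```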