**What is AB Testing?**

AB Testing is the process of comparing two variations of a website to see which converts better. Also referred to as “split testing,” an AB test simply means that you divide site traffic evenly between two site variations, measure the conversion rate of each, and see which variation generates the higher conversion rate.

**AB Testing Example**

Here’s a simple graphic that shows the basics of an AB test (taken from our friends at VWO):

In this example, half of the traffic goes to site variation A and half goes to variation B. Variation A converts at 23%, compared to variation B at 11%. Variation A clearly has the higher conversion rate, more than double the conversion rate of variation B.
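The arithmetic above can be sketched in a few lines of Python. The visitor counts here are hypothetical, chosen just to match the example percentages:

```python
# Hypothetical visitor/conversion counts matching the example percentages.
visitors_a, conversions_a = 100, 23
visitors_b, conversions_b = 100, 11

rate_a = conversions_a / visitors_a  # 0.23 -> 23%
rate_b = conversions_b / visitors_b  # 0.11 -> 11%

print(f"A converts at {rate_a:.0%}, B at {rate_b:.0%}")
print(f"A's rate is {rate_a / rate_b:.1f}x B's")  # more than double
```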

**Statistical Significance – Knowing when data is meaningful**

One of the key factors when running a proper AB test is statistical significance. You must gather enough data to know, with at least 95% confidence, that your test results are valid before drawing conclusions. Without enough data, you could shoot yourself in the foot by making decisions based on noise.

Here’s an example statistical calculator, made by KissMetrics. You can see, using the numbers above, that our test data is significant at a 99% confidence level:

However, what if you had the same amount of data, but a smaller difference in conversion rate? Could you still tell the variations apart? Say, for example, you were comparing 17 conversions versus 11 across about 100 visitors in each group. You wouldn’t reach 95% confidence yet:

In fact, it wouldn’t be until you hit around double the number of visitors (200 per variation) that your results would become significant at over the 95% level (in this case, 96%):
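A calculator like the one above can be approximated with a one-tailed two-proportion z-test using only the Python standard library. The function name and the one-tailed choice are my assumptions; dedicated calculators may use slightly different methods, but this reproduces the numbers in the example:

```python
from math import erf, sqrt

def confidence(conv_a, visitors_a, conv_b, visitors_b):
    """One-tailed confidence that variation A's true rate exceeds B's,
    via a pooled two-proportion z-test (normal approximation)."""
    p_a = conv_a / visitors_a
    p_b = conv_b / visitors_b
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_a - p_b) / se
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF at z

# 17 vs 11 conversions on ~100 visitors each: not yet significant
print(f"{confidence(17, 100, 11, 100):.0%}")  # roughly 89%
# Double the traffic at the same rates: now significant, ~96%
print(f"{confidence(34, 200, 22, 200):.0%}")
```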

**The larger the conversion difference, the fewer visitors and data you need to determine that your result is statistically significant**. However, with a smaller difference, it takes more time. The key to getting good test data is ensuring that you **give your tests enough time to reach significance** and do not rush a decision. If you rush, you run the risk of accidentally implementing a losing variation.
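As a rough illustration of why bigger lifts need fewer visitors, here is a standard sample-size approximation for a two-proportion test. The function name and defaults (95% one-tailed confidence, ~80% power) are my sketch; real planning tools may apply additional corrections:

```python
from math import ceil, sqrt

def visitors_per_variation(base_rate, expected_rate, z_alpha=1.645, z_beta=0.84):
    """Approximate visitors needed per variation to detect the difference
    between two conversion rates at ~95% one-tailed confidence, ~80% power."""
    variance = base_rate * (1 - base_rate) + expected_rate * (1 - expected_rate)
    return ceil((z_alpha + z_beta) ** 2 * variance
                / (base_rate - expected_rate) ** 2)

# A big lift (11% -> 23%) needs only about a hundred visitors per variation;
# a small lift (11% -> 13%) needs thousands.
print(visitors_per_variation(0.11, 0.23))
print(visitors_per_variation(0.11, 0.13))
```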

**Time as a Variable**

It’s critically important that, in order to compare two site variations, **you compare both site variations at the same time with a proper AB test, rather than running before-and-after tests**.

The whole reason that AB testing software exists is to allow you to compare the two sites during the same time period. If you simply change something on your site and compare the conversion rate to the prior week’s, you are ignoring the fact that conversion rates fluctuate over time.
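Running both variations concurrently is usually done by deterministically bucketing each visitor, so a returning visitor always sees the same variation while both variations collect data over the same period. A minimal sketch (the hashing scheme and names are illustrative, not any particular tool’s implementation):

```python
import hashlib

def assign_variation(visitor_id: str, experiment: str = "homepage-test") -> str:
    """Deterministically split visitors ~50/50 between variations A and B."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The same visitor always lands in the same bucket, and the overall
# traffic split comes out close to an even 50/50.
counts = {"A": 0, "B": 0}
for i in range(10_000):
    counts[assign_variation(f"visitor-{i}")] += 1
print(counts)
```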

**AB Testing and Measurement**

Just because you run an AB Test showing a 20% conversion gain doesn’t mean that your conversion rate actually improved by 20%. **Measuring the true conversion lift requires enough data, preferably 2-4 weeks of it**.

The reason is that a statistically significant result only tells you that your conversion rate is higher on the new variation – that is, you can be sure you have a winner showing an improvement. However, the true conversion lift will usually be lower than what the test software reports. For example, if you implement a winning variation showing a doubled (100%) conversion rate improvement – congratulations! But you more than likely have a true improvement of 40-50% rather than the full 100%. To get an accurate measure, you should let the test run for at least 2-4 weeks to gather sufficient data before making the call.
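This inflation of the reported lift can be seen in a small simulation (all numbers here are illustrative, not from the article’s data): run many underpowered tests with a known true lift, keep only the “significant winners,” and the average lift among those winners overstates the truth.

```python
import random
from math import erf, sqrt

random.seed(42)

def significant(conv_a, conv_b, n):
    """One-tailed pooled two-proportion z-test at 95% confidence."""
    pooled = (conv_a + conv_b) / (2 * n)
    se = sqrt(pooled * (1 - pooled) * 2 / n)
    if se == 0:
        return False
    z = (conv_b - conv_a) / n / se
    return 0.5 * (1 + erf(z / sqrt(2))) > 0.95

true_a, true_b, n = 0.10, 0.12, 1000  # true lift: 20%
winning_lifts = []
for _ in range(2000):
    conv_a = sum(random.random() < true_a for _ in range(n))
    conv_b = sum(random.random() < true_b for _ in range(n))
    if significant(conv_a, conv_b, n) and conv_a > 0:
        winning_lifts.append((conv_b - conv_a) / conv_a)

avg = sum(winning_lifts) / len(winning_lifts)
# The winners' average reported lift comes out well above the true 20%.
print(f"true lift 20%, average reported lift among winners: {avg:.0%}")
```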