Top-5 Concerns about A/B Testing and Their Resolutions


There are a number of questions and concerns that marketers and business owners face when they first start practising A/B testing. In this article, we've collected the most common questions from A/B testing newcomers, along with answers to them.

1. When should I select a winning variation in an A/B experiment?

To get reliable results, you need a sufficient number of users in your A/B test.
The most common mistake is to stop a test as soon as it reaches 95% confidence. The confidence is high, but that's still no guarantee that your data set is big enough to be representative of different business cycles and days of the week.

The point is that user behaviour constantly changes with the day of the week and the business cycle, so the best way to cover all these types of users is to run a test long enough for the specifics of your particular business. Remember, there is no universal data set size or test duration; each case should be analysed individually.

Maxymizely recommends running each test until every variation reaches at least 100 conversions; on top of that, your test should reach a 95% confidence level before you draw conclusions about the winning variation. One more tool we'd recommend to any A/B testing specialist is this statistical significance calculator, which helps determine whether the results of a test are significant.
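
If you'd like to check the math yourself, here is a minimal sketch of the kind of two-proportion z-test such significance calculators run under the hood. The visitor and conversion numbers below are made up purely for illustration.

    from math import sqrt, erf

    def significance(visitors_a, conversions_a, visitors_b, conversions_b):
        """Two-sided confidence level that variations A and B convert differently."""
        p_a = conversions_a / visitors_a
        p_b = conversions_b / visitors_b
        # Pooled conversion rate under the null hypothesis of no real difference
        p = (conversions_a + conversions_b) / (visitors_a + visitors_b)
        se = sqrt(p * (1 - p) * (1 / visitors_a + 1 / visitors_b))
        z = abs(p_a - p_b) / se
        # Normal-approximation p-value; confidence = 1 - p-value
        p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
        return 1 - p_value

    # Hypothetical example: both variations already have well over 100 conversions
    print(f"{significance(4000, 120, 4000, 160):.1%}")  # roughly 98.5% confidence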

2. How to run an A/B test on a low-traffic website?

It's the second entry on our list of the most common questions about A/B testing.

The first and most important thing to remember here is the difference between A/B testing and conversion optimisation: they are not the same thing. A/B testing is an invaluable technique for conversion rate optimisation, but it is not always suited to it.

Running A/B tests on low-traffic websites is risky because even small changes in a small data set have a big impact on the statistics. Of course, there is always the option of running a longer test, and you can estimate its length with tools such as this A/B test duration calculator. For example, if your website has a 2% conversion rate and 101 unique users per day, and you want to detect a conversion rate at least 10% higher, you'll need about 1552 days. Too much time, yeah?
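
If you'd rather estimate this yourself, below is a rough sketch of the arithmetic behind such duration calculators, based on the standard two-proportion sample-size formula. Dedicated calculators make slightly different assumptions about power and test direction, so the exact day count won't match the figure above to the digit.

    from math import sqrt, ceil

    def test_duration_days(baseline_rate, relative_lift, daily_visitors, variations=2):
        """Rough number of days needed to detect the given relative lift."""
        p1 = baseline_rate
        p2 = baseline_rate * (1 + relative_lift)
        z_alpha, z_beta = 1.96, 0.84              # 95% confidence, 80% power
        p_bar = (p1 + p2) / 2
        n_per_variation = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                            + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
                           / (p2 - p1) ** 2)
        return ceil(n_per_variation * variations / daily_visitors)

    # 2% baseline conversion rate, 10% minimum detectable lift, 101 users per day
    print(test_duration_days(0.02, 0.10, 101))    # on the order of 1,600 days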

So, the first part of the answer is to focus on generating more traffic instead of A/B testing. The second part is to start improving conversions with landing page best practices and to try qualitative rather than quantitative research for your website. Some qualitative research methods were listed in our article 'How to Discover Website Areas You Should Improve via A/B Testing' and include, but are not limited to, digging through customer service records, asking your visitors directly, collecting public feedback, researching competitors, involving user testing services, etc.

3. How to know whether your conversion rate is high enough?

Conversion rates are fickle things that shouldn't be your primary focus; a conversion rate is not a goal in itself. Conversion rates fluctuate even because of such minor factors as the time of day or month (depending on the business niche you occupy). The critical thing you should care about is business revenue.

So, let's change the question from 'Is my conversion rate good?' to 'Is my business good?' The real objective of conversion rate optimisation is to drive other, more practical business metrics such as lead quality, profit, and revenue.
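
Here's a small, entirely hypothetical illustration of the point: a variation with the lower conversion rate can still be the better business decision once you look at revenue per visitor. All the numbers below are invented.

    # Both variations get the same traffic; only orders and revenue differ.
    variations = {
        "A": {"visitors": 5000, "orders": 200, "revenue": 9000.0},
        "B": {"visitors": 5000, "orders": 150, "revenue": 12000.0},
    }

    for name, v in variations.items():
        conversion_rate = v["orders"] / v["visitors"]
        revenue_per_visitor = v["revenue"] / v["visitors"]
        print(f"{name}: conversion {conversion_rate:.1%}, "
              f"revenue per visitor ${revenue_per_visitor:.2f}")

    # A: conversion 4.0%, revenue per visitor $1.80
    # B: conversion 3.0%, revenue per visitor $2.40  <- lower conversion, more revenue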

4. How does A/B testing affect SEO?

This question usually hides two concerns: cloaking and duplicate content. Both of them are myths.

  1. Cloaking is the act of showing different content to a search engine than to real visitors. But A/B testing tools show bots the original content only, while for human visitors the content is swapped with JavaScript.
  2. Duplicate content is a form of plagiarism that search engines penalize, since the same content cannot be ranked twice and words that are not yours will not be ranked at all. But most A/B testing software (including Maxymizely) offers an option to redirect traffic to different content variations, which resolves this concern as well.

A/B testing isn't cheating

Besides, search engines (Google, Yahoo, Bing and others) run A/B tests too, and some of them even offer dedicated software for it (such as Google Content Experiments). So it's highly improbable that they would condemn the use of a marketing tool they rely on themselves. As an extra guarantee, you can filter search bots out of your tests with the noindex value of an HTML robots meta tag (though that's usually unnecessary, since most search bots will only see your original page, not the variation).

5. Is it possible to A/B test responsive designs?

The answer is: of course, you can easily do it. If you're using an A/B testing tool like Maxymizely and want to improve something on a responsive website, your variations will usually involve changes in CSS rather than simple changes to text or images, because you are forced to design different variations for different devices. So, in short, there is no problem with A/B testing responsive designs.

Essentially, responsive design means handling different screen resolutions effectively. When doing that, you're mostly concerned with the images and how well the <div> tags scale.

This case has an accidental but positive side effect: you have to segment your visitors by device, and that is a kind of behavioural targeting. If you do it right, you'll get far more benefit by targeting specific variations at specific screen resolutions and measuring their separate conversion rates, instead of relying on a single site-wide conversion rate, which can distort the picture for individual customer segments.
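
As a quick illustration, here is a small sketch of what per-device segmentation looks like in practice. The traffic numbers are invented, but they show how an aggregate conversion rate can hide very different behaviour per segment.

    # Invented traffic split across devices for a single variation.
    segments = {
        "desktop": {"visitors": 8000, "conversions": 400},   # 5.0%
        "tablet":  {"visitors": 1000, "conversions": 30},    # 3.0%
        "mobile":  {"visitors": 6000, "conversions": 90},    # 1.5%
    }

    total_visitors = sum(s["visitors"] for s in segments.values())
    total_conversions = sum(s["conversions"] for s in segments.values())
    print(f"site-wide: {total_conversions / total_visitors:.1%}")   # about 3.5%

    for device, s in segments.items():
        print(f"{device}: {s['conversions'] / s['visitors']:.1%}")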

If you have any other questions about A/B testing, feel free to leave a comment or contact us in any other convenient way!