Mistake №1. Ending tests too soon
The most common mistake CRO practitioners make is ending a test too soon. You could call it a trend. The usual variants of this mistake are: waiting until the confidence level hits 95% and stopping the test right there, or running a test only until some trend emerges and ending it at that point. You shouldn't do either.
The solution is to define the right sample size (e.g. using this tool) and to run the test for one full business period (two are better), taking into account days of the week, seasonal fluctuations, your competitors' activities, moon cycles, and any other factors that can influence the number of sales in your business.
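If you'd rather compute the sample size yourself instead of relying on an online tool, the standard normal-approximation formula for comparing two conversion rates is easy to sketch. The function name and the example rates below are illustrative assumptions, not values from this article:

```python
from statistics import NormalDist

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect a change
    from baseline rate p1 to target rate p2 (two-sided test,
    normal approximation for two proportions)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for 95%
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# Hypothetical example: detecting a lift from 5% to 6% conversion
n = sample_size_per_variant(0.05, 0.06)
```

Note how quickly the required sample grows as the expected lift shrinks: halving the detectable lift roughly quadruples the sample size, which is exactly why tests on small improvements must run so long.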
Mistake №2. Not integrating an A/B test with your web analytics suite
If you don’t do this, you cannot cross-check the data from your A/B test against the data from your analytics suite.
The solution is to compare data from the two data-collecting engines, so you can verify the number of completed goals, revenue, and other meaningful metrics each of them reports. Another important aspect is making sure your developers understand how the session model works in both your analytics suite and your A/B testing tool. If the two models are misaligned, you’ll get different numbers that will undermine the results of your test.
Mistake №3. Not waiting for statistical significance. Not knowing what statistical significance is or why it’s important
Another problem beginners face is stopping the test before it reaches statistical significance. Without significance there isn’t enough evidence behind the result, so the result isn’t valid.
The solution: to get reliable results, choose a significance level that is reasonable for your case (95% or more) and then start your testing. Never stop the test before you reach the recommended sample size (calculated with this tool, for example).
And even once you’ve reached your sample size, don’t stop the test right away, as we wrote above: it still needs to cover at least one or two full business periods.
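Checking significance itself doesn't require a special tool either. A common approach (one reasonable choice among several, shown here as a sketch with made-up numbers) is a two-proportion z-test on the pooled conversion rates:

```python
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.
    conv_* are conversion counts, n_* are visitor counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled rate under H0
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p-value
    return z, p_value

# Hypothetical data: A converts 500/10000 (5%), B converts 560/10000 (5.6%)
z, p = two_proportion_z_test(500, 10000, 560, 10000)
significant = p < 0.05
```

In this made-up example the p-value comes out just above 0.05, so even a 12% relative lift over 20,000 visitors isn't quite significant at the 95% level, which illustrates why "looks better" is not evidence.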
Mistake №4. Testing a page with low traffic
If you want to A/B test a low-traffic page, think twice before you start. The issue is the duration of the future test. For example, if you want to run an experiment on a page with 100 daily visitors, your existing conversion rate is 5%, and you want to improve it to at least 10% by testing 2 variations, you’ll need to run your test for 2432 days! To calculate the length of your test, you can use a tool such as this. Another facet of this mistake marketers tend to make is testing too many variations for the page’s traffic. An A/B/n test or a multivariate test with many variations and a lot of complexity could take decades to finish if you aren’t careful!
Mistake №5. Focusing on micro-level instead of macro-level.
Marketers who make this mistake focus on particular elements of a page instead of looking at the whole picture. Don’t obsess over micro features such as the colour of a button or the title of a lead magnet. These micro changes are unlikely to dramatically change your business’s conversion rates. Analyze the whole picture and think about how to win long-term customers who will bring your business an appreciable amount of money. Also, differentiate your customers, and remember that they shouldn’t all be treated the same.
Mistake №6. Focusing on crazy ideas, instead of best practices.
The mistake here is testing complex, hard-to-implement things instead of going for quick wins.
It’s much better to learn and implement A/B testing best practices than to invent something completely new. For example, sometimes simply adding a CTA to the first screen and adding a contact page can lead to a great conversion improvement! Read our article on best practices for finding the areas of a website you should improve, or use one of the checklists (like this one) that will help you identify the weakest places on your website.
Mistake №7. Treating all visitors like they’re the same.
Quite a common mistake among A/B testers is failing to segment the audience. It’s much better to test what works best for, say, AdWords visitors vs. visitors from emails. Each group will often prefer something slightly different and will arrive and convert through different sales funnels. For example, version A may perform better overall while version B does really well with PPC ad visitors (which is very actionable info). You should also differentiate new vs. returning users within each segment.
The solution is to segment the visitors in your A/B tests. It leads to much better business decisions than treating everyone the same.
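In practice, segmentation just means breaking test results down by (segment, variant) before comparing rates. A minimal sketch with made-up event records (the segment names and data are assumptions for illustration):

```python
from collections import defaultdict

# Hypothetical raw test events: (segment, variant, converted)
events = [
    ("adwords", "A", True),  ("adwords", "B", True), ("adwords", "B", True),
    ("email",   "A", True),  ("email",   "A", True), ("email",   "B", False),
    ("adwords", "A", False), ("email",   "B", True),
]

def conversion_by_segment(events):
    """Conversion rate per (segment, variant) pair."""
    counts = defaultdict(lambda: [0, 0])  # (segment, variant) -> [conversions, visitors]
    for segment, variant, converted in events:
        counts[(segment, variant)][1] += 1
        if converted:
            counts[(segment, variant)][0] += 1
    return {key: conv / total for key, (conv, total) in counts.items()}

rates = conversion_by_segment(events)
```

In this toy data, variant B wins for AdWords visitors while variant A wins for email visitors, exactly the kind of split that an overall average would hide.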
Mistake №8. Making decisions based only on what performs better at a single step in your funnel.
Oftentimes a sales funnel consists of more than one step. In such situations, you should care not only about the % of people going from step 1 to step 2, but also about the steps that follow, because by thinking short-term you’ll gain a flashy win at one step while losing in the long run.
Here is an example of this mistake during an A/B test: in a 4-step funnel, version A gets a higher % of people from step 1 to step 2 than version B, but version B still gets a higher overall % of people to step 4 than version A.
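The arithmetic behind that example is worth making explicit: the overall funnel conversion is the product of the step-to-step rates, so a win at one step can be wiped out by losses further down. A sketch with hypothetical rates (three transitions for a 4-step funnel):

```python
def overall_conversion(step_rates):
    """Overall funnel conversion is the product of step-to-step rates."""
    total = 1.0
    for rate in step_rates:
        total *= rate
    return total

# Hypothetical 4-step funnel: version A wins step 1->2, B wins overall.
a = overall_conversion([0.60, 0.30, 0.20])  # A: strong first step
b = overall_conversion([0.45, 0.50, 0.40])  # B: weaker first step
```

Here A moves 60% of visitors past step 1 versus B's 45%, yet B delivers 9% of visitors to step 4 against A's 3.6%, so judging by the first step alone would pick the wrong winner.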
The solution is to think farsightedly and to evaluate A/B tests not against a single step of your sales funnel, but across all of its steps.