12 Mistakes to Avoid in an A/B Test

Rahul Ashok
5 min read · Feb 14, 2021

#6 — Tests don’t run long enough


We have all been there.

You run a new A/B test to see whether a particular hypothesis holds and whether it produces a significant lift in a specific metric.

Oftentimes, the test doesn’t go as expected, and that is okay. The whole reason to conduct an A/B test is to optimise for improvement.

Having said that, there are 12 common mistakes that growth hackers and digital marketers often make, all of which can be avoided so that the test gets a fair chance.

Without further ado, here are those 12 mistakes:

Testing mistake #1 — Wasting time on stupid tests

Do not test items that are easily fixable. For example, if you have a dark grey banner with black text on it, the copy will not be readable!

You don’t need to run a test for that; just change the font color, make it readable, and move on.

Testing mistake #2 — You think you know what will work

Peep Laja (considered by many to be the god of CRO) once said that, even after helping a long list of clients achieve their goals, he can still predict the outcome of a test only about 60% of the time, which is not far from flipping a coin (a 50% chance of heads or tails).


So, don’t assume any outcome, experiment and test it out.

Testing mistake #3 — Copying other people’s tests

You hear that a colleague or friend ran an A/B test and got great results. Impressed, you decide to implement the same test in your own work.

Red flag!

You’re basically trying to apply somebody else’s solution to somebody else’s problem to your own project.

Does that sound like a smart choice?

Unless you both are working on the same or similar projects, this is a big no-no.

Testing mistake #4 — Too low sample size

This is straightforward. If you run a test on a hundred customers and then declare it a success or failure, you’re bound to fail in the long run.

Without conducting the test on an ample sample size, it would be premature to conclude the result of an A/B test.
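To make that concrete, here is a minimal sketch of a pre-test sample size calculation for a conversion-rate test, using the standard two-proportion formula. The baseline rate, minimum detectable lift, significance level, and power below are illustrative assumptions, not figures from this article:

```python
from scipy.stats import norm  # SciPy's normal distribution

def required_sample_size(baseline_rate, relative_lift, alpha=0.05, power=0.80):
    """Visitors needed per variant to detect the given relative lift
    with a two-sided two-proportion z-test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)   # conversion rate we hope to see
    z_alpha = norm.ppf(1 - alpha / 2)          # critical value for significance
    z_beta = norm.ppf(power)                   # critical value for power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2) + 1

# Example: 3% baseline conversion rate, trying to detect a 10% relative lift
print(required_sample_size(0.03, 0.10))   # roughly 53,000 visitors per variant
```

A hundred customers, as in the example above, is nowhere near enough to detect effects of that size.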

Testing mistake #5 — You run tests on pages with very little traffic

This mistake is a continuation of the previous one. If a specific page gets very little traffic and you decide to run an A/B test there, it will not deliver accurate results.

You need sufficient traffic (how much differs from one website to the next) for a successful A/B test.

Testing mistake #6 — Tests don’t run long enough

Peep Laja recommends running an A/B test for 28 days. Unless your website sees a very high volume of conversions (around 10,000 per week), you should let the test run its course and not conclude it prematurely.


Also, check the results at the end of 2 weeks and compare them with the results at the end of 4 weeks.
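If you want to see why the two checkpoints can disagree, here is a minimal sketch of a two-proportion z-test you could run at the 2-week and 4-week marks. The visitor and conversion counts are invented for illustration:

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - norm.cdf(abs(z)))                     # p-value

# Hypothetical counts: control vs. variant at each checkpoint
print(two_proportion_z_test(300, 10_000, 345, 10_000))   # week 2: p ≈ 0.07
print(two_proportion_z_test(610, 20_000, 702, 20_000))   # week 4: p ≈ 0.01
```

In this made-up example the variant only reaches significance once the full four weeks of data are in, which is exactly why concluding at two weeks would be premature.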

Testing mistake #7 — Test full weeks — not middle of the week

Ensure that the test runs for a complete business cycle. For example, if the test starts on a Tuesday, it should run until the next Monday.

As important as it is to run a test for 28 days, it is also important not to run it much beyond 28 days. Longer tests lead to polluted samples: people delete cookies, so the same visitors may end up seeing different variations of the test.
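As a quick sketch of that scheduling rule, the snippet below ends a test after whole weeks only and caps it at 28 days; the start date is just an example:

```python
from datetime import date, timedelta

def test_end_date(start: date, target_days: int = 28) -> date:
    """Return the first day after the test, covering full weeks only,
    never more than 28 days."""
    full_weeks = min(target_days, 28) // 7     # e.g. 28 days -> 4 full weeks
    return start + timedelta(weeks=full_weeks)

# A test that starts on Tuesday, 2 Feb 2021 collects data through
# Monday, 1 Mar 2021 and stops on Tuesday, 2 Mar 2021.
print(test_end_date(date(2021, 2, 2)))   # 2021-03-02
```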

Testing mistake #8 — Test data is not sent to third-party analytics

The testing tool you’re using might report that the results are positive and everything looks accurate, yet the bottom line stays flat.

If the test data is also sent accurately to a third-party analytics tool, you can verify and segment the results thoroughly.

Testing mistake #9 — You give up after your first hypothesis fails

Don’t give up. If the first hypothesis fails, try a variation or test another element. Cancelling the test after the first failed hypothesis could stop you from ever solving the problem.

Testing mistake #10 — Validity threats

In the course of an A/B test, there is a possibility that you might have some false positives or false negatives.

There are three common validity threats to watch out for:

a) Instrumentation effect

It’s possible that the developer didn’t test the website on different browsers, which could lead to buggy landing pages in certain browsers.

b) Selection effect

Suppose a page performs well mainly because of AdWords traffic, and you assume the page itself is excellent and take it live.

Once the AdWords traffic is paused, the page performs poorly. This could be because the original audience was primed by the ad copy and creative and liked the page, while the audience who didn’t see the ad wasn’t primed and hated the landing page.

c) History effect

External events during the test can distort results. For example, if the ads feature the CEO’s photo and negative news about the CEO breaks mid-test, the results will be skewed. In such a scenario, the ads need to be taken down immediately.

Testing mistake #11 — Ignoring small gains

Oftentimes, we tend to ignore small gains and accept only significant ones.

Next time that happens, consider this:

A 5% increase in conversion rate every month compounds to roughly an 80% uplift over the year.

That’s how compounding works. So, make sure that you record every small gain.
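The arithmetic behind that 80% figure is simple compounding; here it is as a quick sketch:

```python
monthly_lift = 0.05                          # 5% improvement each month
annual_multiplier = (1 + monthly_lift) ** 12
print(f"{annual_multiplier - 1:.0%}")        # prints "80%" (1.05**12 ≈ 1.80)
```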

Testing mistake #12 — You’re not running tests all the time

There are a hundred million tests and hypotheses you could run to improve a feature or metric of your website.

Every week you spend not testing something is a week wasted.

Note that this does not mean you should run hundreds of random tests. You need to plan, prioritise, and test according to your requirements. A test can be as small as increasing the CTR by 0.1% or as significant as increasing the conversion rate by 20%.

If you would like to learn more about the basics of growth marketing, check out the resource below:

This topic is explained in detail in the Researching and Testing lesson of CXL Institute’s Growth Marketing Minidegree. The lessons in the minidegree are nicely crafted, with tons of additional reading material that helped me understand the concepts in depth.
