
Don’t Make These A/B Testing Mistakes that Cost Conversions


It has been noted by more than one marketing blogger that most A/B testing experiments are conducted incorrectly. That is true. It’s quite possible your last experiment was conducted incorrectly.

In most cases, A/B testing starts and ends with launching the experiment. No analysis. No hypotheses. A marketer changes the color of a button, feels pleased with himself, and waits for sales growth to materialize out of nowhere.

But in reality, gaining conversions through testing is much tougher than that. The bulk of these experiments are designed wrong. There are mistakes all around. Mistakes lead to irrelevant results. Irrelevant results lead to incorrect conclusions. Incorrect conclusions, as this Kissmetrics post shows, can turn into disappointment in the methodology of A/B testing itself, or even into lost sales.

It all fits together: if you don't use the tools correctly, it's as if those new, effective tools for site improvement didn't exist at all. Everything on your site stays just as lackluster as it was before.

You don’t want that. Don’t follow the pack and make these foolish testing mistakes:

Disregarding statistical validity – #1

Although there are built-in algorithms to calculate statistical validity in nearly all the services available for A/B tests, many marketers ignore them. This disregard leads to problems on a site.

The biggest one? Incorrect conclusions. Statistical validity is just as important as anything else on this list. Conversion specialists also advise using estimation calculators for statistical validity.

Conversion calculator
Conversion specialists advise using estimation calculators for statistical validity

So what can be done to avoid this blunder?

The key here is simple: do not rely on an experiment if its statistical validity is below 95 percent. That is the minimum, though. Ideally, this indicator should be as high as 99.9 percent.
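
If your testing tool does not show a confidence level, you can estimate it yourself. Below is a minimal sketch in Python (using SciPy) of a two-sided two-proportion z-test; the visitor and conversion counts are made-up numbers for illustration, not data from a real experiment.

```python
# A rough confidence check for an A/B test via a two-sided two-proportion z-test.
# All numbers below are illustrative, not real experiment data.
from math import sqrt
from scipy.stats import norm

def confidence_percent(conv_a, visitors_a, conv_b, visitors_b):
    p_a = conv_a / visitors_a
    p_b = conv_b / visitors_b
    p_pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = sqrt(p_pooled * (1 - p_pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))   # two-sided p-value
    return (1 - p_value) * 100

# Variant A: 120 conversions from 2,400 visitors; variant B: 155 from 2,380.
print(f"{confidence_percent(120, 2400, 155, 2380):.1f}%")  # trust only >= 95%
```

If the printed confidence lands below 95 percent, treat the test as unfinished and keep collecting data.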

Testing elements which do not affect conversion – #2

These experiments make no sense. Changing elements that do not affect decision-making does not influence user behavior.

If you reword text that nobody ever reads, or only change an element at the bottom of the landing page that just 10% of users ever see, there will be no benefit in that decision.

[tweetthis]If you value your time and money, do not spend time on meaningless A/B tests.[/tweetthis]

You’ll get better results if you test the elements which actually affect decision-making. Testing these elements is the most effective means of gaining conversions.

Test the elements that visitors of a site actually interact with:

  • CTA buttons (“Buy”, “Download”, “Register”)
  • Forms (registration, site subscription, checkout)
  • Text elements (headlines, subheadings, product descriptions, benefits for the potential customer)
  • Visual elements (images, videos, audio)
  • Pricing pages (the prices themselves, descriptions of value for customers, security icons, money-back guarantees)

Incorrect parallel A/B tests – #3

By an incorrect parallel A/B test, I mean the situation where visitors participating in an experiment on one page also end up in another experiment.

For example, in the first experiment they see the original version, and in the second, a test version. More than one element has changed for the same visitor. This can distort the validity of your results, especially when you analyze the impact of an experiment on the entire purchase funnel.

If you are absolutely sure that the audiences of your experiments will not overlap, relax and continue testing. If you are not, do not even start parallel experiments. Run experiments one at a time. It will take longer, but it will give more consistent results.
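
One common way to guarantee that audiences don't cross is to hash every visitor into a fixed bucket and give each experiment its own bucket range. The Python sketch below shows the idea; the experiment names and bucket ranges are invented for illustration, and most testing tools offer their own mutually exclusive groups that do the same thing.

```python
# Assign each visitor to at most one experiment by hashing them into 100 buckets.
# The experiment names and bucket ranges are hypothetical.
import hashlib

def bucket(visitor_id: str, buckets: int = 100) -> int:
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % buckets

def assigned_experiment(visitor_id: str):
    b = bucket(visitor_id)
    if b < 40:
        return "catalog_cta_test"      # buckets 0-39
    if b < 80:
        return "checkout_form_test"    # buckets 40-79
    return None                        # buckets 80-99: no experiment

print(assigned_experiment("visitor-42"))
```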

Reliable results are more important than the speed of testing. Wrong conclusions will turn into a proper mess, and you will have to rerun the tests to uncover a problem that could easily have been avoided.

A/B testing does not need to be fast.
The speed at which you conduct your A/B testing is far less important than the methodology. Sometimes slowing down is a good idea.

A/B testing of sites with a small amount of traffic – #4

The experiment drags on for months (and running a test that long is not advised!). Because of the long wait, premature conclusions are drawn (it is very hard to wait until the necessary number of conversions accumulates). The results of the experiment are invalid. As a result, all that time and effort goes down the drain.

This is how it goes in most cases when a site gets only a handful of conversions per day. Under those conditions you can only call an experiment successful when the difference in conversions is very large (for example, 150 conversions versus 75).
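
To see why low traffic is such a problem, you can estimate in advance how many visitors a test would need. Below is a rough sketch using a standard two-proportion sample-size formula; the baseline conversion rate, expected lift, and daily traffic are assumed numbers, not recommendations.

```python
# Estimate visitors per variant needed to detect a lift, and roughly how long
# that takes at a given daily traffic level. All input numbers are illustrative.
from math import ceil, sqrt
from scipy.stats import norm

def visitors_per_variant(base_rate, lift, alpha=0.05, power=0.8):
    p1, p2 = base_rate, base_rate * (1 + lift)
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
    return ceil(n)

n = visitors_per_variant(base_rate=0.02, lift=0.20)  # 2% baseline, +20% relative lift
daily = 150                                          # visitors per variant per day
print(f"~{n} visitors per variant, roughly {ceil(n / daily)} days")
```

With a 2 percent baseline conversion rate and 150 visitors per variant per day, that works out to several months of waiting, which is exactly the trap described above.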

There are many small sites. There are also many sites where the average order value is relatively high and several transactions a day are an excellent result.

So what should sites do?

Don't waste time on A/B tests if you haven't yet begun to drive traffic and conversions to the site. Instead, focus on analyzing user behavior on the site. Study click and scroll maps and use them to look for obvious problems. Talk to your target audience and change the texts and elements on the site based on their requirements, fears, and desires. In this case it also pays to review recordings of visitor sessions.

[tweetthis]Don’t waste time on A/B tests if you haven’t begun to drive traffic and conversions on the site.[/tweetthis]

By the way, there is one more important point about traffic. Before testing starts, it is essential to properly set up the channels that will bring quality traffic to the site day after day. Once you can provide a stable flow of visitors, you can start thinking about testing.

Analyzing the results of your A/B test without using segmentation – #5

Without segmentation, you will never be able to analyze how separate user groups interacted with the test version of the site.

This is one of the key analysis mistakes.

Compare at least two user segments: at minimum, new visitors and returning visitors. Think about how strongly their behavior on the site differs. Any ideas?

Here are some real figures. The behavior of new and returning users is completely different. In web shops, returning users spend on average 3 minutes more per visit than new ones. New visitors view on average 3.88 pages per session, while returning visitors view 5.55 pages.

Is that enough reason to take segmentation seriously when analyzing A/B tests? I think it is.

[tweetthis]Without segmentation you can’t analyze how separate user groups interacted with the site in a test version.[/tweetthis]

The behavior of smartphone and PC users differs too, so the results of an experiment need to be examined in these segments as well. It is quite likely that on some devices the conversion rate dropped, while on others it increased because the site became much more convenient to use.

Mobile and PC audiences should be considered as separate segments in A/B tests
Analyzing A/B test conversion rates by device is crucial for metrics that accurately reflect each audience.

After an experiment, don't check only the total figures; dig deeper. A/B testing services that are integrated with Google Analytics let you conduct in-depth analysis of the results and apply any segment to them.
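
If you can export raw session-level results from your testing or analytics tool, segmentation is only a few lines of work. The sketch below assumes a hypothetical CSV export with `variant`, `user_type`, and `converted` columns; adjust the names to whatever your tool actually provides.

```python
# Break experiment results down by variant and by user segment (new vs. returning).
# The CSV file and its column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("experiment_sessions.csv")      # one row per session

summary = (
    df.groupby(["variant", "user_type"])         # user_type: "new" or "returning"
      .agg(sessions=("converted", "size"),
           conversions=("converted", "sum"))
)
summary["conversion_rate"] = summary["conversions"] / summary["sessions"]
print(summary)
```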

Pagewiz is also a great platform for measuring differences in conversion rates between A and B variants of a landing page on mobile and desktop devices. From page design through to tracking and testing variations, one tool can cover all your steps.

Pagewiz is an excellent tool for designing pages for mobile and desktop devices.
Easily design campaigns and track their metrics for mobile and desktop devices with Pagewiz.

Not long ago, an A/B test was conducted on the main page of one site. The object of testing was a short additional text explaining what would happen after registration. When the results were analyzed without segmentation, the test version with the additional text appeared to work slightly worse than the original: conversion was two percent lower.

After applying the “New users” segment to the results of the experiment, the picture changed completely: the test version converted 10% better.


The conclusion: this text made no difference for returning users, because they clicked the CTA button anyway. For new users, however, the text turned out to be one of the triggers that prompted them to take the target action.

A/B testing old sites – #6

This is not an item usually addressed in this conversation, but I was still anxious to add it. It is dedicated to the business owners and Internet marketing specialists whose sites haven't been updated since the days of Web 2.0, don't correspond to current web design trends at all, and are unfriendly to visitors.

What does the testing of such sites lead to?

Long story short? It's a colossal waste of time. In such a case, you would be trying to improve something that simply needs to be demolished and built again, after which a designer or marketer can regularly test the new interface.

Some websites are just too old to test accurately. Rebuild them before you test.
Demolishing and rebuilding an old website may be the best course of action before you start testing.

More than once, I have been asked to increase the conversion rate of a site. When I looked at some sites, I wanted simply to close my eyes and pray under my breath within the first two seconds.

The usability of a site is sometimes so threadbare that nothing can be tested there. Testing in and of itself is contraindicated!

At a minimum, you need a site that is convenient for users, with stable targeted traffic and conversions already coming in. If the site is convenient, the traffic flows, there are conversions, and you understand that their number can be increased, then you can think about testing. Otherwise, forget about it.

Analyzing the impact of A/B testing on only one goal in the purchase funnel – #7

The standard purchase funnel consists of several steps. In a web shop it can look like this: main page – product catalog – product page – adding an item to the shopping cart – checkout – payment completed.

Imagine you start an experiment in the product catalog. For example, you add information about a 20 percent discount to the call to action. A week later you analyze the results of the experiment and see that visitors like it: they click on this button more often and move on to the product page. At first sight, everything is excellent. The click-through rate of the button increased and more people proceed to the next step of the purchase funnel.

But clicks on this button are not the main goal of your web shop. The most important thing is payment. If visitors arrive at the product page but do not order the item, what's the use of it?

A fat lot of use.

Because a superficial analysis of the experiment looks at only one goal, you do not see the effect on the entire purchase funnel. What if the attractive discount only increased the number of impulsive clicks on the button? Visitors clicked, but were still not ready to buy. Further down the funnel, the steps leading to conversion could actually decrease.

[tweetthis]Because superficial analysis of an experiment shows one purpose, you do not see effects on an entire purchase funnel[/tweetthis]

Without analyzing the impact on other goals, you will not learn this. You will simply roll out the winning variant on the site, and some time later, while calculating your financial indicators, discover that revenue fell. The end result: time spent in vain and lower revenue.

After the experiment is completed, do not rush to implement the winning option. Analyze the impact of the experiment on the entire purchase funnel, starting with the goal you were trying to improve and ending with the ultimate objective (for example, payment).

This is very easy to accomplish if the experiment is run through Google Analytics or another service integrated with Google Analytics. There you can check the effect on every goal you have created.
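
As an illustration of why this matters, here is a small sketch that compares two variants across every funnel step rather than just the one you changed. All the step names and counts are invented: in this made-up data, variant B "wins" on clicks into the product page yet loses on completed payments.

```python
# Compare two variants across the whole funnel, not only the step being tested.
# All step names and counts are made-up illustration data.
funnel = {
    "A": {"catalog": 10000, "product_page": 2000, "add_to_cart": 600, "payment": 180},
    "B": {"catalog": 10000, "product_page": 2600, "add_to_cart": 620, "payment": 170},
}

for variant, steps in funnel.items():
    names = list(steps)
    print(f"Variant {variant}")
    for prev, cur in zip(names, names[1:]):
        print(f"  {prev} -> {cur}: {steps[cur] / steps[prev]:.1%}")
    print(f"  overall {names[0]} -> {names[-1]}: {steps[names[-1]] / steps[names[0]]:.2%}")
```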

Disregarding page testing after the start of the experiment – #8

Experiments “break.” An experiment is usually launched through a special JS snippet inserted into the pages of a site. As practice shows, there were, there are, and there always will be problems with this.

These range from broken page layouts and CTA buttons that fail to load to the experiment simply not loading at all.

All sites are different: they use different programming languages, different CMS platforms, and so on. Some programmers build sites with clumsy hands. Any of this can cause a conflict between the site and the testing service.

What is this going to lead to?

You run a test. You think everything is in order: users see the original and the test page, statistics are being collected, and orders are coming in. In reality, however, it can be quite the opposite.
The experiment may not have been displayed at all this whole time. And if you don't check the statistics after a couple of days, a week later you will be wondering why there are no results.

What is to be done?

Immediately after the start of the experiment, open the site and check:

  • all variations of the test (how each one looks)
  • the main functions of the site (that all buttons work)

What is the correct way to check all variations of the experiment?

To do this, use your browser's incognito mode and open the page with the experiment. There are several important nuances here:

When you check how the test version displays in incognito mode and you land on the original version, close the incognito window, open a new one, and load the page again. Keep doing this until you see the test version.

This hassle is easily bypassed on platforms that provide a ‘Preview Mode’, such as Pagewiz for landing page experiments.

Preview mode
Pagewiz’s ‘Preview Mode’ makes it easy to see what each test variant looks like.

Absence of a clear scheme for A/B Testing – #9

This absence leads to chaotic testing. A/B testing requires systematization and consistency; it is difficult to keep the testing process moving without them. After each experiment the same question will arise: what to test next? You will have to start over and research new ideas for tests, and it's like this every time. It is really aggravating. I've experienced it myself.

What is to be done?

When preparing to conduct A/B testing, a precise plan is necessary. The following points should be described in this plan:

  • the problem on the site
  • the idea for the A/B test meant to fix this problem
  • the complexity of implementing the experiment
  • the approximate duration of the experiment (depending on the amount of traffic and conversions on the page)
  • the priority of the experiment (based on the complexity of implementation, the time required, and the potential result)

You just need to spend a day or two on a detailed analysis and tabulate all this information. It's best to organize your hypotheses (ideas for testing) by priority; everything becomes much simpler when you work from a prioritized list.
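
One simple way to order the backlog is to score each hypothesis, for example with an ICE-style formula (impact times confidence divided by effort). The hypotheses and scores below are made up purely to show the mechanics; use whatever scoring scheme fits your plan.

```python
# Rank test hypotheses by a simple ICE-style score: impact * confidence / effort.
# The hypotheses and their scores are illustrative, not recommendations.
backlog = [
    {"idea": "Shorter signup form",       "impact": 8, "confidence": 6, "effort": 3},
    {"idea": "New pricing page headline", "impact": 5, "confidence": 7, "effort": 1},
    {"idea": "Redesigned product cards",  "impact": 9, "confidence": 4, "effort": 8},
]

for h in backlog:
    h["priority"] = h["impact"] * h["confidence"] / h["effort"]

for h in sorted(backlog, key=lambda x: x["priority"], reverse=True):
    print(f'{h["priority"]:5.1f}  {h["idea"]}')
```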

The process then runs in a cycle:

Action (start of a test) >> Data (capture of data) >> Insight (analysis of results and conclusions)

A/B Testing in two phases – #10

This is the simplest way of doing A/B testing, and one you should have stopped using long ago. But, as my experience shows, it is still actively used and nobody intends to stop.

The first version of the page is put on the site for a month. At the end of the month, the key indicators of the page are measured. After that, the second version goes up and the process repeats. Two months later, you have performance figures for two pages. They are compared and the more effective option is selected. Simple as that.

What is this going to lead to?

Unreliable results.

The quality of the traffic in the first month can differ greatly from the second month. If the quality of the traffic differs, it is simply impossible to compare the results.

Think about how many holidays fall within those months, how many different advertising channels you turn on, how many sales promotions you run. If the two stages had different marketing activities, the quality of the audience can be completely different.

If you want to work with unreliable results of experiments, you can continue to do that.

What is to be done?

If you want to do A/B tests correctly, test all the options during the same period on an identical segment of traffic. The reliability of the data will then be much higher.
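
In practice, "the same period, the same traffic" simply means splitting visitors randomly between versions as they arrive and remembering the assignment. Here is a minimal sketch of that idea, with a hypothetical cookie name; any real testing tool handles this part for you.

```python
# Split incoming visitors 50/50 between versions within the same period,
# and keep each visitor in their variant via a cookie. The names are hypothetical.
import random

def assign_variant(cookies: dict) -> str:
    if "ab_variant" in cookies:          # returning visitor: keep the same variant
        return cookies["ab_variant"]
    variant = random.choice(["A", "B"])  # new visitor: random 50/50 assignment
    cookies["ab_variant"] = variant      # persist for future sessions
    return variant

cookies = {}
print(f"Show version {assign_variant(cookies)}")
print(f"Show version {assign_variant(cookies)}")  # same visitor, same version
```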

In addition, it is important to know that you do not need to pay to conduct A/B testing correctly. You can do it through Google Analytics, under ‘Behavior’ -> ‘Experiments’.

You can ignore the technologies that simplify A/B testing and make it possible to run it correctly, if you really want to. But a couple of months later you will likely realize how groundless that approach is.

Final Points

I covered all the possible mistakes I could in the short space I was provided. Keep these points in mind as you go forward:

  • The number of conversions
  • The duration of a test
  • Statistical validity

These are basic rules. Strictly observe them during testing.

But if you have something to add, please leave a comment. Then I will be able to include one more mistake in a future article, for the betterment of everyone. That said, I’ll make one final plea: if you intend to carry out A/B tests, please do it correctly!

