A/B testing: your road toward understanding consumer preferences

10 min read
by Alena Parfisenko
Marketers create landing pages, write email copy, and design call-to-action buttons. It can be tempting to rely on guesswork to predict what will get people to click and convert. But different web audiences behave differently, and what works for one company may not work for another. Conversion rate optimization (CRO) experts dislike the term "best practices" because those practices may not always be best for you.
A/B tests can also be tricky. If you're not careful, you can make incorrect assumptions about what people like and what makes them click - assumptions that can easily misinform other parts of your strategy.

Have you seen this scenario?

When fashion marketers are only beginning to build their trigger maps, they may identify points that seem crucial but provide little further help in understanding their customers.
Some of our close collaborators shared that when they tried to understand which triggers work for their customers, they often ran into disagreements on strategy with top management or other stakeholders. In other cases, there were simply no easy-to-understand references on trigger maps that could be presented to management.
So how can you solve all of that? Conduct A/B testing for your website and emails to understand what works best for your audience.

What is an A/B test?

Email A/B testing, commonly referred to as email split testing, involves sending two versions of your email to two distinct sample groups from your email list. The version that achieves the higher number of opens and clicks (the "winning version") is then distributed to the remaining subscribers.
Many marketers avoid A/B email testing because they don't know how or what to test. If that's you, continue reading. It is easier than you think, and you will uncover tremendous potential to improve your marketing.
A/B split testing is just a method for analyzing and comparing two variables.
Intelligent marketers do this because they are curious:
  • Which subject line gets the highest percentage of opens?
  • Whether their target audience responds to emojis or not?
  • Which button text encourages the most clicks?
  • Which images in an email generate higher conversion rates?
  • Which pre-header generates the highest open rate?
With A/B tests for email marketing, you can optimize your stats, increase conversions, get to know your audience, and determine what drives sales.

Why should you use A/B testing?

Email A/B split testing is the only way to statistically prove which email campaign brings you the most success. It's also the fastest way to figure out what your audience likes (and to optimize your email campaigns accordingly). If you want reliable results from your email campaigns, you have to use A/B tests.
A/B testing takes the guesswork out of it. After testing, you have hard data - proof, or disproof, that a change will improve the performance of your emails and increase visitor engagement, sales, and conversions.

Where else can A/B testing be used?

1. Advertising content
   In paid advertising, unconvincing copy can drain the budget. The same is true on your website: the more interesting and relevant your content is, the less likely a visitor is to lose interest and leave before buying.
2. Calls to action
   Your audience must respond to the website: subscribe, buy, and share content. Testing calls to action will help you find the right tone for your messages and place call-to-action buttons appropriately on your website's pages. Try different button shapes and sizes, text, and graphics to see which elements increase conversions.
3. Individual web pages
   Test the elements of an advertising campaign - creatives and copy - as well as individual pages. Analyze the background color, the relevance of the content, and the positioning of blocks, so you know exactly which elements of the website users like and which irritate them and make them close the page.
4. Subscription forms
   Every detail matters in lead generation forms, from button colors to text. These nuances can seriously affect the user's reaction - shortening the time they need to decide on a purchase, or driving them off the website altogether. An A/B test will help you improve subscription forms and increase click-through rate (CTR).

How to conduct A/B testing?

1 - Select the variable to be tested

When you test two subject lines, the open rate will indicate which subject appeals to your readers the most. When testing two different product photos in your email layout, you should look at both the click-through rate and the conversion rate.

Two emails can return different results depending on the metric you track. In one test of ours, the plain-text version had a higher open rate, but the designed template was more successful in clicks. Why? Because the designed version included a GIF of the video, which encouraged more people to click.

2 - Select the proper sample size

We recommend using the 80/20 rule (also known as the Pareto principle) when you have an extensive email list (over 1,000 subscribers).

Focus on the 20% of your effort that will provide 80% of your results. For A/B tests, this means sending variant A to 10% of your list and variant B to another 10%. The best-performing variant is then delivered to the remaining 80% of subscribers.
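As a rough illustration of that split, here is a minimal Python sketch (the function name and the random-shuffle approach are our own for demonstration; any email platform will implement this for you):

    import random

    def split_for_ab_test(subscribers, test_fraction=0.10):
        # Randomize first to avoid ordering bias (e.g., oldest sign-ups first).
        pool = list(subscribers)
        random.shuffle(pool)
        n = int(len(pool) * test_fraction)
        variant_a = pool[:n]           # 10% receive version A
        variant_b = pool[n:2 * n]      # another 10% receive version B
        remainder = pool[2 * n:]       # the other 80% get the winner later
        return variant_a, variant_b, remainder

    # With a 1,000-subscriber list this yields groups of 100 / 100 / 800.
    a, b, rest = split_for_ab_test(range(1000))
    print(len(a), len(b), len(rest))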

We recommend this approach for larger lists because you want statistically significant, precise findings. Each variant's 10% sample must include enough subscribers to demonstrate which version had the greater impact.

When dealing with a smaller subscriber list, the percentage of subscribers you need to include in the A/B test rises. If you have fewer than one thousand subscribers, test on 80-95% of the list and distribute the winning version to the remaining small proportion.

If 12 individuals click a button in email A and 16 do so in email B, it is difficult to determine whether B's button actually performs better. Make sure your sample size is large enough for statistical significance.

When sizing your test, the essential figures are:

  • Sample size: the number a sample size calculator solves for.
  • BCR (baseline conversion rate): your current conversion rate.
  • MDE (minimum detectable effect): the smallest change your test can reliably detect.

The MDE depends on whether you wish to detect small or large changes in your current conversion rate. Detecting large changes requires less data (a smaller sample size), whereas detecting small changes requires more.

To detect minor changes, set the MDE to a lower value (such as 1%).

To detect larger changes, increase the MDE percentage - but be cautious, and do not set it too high. A larger MDE means that smaller improvements from your "A to B" change will go undetected.
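Sample size calculators typically implement a standard two-proportion power calculation. Here is a minimal Python sketch of that formula; the 5% significance level and 80% power are common defaults we assume, not values from this article:

    from math import sqrt
    from statistics import NormalDist

    def sample_size_per_variant(bcr, mde, alpha=0.05, power=0.80):
        # bcr: baseline conversion rate (e.g., 0.05 for 5%)
        # mde: minimum detectable effect, as an absolute change (e.g., 0.01 for +1 point)
        p1, p2 = bcr, bcr + mde
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
        z_beta = NormalDist().inv_cdf(power)
        p_bar = (p1 + p2) / 2
        numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                     + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
        return int(numerator / (p2 - p1) ** 2) + 1

    # A small MDE demands far more data than a large one:
    print(sample_size_per_variant(bcr=0.05, mde=0.01))  # roughly 8,000 per variant
    print(sample_size_per_variant(bcr=0.05, mde=0.05))  # a few hundred per variant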

3 - The timing

When do you typically read your email? Your answer will likely be, "it depends."

You may be online, see the incoming email, and click within five minutes. Alternatively, you may view the newsletter two hours after it lands in your mailbox. Or perhaps the subject line does not entice you enough to open the email at all.

These are all real-world scenarios. Therefore, when running an A/B test, you should allow sufficient time.

While you can send the winner as early as two hours after sending when testing variables such as subject lines and opens, you may want to wait longer if you're measuring click-throughs. You can shorten the waiting period when testing your newsletter on active subscribers.

Research indicates that the test's accuracy will be approximately 80% after two hours. The longer you wait beyond that, the more precise your results will be. To achieve 99% accuracy, it is best to wait one entire day.

Be mindful that longer wait times are not always preferable. Some newsletters are time-sensitive and must be sent immediately. In other cases, waiting too long means the winning email lands in inboxes over the weekend. A weekday versus a Saturday or Sunday can significantly affect your email statistics (if you need help determining when to send your email, check out this post).

When determining the optimal send time, the essential guideline is that every business is unique. Thus, it is crucial to monitor your data and keep testing.

4 - The delivery time

Remember that the winning email is delivered as soon as the testing period concludes. As this group is likely to contain the largest number of subscribers, it is prudent to schedule your email automation with them in mind.

Suppose you test two subject lines on 20% of your subscribers (10% in each group). You want the winning newsletter to land in recipients' inboxes at 10 a.m., and you want to measure the open rate for two hours.

You must begin your test at 8:00 a.m. so that the A/B test can run for two hours before the winning version goes out at 10:00 a.m.
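The arithmetic is simple, but worth wiring into your scheduling. A tiny Python sketch of the example above (the date is hypothetical):

    from datetime import datetime, timedelta

    winner_send_at = datetime(2024, 5, 13, 10, 0)  # hypothetical: winner goes out at 10:00 a.m.
    test_window = timedelta(hours=2)               # time allotted to measure open rates

    test_start_at = winner_send_at - test_window
    print(test_start_at.strftime("%I:%M %p"))      # 08:00 AM - when the A/B test must begin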

5 - Test only one variable at a time

Imagine you send two emails simultaneously. The text and the sender's name are identical; the only difference is the subject line. After a few hours, you see that version A's open rate is significantly higher.

An accurate conclusion can be drawn only when you test one variable at a time and observe a distinct difference in the metric being analyzed. If the sender's name had also been altered, it would be impossible to conclude that the subject line alone made the difference.

6 - Determine the winning option

To determine the winner of the A/B test, measure the results of both emails against the metric you originally wanted to improve. There are three basic outcomes:

  • Email A performed better than email B.
  • Email B performed better than email A.
  • Emails A and B performed the same: keep either option at your discretion.

For the results of the A/B test to be trustworthy, the testing must be statistically significant and conducted on a statistically significant sample of users.

Statistical significance of a test

The statistical significance of a test is a variable to keep in mind when processing the results. It expresses the reliability of the study and is calculated mathematically. A confidence level of 93-95% is generally considered the threshold: if the A/B test reaches reliability higher than 93%, the test can be considered successful and its results can be trusted. You can then roll out the change from the winning email.

To calculate statistical significance, you can use online tools - they will do all the work for you and present the results in a simplified form.
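If you prefer to compute it yourself, one common method behind those calculators is a two-proportion z-test. Here is a minimal Python sketch, applied to the 12-versus-16-clicks example from earlier (the group size of 500 recipients each is an assumption for illustration):

    from math import sqrt
    from statistics import NormalDist

    def confidence_level(conv_a, n_a, conv_b, n_b):
        # Two-proportion z-test: how confident can we be that A and B truly differ?
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = abs(p_a - p_b) / se
        p_value = 2 * (1 - NormalDist().cdf(z))  # two-sided
        return 1 - p_value

    # 12 clicks vs. 16 clicks on 500 recipients each gives about 56% confidence,
    # far below the 93-95% threshold, so no winner can be declared.
    print(f"{confidence_level(12, 500, 16, 500):.0%}")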

If you want to avoid comparing A/B results manually by conversions, clicks, visits, and other parameters, it's easier to use the basic features of Google Analytics. Another option is to combine revenue data from your CRM and expense data from your ad accounts in one report using end-to-end analytics.

All in all

A statistically significant sample is the group of visitors on whom we test changes. Everyone who receives your email is part of your target audience, but the sample is only a small fraction of that audience - for example, the first thousand visitors.
In an ideal world, any sample would be 100% representative of our traffic. Say we test a hypothesis on a hundred people and see a conversion rate of 10%; it would remain the same when we expand the sample to a thousand people.
In real life, it can be quite different. For the first hundred visitors, the conversion rate may be 10%, but at a thousand it may fall to 3% and stay there for the rest of the audience. The point is that the conversion level is influenced not only by the tested elements but also by many external factors - competitors' advertising campaigns (mentioned above), demographic factors such as gender, and even the time of day.
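A quick simulation shows why small samples mislead. Assuming a true conversion rate of 3% (a made-up figure for demonstration), rates measured on 100 visitors swing far more wildly than rates measured on 10,000:

    import random

    random.seed(42)
    TRUE_RATE = 0.03  # hypothetical true conversion rate

    def observed_rate(n_visitors):
        # Simulate n visitors; each converts with probability TRUE_RATE.
        conversions = sum(random.random() < TRUE_RATE for _ in range(n_visitors))
        return conversions / n_visitors

    for n in (100, 1000, 10000):
        rates = [f"{observed_rate(n):.1%}" for _ in range(5)]
        print(n, rates)
    # At n=100 the measured rate can easily land anywhere from 0% to 7%;
    # at n=10,000 it clusters tightly around the true 3%.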
