
Split: how to do email A/B tests the right way

A/B tests are quick, cheap, and incredibly effective at improving the quality and metrics of your marketing emails. We’ve already spoken quite a bit about A/B tests in the past, but now it’s time to dive deeper.
See, there are some pitfalls you need to be aware of: done wrong, split tests can even hurt your emails’ performance, costing you metrics and revenue. In this article, we’ll take an in-depth look at the intricacies of split testing to help you avoid that. Sounds useful? Let’s begin!

Quick recap

First of all, we’ll quickly go over the main points of our previous pieces to make sure we’re on the same page. Here are the essential terms and guidelines to remember:
  • For an A/B test, you send the same email with one variable changed to equal, randomized groups of subscribers to learn which version of the variable performs better.
  • The base version of an email is called the control, and the modified one is called the challenger.
  • You can test any part of an email, including its inbox view, copy, visuals, CTA, etc.
  • The process begins with identifying the test goal, forming a hypothesis, and defining the variable.
  • Before the test, you need to set parameters like duration, sample size, and split percentage.
  • No matter the outcome, the results will provide you with valuable data: the change you were testing could increase, decrease, or not affect the target metric.
  • You can implement the better option if your confidence level is high enough and the test is statistically significant.
Now that that’s out of the way, let’s proceed to the new information!


Test prioritization

You’re by now well aware that you can split-test anything in your emails, and that even a slight change can make a significant impact. Certainly, the more you test, the better; but to make the most progress in the shortest time, you need to figure out which tests to run first.
While improving open rate is typically the first priority, there’s no set hierarchy here: what you need is a simple scoring system to decide what goes first. There are countless scoring systems in marketing, but we’d like to make things easier on you and recommend the best one for A/B test prioritization right off the bat.
Try adopting the ICE scoring system. To save you time and prioritize the right tests, it reduces the entire process to answering these three questions:
Impact — How big an impact do you expect this change to make?
Confidence — How confident are you that this change will have an impact?
Ease — How easy is it to test this change?
You don’t have to put together any sort of matrix to score your A/B testing ideas with the ICE model: you can do it as you go by simply answering these questions in your mind.
The simpler a split test is and the more effective you expect it to be, the sooner you should run it. The variables you don’t expect much from (or the more difficult ones) can wait: you need results, and you should prioritize the tests that bring them.
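As a rough sketch, ICE prioritization boils down to scoring and sorting. The test ideas and ratings below are hypothetical, and we use one common convention of multiplying the three 1–10 ratings (some teams average them instead):

```python
# Minimal ICE prioritization sketch (hypothetical test ideas and scores).
# ICE score here = impact * confidence * ease, each rated 1-10.

def ice_score(idea):
    return idea["impact"] * idea["confidence"] * idea["ease"]

ideas = [
    {"name": "Emoji in subject line", "impact": 6, "confidence": 5, "ease": 9},
    {"name": "Redesigned CTA button", "impact": 8, "confidence": 6, "ease": 4},
    {"name": "Plain-text vs HTML body", "impact": 7, "confidence": 3, "ease": 5},
]

# Run the highest-scoring tests first.
prioritized = sorted(ideas, key=ice_score, reverse=True)
for idea in prioritized:
    print(f'{ice_score(idea):>4}  {idea["name"]}')
```

You don’t need code for this, of course — the point is simply that three quick ratings give you an unambiguous order to work through.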


Metrics and benchmarks

Normally, there are average benchmarks for most things in marketing: they help even new marketers navigate the deep waters of numbers and proportions. But that's not the case with A/B testing: you can't Google whether the average open rate increases when you add a heart emoji to the subject line of a fashion newsletter. So you're on your own. Or are you?
Any experienced marketer knows that the only relevant numbers you should consider are your own. So first, take a look at your average email performance: that will give you a pretty solid idea of what results to expect and what metrics to focus on. And second, you can look up the average metrics for your niche to see how they compare with your own.

So, what are your next steps?

  1. When planning a split test, pick your target metric — the one you’ll be trying to influence. That’s the most important number on the board for you, and it defines the entire test’s success.
  2. Then find out what other numbers might be affected by the variable you're testing. If you're interested in tracking their changes, pay attention to them too and use them as secondary metrics.
  3. Finally, double-check everything: your primary metric needs to correlate with the main goal of the test; the secondary metrics should be relevant to the variable being tested so they don't become a distraction; and for all of them, you need relevant industry benchmarks and your own past email performance data.
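The steps above can be captured in a simple test plan before launch. This is a hedged sketch with hypothetical metric names and numbers, just to show the sanity checks in step 3:

```python
# Hypothetical test plan for a subject-line test.
test_plan = {
    "hypothesis": "An emoji in the subject line lifts opens",
    "variable": "subject line",
    "target_metric": "open_rate",
    "secondary_metrics": ["click_rate", "unsubscribe_rate"],
    "baseline": {"open_rate": 0.21},   # your own past performance
    "benchmark": {"open_rate": 0.19},  # niche average, if available
}

# Sanity checks before launching the test.
assert test_plan["target_metric"] in test_plan["baseline"], \
    "you need your own baseline for the target metric"
assert test_plan["target_metric"] not in test_plan["secondary_metrics"], \
    "the target metric can't double as a secondary metric"
```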

Testing tools

A/B testing for email is not hard to do without help: you can split your audience, prepare two versions of the same email, send them, and then manually compare the results, all by yourself. Still, we'd recommend using a testing tool for this task. Why, you may ask? Well, there are several reasons.
First of all, there's the human factor. It's easy to miss something when manually going through multiple statistics and numbers, and an important metric or correlation can go unnoticed. Testing tools prevent this by gathering all the necessary information in one place and showing you what matters.
Second, you can save yourself a lot of time by outsourcing the test to the tool of your choice: unlike the manual process we just described, you only need to enter the baseline data and parameters, then collect the results once the test is complete. The time and effort you save can be spent on other things, be it work or just lunch.
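If you do go the manual route, the audience split itself is the easy part. Here is a minimal sketch (with hypothetical addresses) that carves out a 20% test sample and divides it 50/50 between control and challenger:

```python
import random

def split_audience(subscribers, test_fraction=0.2, seed=42):
    """Randomly split off a test sample and divide it 50/50 into A and B groups."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = subscribers[:]
    rng.shuffle(shuffled)
    sample_size = int(len(shuffled) * test_fraction)
    half = sample_size // 2
    # control group, challenger group, and everyone who gets the winner later
    return shuffled[:half], shuffled[half:sample_size], shuffled[sample_size:]

group_a, group_b, rest = split_audience(
    [f"user{i}@example.com" for i in range(1000)]
)
```

The randomization matters: splitting alphabetically or by signup date would bias the groups and invalidate the comparison.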
There are two types of tools you can use: either large email marketing platforms that offer A/B testing capabilities (like Markeaze or HubSpot) or solutions designed specifically for split testing (like Optimizely or AB Tasty). Both options have their advantages and disadvantages.
Dedicated A/B testing solutions get the job done. While you shouldn't expect anything beyond split testing from them, the customization options and depth they offer are phenomenal compared to general marketing platforms. We strongly recommend this option if you regularly run complex, multi-stage A/B tests: other tools just aren't up to the task.
Marketing platforms offer you much more than split testing. However, you shouldn't expect in-depth testing from them: most platforms have a limited number of variables to choose from. If you don't need complicated tests and are happy to get a number of other important features, you should go for this option. We have a few guides to help you choose a platform: one, two.
But there is a hidden danger you should be on the lookout for...

Statistical significance

Some A/B testing tools are flashy and sell you the idea of fully automated A/B testing with low effort (even no effort!) and high profit. They do everything by themselves and send the winning email version to the rest of your contact list as soon as they get the results. While this sounds great, it's also incredibly dangerous. Why? Because they ignore statistical significance.
Statistical significance, or confidence, indicates how likely it is that your test results reflect a real difference rather than chance, noise, or bias. It's measured as a percentage: if you want to be sure that your results are correct, the confidence level must be set high. Now, what does this have to do with our previous topic?
Many ESPs, platforms, and even testing tools don't take confidence levels into account, putting you at risk. Let's run some numbers to explain what we mean.
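To illustrate why confidence matters, here is a sketch of the standard two-proportion z-test, a common way to compute the confidence of an A/B test on open rates. The group sizes and open counts below are hypothetical, not the author's numbers:

```python
from math import sqrt, erf

def significance(opens_a, n_a, opens_b, n_b):
    """Two-proportion z-test: confidence that open rates A and B truly differ."""
    p_a, p_b = opens_a / n_a, opens_b / n_b
    p_pool = (opens_a + opens_b) / (n_a + n_b)  # pooled open rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = abs(p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
    return 1 - p_value  # confidence level

# Hypothetical test, 1,000 recipients per group:
# control gets 200 opens (20%), challenger gets 240 opens (24%)
conf = significance(200, 1000, 240, 1000)
print(f"Confidence: {conf:.1%}")
```

With these numbers the confidence lands just under 97%, which clears the usual 95% bar; a smaller sample with the same rates would not. A tool that declares a winner without this kind of check may simply be reporting noise.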
