A/B Testing Best Practices

With an abundance of amazing technology at our fingertips, it's sometimes hard to remember that trial and error still has tremendous value when it comes to improving marketing performance.

Thoughtful, deliberate experimentation is critical to understanding the root causes of performance, whether strong or weak.

You've surely heard that marketing is part art and part science. Nowhere is this more evident than in A/B (or split) testing your email approaches.

Perhaps you've sent hundreds of emails to your audience with mixed results. What has worked best, and why? What has flopped, and what caused the failure? Applying A/B testing best practices is one important way to identify what is driving success, mediocrity or abject failure.

Best Practice #1: Establish Benchmarks

For A/B testing to be effective, you need tracking systems in place and historical data to serve as benchmarks against which to compare and contrast the split testing results. If your metrics are incomplete or unreliable, go back to the start: get your tracking in order, collect data for several months, create benchmarks, and then move on to split testing your strategic approaches.
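To make this concrete, here is a minimal Python sketch of rolling historical campaign results up into benchmark open and click rates. The campaigns.csv file and its column names (sends, opens, clicks) are hypothetical placeholders for whatever your email platform actually exports.

```python
# benchmark.py - a minimal sketch of building email benchmarks
# Assumes a hypothetical export "campaigns.csv" with columns:
# campaign, sends, opens, clicks
import csv

def load_benchmarks(path="campaigns.csv"):
    total_sends = total_opens = total_clicks = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            total_sends += int(row["sends"])
            total_opens += int(row["opens"])
            total_clicks += int(row["clicks"])
    return {
        "open_rate": total_opens / total_sends,
        "click_rate": total_clicks / total_sends,
    }

if __name__ == "__main__":
    benchmarks = load_benchmarks()
    print(f"Benchmark open rate:  {benchmarks['open_rate']:.1%}")
    print(f"Benchmark click rate: {benchmarks['click_rate']:.1%}")
```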

Best Practice #2: One at a Time

It's a misconception that A/B testing only applies to subject lines. A/B testing can be applied to almost anything: CTAs, preview text, headlines, subheads and forms, just to name a few. What is absolutely crucial, however, is that A/B testing is applied to only one item at a time. In scientific trials there is always a control group: participants who receive a placebo or no treatment at all. A/B testing is similar in that you test only one variable at a time so you can isolate the source of any change in performance.

Apply A/B testing to your subject lines by creating two unique sets of copy. See how each performs. Keep testing and identify any patterns that appear, then craft your new subject lines based on the successful tests. Once you have a handle on subject line patterns, test your CTAs, and so on.
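The fairness of the comparison depends on the split being random. As a rough illustration (not tied to any particular email platform), here's a minimal Python sketch of randomly assigning a contact list to two subject line variants; the addresses and subject lines are made up.

```python
# split_test.py - a minimal sketch of a random 50/50 subject line split
import random

contacts = ["ann@example.com", "bob@example.com",
            "carla@example.com", "dev@example.com"]  # placeholder list

subject_a = "Your March insights are here"         # variant A (hypothetical)
subject_b = "3 numbers you should see this March"  # variant B (hypothetical)

random.seed(42)           # fixed seed so the split is reproducible
random.shuffle(contacts)  # randomize order to avoid hidden ordering bias

half = len(contacts) // 2
group_a, group_b = contacts[:half], contacts[half:]

for email in group_a:
    print(f"Send to {email}: {subject_a!r}")
for email in group_b:
    print(f"Send to {email}: {subject_b!r}")
```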

Best Practice #3: Sample Size Matters

For your A/B test to be meaningful, you need a pool of participants large enough to produce statistically significant results. If your email list is 200 contacts, it may not be time to institute A/B testing, given that each variant would reach only 100 contacts.

If you have a larger list, say 1,000 contacts, your approach will be different. It would be advisable to perform an "A" test on your entire list, then a "B" test on the entire list, as opposed to breaking the A/B test into sets of 500 contacts each.

Note: The downside to this approach is the danger of timing impacting the results. If possible, send the split test emails out in relatively close succession, so that seasonality and external influences don't significantly skew results.

Moreover, if your list is very large (multiple thousands), then dividing the list evenly in two could yield statistically significant findings.

The key takeaway here is to be thoughtful and strategic with your A/B testing approach. There is no hard and fast contact quantity formula (every business and industry is different), but, for the most part, the more contacts you test, the more reliable the emerging trend will be.
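If you want to put a number on "reliable," one standard statistical tool (not specific to email marketing) is a two-proportion z-test on the open rates of the two variants. The sketch below uses only Python's standard library; the send and open counts are hypothetical.

```python
# significance.py - two-proportion z-test for an A/B email result
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(opens_a, sends_a, opens_b, sends_b):
    """Return (z, two-sided p-value) for the difference in open rates."""
    p_a = opens_a / sends_a
    p_b = opens_b / sends_b
    # Pooled rate under the null hypothesis that A and B perform the same
    p_pool = (opens_a + opens_b) / (sends_a + sends_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

if __name__ == "__main__":
    # Hypothetical results: variant A opened 120/500, variant B 150/500
    z, p = two_proportion_z_test(120, 500, 150, 500)
    print(f"z = {z:.2f}, p = {p:.3f}")
    print("Significant at the 5% level" if p < 0.05
          else "Not significant; keep testing")
```

With these made-up numbers the difference is significant (p ≈ 0.03); the same 24% vs. 30% open rates on a 200-contact list split in half would not be (p ≈ 0.34), which is exactly why sample size matters.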

Best Practice #4: Keep It Up

Continually improving marketing performance requires consistent, thoughtful testing. Doing A/B tests a few times at random will have much less impact than the development and implementation of a systematic, consistent testing regimen. Testing needs to become part and parcel of your marketing program and not something you do only when results fall flat. 

A/B testing best practices can both prevent repeated failures and take what's worked to new levels of success.

Apply these four A/B testing best practices and you'll be sure to see improvement and stronger ROI.

If you'd like more tips like these, join the conversation. 
