By Leedr

How to Split Test a Campaign

Split testing, often referred to as A/B testing, is a cornerstone of effective marketing campaigns. By comparing two or more versions of a campaign to see which one performs better, marketers can make data-driven decisions that can significantly boost their campaign's success.

"Effective split testing can be the difference between a mediocre campaign and a blockbuster success."

Starting with a Hypothesis

At the heart of every split test is a hypothesis. In this context, a hypothesis is an educated guess or prediction about how one version of your campaign will perform compared to another. It's essential to have a clear objective for your test, such as "Version A with a blue button will get more clicks than Version B with a red button."

Your hypothesis can relate to any element you want to test. For example, your hypothesis might be that women are more likely to purchase your product, so you split test male and female audiences.

The Importance of Documenting the Tests

Consistency and repeatability are the backbones of split testing. Without proper documentation, you risk making the same mistakes or overlooking potential goldmines of data. Whether you're using simple spreadsheets or sophisticated software solutions, keeping a record is crucial.

"Documenting your tests isn't just about keeping track; it's about building a foundation for future campaigns."

At the bare minimum, start with a spreadsheet. Enter the results your test or benchmark ad achieved so you have something to compare your test results against.
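If you'd rather generate that spreadsheet programmatically, here is a minimal sketch using Python's standard library. The column names and the numbers are illustrative suggestions, not a prescribed format:

```python
# A minimal split-test log written to CSV, openable in any spreadsheet app.
# Column names and figures are hypothetical examples.
import csv

header = ["test", "variable", "version", "impressions", "clicks", "conversions"]
rows = [
    ["blue-vs-red-button", "CTA colour", "A (blue)", 2000, 120, 14],
    ["blue-vs-red-button", "CTA colour", "B (red)", 2000, 90, 9],
]

with open("split_test_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(header)   # one header row
    writer.writerows(rows)    # one row per version tested
```

One row per version keeps each test comparable at a glance, and gives you the benchmark numbers to beat in your next round of testing.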

What You Can Split Test

Ok, but what are you going to test? There's no end to what you can test, but below are some of the most common things to begin testing.

- Audiences: Different ads may resonate differently with various audience segments. Tailoring your campaign to specific demographics can yield surprising results. You can split out audiences by location (city or regional), gender, age or interests. For example, you might find that people who are parents over the age of 40 are more likely to purchase from you.

- Creative: The visuals of your campaign play a massive role in its effectiveness. A simple change in colour or design can make all the difference. You might want to test using an image against using a video, or drill down into what makes up your ad image. For example, you might test images using smiling people compared to graphics.

- Placements: Where your ad appears can be just as important as its content. Testing different placements can help you find the sweet spot for maximum visibility and engagement. For example, you might test Facebook vs Instagram. Or you might want to know where on Facebook is more effective: the feed, sidebar, stories, etc.

- Landing Pages: First impressions matter, and a well-designed landing page can significantly boost conversions. Testing can make your landing page more effective. You can test the placement of your form or product on the page, a variation of the layout, or something small like the colour of the CTA button.

- Messaging: The way you phrase your call to action or describe your product can drastically change how it's received. You can test versions of ad text that are shorter, that include more details, or that include emoji. As you refine it, you can test things like the tone and type of language you use.

Power of Three: Using Dynamic Ads

Even though you could test 100 versions of an image, you shouldn't. The established wisdom is to limit what you're testing to three versions. This is called the "Power of Three" in split testing and it refers to testing three distinct types or versions of a campaign element. For instance:

- Images: One might be a plain image with no overlays, the second a designed version with graphic elements, and the third might feature text alone.

- Headlines: Experiment with different lengths, tones, or even fonts to see which grabs attention best.

- Calls to Action (CTAs): A direct "Buy Now" might work for some, while others might respond better to a softer "Learn More."

"Using the Power of Three in split testing allows for a broader understanding of what truly resonates with your audience."

What are Statistically Significant Results?

Statistical significance ensures that the results of your split test aren't just due to random chance.

A common rule in split testing is the rule of 10: wait for at least 10 conversions or 10 days before analysing results. Various tools can help determine if your results are statistically significant, ensuring you're making decisions based on reliable data.
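If you want to run the check yourself rather than rely on a tool, a common approach is a two-proportion z-test on the conversion rates of the two versions. This is a sketch with made-up numbers, not the article's own method:

```python
# Two-proportion z-test: is Version A's conversion rate different from
# Version B's beyond what random chance would explain? Figures are hypothetical.
from math import sqrt, erf

def significance(conv_a, views_a, conv_b, views_b):
    """Return the two-sided p-value for the difference in conversion rates."""
    p_a = conv_a / views_a
    p_b = conv_b / views_b
    pooled = (conv_a + conv_b) / (views_a + views_b)  # rate if A and B were identical
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

p = significance(conv_a=120, views_a=2000, conv_b=90, views_b=2000)
print(f"p-value: {p:.3f}")
```

A p-value below 0.05 is the conventional threshold for calling a result statistically significant; anything above it means the difference could plausibly be noise, and the test should keep running.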

"Statistical significance in split testing is the bedrock of data-driven decisions, ensuring results aren't mere chance but backed by reliable data."

Pruning the Underperforming

Once you've gathered enough data, it's time to cut out what's not working. By identifying and analysing underperforming elements, you can optimise your campaigns for better results. This cycle of continuous testing and iteration leads to what many marketers refer to as evidence-based advertising.

"In the world of marketing, data is king. Use it to prune the underperformers and let your best campaigns shine."

Split testing is more than just a marketing tactic; it's a philosophy. By constantly testing, iterating, and optimising, you ensure that your campaigns are as effective as they can be. We encourage all marketers, whether seasoned pros or just starting out, to embrace split testing in their strategies. You can download the free guide to industry KPIs to help you.

Ready to take your campaigns to the next level? Dive into the world of split testing and watch as your results soar!

