Pay-Per-Click (PPC) advertising is a powerful tool for driving traffic to your website and boosting conversions. However, success in the world of PPC ads isn’t guaranteed. To maximise the return on your investment, you need to continually optimise your ad campaigns. A/B testing is a vital component of this optimisation process, as it shows you what works and what doesn’t. We are, after all, data-driven marketers: we don’t know which idea is going to work until we test it.
In this blog post, we’ll explore the dos and don’ts of A/B testing for your PPC campaigns, helping you make the most of your advertising budget.
The Dos of A/B Split Testing
In this section, we will cover everything you should do for successful A/B testing that moves the needle. These are tried and tested methods I’ve used throughout my career, and they help increase both conversion rates and click-through rates so that you can drive more traffic and sales.
Set Clear Goals:
Before you start A/B testing, define clear and measurable goals for your PPC campaign. Are you looking to increase click-through rate (CTR) or conversions, or to reduce your cost per acquisition (CPA)?
Understanding your objectives will guide your testing efforts. In general, for ad testing, you should be working to increase CTR, while with landing page tests, you will want to increase conversion rates (Conv %), for example.
The two most common forms of testing are either ad testing or landing page testing.
Both of these test types are designed to increase CTR and conversion rate respectively. Understanding what you want to achieve will show you what variables are important.
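To make the two headline metrics concrete, here is a quick sketch of how CTR and conversion rate are calculated. All of the campaign figures are hypothetical and purely for illustration.

```python
# Hypothetical campaign figures used purely for illustration.
clicks, impressions, conversions = 450, 12000, 27

ctr = clicks / impressions        # click-through rate: clicks per impression
conv_rate = conversions / clicks  # conversion rate (Conv %): conversions per click

print(f"CTR: {ctr:.2%}")          # CTR: 3.75%
print(f"Conv %: {conv_rate:.2%}") # Conv %: 6.00%
```

Ad tests aim to move the first number; landing page tests aim to move the second.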
Test a Single Variable:
Yes, that’s right: only test one variable at a time. A/B testing works best when you isolate a single variable.

This could be the headline, the call-to-action, or a single element of the landing page. Testing multiple elements simultaneously makes it difficult to determine what’s driving changes in performance.
It’s vital that you know what is having an impact on your ad copy and landing page, and that you keep your testing strategy as simple as possible. This will also allow you to move logically and strategically on to the next test, as you will know which variable worked or didn’t work. For example, simply test the form on your landing page, as seen below.
Use a Large Enough Sample Size:
To ensure the results of your A/B test are statistically significant, make sure you have a large enough sample size.
A small sample can lead to unreliable results. Statistical significance calculators can help you determine when you have enough data.
An idea would be to test with your top-performing campaigns, in terms of clicks and/or impressions. This will allow for a large enough sample size, which also makes sure that any success can “move the needle”. This would naturally be dependent on your business needs.
If we use the below as an example, you can see that the trial has a higher click-through rate (CTR). However, the difference is not statistically significant.

For the test to reach statistical significance, we would need around 30,000 impressions. While the test looks positive so far, it is more than likely that the difference would not hold up in a real-world environment.
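To make the significance idea concrete, here is a rough sketch of the two-proportion z-test that most significance calculators run under the hood. The click and impression figures are hypothetical: the trial arm has a higher CTR, but the sample is still too small for the difference to be significant.

```python
import math

def ctr_significance(clicks_a, imps_a, clicks_b, imps_b):
    """Two-proportion z-test on CTRs; returns (z, two-sided p-value)."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    # Pooled CTR under the null hypothesis that both arms perform the same.
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: the trial (arm B) has a higher CTR, but with only
# ~5,000 impressions per arm the difference is not yet significant.
z, p = ctr_significance(150, 5000, 165, 5000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p > 0.05, so keep the test running
```

A p-value above 0.05 is the conventional signal that you haven’t collected enough data yet to call a winner.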
Implement Proper Tracking:
Properly setting up conversion tracking is essential. It enables you to measure the impact of your changes accurately. Use UTM parameters and Google Analytics to track user behaviour and conversions on your website.
You need to make sure that you have as much data as possible. Make sure that you are measuring what’s important (and only what’s important) to your business.
These could be sales or leads through the website, calls and conversions imported from Google Analytics or third parties.
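If you want to see how UTM tagging fits together, here is a minimal sketch that builds a tagged landing-page URL using Python’s standard library. The domain, campaign name, and parameter values are all hypothetical.

```python
from urllib.parse import urlencode

def utm_url(base_url, source, medium, campaign, content=None):
    """Append standard UTM parameters to a landing-page URL."""
    params = {"utm_source": source, "utm_medium": medium, "utm_campaign": campaign}
    if content:
        params["utm_content"] = content  # e.g. which ad variant (A or B)
    return f"{base_url}?{urlencode(params)}"

# Hypothetical example: tag the trial variant of an ad test.
url = utm_url("https://www.example.com/landing", "google", "cpc",
              "ab_test_q4", content="variant_b")
print(url)
# https://www.example.com/landing?utm_source=google&utm_medium=cpc&utm_campaign=ab_test_q4&utm_content=variant_b
```

Tagging each variant with a distinct `utm_content` value lets you separate their behaviour in Google Analytics.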
Run Tests for an Adequate Duration:
Don’t end your A/B test prematurely. Running tests for at least one month allows you to capture variations in user behaviour over time.

Seasonal fluctuations and other external factors can affect results, so be patient. This is especially true when testing bidding strategies, as you will have to go through a “learning period”, during which performance can dip.
Remember, sometimes you need to “go down to go up”.
As you can see from the below example, the trial ad test was winning until 17th Oct. At that point, performance switched and the base began driving a significantly higher CTR. This completely changed the outcome of the test. Had the test been ended early and the trial set live across the account, results would have been negative.
It’s vital that you don’t make assumptions about your test and that every decision is made in a data-driven manner.
Analyse and Document Results:
It is important that you create a test-and-learn log. Once your A/B test is complete, analyse the data carefully. Document what worked and what didn’t; these insights will inform future campaigns and improvements. I find it best to create a series of slides (or one-pagers), collected together and shared internally within your organisation. Please see the example below.

Following these steps will allow you to identify what to test, and how to interpret and track your data. This is the key to creating a good A/B testing strategy and will put you on the right track for success.
It’s just as important to be aware of “what not to do” as it is to be aware of “what to do”. In the next section, I will dive into just that.
The Don’ts of A/B Split Testing
Ignoring Mobile Users:
Mobile devices play a significant role in PPC traffic. Don’t neglect mobile optimisation when A/B testing. Ensure your ads and landing pages are mobile-friendly to capture this valuable audience. Performance can swing greatly depending on the device, as you can see from the example below.
For this account, CTR is greater on computers, but the conversion rate is better on mobile. This is something that must be considered when looking at a testing strategy. It’s possible to do separate tests on mobile versus desktop.
Not Having a Testing Calendar:
You have to plan your testing programme properly. At a minimum, you should be covering ad copy and landing pages.

But you can also include strategy tests, such as bidding changes or running exact and phrase match in a single campaign. You should create a testing repository of all your ideas.
You can then create a testing calendar from this repository, as seen below.
It’s possible to test in a similar pattern as below or mix and match depending on your needs. The important thing is to make sure that you know what you are going to test, for how long, and what the variables will be.
Remember, if the test does not yield a positive result, you can still use this as an insight into what doesn’t work and tweak it if necessary. For that reason, your testing calendar can be quite flexible and open to change depending on business needs.
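As a rough sketch of how a repository can feed a calendar, the snippet below turns a list of test ideas into a simple schedule with one four-week slot per test. Every test name, variable, and date here is invented purely for illustration.

```python
from datetime import date, timedelta

# Hypothetical testing repository: one entry per idea, recording the single
# variable being tested and the metric it is expected to move.
repository = [
    {"test": "Headline A vs B",      "variable": "headline",     "metric": "CTR"},
    {"test": "Short vs long form",   "variable": "form length",  "metric": "Conv %"},
    {"test": "Manual vs target CPA", "variable": "bid strategy", "metric": "CPA"},
]

# Turn the repository into a simple calendar: each test runs for four weeks.
start = date(2024, 1, 1)  # arbitrary start date
calendar = []
for i, entry in enumerate(repository):
    begins = start + timedelta(weeks=4 * i)
    calendar.append({"begins": begins, "ends": begins + timedelta(weeks=4), **entry})

for slot in calendar:
    print(f"{slot['begins']} -> {slot['ends']}: {slot['test']} ({slot['metric']})")
```

In practice the slot length would follow the one-month-minimum guideline above, stretched whenever a test hasn’t yet reached significance.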
Neglecting Ad Extensions:
Ad extensions can significantly impact the performance of your PPC ads. Don’t forget to test various ad extensions such as site links, callouts, and structured snippets.
While ad extensions have their individual uses (such as call extensions to drive calls), in the main they are important for increasing the click-through rate of the ad as a whole.

Failing to implement ad extensions on the trial side of the A/B test will skew your data.
Overlooking Negative Keywords:
A/B testing isn’t just about what you include; it’s also about what you exclude. Neglecting negative keywords can lead to wasted ad spend. Continuously review and update your negative keyword list.
This is especially true with Google’s recent changes to its match-type targeting. Today, “exact” isn’t really exact, phrase is the new broad match modifier, and broad match modifier itself has gone the “way of the dodo”! Therefore, you need to pay close attention to the search term report.
Google has also started to show less data within the search term report, so any indication that a test campaign is driving irrelevant traffic should be taken very seriously.
As you can see below, sometimes you need to add negative keywords on a daily basis before you start seeing relevant traffic.
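As a simple sketch of how you might triage a search term report, the snippet below flags terms that contain a negative keyword. All the search terms and negatives here are made up for illustration.

```python
# Hypothetical search-term report rows and a negative keyword list.
search_terms = ["buy red widgets", "free widgets", "widget jobs", "red widget price"]
negatives = {"free", "jobs"}

def is_relevant(term, negatives):
    """A term is irrelevant if any of its words matches a negative keyword."""
    return not any(neg in term.split() for neg in negatives)

relevant = [t for t in search_terms if is_relevant(t, negatives)]
print(relevant)  # ['buy red widgets', 'red widget price']
```

Anything the filter catches is wasted spend; anything it misses is a candidate for a new negative keyword the next day.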
Ignoring Competitor Analysis:
Don’t operate in a vacuum. Regularly analyse what your competitors are doing in the PPC space. Their strategies can provide valuable insights and inspiration for your own A/B testing efforts.
Sometimes the best ideas are stolen. Remember that you are all fighting for the same paid search traffic, so if they are winning, you are losing.

Regularly monitoring the auction insights report will also give you a view of your place within the market and of competitor movements in terms of spend, as seen below.

There are some great competitor tools out there, such as SEMrush, which will give you an idea of competitors’ ads and keyword strategies; these are also worth a look.
A/B Split Testing: Conclusion
A/B testing is the cornerstone of successful PPC campaigns. When done correctly, it empowers you to make data-driven decisions, optimise your ad spend, and achieve your marketing goals.
By following the dos and don’ts outlined in this blog post, you can elevate the results of your PPC ad campaigns and stay ahead of the competition.
Remember, PPC advertising is an ongoing process, and A/B testing is your secret weapon for continuous improvement.
If you would like to discuss PPC strategies in more detail and what we can do for your business, please contact me at firstname.lastname@example.org.
PS: Our test and learn “one-pager” plus testing calendar are available here.