Have you ever wondered how some PPC marketers write ad copy that consistently performs better than yours?
For a long time, I did – until I decided to learn how to beat my competitors.
After reading countless blog posts and running all kinds of A/B tests, I’ve learned what works in the world of PPC ad copywriting (and what doesn’t). I’ve figured out how to get more conversions from my ad copy and how to do it at scale.
Now I want to help you do the same. Here are five ways to improve your PPC ad copy to drive more conversions.
1. Avoid choice fatigue
Searchers don’t read through search results pages thoroughly. They scan.
When your headlines look exactly the same as several competitors', the searcher has to stop and analyze your ad just to identify the differences. Most won't bother, which is why look-alike headlines don't earn clicks.
If you’re familiar with neuromarketing, you know this is a form of choice fatigue. Your potential customer has too many choices and can’t easily identify why they should choose your ad. As Roger Dooley puts it:
“Sales-killing choices are those that appear very similar and offer the consumer no shortcuts in making a decision.”
Here’s an example of choice fatigue:
Best practice says always use the search term in the headline. Well, that advice was from five years ago. It worked then, but today everybody is doing it.
Dynamic keyword insertion is great for quickly creating relevant copy at scale, but it's hardly a well-kept secret. It works well in many cases, but when too many advertisers use it on the same search query, it causes choice fatigue.
If you're using DKI, run some manual searches for the queries driving those ad impressions and see what your competitors are up to. Start with the high-impression DKI ads that have a low CTR and make sure you aren't creating choice fatigue.
If you see lots of ads with the same or very similar headlines, write new custom ads to test.
2. Don’t make empty promises
The best ad copy headlines have offers or promises that can be fulfilled.
PPC ads that offer promises that aren’t fulfilled on the landing page may earn more clicks but they won’t convert very well. Brian Clark of Copyblogger explains that making vague promises can turn people off:
“Advertisements that proclaim, ‘satisfaction guaranteed’ are fairly common – and that’s the problem. The statement can come across as just another hollow promise, because it often is.”
Here’s an example of a false promise:
Notice how they advertise “Free Invoicing Software” in the headline and description?
Now have a look below at the landing page the ad leads to. There, we learn that the software isn’t exactly free – it comes with a 30-day free trial.
While the two offers differ only subtly in wording, "free" sets expectations that a 30-day trial doesn't meet.
A free trial is an awesome thing, but when you overstate your offer and then show a visitor a different offer (perceived as worse), it activates the portion of the brain associated with pain.
The headline “Free Invoicing Software” definitely gets my attention, but it sets the wrong expectation.
We have to respect the consumer in order to win their trust instead of using smoke and mirror tactics just to earn their click. Here’s how Oli Gardner put it in his talk at Hero Conference:
“Marketers aren’t respecting the click.” – Oli Gardner
When you accurately represent your offer in your ad copy, qualified leads click through to your landing page and there are no nasty surprises. Everyone wins.
3. Avoid common ad copy testing mistakes
If you’re a rational, “left-brained” PPC professional, then writing ad copy likely isn’t your forte. If you’re more of a creative type, then writing great ads is probably easier for you, but unfortunately what you think is great ad copy doesn’t matter.
What does matter is the data. The biggest problem when it comes to ad copy testing is that there is so much (sometimes conflicting) advice on how to do it right. There are tons of scripts and tools to automate the ad copy testing process, but if you don’t understand how they work, you’re probably not getting great results.
Here are the biggest ad copy testing mistakes (trust me, there are more) I’ve made in my eight years of ad copy testing that I hope you can avoid:
Not aggregating stats
Whether you run large or small campaigns, you need to aggregate stats – but for different reasons.
If you have small accounts, you should aggregate so you can speed up the process of reaching statistical significance. If you have large accounts, you need to aggregate stats so you can derive actionable insight from low volume ad groups.
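When aggregating, the key is to sum the raw counts (impressions, clicks, conversions) per ad variant rather than averaging the rates, because averaging CTRs weights every ad group equally regardless of volume. Here's a minimal sketch in Python; the data and field names are hypothetical, standing in for whatever report export you're working from:

```python
from collections import defaultdict

# Hypothetical export: one row per (ad group, ad variant) with raw counts.
rows = [
    {"ad_group": "shoes", "variant": "A", "impressions": 1200, "clicks": 36, "conversions": 3},
    {"ad_group": "shoes", "variant": "B", "impressions": 1180, "clicks": 47, "conversions": 2},
    {"ad_group": "boots", "variant": "A", "impressions": 800,  "clicks": 20, "conversions": 1},
    {"ad_group": "boots", "variant": "B", "impressions": 790,  "clicks": 31, "conversions": 4},
]

# Sum the raw counts per variant across ad groups, then compute rates
# from the totals -- never average the per-group rates themselves.
totals = defaultdict(lambda: {"impressions": 0, "clicks": 0, "conversions": 0})
for row in rows:
    for metric in ("impressions", "clicks", "conversions"):
        totals[row["variant"]][metric] += row[metric]

for variant, t in sorted(totals.items()):
    ctr = t["clicks"] / t["impressions"]
    print(f"Variant {variant}: {t['impressions']} impressions, CTR {ctr:.2%}")
```

The aggregated counts reach a usable sample size far sooner than any single ad group would on its own.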
Not segmenting results
- Example 1: Local vs. national segmentation
Don’t lump stats together for campaigns that are targeted differently. For example, locally targeted campaigns may perform differently than nationally targeted campaigns. Accordingly, the ad copy that works for local campaigns may not be best for national campaigns.
In the example below, you can see the ad copy test stats segmented by campaign location target (local vs. national). See how our test ads are doing great in our local campaigns but not so hot in our national campaigns?
If you are going to aggregate stats to reach statistical significance more quickly, then you have to understand the performance differences in your targeting. If you don’t, you’ll choose winning ads that are actually losers in a different segment.
- Example 2: Mobile vs. desktop segmentation
Similarly, you shouldn’t aggregate stats for mobile ads together with desktop ads. Always handle mobile ads and desktop ads separately, even if you only use a different display URL (such as m.domain.com/keyword). You can see in the example below that our test ads are kicking butt on computers but are losing on mobile devices. If we weren’t segmenting mobile and desktop, our results could be misleading.
- Example 3: Top vs. Other segmentation
In your AdWords account, the “Top vs. Other” segment shows you the difference in your data for ads that appear at the top of the search results page versus those that appear along the right-hand column.
Well, you may have guessed it, but you shouldn’t lump stats for Top vs. Other together.
Since Google uses the extended headline for “Top” ads, you will see a large variance in your ad performance when you segment this way. Your “Top” ads are going to be your highest volume ads, so if you focus on optimizing the “Top” ads, you’re going to make the biggest impact with your ad copy testing process.
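The danger all three examples describe can be sketched numerically. In the toy data below (invented numbers, using the mobile vs. desktop split from Example 2), the test ad beats the control on aggregated CTR, yet is actually losing on mobile:

```python
# Invented (impressions, clicks) counts per device segment
# for a control ad and a test ad.
data = {
    "control": {"desktop": (1000, 50), "mobile": (2000, 40)},
    "test":    {"desktop": (1000, 70), "mobile": (2000, 30)},
}

def ctr(impressions, clicks):
    return clicks / impressions

for ad, segments in data.items():
    total_imps = sum(i for i, _ in segments.values())
    total_clicks = sum(c for _, c in segments.values())
    print(f"{ad}: aggregate CTR {ctr(total_imps, total_clicks):.2%}")
    for segment, (imps, clicks) in segments.items():
        print(f"  {segment}: CTR {ctr(imps, clicks):.2%}")

# Aggregated, the test ad looks like the winner (3.33% vs. 3.00%),
# but segmenting reveals it is losing on mobile (1.50% vs. 2.00%).
```

Picking the "winner" from the aggregate alone would lock in an ad that underperforms for two-thirds of the traffic.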
Using wrong sample sizes
There are so many articles about statistical significance in ad copy testing, yet plenty of people still stop running tests before they’ve had a chance to show meaningful results. Very often, this is because people get the sample size of their test wrong.
There are many tools that exist to help determine the correct sample size for your ad copy tests: for starters, this tool predicts how many visitors you’ll need in order to have a conclusive A/B test. If you have larger accounts or manage ad copy testing for multiple clients, PPC Hero Ad Automator and Adalysis from Brad Geddes are great options.
The bottom line when it comes to statistical significance is that you need to understand how to define your hypothesis and your minimum sample size correctly.
Here’s a great post by Peep Laja explaining the fallacy of using a statistical significance level without a defined minimum sample size.
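As a sketch of what those tools are doing under the hood, here is the standard two-proportion power calculation in Python (stdlib only). The rates and the lift are illustrative; this is not the exact formula any particular tool uses:

```python
from math import sqrt, ceil
from statistics import NormalDist

def min_sample_size(p1, p2, alpha=0.05, power=0.80):
    """Impressions needed per ad variant to detect a lift from rate p1
    to rate p2, via the standard two-proportion power formula."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2  # pooled rate under the null hypothesis
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Detecting a CTR lift from 2.0% to 2.5% takes roughly 14,000
# impressions per variant -- stop the test earlier and you're guessing.
print(min_sample_size(0.020, 0.025))
```

Notice how quickly the requirement grows as the lift you want to detect shrinks; that's exactly why low-volume ad groups need aggregation before any test can conclude.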
Watching the wrong metric
Many people watch their click-through rate to determine which variation is best, but that logic is faulty. Every impression has the chance to become a conversion, so I like to use impressions as the basis for sample size. My rationale: you can’t directly buy more clicks or conversions, but you can buy more impression share – and buying more impressions is the goal for any ad that is performing well, whatever metric you judge it by (CTR, CPI or PPI).
Which brings me to my next point…
4. Stop judging ads on CTR
Brad Geddes was one of the first PPC professionals I know to advise against using CTR as the primary metric for choosing winning ad copy.
He suggests using metrics like Conversions per Impression (CPI) or, better yet, Profit per Impression (PPI). You can read more about the logic behind Brad Geddes’ argument here, but to demonstrate, take a look at the difference in CTR and CPI here:
When you choose winning ads based on CTR, you could be earning more clicks but failing to convert them. The example above shows just that.
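To make the divergence concrete, here is a small Python sketch with invented numbers. Ad A wins on CTR, but ad B wins on both CPI and PPI:

```python
# Invented stats for two competing ads.
ads = {
    "ad_A": {"impressions": 10_000, "clicks": 500, "conversions": 10, "profit": 500.0},
    "ad_B": {"impressions": 10_000, "clicks": 300, "conversions": 18, "profit": 900.0},
}

for name, s in ads.items():
    ctr = s["clicks"] / s["impressions"]       # click-through rate
    cpi = s["conversions"] / s["impressions"]  # conversions per impression
    ppi = s["profit"] / s["impressions"]       # profit per impression
    print(f"{name}: CTR {ctr:.2%}, CPI {cpi:.2%}, PPI ${ppi:.3f}")

# ad_A wins on CTR (5.0% vs. 3.0%), but ad_B wins on CPI and PPI --
# judging on CTR alone would keep the less profitable ad running.
```

The denominator is impressions in every case, which is what makes the three metrics directly comparable across ads.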
CPI and PPI aren’t standard metrics built into Google Analytics, but they aren’t that hard to create and they are definitely worthwhile. Check out this great visual guide to custom metrics in Google Analytics, by Justin Cutroni.
Change the metrics you judge your ad copy on and you will make more informed decisions and drive more profit.
5. Don’t forget to pre-qualify clicks
If you’re advertising in a price-competitive industry or you have a premium, higher-priced product, you should always pre-qualify clicks by disclosing prices up front.
Much like making empty promises that aren’t fulfilled on your landing pages, unexpectedly high product prices can be disappointing for leads. Check out this quote from a Pain of Paying study conducted by Carnegie Mellon, Stanford and MIT:
“The sections of the brain associated with pain processing are activated when prices are too high.” – George Loewenstein, Carnegie Mellon
I’ve heard tons of smaller advertisers complain about the quality of the leads they get from AdWords. The last thing you want to do is pay for clicks from people who don’t want to pay your price.
If you are worried about hurting CTR and Quality Score by using this strategy, then consider running an ad copy test only on your broadest, worst-performing ad groups.
Here’s a great example of pre-qualifying:
Making it all work
If you want to beat your competitors at the PPC ad copy game, then you need to look beyond the “best practices” that everyone is using.
You just might find that when you do things differently, you can get amazing results.
Now it’s your turn. What are some of your ad copy testing failures and successes? What do you do differently to get consistently better results? I’d love to hear your comments below!