To continue our series on demystifying SEM, today I will address another question that came my way recently from a local business client who is trying to optimize his AdWords account.
Is it a good idea to have a bunch of ads with slightly different headlines and descriptions, or only a few ads per ad group?
The Short Answer
Pretty much everyone knows that testing different ad variations is good, but how you choose your ad language and exactly how you organize the test can have a huge impact on your performance, your efficiency, and whether you can use your data later to make informed decisions.
[Image: Marketing Meme: Super Cool Ski Instructor]
Too many ad variations will spread your data too thin and be confusing to manage. While there are infinite permutations you could test, starting with the three you think are the most likely "home runs" keeps your testing manageable.
For your three ads per ad group, I would start by testing significantly different concepts, identifying whether there is a clear winner, and then creating variations based on the winner that systematically test which elements are the likely cause of performance.
The Onion Answer (because there are always so many more layers...)
Scientific Testing
First of all, you should review the post about Correlation versus Causation to make sure that you are keeping scientific testing and data interpretation principles top of mind.
The great thing about ad testing environments, especially on Google, is that because you are testing variations against each other in real time, you can actually account for some common variables, such as seasonality and environmental factors: all of your test ads run at the same time, to the same audience spread, in roughly the same placements.
Of course, Christmas language is likely to do better around Christmas, clearance language may do better in January when people expect clearance sales, and so on. Even when you run your ads against each other in real time, you need to keep these performance-impacting variables in mind*.
In a lab environment, you might start an experiment by focusing on one variable against a control, but even in the more scientific environment of ad testing, you will need to balance science with driving performance and efficiency. It takes time to gather data, sometimes months depending on the traffic coming through your ads, so casting a net of three quite different variations (each one a 'hypothesis') is my preferred starting point.
From that point on, the winner can be the inspiration for more scientific variable testing: was it the headline, the call to action, or the tone that really worked? Start with variations you think will work, gather the data, and then deconstruct why the winner worked with similar variations that isolate specific variables. Over time you can then create more ads that work better using your learnings.
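Before you crown a winner, it's worth checking that the CTR difference is bigger than random noise. Here's a minimal sketch of a two-proportion z-test in Python (the click and impression numbers are made up for illustration):

```python
import math

def ctr_z_test(clicks_a, impressions_a, clicks_b, impressions_b):
    """Two-proportion z-test: is the CTR difference between two ads real?"""
    p_a = clicks_a / impressions_a
    p_b = clicks_b / impressions_b
    # Pooled CTR under the null hypothesis that both ads perform the same
    p_pool = (clicks_a + clicks_b) / (impressions_a + impressions_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / impressions_a + 1 / impressions_b))
    return (p_a - p_b) / se

# Ad A: 120 clicks / 4,000 impressions; Ad B: 80 clicks / 4,100 impressions
z = ctr_z_test(120, 4000, 80, 4100)
print(f"z = {z:.2f}")  # |z| > 1.96 is roughly significant at the 95% level
```

If |z| is below ~1.96, keep gathering data before declaring a winner.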
Tactical Note: To improve your testing environment, the Google system has a setting that lets your ads "rotate evenly" instead of "optimize" (see the Google Help Center for details). This setting is important if you want scientific data; otherwise the system will start showing your highest-CTR ad more often, and potentially to more relevant traffic, making your data pool uneven.
Choosing the Home Runs
You'll already gain significant efficiency by focusing your initial efforts on your three potential home run ads.
Then, from ad group to ad group, you don't need 100% unique language. In fact, it's better for testing to combine unique headlines with standard, relevant variations shared across multiple ad groups: you will gather data faster and see how the language performs across ad groups.
To start, brainstorm your best value propositions, CTAs (calls to action), and tonal elements, then mix and match them into ads, always pairing them with a headline customized to the ad group's theme. CTAs should directly reflect the primary conversion action you want your user to take (this is basically the objective of your advertising).
Example:
A local theater in San Francisco promoting its children's production of Cinderella**
- value prop = in San Francisco
- value prop = 25% off
- value prop = accessible/affordable
- value prop = for children/toddlers
- call-to-action = reserve today
- call-to-action = book early and save
Note that all variations make it clear that we are talking about live performances of Cinderella in San Francisco.
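The mix-and-match step is easy to automate. Here's a minimal sketch in Python (the headlines and copy are made-up stand-ins based on the example above) that pairs each ad group's themed headline with the shared value-prop and CTA lines:

```python
from itertools import product

# Headlines are customized per ad group; description parts are shared.
headlines = ["Cinderella Performances", "Cinderella for Kids"]
value_props = ["25% off", "In San Francisco", "Fun for toddlers"]
ctas = ["Reserve today", "Book early & save"]

# Every combination of headline x value prop x CTA becomes a candidate ad.
ads = [
    {"headline": h, "description": f"{vp}. {cta}!"}
    for h, vp, cta in product(headlines, value_props, ctas)
]

print(len(ads))  # 2 x 3 x 2 = 12 candidates to narrow down to ~3 per ad group
```

From the full list of combinations, hand-pick the three you think are the likeliest home runs per ad group.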
You generally don't want to pay for clicks from users who actually want to buy the Disney DVD of Cinderella, so it is up to your ad to make clear what your business does. If the Disney user sees your clear and accurate ad and clicks anyway, then they know what they are clicking on, and you may have just acquired a new customer. If they really just wanted the DVD, they won't click on your ad, and you don't have to pay for the click. (This particular optimization tactic is focused on ROI, which, as you have probably noticed, is at odds with CTR in this example; more on that in this post.)
Headlines should always contain a clear reference to your most important keywords in that ad group: the "Cinderella performance" ad group should definitely have "Cinderella performance" in the headline, the "Cinderella for kids" ad group should always have "Cinderella for kids" in the headline, and so on.
If you have been running offline advertising, such as billboards or bus ads, make sure the language from those posters appears in at least one variation per ad group. If that language does well, expand it quickly and take advantage of the synergy between your channels. If it does poorly, you don't have to keep it; what works for posters doesn't necessarily work for search ads. Next time around, try to choose language that is more likely to work across all channels, because channels working together make each one more effective.
Increasing Efficiency
Applying consistent description lines can be very fast and efficient in an Excel spreadsheet uploaded to AdWords Editor, or in AdWords Editor itself. If you are writing ads for more than one ad group, you should definitely use AdWords Editor; it will prevent mistakes and save you hours or even days. If you're constantly writing thousands of ads, consider an API-based solution (if you don't know what that means, you probably don't need one).
Also, name your ad groups what you would put in the headline: "Cinderella for Kids" can be the ad group name, and then, if you are using an Excel spreadsheet, you can easily copy and paste your ad group names into the headline column, completely eliminating the need to write headlines. Use a simple =LEN() formula in a neighboring column to flag any ad group names that exceed the 25-character headline limit, and then tweak only those.
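If your ad group names live in a plain list rather than a spreadsheet, the same length check is a two-line script. A sketch in Python (the ad group names are made up; 25 characters was the AdWords headline limit at the time of writing):

```python
HEADLINE_LIMIT = 25  # AdWords headline character limit (at the time of writing)

ad_group_names = [
    "Cinderella for Kids",
    "Cinderella Performances",
    "Affordable Cinderella Ballet Tickets",  # too long to reuse as a headline
]

# Flag only the names that can't be copied straight into the headline column.
too_long = [name for name in ad_group_names if len(name) > HEADLINE_LIMIT]
for name in too_long:
    print(f"{name!r} is {len(name)} chars - shorten before using as a headline")
```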
Keyword Insertion - Wonderful or Woeful?
We can't talk about ad testing without a shout-out to Keyword Insertion which can be a great tool or a terrible crutch depending on how it's used.
'Keyword Insertion' is basically a formula that you add to the ad text that will insert into the ad whatever keyword from your keyword list matched the user's query on Google (there is a similar version of this for Microsoft with slightly different details***).
Example:
{KeyWord:Cinderella Performances}
Discover the magic of ballet.
SF Shows- Book early & save!
In this case, {KeyWord:xxxxx} inserts whichever keyword from your keyword list the user's query matched. If the keyword is too long, or it can't show because it breaks an ad policy (such as violating a trademark), the system shows the default headline instead, which you put in the formula after the colon (in this example, 'Cinderella Performances').
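That fallback behavior can be sketched as a tiny function. This is a simplified stand-in for Google's actual rules (the length limit and the blocked-terms policy check are illustrative assumptions, not the real policy engine):

```python
HEADLINE_LIMIT = 25  # AdWords headline character limit (at the time of writing)

def insert_keyword(matched_keyword, default, blocked_terms=("disney",)):
    """Simulate {KeyWord:default}: use the matched keyword unless it is
    too long or trips a (simplified, hypothetical) policy check."""
    too_long = len(matched_keyword) > HEADLINE_LIMIT
    policy_violation = any(t in matched_keyword.lower() for t in blocked_terms)
    if too_long or policy_violation:
        return default  # fall back to the text after the colon
    return matched_keyword.title()  # {KeyWord:...} capitalizes each word

print(insert_keyword("cinderella performances", "Cinderella Performances"))
print(insert_keyword("disney cinderella show", "Cinderella Performances"))
```

The first call inserts the keyword; the second falls back to the default because it trips the trademark stand-in.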
This can be a great tool to test, but like everything in AdWords, you have to be careful if you really want it to work right, and it shouldn't be used blindly. Most people overuse it to avoid having to create specific ad groups, but there are additional benefits to having specific ad groups that they miss out on (again, read this post for more details).
Don't use keyword insertion:
1) If most of the keywords are too long to fit within the headline's character limit (25 characters)
2) If the keywords would look weird with the capitalization rules applied (the formula's casing indicates which letters to capitalize: {keyword:...} inserts everything lowercase, {Keyword:...} capitalizes only the first letter of the phrase, and {KEYWORD:...} capitalizes everything -> don't ever do this last one, it's almost 100% against ad policy)
3) If you are using funny keyword variations (such as misspellings) -> the inserted keyword will make your headline misspelled too.
4) As a replacement for highly targeted ad groups.
On the plus side, keyword insertion won't insert the weird broad-match variations that the Google system expands your ads to; it will only insert the original keyword from your list that inspired the system's expansion. Ex.: your broad-match keyword is "cinderella performances," and the Google system expands it to a user searching for "Disney t-shirt" (this can happen). Your keyword insertion ad will show the "cinderella performances" headline, not the "Disney t-shirt" query the user searched. This protects both you and the user from displaying/seeing false automated relevancy.
Key Takeaways:
- Know what you're trying to do before you write your ads, and plan how you will measure success.
- Have the appropriate ad group structure to enable you to write and test the best ads.
- Apply scientific principles to your ad testing, while also shooting for the best possible performance.
- Focus on writing a few 'home run' ads to start with, then optimize scientifically based on performance.
- Be efficient by using the same relevant description lines across multiple ad groups, naming your ad groups something that would be a good ad headline, and using AdWords Editor and/or Excel to quickly apply common description lines across lots of ad groups.
- Test keyword insertion if you believe it will help, but don't use it as a crutch in lieu of targeted ad groups.
And, a whole dedicated post on Measurement Musts is now here!
Stay tuned and subscribe for more posts on SEM and many other topics near and dear to the 2014 Digital Marketer's heart!
For help customized to your business needs, contact us at www.DigiMarketeer.com!
************
* Similarly, over time, if one variation's performance is significantly better or worse than the others', its average position may drift from the other test variations, which can impact performance and sway the test with a snowball effect, because ads that show higher on the page almost always have higher CTRs. However, if your data is strong enough to produce this result, it's time to pause your bad performers and create new test variations based on the winner.
**Note that this is not the business of the client who asked the question, but rather a theoretical example that demonstrates the same principles.
***I focus on Google due to its extensively larger market share, higher return on efforts, and consistency in performance. Almost all concepts discussed in these blog posts can be applied to other search engines with slight technical tweaks. Unless you have a lot of time and have reached your capacity with Google advertising, I wouldn't advise wasting efforts on the other search engines.

