Over the years, I have met many business owners and marketers who have launched a big initiative or marketing channel without a clear enough picture of how they would define and measure success.
More often than not, they ended up with a spent budget, no clear way to decide whether it was worth it, and sometimes no sense of how they could improve performance in the future.
[Image: Marketing Meme - Most Interesting Man]
Here are my 5 Marketing Measurement Musts:
1) Know what you're trying to measure and how you will measure it before you launch.
Before creating any marketing assets or committing to any budget, you should know what your marketing objective is - not just "to make more money" or "to grow," but what the actual actions are that a user must take in your business environment in order for you to be successful. Is it a purchase? An email sign-up? Following your fan page? Time browsing your site? All of the above (you'll need to prioritize)?
Step one is to know what these important actions are, what your primary conversion action is (if you could only optimize for one, which would it be?), and to have a sense of how valuable you think each action is to you. In the case of a purchase or lead, the immediate click-based conversion can be quite simple to measure, but for something like an email sign-up or fan page follow, you are valuing an initial action that sets a user up for future actions/purchases, which means that you will have at least a 2-stage measurement - the immediate action and the later action.
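To make that 2-stage measurement concrete, here is a minimal sketch of how you might back into a value for an email sign-up from the later purchases it produces. Every number below is an assumption for illustration, not a benchmark:

```python
# Hypothetical 2-stage valuation of an email sign-up.
# Every number is an assumption for illustration, not a benchmark.

signup_to_purchase_rate = 0.04  # assumed: 4% of sign-ups eventually purchase
purchases_per_converter = 1.5   # assumed: avg. purchases per converting subscriber
average_order_value = 80.00     # assumed AOV, in dollars
profit_margin = 0.30            # assumed margin on revenue

# Stage 1 (the sign-up itself) has no direct revenue;
# Stage 2 is the expected downstream profit per sign-up.
value_per_signup = (signup_to_purchase_rate
                    * purchases_per_converter
                    * average_order_value
                    * profit_margin)

print(f"Estimated value per email sign-up: ${value_per_signup:.2f}")  # $1.44
```

A number like that is only as good as its inputs, but even a rough per-action value lets you set bids and budgets with intent instead of guessing.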
Step Two is equally important -> Be set up to actually measure these conversion actions within your website and your marketing environment. All too often this step is left incomplete because it requires some sort of technical implementation, such as adding Analytics tags or a conversion pixel. But don't let this step go unfinished, because knowing what you are trying to achieve won't help you if you can't measure whether you are actually achieving it.
If you are measuring revenue/conversions for AdWords, you should be set up with a combination of Google Analytics and pixel-based Google conversion tracking that presents its data right in the AdWords interface (the two give you different views; the AdWords account view is often easier to work with, but it doesn't show you data like "time on site" or "bounce rate," which is why the combination of Analytics + conversion tracking is ideal).
If you are managing Social Media or Display ads, make sure that your agencies and partners are integrated with your in-house measurement systems enough that you have a fair and accurate view of the data. Before you launch, set your own expectations of how one channel will perform in comparison to another -> if you are already running Search ads and are now launching Display, expect to see higher Display/Social impressions and fewer clicks than in Search. Your clicks may be less likely to convert, or may take longer to convert, because users viewing a Display or Social Media ad are generally not as ready to convert as their counterparts who are actively searching for your product on a search engine.
If your conversion action can't be easily measured online (if it's in-store visits, for example), plan for how you will measure success in the immediate term (such as traffic to a certain page of your website or printing of a coupon) and in the long run (use of that coupon in store), and have a measurement system in place that can manage and link the data.
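As a sketch of that linking step, here is one way to join coupon prints (online) to redemptions (in-store) with pandas; the data and column names are invented for illustration:

```python
import pandas as pd

# Invented data linking an online action (coupon printed) to an offline one
# (coupon redeemed in store). Column names are assumptions, not a standard.
prints = pd.DataFrame({
    "coupon_code": ["A1", "A2", "A3", "A4"],
    "print_date": pd.to_datetime(["2014-03-01", "2014-03-01",
                                  "2014-03-02", "2014-03-03"]),
})
redemptions = pd.DataFrame({
    "coupon_code": ["A1", "A3"],
    "redeem_date": pd.to_datetime(["2014-03-05", "2014-03-09"]),
    "basket_value": [64.00, 112.50],
})

# Left-join so unredeemed prints stay in the picture.
linked = prints.merge(redemptions, on="coupon_code", how="left")
redemption_rate = linked["redeem_date"].notna().mean()
days_to_redeem = (linked["redeem_date"] - linked["print_date"]).dt.days

print(f"Redemption rate: {redemption_rate:.0%}")                           # 50%
print(f"Avg days from print to redemption: {days_to_redeem.mean():.1f}")   # 5.5
print(f"Linked in-store revenue: ${linked['basket_value'].sum():,.2f}")    # $176.50
```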
In the case of a traditional-first campaign supported by digital channels, use the techniques available to you to drive digital behavior, but understand the limitations of your available measurement. Are you running a local NPR campaign? Standard practice is to use a special URL in the on-air reading, but only a percentage of listeners will actually use that URL, so just looking at that URL's traffic in Analytics or weblogs will not demonstrate the overall impact of the initiative. Pairing that campaign with outdoor bus ads with consistent messaging customized to your target customer's wants and needs? Good idea! But understand that they are intended to work together - hearing the brand message on NPR and then seeing the bus ads will often drive one measurable action (or an action that doesn't have an online footprint at all for companies whose primary conversion is offline).
NPS and other third-party measurements can help you understand the efficacy of certain brand initiatives, as can using test and control DMAs - though only if you are operating in multiple markets (for example, running an NPR campaign in ten target cities while comparing results to ten similar control cities that don't have the NPR campaign). Additionally, for brands that cultivate longer-term relationships and loyalty from customers, post-interaction surveys and/or conversations with sales teams (such as in local brick-and-mortar stores) can add context.
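Here is a minimal sketch of that test/control DMA comparison. The markets and branded-search counts are invented, and in practice you would want matched markets and a pre-period baseline rather than raw levels:

```python
# Invented test/control DMA comparison: branded searches per week, per market.
# In practice you'd want matched markets and a pre-period baseline as well.

test_markets = {"Austin": 1240, "Denver": 980, "Portland": 1105}   # ran the campaign
control_markets = {"Raleigh": 1010, "Omaha": 870, "Tucson": 925}   # did not

test_avg = sum(test_markets.values()) / len(test_markets)
control_avg = sum(control_markets.values()) / len(control_markets)
lift = (test_avg - control_avg) / control_avg

print(f"Test avg: {test_avg:.0f} | Control avg: {control_avg:.0f}")
print(f"Estimated lift in branded searches: {lift:.1%}")  # ~18.5%
```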
Overall, when it comes to measuring online+offline initiatives, you should:
-have a clear idea of how online and offline interact
-create a list of expected proxy metrics and have a clear idea of their limitations
-understand what data you can and can't get and set your targets accordingly
-don't necessarily avoid an initiative just because you can't measure a click -> think innovatively about how you can supplement the offline experience with online, and about what tools you have to gather additional customer/user behavior data after the desired action is taken (loyalty cards, coupons, social sharing/"buzz" incl. use of your hashtags, QR codes, branded searches, post-transaction surveys, etc. can all add to your picture).
From a short and long term perspective, you’ll need to decide how you want to value impressions, clicks/visits, page views, time on site, foot traffic, interim online actions, social shares, and conversions. Depending on your business model, these metrics will have very different values to you.
2) Gather enough data to make informed decisions.
The longer your ads run, the more data you have, and the more likely it is to be accurate and to drown out noise caused by micro-environments and external variables (review this post about correlation v. causation for more detail on this point). Plotting your performance data on a day-of-week, week-over-week, and year-over-year basis can help you understand your trends and reduce the impact of common seasonal and environmental variables.
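As a sketch, here is how you might reshape a daily performance export into those day-of-week, week-over-week, and year-over-year views with pandas (the data here is randomly generated as a stand-in for your own export):

```python
import numpy as np
import pandas as pd

# Stand-in for a real daily performance export (randomly generated here).
rng = np.random.default_rng(7)
dates = pd.date_range("2012-01-01", "2013-12-31", freq="D")
df = pd.DataFrame({"conversions": rng.poisson(40, len(dates))}, index=dates)

# Day-of-week view: average conversions per weekday.
by_weekday = df.groupby(df.index.day_name())["conversions"].mean()

# Week-over-week view: % change in weekly conversion totals.
week_over_week = df["conversions"].resample("W").sum().pct_change()

# Year-over-year view: % change in annual conversion totals.
year_over_year = df["conversions"].groupby(df.index.year).sum().pct_change()

print(by_weekday.round(1))
print(week_over_week.tail(3).round(3))
print(year_over_year.round(3))
```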
When deciding how much data is the right amount, you need to know your conversion funnel and how long it takes for your target user to convert. Are you selling something expensive, or aiming for a conversion that requires an initial investment from the user (such as a paid subscription or an extensively filled-out profile)? Expect your users to take longer than a moment to decide whether to convert - depending on your business, they may return as many as 10 times over as long as a year or two before they convert.
Using an Analytics system (including Google Analytics, which is free) can be very helpful for understanding what users are doing on your site before and after conversion, so that you can optimize to address their needs. If users are bouncing in very high numbers immediately upon landing on your site, either you are bringing in irrelevant traffic or you are turning users off with something on your landing page. Are users doing a lot of research, browsing many areas of the site before deciding to convert? Are they bouncing when they reach your checkout/payment page? A lot can happen in a user path before a user finally converts, so only looking at your marketing traffic + conversion rates isn't going to show you the whole picture. Run your marketing long enough to learn in a statistically significant way what your users are doing, and coordinate your user experience optimization to test and address any potential site issues while your marketing traffic is still coming through.
Don’t make hasty or wildly generalized decisions based on small samples of data, even if you feel pressure to act quickly - no optimization action is better than a misinformed one. If that means you have to wait another month to have a proper sample, then you should wait another month.
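If you want a rough sense of what "a proper sample" means, here is a back-of-the-envelope sketch using a common rule of thumb (roughly 80% power at 5% significance, normal approximation). Treat it as a sanity check, not a substitute for a real significance test:

```python
# Back-of-the-envelope sample size per ad variant, using the common
# n ~= 16 * p * (1 - p) / d^2 rule of thumb (~80% power, ~5% significance).

def clicks_needed(baseline_rate: float, min_detectable_diff: float) -> int:
    """Rough clicks per variant to detect a given conversion-rate change."""
    p, d = baseline_rate, min_detectable_diff
    return round(16 * p * (1 - p) / d ** 2)

# Assumed numbers: 2% baseline conversion rate; we want to reliably
# detect a move from 2.0% to 2.5% (an absolute difference of 0.5%).
print(clicks_needed(0.02, 0.005))  # -> 12544 clicks per variant
```

If your ads are bringing in a few hundred clicks a month, a calculation like this makes it obvious why waiting another month beats acting on a hunch.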
That said, if your performance is consistently terrible, you should pause and evaluate:
1) Are you measuring the right conversion?
2) Is your marketing strategy set up for success?
3) Is your measurement technically set up correctly?
If you see consistently terrible performance, you should turn off your test and re-evaluate, because a) you will need that budget for a future, better test, and b) all ad platforms keep a quality score for you as an advertiser, and consistently terrible performance (especially from a CTR/user engagement perspective*) can hurt your ability to affordably show ads in the future.
3) Don't just review CTR data, review ROI data in the same context.
Sometimes you’ll see an explosion in CTR while your ROI goes down, or mediocre CTR with an awesome ROI. This dynamic is caused by the push and pull between bringing in relevant users and bringing in all users who see your ad. For most digital marketing, you should be aiming to limit your traffic to those target users who are most likely to click and convert, unless you are measuring an alternative to conversion such as pure impressions on relevant sites (common in brand advertising). In most cases, you will need to understand how your CTR and ROI are interacting and optimize to balance them (check out the ROI v. CTR post for tips).
Conversions per click, revenue per click, and conversions/revenue per impression are good metrics to play with, and they can be calculated from the data in most ad platform reporting centers as long as you have conversion tracking set up properly. This is one more example of why you need to have your measurement system in place before you launch - if you are not set up to measure these metrics, you will have a very hard time deciding whether the clicks you're paying for are helping you.
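As a sketch, here is how those metrics fall out of a typical platform export once conversion tracking is populating the conversion and revenue columns (the field names and numbers are invented):

```python
# Invented rows standing in for a typical ad platform export; the field
# names are assumptions to map onto your own report's columns.
rows = [
    {"ad": "text_a", "impressions": 120_000, "clicks": 2_400,
     "conversions": 96, "revenue": 7_680.0},
    {"ad": "text_b", "impressions": 115_000, "clicks": 3_100,
     "conversions": 80, "revenue": 5_600.0},
]

for r in rows:
    ctr = r["clicks"] / r["impressions"]
    conv_per_click = r["conversions"] / r["clicks"]
    rev_per_click = r["revenue"] / r["clicks"]
    rev_per_impression = r["revenue"] / r["impressions"]
    print(f"{r['ad']}: CTR {ctr:.2%} | conv/click {conv_per_click:.2%} | "
          f"rev/click ${rev_per_click:.2f} | rev/impr ${rev_per_impression:.4f}")

# text_b wins on CTR (2.70% v. 2.00%) but loses on revenue per click
# ($1.81 v. $3.20) - exactly the CTR v. ROI tension described above.
```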
4) Act on your findings.
Over time, when you see that a particular ad or keyword variation is consistently doing poorly, dump it, pause it, or optimize it.
In ad testing, if there is a clear split - one set of language or one image ad doing better in some contexts and not others - you can take advantage of a well-designed campaign structure with numerous small thematic niche test groups (relevant in Search, Display, Re-targeting, Social Media ads and Email marketing) by creating segmented buckets for future testing. Group similarly performing test groups together when you look at the performance data, and compare that to the performance view of your established thematic groupings.
To use our e-commerce apparel retailer as an example, perhaps you are running Search and Contextual Display ads with each ad group specific to brand x style (ad groups for each: gucci boots, gucci heels, nine west boots, nine west heels, etc.). You can look at performance ad group by ad group, and then you can look at higher-level groupings - how does a particular ad text or image ad do for all of your luxury brand ad groups v. your middle-tier brand ad groups? Perhaps there is stronger commonality in performance by brand than by style, or vice versa. Perhaps brand is more important in Search but style is more important in Display, due to the power of image relevancy.
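A sketch of that higher-level view, with invented numbers for the shoe retailer's ad groups, might look like this (note that revenue and clicks are summed per segment before dividing, so bigger ad groups carry their real weight):

```python
import pandas as pd

# Invented ad-group-level results for the shoe retailer; tier/style labels
# follow the brand x style campaign structure described above.
df = pd.DataFrame({
    "ad_group": ["gucci boots", "gucci heels", "nine west boots", "nine west heels"],
    "tier": ["luxury", "luxury", "mid", "mid"],
    "style": ["boots", "heels", "boots", "heels"],
    "clicks": [800, 950, 1400, 1600],
    "revenue": [9600.0, 13300.0, 8400.0, 8000.0],
})

# Sum before dividing so larger ad groups carry their real weight.
for segment in ("tier", "style"):
    g = df.groupby(segment)[["revenue", "clicks"]].sum()
    g["rev_per_click"] = (g["revenue"] / g["clicks"]).round(2)
    print(g, "\n")

# Here revenue per click splits sharply by tier (luxury ~13.09 v. mid ~5.47)
# but barely by style - commonality is stronger by brand tier than by style.
```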
Whichever trends you notice, you will immediately understand an important distinction about your users' priorities that you can then apply in future marketing channel planning, campaign structure and ad optimization.
5) Segment and measure incremental impact.
Now we're getting advanced! If you already had some ads running and your goal of a new launch was to improve clicks, revenue, or ROI, you can often measure the incremental impact** of your ad testing efforts.
Example:
For Search ads which run in an equally rotating test environment with a goal of revenue (see the Ad Magic post for more details):
Take the CTR, conversion rate and AOV (average order value) of your original ads within the same ad groups (your "controls") and apply them to the impressions that showed for your new ads during a designated test period (data for tests and controls should *both* be from this time period):
-Applying the control CTR to the test impressions will give you a "would have been" clicks metric (the clicks you would have achieved with those impressions had your test not launched).
-Applying the control conversion rate to those clicks will give you a "would have been" conversions metric.
-Applying the control AOV to those conversions will give you a "would have been" revenue metric.
If you only care about clicks, you can do a simple comparison by applying the CTR of your original ads to the test group's impressions to see how many clicks you likely would have had if you had not launched your test group (note that all clicks are not equal, and without a further relevancy/conversion metric you will be limited in your understanding of whether the increase or decrease in clicks was good). These types of "would have been" comparisons will quickly show you whether your ad testing has had an overall positive or negative impact, because you can view the "would have been" against the "what actually was" and instantly see the results - as in the sketch below.
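Here is that calculation as a minimal sketch, with invented control rates and test-period numbers:

```python
# "Would have been" comparison with invented numbers. Control rates come
# from the original ads; impressions and actuals from the test period.

control = {"ctr": 0.020, "conv_rate": 0.040, "aov": 85.00}  # original ads
test = {"impressions": 500_000, "clicks": 11_500,
        "conversions": 506, "revenue": 47_055.0}            # new ads, test period

wb_clicks = test["impressions"] * control["ctr"]       # 10,000 would-have-been clicks
wb_conversions = wb_clicks * control["conv_rate"]      # 400 would-have-been conversions
wb_revenue = wb_conversions * control["aov"]           # $34,000 would-have-been revenue

print(f"Clicks:      actual {test['clicks']:,} v. would-have-been {wb_clicks:,.0f}")
print(f"Conversions: actual {test['conversions']:,} v. would-have-been {wb_conversions:,.0f}")
print(f"Revenue:     actual ${test['revenue']:,.0f} v. would-have-been ${wb_revenue:,.0f}")
print(f"Incremental revenue from the test: ${test['revenue'] - wb_revenue:,.0f}")  # $13,055
```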
You can use this same comparison technique on larger thematic segments, by grouping sets of ad groups or ads that have some similarity into segments and comparing the test v. control data within the segments.
Back to the shoe retailer example: if the retailer has control and test ads running at the same time across all of their ad groups, they could look at the test v. control data using the technique above for brand segments (how did the new ads drive revenue for luxury -> Gucci + Burberry + Prada ad groups versus mid-tier brands -> nine west, enzo angiolini, clarks, etc.), style segments (did the new ads drive revenue for boots, high heels, etc.) and whatever other relevant groupings the advertiser may want to view. Perhaps, for example, sale language worked well for the mid-tier brands but had a negative impact on the luxury brands. Perhaps it did well for boots but not for high heels, etc.
This technique can give you a holistic picture of segment performance without having to analyze every individual ad in a campaign, and it can be a good solution when individual ads don't have enough data to be statistically significant. Be careful, though, to make sure that there aren't individual performance outliers swaying your data.
A similar approach can work with other digital channels as well, but when you are comparing test and control groups, you have to make sure that your "control" is really a control - meaning that the performance to which you're comparing your new launch is not influenced differently than your new ads by major external variables (see the Correlation v. Causation post for more details).
Nothing gains the support of executives more than being able to show that your marketing test just made the company $1 million in incremental revenue.
Conclusions
The beauty and the pain of digital marketing is that it is measurable - in many ways far more measurable than any previous marketing platform. But with great numbers comes great responsibility.
Plan ahead, know what your objective is and how to measure it, and in no time you will have the digital marketing world at your fingertips. You will never again scratch your head wondering how you managed to burn through a budget so quickly without even clean data to show for it.
Stay tuned and subscribe for more posts on many other topics near and dear to the 2014 Digital Marketer's heart!
For help customized to your business needs, contact us at www.DigiMarketeer.com!
***********
*See the post on ROI v. CTR for more details.
**In this context, "incremental impact" means that we are looking for a measurable increase or decrease in revenue/conversions that is not just a shuffling of data from one channel to another. The "increments" are increments above or below standard performance. Stay tuned for a future post on common issues facing marketers in measuring "incrementality," such as what to do when your re-targeting ads are "stealing" revenue by shuffling it from the introductory channel (such as the Search or Display ad that originally brought the user to your site) to the re-targeting channel (which by definition targets users who have already been to your site).
