Marketing strategists, analysts, and CMOs alike often fall prey to appealingly presented data showing performance trends that seem almost too good to be true. Some time later, their intuition is confirmed when the promised performance proves to be MIA.
This situation is even more common when the data has been presented by agencies and vendors who are very skilled at painting a beautiful picture of how their efforts "caused" a particular outcome. After all, it is their job to convince you...right? But after the handshake and the champagne toast, too many marketing teams quickly discover that somehow the trends aren't working for them. So what happened? Are they suddenly, uniquely unfortunate?
[Image: Marketing Meme: Matrix Marius]
To understand what is commonly going on in scenarios like this, marketers must reach deep into their memories, back to the social science, statistics, economics, or science lab class they took in college, and bring back to the forefront of their minds that pesky little principle some professor undoubtedly loved emphasizing - correlation is not causation.
While the phrase may sound like it should be relegated to booming from the pulpit of a college lecture hall, it is actually one of the most important critical thinking principles that any adult should utilize on a daily basis, especially a marketer.
As a refresher, you can read the verbose Wikipedia explanation here or you can simply review this:
Correlation = x happened and y happened (and they may be unrelated).
Causation = x caused y
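To make this concrete, here's a quick sketch with made-up numbers (the series names and values are hypothetical, not from any real dataset) showing that two series which merely both trend upward over time can be almost perfectly correlated, even though neither causes the other:

```python
# Two unrelated, upward-trending series (hypothetical numbers) can still
# show a very high Pearson correlation - trend alone creates "correlation".
import numpy as np

months = np.arange(12)

# Each series is a linear trend plus independent random noise.
ice_cream_sales = 100 + 10 * months + np.random.default_rng(0).normal(0, 5, 12)
murders = 20 + 2 * months + np.random.default_rng(1).normal(0, 1, 12)

# Pearson correlation between the two series.
r = np.corrcoef(ice_cream_sales, murders)[0, 1]
print(f"Pearson r = {r:.2f}")  # high, despite no causal link between them
```

The correlation comes entirely from the shared upward trend, which is exactly what a chart of two growing metrics plotted side by side exploits.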
Correlation is a common occurrence when looking at data, particularly data that is selected and visualized to be tempting in some way (either as a sales pitch or performance review). It often looks very convincing on a clearly displayed graph, featuring trends which appear to be related - after all, they are presented next to each other on the graph.
BUT, beware! Most of the time trends presented are not necessarily related at all and are certainly not clearly causal, as they are very often presented to be.
Take this favorite example from social science and statistics teachers everywhere:
Ice Cream Consumption "leads to" Murder
Fun examples of this type of data manipulation and misinterpretation include "Ice cream consumption leads to murder," "A pirate shortage caused global warming," and "Using Internet Explorer leads to murder" (check out this list of other examples of correlations & false causations with visuals).
In the case of "Ice Cream and Murder" it turns out that they both go up in summer, as the weather gets warmer, as schools are out for break, and as all of the other summer occurrences emerge. Perhaps it's because it's easier to kill people when you're outside more in fine weather, maybe there are lots more teenagers on the street who kill each other when they aren't in school, maybe beer gardens that are only open in the summer cause drunken murders, or perhaps it's easier to murder when your hand isn't too frozen to pull the trigger, etc. Perhaps it's all or none of the above.
Whatever the details, it should be intuitively clear in these funny examples that the conclusions drawn from the charts lack support - a good demonstration of why you need to think carefully and ask lots of questions about the context of the data you are reviewing.
But how about this common variation:
Here we see basically the same chart as the "Ice Cream and Murder" chart - two metrics rising over time that seem to be related, but without demonstrated causality. Is spend "causing" revenue to increase? If you look carefully, you'll notice that revenue actually turned upward before spend did. Perhaps, then, revenue caused spend to increase? It's possible. However, sadly for your presentation to the CEO, it is also possible that the two are not driving each other's trends at all.
What you need in this case, and most others, is additional context. What else was going on when these trends were observed? What variables existed in your system? Is January a month in which you always see an increase in spend and revenue? Perhaps your customers are ramping up from a post-holiday low and nothing you did in your program impacted this performance. What are the metrics behind the increased spend? Is it more clicks, or are you getting the same number of clicks at a higher price? Perhaps this trend is actually bad: if spend increased faster than revenue, then despite both metrics being up, your ROI has gone down.
The point is, without additional information, the chart above is just as meaningless as the "Ice Cream and Murder" chart; it just doesn't feel that way because you logically know that the two metrics should be related.
That said, the chart above can be used as the starting point for a deeper investigation - seeing the correlation is a good indication of where you need to look to really understand the trend. If spend was going up and revenue going down, whether the metrics were related or not, you know you have a performance problem on your hands that needs to be solved, and the direction of those metrics is a clue as to where to look first.
And so, how do you make these correlative data comparisons useful to you?
1) Think of them as the starting point of your investigation. Ask yourself: did you expect this correlation? If not, what did you expect? Now you can start comparing the data to your "hypothesis" (i.e., what you expected) to see where it went wrong. Doing so will give your investigation structure.
2) Think about the related metrics and add them to your graphs for context. If ROI is going down while your revenue and spend are going up, you may not learn the cause, but you understand the scope of your situation better. If you then add traffic metrics and find that traffic is flat while spend is rising, suddenly it's clearer that your existing traffic is more expensive - now you have a direction to investigate. Again, the "cause" of the higher expense isn't clear yet, but you now know another fact about your situation: your clicks are costing more. If your traffic was increasing with spend but your ROI is still going down, perhaps you've had an influx of irrelevant traffic, or perhaps a problem on your website is lowering conversion rate and squandering the opportunity that an influx of equally qualified users should have created. Maybe your email team launched an email capture that gets in the way of the standard search/display conversion funnel, so the conversions from the increased traffic now show up in their metrics and not yours. All of these things could be happening at the same time, or none of them - it isn't an either/or situation. Each metric informs the journey, and understanding how the metrics commonly interact will help you get to the bottom of performance problems and move more quickly towards solutions.
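As a sketch of the decomposition described above (the numbers here are hypothetical, and the metric definitions are the simple conventional ones, not taken from any particular analytics platform), deriving ROI and cost-per-click from spend, revenue, and clicks immediately shows whether extra spend bought more traffic or just more expensive traffic:

```python
# Hypothetical two-month snapshot: spend and revenue both rose, but the
# derived context metrics reveal that ROI fell and clicks got pricier.
jan = {"spend": 10_000, "revenue": 40_000, "clicks": 20_000}
feb = {"spend": 15_000, "revenue": 45_000, "clicks": 20_500}

def context_metrics(month):
    """Derive simple context metrics from raw channel numbers."""
    return {
        "roi": month["revenue"] / month["spend"],  # revenue per dollar spent
        "cpc": month["spend"] / month["clicks"],   # cost per click
    }

for label, month in (("Jan", jan), ("Feb", feb)):
    m = context_metrics(month)
    print(f"{label}: ROI={m['roi']:.2f}, CPC=${m['cpc']:.3f}")
```

With both top-line metrics up, the derived view tells a different story: clicks barely grew while their cost jumped, and each dollar of spend is returning less revenue - the "direction to investigate" the paragraph above describes.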
3) Make a list of your common external variables and check it off at the beginning of every investigation. The scientific method is all about eliminating variables so that eventually in a controlled environment, you can be sure of the real "causal" relationship. In marketing you rarely have such a luxury because your programs are out there, in the real world environment, and you can't just turn them off to find out if a particular technique was the cause of your performance trends (props to SEM ad testing systems which are the closest to a scientific environment any marketer can ask for!). That said, you can become familiar with the common external variables that likely impact your numbers.
When I worked in e-commerce, my top variables included:
1) Seasonality -> Does this trend happen at this time every year? Be sure to review data in common seasonal contexts to understand if your trend is unique to now: year over year, over time (Q4 v. Q1), day of week (Monday v. Monday instead of date v. last year's date), and against a holiday calendar.
Year over year and week over week data are a good way to look at seasonality in context, so that you don't get too excited in November when suddenly your traffic goes through the roof...which it does...every year...
Does your first week in April look terrible? Check out when Easter fell year over year. More often than you would expect, seasonal trends can have a very strong influence on your data. Knowing how they impact your business consistently is key to creating forecasts and strategies that take advantage of these trends, and to not wasting time troubleshooting a trend that should be expected.
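One practical trick for the "Monday v. Monday" comparison above (my own sketch, not a feature of any particular analytics tool): shift by 52 whole weeks rather than one calendar year. A 364-day shift always lands on the same weekday last year, whereas a naive 365-day shift does not.

```python
# Align a date with the same weekday roughly one year earlier,
# so Monday is always compared with Monday.
from datetime import date, timedelta

def same_weekday_last_year(d: date) -> date:
    # 52 weeks = 364 days, so the weekday is always preserved.
    return d - timedelta(weeks=52)

d = date(2014, 4, 7)               # a Monday
prev = same_weekday_last_year(d)   # 2013-04-08, also a Monday
print(d.weekday() == prev.weekday())
```

The comparable date drifts a day or two off the calendar date, which is exactly the point: for day-of-week-driven metrics, weekday alignment matters more than the date itself.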
2) Promotions -> Did your company just launch a big promotion? Twice yearly 50% off? Maybe the big promotion just ended and now you see a dip in revenue and conversion rate.
Companies use promotions because they work, and you should expect to see them impacting your data. They most often impact data on the conversion side, and can also "move" conversions from one time period to another by convincing users to "buy now" instead of waiting as they might have otherwise. Re-targeting programs can have huge problems during the weeks after a big sale because the "maybe" customers were pushed into buying while on site the first time - exactly the behavior the sale was meant to promote, yet, afterwards the marketers and executives must accept that their re-targeting channel will be lower for a few weeks as a result.
Frequent promotions can also have a negative impact over time by making your company seem like its normal prices are too high and/or making your products feel commoditized. Whether promotions are good for your company should be a serious strategic discussion with your target customer top of mind. Does your target customer love promotions? Keep them (I'm talking to you, JC Penney!). But regardless of your strategy, when you are planning, you should be sure to document what you expect to happen during and after a sale to pre-empt any "data emergencies" that occur from customer behavior that you could have easily anticipated.
3) Changes to how the ad networks serve -> Did the ad network change where your ad was showing? Maybe your re-targeting partner had an outage with their Facebook connection which was the top source of your re-targeting clicks. Did a major partner switch the formatting of their ad display? Did Google add an irrelevant traffic source that is showing millions of new impressions that don't make a difference but make your CTR look terrible? These external changes can have a huge impact on how your data looks and on your performance.
4) Other marketing initiatives run by other teams at your company -> Did your catalogue team just drop a catalogue? You'll probably see it in other channel traffic numbers. Did your affiliate team just launch a big discount that is over-riding your last-click revenue attribution, making your search and display channels look artificially low? Perhaps a TV campaign launched and you have higher traffic due to brand awareness from new users. Understand how your channels interact and how your chosen measurement methodology may be contributing to performance trends.
5) Competitor behavior -> Did a competitor just burst onto the scene or shrivel up and die? Search, display, social media, and re-targeting can all be heavily influenced by the behavior of competitors. The more competitive you need to be for prime placement in the channel, the more competitor behavior can influence you, and sometimes the best way to investigate is to find examples of your ads serving in the real world.
6) Weather or other environmental factors -> Did a blizzard just strike half the country and knock out power and internet to millions? Perhaps it did at this time last year, and your YoY numbers look awesome because this year the weather is fine. If you are looking at trends that are too granular, you can often fall prey to these environmental micro-trends. Eliminate them when you can so you don't go off on a wild goose chase, only to see your numbers normalize when the variable's influence dries up.
Conclusions:
While it may seem daunting, marketers should ask these questions and push for context whenever data is presented to them, especially in a sales pitch. If these concepts are new to you, bring this blog post with you to the next vendor pitch.
Note how many times they present correlative data out of context - you will probably be shocked by the frequency. Then politely ask them to provide the context that demonstrates causality: ask how they know the trend isn't seasonality, or isn't driven by competitor behavior, and watch the blood drain from their faces as they realize they will have to provide better proof before you and your savvy marketing team will buy their product.
Either they will or they won't, and you will be able to make a much more informed decision. And from that point forward, you can keep the champagne for toasting the many contracts that don't get signed for services that can't be delivered. After all, you have better things to do - like use your new savvy scientific approach to optimizing your marketing programs!
Stay tuned and subscribe for more posts on the science of marketing and many other topics near and dear to the 2014 Digital Marketer's heart!