The following column originally appeared in The Drum on 8/12/20
Around the time Taylor Swift dropped her Folklore album – although admittedly to less attention – Nielsen announced an equally pivotal ‘overhaul’ of its cross-channel measurement approach. Positioned as a way to power ‘flexibility’ in a market overwhelmed by indecision, the announcement was a dramatic preview of the future of media measurement.
Despite well-known struggles keeping up with consumers’ hyperactivity, Nielsen remains the gold standard of media ratings. Some $70bn in US media are bought and sold based on its ‘currency’. Its various ‘Total Audience’ products, introduced five years ago, layer in viewing on mobile, streaming and on-demand platforms using a combination of panels and direct data collection.
Ratings measure reach, of course, and they started with TV viewership diaries and then automated in-home ‘People Meters’. Capturing online consumption was handled at first in a similar way: 200,000 people installed a ‘PC Meter’ on their computers, which tracked video viewing on websites. But digital media presented frustrating technical issues. For example, in 2010 Nielsen admitted it was undercounting time spent online by as much as 22% because very long URLs tripped up its metering software.
Early and often, TV studios and ad agencies questioned the raters’ ability to capture the full range of modern media consumption: on smartphones, tablets, connected TV (CTV), out-of-home. NBCUniversal’s outspoken head of ad sales, Linda Yaccarino, famously compared the situation to a frustrating football game: “Imagine you’re a quarterback, and every time you threw a touchdown, it was only worth four points instead of six.”
Don’t blame the players – blame the referee. Some of the industry pushback is a case of punishing the messenger. After all, it comes at a time when linear TV viewership is in free fall (down some 20% in five years). When she was president of Nielsen’s Watch (ratings) division, Megan Clarken observed wryly: “Like any referee we’re not always going to be loved.”
Still, every marketer is in a sense in Nielsen’s (and Comscore’s) position: having to report some kind of measurement of reach and response across an array of channels. We’ve all had to adapt to abrupt changes in consumer behavior, data availability, and tools over the years. We can learn from Nielsen as it points us to what to do next.
So where are they pointing us? In three directions, I think.
Reliance on partners
Nielsen will be relying on various proprietary ecosystems, such as social networks, to provide data about consumption that would otherwise be opaque. As the company’s Chief Data and Research Officer Mainak Mazumdar admitted in an interview, “We will work with multiple parties in a significant way, which we did not in the past.”
Whatever the outcome of the current Congressional probes, few industry observers believe the open web is poised to grow. Growth is in the walled gardens, which already hold a dominant share of digital ad spending. In the US, about two-thirds of such ad spending goes to Google, Facebook and Amazon, according to eMarketer.
None of these ecosystems allows marketers to see user-level data, such as impression log files with IDs. Without that level of detail, marketers can’t build real multi-touch attribution (MTA) models. They can’t independently measure unduplicated reach and frequency. They’re reliant on aggregate reporting provided by the platforms themselves, or on managed tools such as Google’s Ads Data Hub.
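To make concrete what marketers lose without that access: given user-level impression logs, unduplicated reach and frequency are a straightforward aggregation. A minimal sketch, using hypothetical log records (the user IDs and channel names are invented for illustration – exactly the data the platforms withhold):

```python
from collections import Counter

def reach_and_frequency(impressions):
    """Compute unduplicated reach and average frequency from
    user-level impression logs given as (user_id, channel) pairs."""
    counts = Counter(user for user, _ in impressions)
    reach = len(counts)                       # unique users exposed
    frequency = sum(counts.values()) / reach  # avg impressions per user
    return reach, frequency

# Hypothetical cross-channel log: the same user seen on two platforms
# counts only once for reach -- unknowable from aggregate reports alone.
log = [("u1", "ctv"), ("u1", "social"), ("u2", "ctv"),
       ("u2", "ctv"), ("u3", "search")]
reach, freq = reach_and_frequency(log)
print(reach, freq)  # 3 unique users, 5/3 impressions per user
```

Aggregate reporting from each platform would show five impressions but could not reveal that only three people were reached.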
What this means: Make a map of all the proprietary ecosystems, including big publishers, that see your audience. Develop a detailed understanding of the data provided by each one. If you have scale, lobby for more access, or ask your agency to do it.
Flexibility in methods
Admitting that nobody really knows what’s going to happen, Nielsen COO Karthik Rao said a major goal of his team was to build “a flexible platform that we can adapt to new technology, data and regulatory changes.”
Everybody embraces adaptability in principle; in practice, it’s not so easy. Adaptability means that the method used must be able to accept data at different levels of detail – from national-level campaign data down to user-level impressions and clicks – depending on what’s available. This availability in turn depends on media partner policies and privacy regulations, which vary by region.
So, we admit that a single MTA vendor “silver bullet” – so hyped a decade ago – won’t work. Whether we want to or not, we will all need to use more sophisticated econometric and media mix models (MMM), in-house or through an agency. There are too many unpredictable variables for simple models to succeed.
Nancy Smith, president of Analytic Partners, pointed out recently in The Drum that the future of measurement falls more heavily on MMM than MTA. “In my own review of activities with marketers,” she wrote, “I’ve seen about 80% of the impact coming from MMM and only 20% coming from MTA.” Instead of a standardized approach, therefore, she advocates “user-level analyses within the channels that matter” combined with a “holistic measurement framework” that unites these channel-specific measures.
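At its core, an MMM is a regression of sales on transformed media variables. A minimal sketch using synthetic weekly data (all spend figures and coefficients here are invented, and a real MMM would add saturation curves, seasonality and macroeconomic controls):

```python
import numpy as np

def adstock(spend, decay=0.5):
    """Geometric adstock: each week carries over a decaying share
    of the previous weeks' advertising effect."""
    out = np.zeros(len(spend))
    carry = 0.0
    for i, s in enumerate(spend):
        carry = s + decay * carry
        out[i] = carry
    return out

# Synthetic year of weekly data: two channels plus noise.
rng = np.random.default_rng(0)
tv = rng.uniform(0, 100, 52)
search = rng.uniform(0, 50, 52)
sales = 200 + 1.5 * adstock(tv) + 2.0 * adstock(search) + rng.normal(0, 5, 52)

# Fit the mix model: sales ~ intercept + adstocked channel effects.
X = np.column_stack([np.ones(52), adstock(tv), adstock(search)])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(coef)  # recovers roughly [200, 1.5, 2.0]
```

Because the model works on aggregate weekly data, it needs no user-level IDs at all – which is precisely why MMM survives the walled-garden era better than MTA.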
What this means: Develop a distinct approach to measuring individual channels, including big publishers. Incorporate these into a larger measurement framework based on econometric principles.
More in-market testing
Ratings are important, but the goal of measurement is to determine impact: did the campaign or ad view cause incremental sales, or improve brand perception? In the absence of complete data at the individual level, marketers will have to run more in-market tests to measure the incremental impact of ads.
It’s a daunting task. Back in 2013, researchers at Google published a depressing paper titled “On the Near Impossibility of Measuring the Returns to Advertising.” They pointed out that there is simply too much noise in the ad environment to make measurement useful: too many factors, like the economy, the weather, consumers’ moods, competitors’ moves, viewability, etc., that obscure the truth.
There’s still a lot of noise, but our methods have improved. And it’s reassuring to see that Google now encourages testing to determine the impact of ads. In a recent blog post, the company said marketers should strive for the “gold standard” of using treatment and control groups: “Experiments … should play an important part of an advertiser’s attribution strategy.”
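The treatment-and-control arithmetic behind such an experiment is simple. A minimal sketch with invented conversion numbers, using a standard two-proportion z-test to check whether the observed lift is distinguishable from noise:

```python
from math import erf, sqrt

def lift_test(conv_t, n_t, conv_c, n_c):
    """Relative incremental lift and two-sided p-value from a
    treatment/control (exposed/holdout) ad experiment."""
    p_t, p_c = conv_t / n_t, conv_c / n_c
    lift = (p_t - p_c) / p_c  # relative lift over the holdout group
    p_pool = (conv_t + conv_c) / (n_t + n_c)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
    z = (p_t - p_c) / se
    # Normal CDF via erf; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return lift, p_value

# Hypothetical campaign: 50k exposed users, 50k held out.
lift, p = lift_test(conv_t=1100, n_t=50_000, conv_c=1000, n_c=50_000)
print(f"lift={lift:.1%}, p={p:.3f}")  # lift=10.0%, p=0.027
```

The same 10% lift on a tenth of the sample would not reach significance – which is the practical force of Google’s point about noise: experiments need scale.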
What this means: In situations where data gaps are significant, tests can add information. And often, they are the only way to make sure the ads really caused the outcomes you’re seeing.
If all this seems like an admission that the future will be more complex and unpredictable than the past, that’s because it will be.
In the words of Taylor Swift: “I’ve been having a hard time adjusting.”