PepsiCo Launched Two Consumer Ecommerce Sites in 30 Days — Here’s What We Can Learn From It

The following article first appeared on the Salesforce Marketing blog in January.

Last May, at the height of the COVID pandemic’s first wave, PepsiCo raised some bubbles by launching not one but two websites where consumers could browse and purchase a selection of the company’s more than 100 widely munched snack and beverage brands. At a time when many outlets were closed and people ordered more food online, the company quickly made it easy for consumers to buy directly.

One ecommerce site, PantryShop.com, took a lifestyle-themed approach, offering bundles of products in categories such as “Rise & Shine” (Tropicana juice, Quaker oatmeal, Life cereal) and “Workout & Recovery” (Gatorade, Muscle Milk, and Propel electrolyte-infused water). A second site, Snacks.com, offered a more straightforward lineup of Frito-Lay brands such as Lay’s, Tostitos, Cheetos, dips, nuts, and crackers.

These platforms complement PepsiCo’s retailer channels to ensure the company continues to deliver consumers their favorite products on the right platform, in the right place, at the right time. 

Whenever, wherever, however has been the mantra of ecommerce digital marketing for a while, but it’s become more important in the age of COVID-19. Most of Salesforce’s customers – particularly those in high-velocity consumer categories such as consumer packaged goods (CPG), retail, and restaurants – have had to become extraordinarily flexible over the past 10 months to adapt to ever-changing local guidelines and consumer behaviors.  

What was most striking about PepsiCo’s foray into direct-to-consumer (D2C) commerce and marketing was its speed. “We went from concept to launch in 30 days,” said Mike Scafidi, PepsiCo’s Global Head of Marketing Technology and AdTech. “Within 30 days, we stood up our ecommerce capabilities and started delivering direct to our consumers.”

PepsiCo’s products are consumed more than a billion times daily in 200 countries. It boasts 23 brands with more than $1 billion in annual sales. How does such a large and complex global company pull off such impressive footwork?

The answer, Scafidi said, was preparation. “Digital is inherently disruptive,” he explained. “We’ve been training for this for a long time. We’ve been preparing to adapt to disruption for 20 years.”

Planning for change is a skill

Scafidi and I spoke from our remote locations about “Building Resilience and Adapting Your Marketing Tech in Uncertain Times” during Ad Age’s CMO Next conference, co-hosted by Ad Age’s Heidi Waldusky. Scafidi stressed that tumultuous times took PepsiCo back to basics — inspiring the company to lean on skills it had been developing for years — especially in consumer research and media measurement.

Part of the reason PepsiCo was able to launch Snacks.com and PantryShop.com so quickly, he said, was “we were leaning on what we were doing already.”

He reminded me of an analyst quote I read recently on embedding resilience into sales and marketing plans: “[T]he more an organization practices resilience, the more resilient it becomes.”

Over the past year, many organizations have had plenty of time to practice being resilient. As stores shut down and millions huddled at home, there was a surge in digital activity across all channels. Media consumption soared: people around the world watched 60% more video, for example. And they shopped. Salesforce’s latest Shopping Index shows that comparable online sales were up 55% in Q3 of last year after climbing 71% in Q2.

We’ve heard from many of our customers that they needed to launch new capabilities faster than ever before. Otherwise, they’d lose business. Curbside pickup, buy online, pickup in store, expanded digital storefronts, appointment scheduling, contact tracing – the list goes on.

Our desire to help customers adapt to rapid digitization inspired us to launch Digital 360, a suite of ecommerce digital marketing products combining marketing, commerce, and personalized experiences under a single umbrella. With it, Salesforce Trailblazers like Spalding and Sonos were able to scale their online commerce dramatically, making up some of the shortfall in brick-and-mortar sales.

When times are changing, it’s too late to build up basic skills. If you have a foundation in place, that allows you to adapt.

MIKE SCAFIDI, PEPSICO GLOBAL HEAD OF MARKETING TECHNOLOGY AND ADTECH

Unilever also faced dramatic market shifts in the recent past. Keith Weed, the company’s chief marketing and communications officer, pointed out back in 2018 that the pace of change “will never be this slow again” – not knowing just how fast that pace would get. And like PepsiCo, Unilever met the hyperfast present by relying even more on its customer research skills.

“We know that people search [online] for problems, not products,” Weed said. So the company created Cleanipedia.com, which offers detailed solutions to cleaning problems in 26 languages. Built before COVID-19, the site was ahead of its time and has attracted 28 million visitors to date.

Building a foundation that scales to meet customers wherever they are

When times are changing, it’s too late to build up basic skills. “If you have a foundation in place, that allows you to adapt,” Scafidi said.

For example, the PepsiCo team was able to rapidly restructure its internal media measurement analyses because it had already put in the work to develop an ROI Engine, which helped determine the real impact of its advertising, promotions, and email. The ROI Engine automates data inputs, processing, and algorithms to improve paid media optimization decisions. Combining the ROI Engine with a customer insight capability called Consumer DNA, “We were able to stabilize our understanding of the consumer and adapt to where they were,” Scafidi explained.

PepsiCo’s Consumer DNA project is an example of a custom-built tool that allows the company to gain a 360-degree view of the consumer to enable more effective and targeted media buying and marketing activation.

At Salesforce, we help our customers engage with their customers. To do this in 2020, we too relied on core skills, built up over years, to adapt to an environment that seemed to change by the second. The result was launches like Digital 360 and Work.com. The latter helps companies safely reopen their workplaces. We also introduced a customer data platform (CDP) called Customer 360 Audiences, which serves as a single source of truth to build a unified profile for marketers and others.

The ancient Greek philosopher Heraclitus said, “the only constant in life is change.” As customers like PepsiCo show us, the best way to adapt is to build core skills that can help you pivot quickly in the future.

Customer Data Platforms: Soon to be a Major Motion Picture?

Cover (not actual size)

Happy to announce that the book I co-wrote with Chris O’Hara on Customer Data Platforms has just been published by Wiley and is now available at fine booksellers everywhere such as this one.

Helpfully, the title is “Customer Data Platforms: Use People Data to Transform the Future of Marketing Engagement”.

Note the key phrase: Customer Data Platforms (CDP). Only the hottest martech category to appear in at least a decade, and we literally wrote the book on it. At least, the first substantial mainstream book on this key topic. We cover the category from data integration and identity management to exploration, activation, and AI.

It’s accessible and a quick read, full of charming illustrations and architecture drawings. If you’re interested in this fast-growing tech category that points the way toward the converged platform future, our book is a great place to start.

Happy launch-day, my friends! Here’s to a more integrated, privacy-friendly and analytical future! The future is bright. peace mk

I Have Seen the Future of Measurement … and It Is Messy

The following column originally appeared in the mighty AdExchanger on 11/10/20

Measurement is a footnote – outglamorized by targeting and the opera of browsers, pushed into the corner during debates about the future of media. But it’s arguably more important than aim and requires more discipline.

A few weeks ago, the World Federation of Advertisers (WFA) released a statement and a technical proposal for a cross-media measurement framework. It was the outcome of a yearlong global peer review and an even longer discussion among participants including Google, Facebook, Twitter, holding companies, industry groups such as the ANA and MRC and large advertisers including Unilever, P&G and PepsiCo.

Reactions ranged from enthusiastic to less so, but few people seem to have read more than the press release. After all, it’s not a product and could be yet another in a parade of grand ambitions in online-offline media measurement, dating back to Facebook’s Atlas.

But it describes a realistic scenario for the future of measurement. Sketchy in spots, the WFA’s proposal is ironically the clearest screed of its kind and is worth a closer look.

To be sure, this is a project focused on solving a particular problem: measuring the reach and frequency of campaigns that run on both linear TV and digital channels, including Facebook, YouTube and CTV. In other words, the kinds of campaigns that cost participating advertisers such as P&G a reported $900 billion a year.

And P&G’s own Marc Pritchard is on record calling the proposal “a positive step in the right direction.”

The need is clear. Advertisers today rely on a patchwork of self-reported results from large publishers, ad server log files, aggregate TV ratings data and their own homegrown models to try to triangulate how many different people saw their ads (reach), how often (frequency) and how well those ads fueled desired outcomes, such as sales lift.

The latter goal is acknowledged in the current proposal, which doesn’t try to solve it. But the WFA, building on previous work from the United Kingdom’s ISBA, Google, the MRC and others, lays out a multi-front assault on reach and frequency that covers a lot of ground.

How does it work?

The proposal combines a user panel with census data provided by participating publishers and broadcasters, as well as a neutral third-party data processor. The technical proposal spends some time talking about various “virtual IDs” and advanced modeling processes that are loosely defined – and the goal of which is to provide a way for platforms that don’t share a common ID to piece together a version of one.

Needless to say, a lot of the virtualizing and modeling and aggregating in the WFA’s workflow exists to secure user-level data. It’s a privacy-protection regime. It also engages with the much-discussed third-party cookieless future.

Panel of truth

The proposal leans heavily on a single-source panel of opted-in users. At one point, it calls this panel the “arbiter of truth,” and it’s clear most of the hard work is done here. Panelists agree to have an (unnamed) measurement provider track their media consumption online and offline. Panels are a workhorse of media measurement as provided by Nielsen and others, but they are expensive to recruit and maintain. It’s not clear who would build or fund this one.

In the past, other panels have struggled to collect certain kinds of cross-device data, particularly from mobile apps. Panels also get less reliable in regions or publishers where they have less coverage, a problem that could be addressed by joining multiple panels together.

In addition to the media consumption, demographic and attitudinal data it provides, the panel is used to “calibrate and adjust” much more detailed census data voluntarily provided by publishers (including broadcasters).

Publisher-provided data

No walls here – at least in theory. Given that Google and Facebook support the WFA’s proposal, it’s implied they’re open to some form of data sharing. It’s already been reported – although is not in the proposal itself – that some participants will only share aggregated data, but it’s better than nothing. The WFA’s idea of “census data” includes publisher log files, TV operator data and information collected from set-top boxes.

This census data is married at the person-level with the panel data using a series of undefined “double-blind joins of census log data with panelist sessions.” Joined together, the different data sets can correct one another: The panel fills gaps where there is no census data, and the more detailed census data can adjust the panel’s output.

Virtual IDs, anyone?

The census data will have to be freely provided, and so wide-ranging participation across many publishers is required for success. Another requirement is a way to tie impressions that occur on different publishers (which don’t share a common ID, remember) to individuals to calculate unduplicated reach and frequency.
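If every impression could be tied back to a person, the arithmetic itself would be trivial. A minimal sketch in Python, with invented person IDs and channels; in reality no shared person-level ID exists across publishers, which is precisely the gap the VIDs are meant to fill:

```python
# Illustrative only: unduplicated reach and frequency from a unified
# impression log. The hard part in practice, resolving the same person
# across publishers, is assumed solved here via a shared person_id.
from collections import Counter

# Hypothetical cross-channel impression log: (person_id, channel)
impressions = [
    ("p1", "linear_tv"), ("p1", "youtube"), ("p2", "linear_tv"),
    ("p3", "ctv"), ("p3", "ctv"), ("p1", "facebook"),
]

per_person = Counter(pid for pid, _ in impressions)
reach = len(per_person)                   # distinct people reached: 3
avg_frequency = len(impressions) / reach  # impressions per person reached: 2.0

print(reach, avg_frequency)
```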

In a general way, the proposal describes a process of assigning a “Virtual ID” (VID) to every impression. This VID may – or may not – denote a unique individual. How is it assigned? Based on a publisher-specific model that is refreshed periodically and provided by the neutral measurement provider. It appears to use cookies (and other data) in its first version, graduating to a cookieless solution based on publisher first-party data in the future.

The output here is a pseudonymized log file with a VID attached to each impression, overlaid with demographic data – at least TV-style age and gender cohorts – extrapolated from the panel.

Doing the math

Next, each individual publisher performs some kind of aggregation into “sketches.” These sketches are likely groups of VIDs that belong to the same demographic or interest segment, by campaign. And it is worth noting here that the “sketches” can’t be reidentified to individuals and are somewhat similar to proposals in Google’s “privacy sandbox.”

At the penultimate step, each individual publisher sends their “sketches” to an unnamed independent service that will “combine and deduplicate VIDs” to provide an estimate of reach and frequency across the campaign. The WFA has a proposal for this Private Reach & Frequency Estimator posted on GitHub.
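The estimator specified on GitHub is considerably more sophisticated, but the principle of combining privacy-preserving sketches can be illustrated with a simple linear-counting sketch. Everything below (sketch size, hash choice, the VID strings) is invented for illustration and is not the WFA’s actual algorithm:

```python
# Toy demonstration of sketch-based deduplication: each publisher hashes
# its VIDs into a fixed-size bit array, sketches are combined with a
# bitwise OR, and distinct count is estimated from the fill rate
# (classic "linear counting"). No raw VIDs need to be exchanged.
import hashlib
import math

M = 4096  # number of buckets in each sketch

def sketch(vids):
    """Hash each VID into one bucket of a fixed-size bit array."""
    bits = [0] * M
    for v in vids:
        h = int(hashlib.sha256(v.encode()).hexdigest(), 16) % M
        bits[h] = 1
    return bits

def union(a, b):
    """Combine two sketches without seeing the underlying IDs."""
    return [x | y for x, y in zip(a, b)]

def estimate(bits):
    """Linear-counting estimate of distinct VIDs from the fill rate."""
    zeros = bits.count(0)
    return M * math.log(M / zeros)

pub_a = sketch(f"vid{i}" for i in range(1000))       # 1,000 people
pub_b = sketch(f"vid{i}" for i in range(500, 1500))  # 1,000 people, 500 overlap
print(round(estimate(union(pub_a, pub_b))))          # close to the true 1,500
```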

A GitHub explainer mentioning data structures and count vector algorithms is ad tech’s new sign of sincerity.

Finally, outputs are provided via APIs and dashboards, which support both reporting and media planning. End to end, it’s an ambitious proposal that has many of the right players and pieces to work. Its next steps are validation and feasibility testing led by the ISBA in the United Kingdom and the ANA in the United States.

Whatever happens, we’ve learned something from the WFA’s proposal. Even in a best-case scenario, accurate global campaign measurement will require heroic levels of cooperation.

Preparing for the Post-Literate Consumer

This article originally appeared in The Drum on 11/05/20.

Let’s imagine you were unlucky enough to be a visitor to our planet today, and you wanted to learn something about us from our media consumption. What would you see?

Well, you’d be overcome by images. TikTok has more than 500 million active users, and its 15-second maximum clips are watched 17 billion times a month. (Note: there are only 7.5 billion people on Earth.) Snap has 300 million users, and it’s growing faster than Facebook and Twitter, which seem wordy by comparison.

Photo- and video-heavy Instagram is favored by Gen Z over Facebook, and its content is more likely to be shared. Meanwhile, the second most-visited website in the world – with over two billion monthly users, second only to its parent Google – is YouTube.

Our TV screens are growing larger than our walls, surging eight inches on average in the last five years. And the most prevalent cultural phenomenon other than TikTok dance challenges is probably binge-watched streaming services, which spend more than $35 billion a year on new video content.

Blogging is dead and other favorite headlines

You’d be forgiven for believing that we’ve forgotten how to read. Judging by our popular culture, we’re becoming a post-literate, oral society, one whose always-dominant visual sense has overwhelmed our reasoning to the point where 72% of consumers now say they prefer all marketing to be delivered via video.

We don’t notice this trend because we’re part of it, but historians do. An iconic 1958 VW ad cited for its visual austerity contained just 165 words, including ‘Lemon’. Last year, VW ran a magazine ad – remember magazines? – that had all of 13 words, including ‘Volkswagen, it’s plugged in.’ Newspapers have seen subscriptions decline 70% since 2000, echoed by the 20 million hits returned by the search “is blogging dead?”

And when asked what drives results, a recent survey of successful blog writers came up with a rather poignant answer: video!

You can’t even assume people can hear you anymore. One survey found that 92% of consumers watch videos with the sound turned off. We’ve segued into a world of pure imagery, reality with a single key.

Of course, it’s understandable. Sight is our dominant sense and is more primitive than the others. Almost 90% of the information coming into our brain is visual, and about 40% of the brain’s nerve fibers are linked to the retina. Images are processed much faster than text, which is a learned input that requires years of practice.

Words still exist. This sentence is proof. In fact, it may seem odd to be making such a statement in words, to a literate audience. But mass culture increasingly treats words as a kind of visual fillip, a graphic element used for iconic rather than informational content. Increasingly, American consumers are like English-only speakers who visit Tokyo, struck by the occasional familiar word among the kanji script.

Marketers: visualize this

How can a marketer adapt to the rise of the post-literate consumer?

First, make sure your brand has a strong visual identity – stronger than you think it needs. A recognizable logo and color palette aren’t enough. Assume consumers will not look at your logo – after all, they’re certainly multitasking. Your brand identity must be so strong it can be communicated simply through a consistent, insistent drumbeat of the same colors, fonts, shapes and styles.

Some of the most persuasive work on the impact of advertising, courtesy of Karen Nelson-Field and The Attention Economy, shows that most of its power comes at the margins: from passive, almost subliminal consumption, working on our neural pathways visually when we’re not quite aware that it’s there.

Byron Sharp and the Ehrenberg Bass Institute influentially made a similar point in How Brands Grow. Sharp stressed the importance of “mental availability,” which is not awareness (as he often reminds his Twitter followers), but rather how familiar a brand’s (primarily visual) sensory associations are to consumers.

So be visually consistent, like your brand depends on it.

Second, simplify and streamline your cues. This is another trend that’s obvious to those who’ve looked for it. Logos, fonts, web pages, graphic design – all are retreating from clutter and complexity. It’s almost as though a decline in literacy has extended to the visual realm, or maybe we’re all just overwhelmed and told our visuals that – in the words of Taylor Swift – they need to calm down.

Cut the text. Lengthen the tweet?

Plenty of research confirms that most people prefer simplified designs that are neither complex nor particularly original. Simplification includes increasing the amount of white space, beautifying images – and cutting text.

The rush to simplification is engulfing logos. For years now, iconic brands such as Apple, Mastercard and Starbucks have extracted words from their logos. Coldwell Banker – echoing Hewlett Packard – recently reduced its letter count to two: CB. And Tinder pulled a Nike not long ago, turning its logo into a wordless flame.

Third – and most importantly – know when to ignore this advice. We have been talking here about mass consumer audiences. If you’re selling advanced hydroelectric plants, different principles apply. And remember that for every trend there is a counter-trend.

Some years ago, I worked at an ad agency doing social analytics for a luxury car brand. Examining the Twitter conversation about the brand, I noticed an odd phenomenon: it was bimodal. By that I mean about 80% of the comments were inane, silly, and crass – what you’d expect. But 20% were very different: intelligent, thoughtful, almost nerdy. I concluded there were two different Twitters out there.

If you’re appealing to the second Twitter, the realm of academics and the informed, don’t sound dumb. You may need to increase rather than decrease your word count.

As I suspect you already know, since you’ve made it this far into a 1,000-word essay on marketing, there’s still a lot of life left in words.

Martin Kihn is senior vice-president strategy at Salesforce.

The Case for Advertising

By Martin Kihn @martykihn

There’s a tart scene in Mad Men when Don Draper eyes the iconic VW Beetle ad with the one-word caption: “Lemon.” After a thoughtful pull on his Lucky Strike (“It’s Toasted!”), he says: “I don’t know what I hate about it the most — the ad or the car.”

These days, too many people seem to agree: they don’t like ads. They’re interruptive and meddlesome. Besieged on one side by blockers and browsers, on another by unfriendly memes, advertising seems to be having a moment.

A few years ago, NYU gadfly Scott Galloway went so far as to declare “The End of the Advertising-Industrial Complex,” scripting a future of subscription-only media and brands propelled by word of mouth alone. In a dark analogy, he called modern digital marketing a world “full of anxiety, humidity and …” so on.

Ouch. He reminds me of the observation ascribed to John O’Toole, late chair of Foote, Cone & Belding, that most critics don’t like advertising just “because it isn’t something else.”

It wasn’t always thus. The first major history of the ad business concluded advertising was “a civilizing influence” and “an educative force.” That was in 1929. Times have changed, although as recently as the 1970s a voice as acrid as media theorist Marshall McLuhan’s said, “Advertising is the greatest art form of the 20th century.”

Maybe so, but an increasing number of people seem to be asking: Do we even need it anymore?

What Does Advertising Do, Anyway?

The best and most obvious case for advertising is that it creates (or encourages) demand. As the economist John Kenneth Galbraith famously said in The Affluent Society (1958), advanced societies can’t prosper unless they convince people they need more than they actually do. In addition to information, advertising is supposed to supply desire.

A lot of academic angst has been directed at this point, but let’s avoid that for now and assume ads are supposed to bring in more at the cash register than they cost. Certainly, a lot of – let’s also assume – intelligent people believe this to be true. Even during this very difficult year, some $215 billion will be spent on ads in the U.S.

But do they work? It’s conceivable advertisers are in the pincers-grip of wishful thinking and fuzzy math. Having spent some years in the ad measurement business, I can tell you that measuring return on ad spend (ROAS) is not as easy as it looks, and it doesn’t look easy.

Oddly, digital channels make measurement harder – not simpler. When a majority of ad spend flows to platforms that do their own measurement and don’t release raw data, in the name of privacy, complex equations are required. The good news is that analysts who have worked these equations, over time and on a large scale, generally agree that advertising works. That is: brands that advertise more can be shown to have higher sales, on average, and the impact of ads is incremental.
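At the core of those equations is incrementality: comparing people exposed to the ads against a randomized holdout who never saw them. A toy illustration, with entirely hypothetical numbers:

```python
# Illustrative incrementality calculation: lift is the relative increase
# in conversion rate among ad-exposed users versus a randomized holdout.
def lift(exposed_conv, exposed_n, control_conv, control_n):
    exposed_rate = exposed_conv / exposed_n
    control_rate = control_conv / control_n
    return (exposed_rate - control_rate) / control_rate

# 2.4% conversion among exposed users vs. 2.0% in the holdout group
print(lift(exposed_conv=2400, exposed_n=100_000,
           control_conv=1000, control_n=50_000))  # ≈ 0.2, a ~20% incremental lift
```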

What about Tesla? Sooner or later, the anti-ad crowd always mentions this brand. After all, four of the five biggest car makers – all major ad spenders – are losing share. Tesla is growing, despite spending almost nothing on ads.

My response to this is simple: come up with a revolutionary product that disrupts a 100-year-old industry, add a charismatic chairperson, and you too will enjoy so much free publicity that it would be redundant to advertise.

What Else Have You Done for Me, Lately?

Demand is its day job, but advertising has more mojo. There is the industry itself with its quarter-million hard-working employees. Many of these people have impressive (subsidized) side-hustles in the arts and sciences. And there is the indirect impact of additional demand, as companies sell more and then hire, buy equipment, take out leases, and so on.

To add all these direct and indirect effects up is a complicated exercise that has been attempted over the years. Each time, conclusions were impressive. A 1999 study overseen by a Nobel Laureate found advertising accounted for about 2.5% of U.S. economic output.

A decade and a half later, IHS Global Insight concluded that every dollar spent on advertising generates $22 in economic output (sales), and every $1 million spent supports 81 American jobs. An update done in 2015 found that advertising impacted up to 19% of U.S. GDP.

There are a lot of caveats, of course. IHS is a respected research firm, but the reports were sponsored by ad industry groups. The reports’ definition of advertising was very broad, e.g., including direct mail. But even if the numbers are discounted, they still point to a major impact on the nation’s economic life.

And we haven’t even mentioned that ads support content. It appears that advertisers spend about $35 per month to reach each U.S. adult online, which should make us feel pretty good. That money pays for news, entertainment and utilities that in an ad-free world either would not exist or would cost us something. Newspapers have lost 70% of their ad revenue since 2006, with a similar decline in reporters and content. Oft-mentioned counter-examples such as the New York Times’ rising subscription revenue are really just poignant outliers.

On the other hand, somebody is making money on ads. Two of the five most valuable companies on Earth are almost entirely ad-supported. Content on Google (including YouTube) and Facebook (including Instagram) is free, and the prolific ad experience they offer clearly isn’t turning people away. So the “advertising-industrial complex” may not have ended but just changed its owners.

Let’s admit that ads aren’t going away. The business needs to adapt its data and approach to a new reality. And many of the industry’s problems — from oversaturation to intrusive targeting — are actually self-inflicted.

Don Draper gives us some solid advice: “You want respect? Go out and get it for yourself.”

Send Your Emails at the Right Time: 3 Ways Data Can Increase Your Marketing ROI

If you want to see the dramatic impact of COVID-19 on a world-class marketing department, look at Unilever. The consumer products powerhouse met a challenging global environment in Q2 and is making rapid changes in tactics, reviewing all marketing spend “to ensure it’s effective and appropriate,” according to CFO Graeme Pitkethly.

Meeting chaos with agility, Pitkethly points out that the company is “dynamically reallocating” budgets as consumer behavior shifts, moving resources out of outdoor ads (no traffic), and TV production (not safe) into areas with higher immediate return-on-investment (ROI) – such as skin care, home, and hygiene.

Unilever’s quick shifts and shimmies are mirrored by our other customers, many of whom are charged with increasing marketing ROI with fewer resources. The World Bank predicts a baseline 5.2% contraction in global GDP in 2020. Gartner’s CMO spending survey released in July showed 44% of CMOs expect midyear budget cuts.

What’s the best way to improve marketing ROI in today’s challenging landscape? By increasing one of these three things: 

  1. Effectiveness – get more revenue from the same investment
  2. Efficiency – get the same (or more) revenue from a lower investment
  3. Optimization – a combination of these through better resource allocation
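In plain arithmetic, with invented revenue and cost figures, the first two levers look like this:

```python
# Toy numbers only: ROI as (revenue - cost) / cost, showing how the
# effectiveness and efficiency levers each move the same metric.
def roi(revenue, cost):
    return (revenue - cost) / cost

baseline = roi(revenue=150_000, cost=100_000)       # 0.5: a 50% return
effectiveness = roi(revenue=180_000, cost=100_000)  # more revenue, same spend: 0.8
efficiency = roi(revenue=150_000, cost=80_000)      # same revenue, less spend: 0.875

print(baseline, effectiveness, efficiency)
```

Optimization is simply pulling both levers at once by reallocating resources.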

And how do you know where to start? Data.

Keeping this framework in mind, and highlighting examples from our customers, here are three ways you can use data to increase your marketing ROI.

1. Dial up your digital (Effectiveness)

At a time when most of us spend more time than we’d like staring at screens, digital channels are the best way to reach us. Many indicators from time spent on mobile to time wasted – um, spent – playing networked video games prove that a lot of our lives are now online. And from a commerce perspective, McKinsey said consumers “vaulted five years in the adoption of digital in just eight weeks.”

Digitizing just as fast, marketers are ramping up their technology investments to manage customer data and use it effectively. And if you’re not, you should be. In addition to providing greater control over channels, data-driven investments include analytics to improve customer segmentation, message personalization, and targeting methods such as lookalike modeling. Despite an overall decline in enterprise tech spend, Forrester forecasts a rise in marketing technology investment. “In some cases,” says VP Principal Analyst Shar VanBoskirk, “technology may offer greater efficiency than relying on manual effort.”

Case in point: Orvis, an outdoor clothing retailer, saw pandemic-related store closures lead to tighter budgets and a mandate to improve engagement. They used Einstein Content Selection to automatically choose when to send standard messages and when to deliver content from its values-focused “Giving Back” campaign. They also took advantage of Einstein Send Time Optimization, which uses artificial intelligence to predict delivery times that align with when each recipient is more likely to open an email. Both approaches together led to a 22% higher email click-through rate.

2. Send fewer messages (Efficiency and Effectiveness)

Yes, that’s right: less can be more in digital marketing. All of us know the experience of being hounded by a brand to the point that we stop engaging, progressing from tuning out to turning off (hello, “unsubscribe”). During a time of message saturation, keeping your communications clear, on point, and not too frequent can cut costs (efficiency) and raise responses (effectiveness).

Last year, more than 40% of U.S. consumers said they were “overwhelmed” or “annoyed” by the volume of marketing content they experienced daily, according to Gartner. That reaction has only sharpened this year. In fact, Gartner calls this message-stress syndrome “COVID Fatigue.”

Treating the syndrome takes a good source of customer data and links among call centers, ecommerce, marketing, and other systems. With the right pipes in place, you can execute ROI-boosting tactics like:

  • Suppress social ads to people who have an open case
  • Merge customer records
  • Send fewer messages to people who are overwhelmed

Cable network Showtime did just that, starting before the pandemic. It used Salesforce Audience Studio to suppress spending on existing subscribers, then shifted budget toward winning back those who had recently canceled. This more efficient approach to their marketing helped them reach 25 million people.

3. Speed up planning cycles (Optimization)

During normal years – that is, not now – it’s common for marketers to do quarterly or even annual media budgeting. But in today’s environment, that pace won’t work.

Measurement, reallocation, testing, and optimization – these should be ongoing disciplines, not intermittent ones. Continuous monitoring allows you to move spend to higher-performing channels, cut short losses, and make the most of the resources you have. It also allows you to respond to market shifts, such as changing your tactics in areas affected by natural disasters or virus outbreaks.
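One simple (and simplistic) version of a continuous reallocation step is to weight next period’s budget by each channel’s observed return on ad spend. The channel names and figures below are invented; real optimization would account for diminishing returns:

```python
# Toy budget-reallocation step: split the next cycle's budget in
# proportion to each channel's observed ROAS.
def reallocate(budget, roas):
    total = sum(roas.values())
    return {channel: budget * r / total for channel, r in roas.items()}

observed_roas = {"search": 4.0, "social": 2.5, "ctv": 1.5}
print(reallocate(100_000, observed_roas))
# search gets 50%, social 31.25%, ctv 18.75% of the next cycle's 100k
```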

Part of reimagining marketing in the “next normal,” according to McKinsey, is always-on customer data analytics: “Analytics will need to play a core role not only in tracking consumer preferences and behaviors at increasingly granular levels, but also in enabling rapid response to opportunities or threats.”

Part of this acceleration requires better data management, moving from manual reports to an automated real-time system. At Salesforce, we managed to combine marketing data from 83 sources and 182 streams using Datorama. This reduced wait time on data from two to three weeks to near real time. Most important, marketing ROI grew 28%.

Even as businesses face tighter budgets and a lower tolerance for risk, there’s still a world of opportunity for marketers to increase their ROI. You just need the right data to help you chart your course.

Nielsen’s Pivot Reveals the Future of Measurement

The following column originally appeared in The Drum on 8/12/20

Around the time Taylor Swift dropped her Folklore album – although admittedly to less attention – Nielsen announced an equally pivotal ‘overhaul’ of its cross-channel measurement approach. Positioned as a way to power ‘flexibility’ in a market overwhelmed by indecision, the announcement was a dramatic preview of the future of media measurement.

Despite well-known struggles keeping up with consumers’ hyperactivity, Nielsen remains the gold standard of media ratings. Some $70bn in US media are bought and sold based on its ‘currency’. Its various ‘Total Audience’ products, introduced five years ago, layer in viewing on mobile, streaming and on-demand platforms using a combination of panels and direct data collection.

Ratings measure reach, of course, and they started with TV viewership diaries and then automated in-home ‘People Meters.’ Capturing online consumption was handled at first in a similar way: 200,000 people installed a ‘PC Meter’ on their computers, which tracked video viewing on websites. But digital media presented frustrating technical issues. For example, in 2010 Nielsen admitted it was undercounting time spent online by as much as 22% because of overly long URLs.

Early and often, TV studios and ad agencies questioned the raters’ ability to capture the full range of modern media consumption: on smartphones, tablets, connected TV (CTV), out-of-home. NBCUniversal’s outspoken head of ad sales, Linda Yaccarino, famously compared the situation to a frustrating football game: “Imagine you’re a quarterback, and every time you threw a touchdown, it was only worth four points instead of six.”

Don’t blame the players – blame the referee. Some of the industry pushback is a case of shooting the messenger. After all, it is happening at a time when linear TV viewership is in free fall (down some 20% in five years). When she was president of Nielsen’s Watch (ratings) division, Megan Clarken observed wryly: “Like any referee we’re not always going to be loved.”

Still, every marketer is in a sense in Nielsen’s (and Comscore’s) position: having to report some kind of measurement of reach and response across an array of channels. We’ve all had to adapt to abrupt changes in consumer behavior, data availability, and tools over the years. We can learn from Nielsen as it points us to what to do next.

So where are they pointing us? In three directions, I think.

Reliance on partners

Nielsen will be relying on various proprietary ecosystems, such as social networks, to provide data about consumption that would otherwise be opaque. As the company’s Chief Data and Research Officer Mainak Mazumdar admitted in an interview, “We will work with multiple parties in a significant way, which we did not in the past.”

Whatever the outcome of the current Congressional probes, few industry observers believe the open web is poised to grow. Growth is in the walled gardens, which already hold a dominant share of digital ad spending. In the US, about two-thirds of such ad spending goes to Google, Facebook and Amazon, according to eMarketer.

None of these ecosystems allows marketers to see user-level data, such as impression log files with IDs. Without that level of detail, marketers can’t build real multi-touch attribution (MTA) models. They can’t independently measure unduplicated reach and frequency. They’re reliant on aggregate reporting provided by the platforms themselves, or on managed tools such as Google’s Ads Data Hub.

What this means: Make a map of all the proprietary ecosystems, including big publishers, that see your audience. Develop a detailed understanding of the data provided by each one. If you have scale, lobby for more access, or ask your agency to do it.

Built-in adaptability

Admitting nobody really knows what’s going to happen, COO Karthik Rao revealed a major goal of his team was to build “a flexible platform that we can adapt to new technology, data and regulatory changes.”

Everybody embraces adaptability in principle; in practice, it’s not so easy. Adaptability means that the method used must be able to accept data at different levels of detail – from national-level campaign data down to user-level impressions and clicks – depending on what’s available. This availability in turn depends on media partner policies and privacy regulations, by region.

So, we must admit that a single MTA vendor “silver bullet” – so hyped a decade ago – won’t work. Whether we want to or not, we will all need to use more sophisticated econometric and media mix models (MMM), in-house or through an agency. There are too many unpredictable variables for simple models to succeed.

Nancy Smith, president of Analytic Partners, pointed out recently in The Drum that the future of measurement falls more heavily on MMM than MTA. “In my own review of activities with marketers,” she wrote, “I’ve seen about 80% of the impact coming from MMM and only 20% coming from MTA.” Instead of a standardized approach, therefore, she advocates “user-level analyses within the channels that matter” combined with a “holistic measurement framework” that unites these channel-specific measures.

What this means: Develop a distinct approach to measuring individual channels, including big publishers. Incorporate these into a larger measurement framework based on econometric principles.
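To make the econometric framework concrete, here is a deliberately tiny media mix model: an ordinary least squares regression of weekly sales on spend in two channels, solved exactly via the normal equations. Everything below – the channel names, the spend figures, the response rates – is hypothetical, a sketch rather than a production MMM (real ones also handle adstock, saturation, seasonality and much more):

```python
# Toy media mix model: OLS fit of sales ~ base + beta_a*spend_a + beta_b*spend_b.
# All data are hypothetical, for illustration only.

def solve3(A, b):
    """Solve a 3x3 linear system with Gaussian elimination (partial pivoting)."""
    n = 3
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_mmm(spend_a, spend_b, sales):
    """Ordinary least squares via the normal equations (X'X)beta = X'y."""
    rows = [[1.0, a, b] for a, b in zip(spend_a, spend_b)]
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    Xty = [sum(r[i] * y for r, y in zip(rows, sales)) for i in range(3)]
    return solve3(XtX, Xty)  # [base, beta_a, beta_b]

# Hypothetical weekly data: sales respond ~2x to search spend, ~0.5x to TV
search = [10, 20, 30, 40, 50]
tv = [30, 10, 50, 20, 40]
sales = [55, 65, 105, 110, 140]

base, beta_search, beta_tv = fit_mmm(search, tv, sales)
# With this noise-free data, OLS recovers base=20, beta_search=2, beta_tv=0.5 –
# i.e., each incremental search dollar returns roughly 4x what a TV dollar does.
```

A coefficient comparison like this is exactly the kind of channel-level evidence that feeds the larger measurement framework: it tells you where the next dollar of budget should go.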

Causal testing

Ratings are important, but the goal of measurement is to determine impact: did the campaign or ad view cause incremental sales, or improve brand perception? In the absence of complete data at the individual level, marketers will have to execute more in-market tests to measure the incremental impact of ads.

It’s a daunting task. Back in 2013, researchers at Google published a depressing paper titled “On the Near Impossibility of Measuring the Returns to Advertising.” They pointed out that there is simply too much noise in the ad environment to make measurement useful: too many factors, like the economy, the weather, consumers’ moods, competitors’ moves, viewability, etc., that obscure the truth.

There’s still a lot of noise, but our methods have improved. And it’s reassuring to see that Google now encourages testing to determine the impact of ads. In a recent blog post, the company said marketers should strive for the “gold standard” of using treatment and control groups: “Experiments … should play an important part of an advertiser’s attribution strategy.”

What this means: In situations where data gaps are significant, tests can add information. And often, they are the only way to make sure the ads really caused the outcomes you’re seeing.
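The treatment/control arithmetic behind such tests is simple, which is part of their appeal. A minimal sketch, with hypothetical numbers (a real test would also check statistical significance before trusting the lift):

```python
# Minimal incrementality test: expose a random treatment group to the campaign,
# hold out a control group, and compare conversion rates. Numbers hypothetical.

def incremental_lift(treat_conv, treat_n, ctrl_conv, ctrl_n):
    """Return (treatment rate, control rate, relative lift attributable to ads)."""
    tr = treat_conv / treat_n
    cr = ctrl_conv / ctrl_n
    lift = (tr - cr) / cr  # relative incremental lift vs. the holdout
    return tr, cr, lift

# Hypothetical campaign: 50,000 users exposed, 50,000 held out
tr, cr, lift = incremental_lift(treat_conv=1200, treat_n=50_000,
                                ctrl_conv=1000, ctrl_n=50_000)
# tr = 2.4%, cr = 2.0% -> 20% incremental lift caused by the ads
```

The holdout group is what turns a correlation ("exposed people converted more") into a causal claim ("the ads made them convert more") – which is precisely the gap click-based attribution can't close.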

If all this seems like an admission that the future will be more complex and unpredictable than the past, that’s because it will be.

In the words of Taylor Swift: “I’ve been having a hard time adjusting.”

3 Principles to Help You Rethink Your Approach to Customer Data

The following article appeared originally on the Salesforce.com blog on 8/6 as part of the “Moment Makers” series, supporting the upcoming launch of my & Chris O’Hara’s book “Customer Driven” (about CDPs), coming later this year.

Customers behave differently during crises, and our data strategies have to adapt. During the dark days of the financial crisis of 2008-09, when I led an analytics team at a marketing agency, I met with the CMO of a U.S.-based credit card company. She had noticed a disturbing pattern in her customer data: early-stage companies were charging more on their cards but missing more payments. Heavy late fees made their burdens worse, at the worst possible time.

Looking at the drivers, we saw the problem came down to cash flow. It was temporary and often outside the customers’ control. So the CMO made a bold choice, offering a new “start-up card” with less stringent payment terms. The company lost little on the venture and gained a cohort of customers who proved more loyal than its average cardholder.

Today, as businesses around the globe decide how and when to reopen, it’s a great time to revisit our assumptions about our customers. Like the credit card company’s customers, yours have probably changed due to the pandemic.

Based on experience and what Salesforce has learned from our own customers, we recommend three basic principles to guide data-driven marketers in reassessing their approach to customer data: communicate clearly, revisit old assumptions, and practice long-term thinking.

1. Communicate clearly

Whatever your industry, your customers likely start with a level of distrust. For example, a Pew Research Center survey of U.S. adults revealed that 81% think they have little control over companies’ collection of their data, and an equal proportion think the risks outweigh the benefits. An alarmingly small 6% said they had a “good understanding” of how companies are using their data.

Personalized communication is less about you as a company and more about your customers – their wants, needs, and attitudes in the moment. Uncovering customer needs from data is even more important during times of rapid change, and it’s a key differentiator in customer connections. Salesforce research found that 71% of consumers say companies who show sensitivity to the current climate are more likely to earn their loyalty.

Customers also told us they are willing to share information with companies that inspire trust by clearly communicating how, when, and why they’ll use the data. In fact, 58% of respondents to our State of the Connected Customer report said they agreed with the statement “I’m comfortable with relevant personal information being used in a transparent and beneficial manner,” while only 17% disagreed.

Key takeaway: Rather than focusing only on data pipelines, customer data teams should also double down on communication, telling customers exactly how and why they’re collecting and using their data, and what they’ll get in return. Customers reward that kind of transparency with trust.

2. Revisit old assumptions

During the early months of the COVID-19 crisis, brands scrambled to adapt their messaging – for example, by removing things like high-fives and finger-licking that no longer fit into a world of social distancing. After that, many rose to the moment with campaigns that were more down-to-earth. For example, Toyota pivoted from a sales-promotion campaign to a series of videos emphasizing home and family with the tagline, “We Are Here For You.” Revamped campaigns better reflected customers’ changing needs: one survey revealed people wanted brand messages that felt “safe” and “hopeful.”

Recognizing no two crises are exactly alike, researchers at Harvard studied how customer attitudes and behaviors change during downturns. They saw broad changes both in behavior and in how categories of products and services were perceived. Some customers remained optimistic, while others (often, but not always, younger people) threw caution to the wind and “lived for today.” Meanwhile, some “nonessential” products such as makeup and skin care were seen as even more important by many sheltering in place.

Recent Salesforce research shows how fast consumer sentiments change. Between May 1 and July 1, customer optimism rose six percentage points, confidence rose five points, and trust rose six points. More than ever, marketers need to examine customer data and adapt rapidly to shifts.

Key takeaway: Your customers are going through a lot of changes, and it makes sense that what worked pre-pandemic may not work so well now. Previous analyses into segments, attitudes, and positioning will have to change. Revisit them and develop tools such as surveys and tests to fill data gaps.

3. Practice long-term thinking

There’s ample evidence that investing in fundamentals such as R&D, workforce training, and brand advertising during tough times yields long-term benefits. The same goes for customer relationships. If you can afford it, reorienting around longer-term key performance indicators (KPIs) and more holistic measures of customer health will pay dividends down the road.

Most B2C and B2B companies are now dealing with price sensitivity. Discounts and other concessions are common, and annual targets are being revised wholesale, as relatively few companies can maintain their pre-pandemic momentum.

Sticking with a topline-focused KPI such as net sales or profits may not work anymore. However, short-term pain can yield long-term gain. Recall the components of customer lifetime value (CLTV): revenue x frequency x retention %. If you’re willing to be patient, shifting focus to retention and loyalty through superior empathy and service will more than offset a (temporary) dip in revenue.
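A toy illustration of the trade-off the CLTV components imply – accepting a temporary revenue dip in exchange for higher retention. The numbers are hypothetical, and the model is deliberately simplified (no discounting, fixed horizon):

```python
# Toy CLTV comparison: revenue per period, decayed by a retention rate.
# All figures are hypothetical, for illustration only.

def cltv(revenue_per_period, retention, periods):
    """Sum expected revenue over a horizon, decayed each period by retention."""
    return sum(revenue_per_period * retention**t for t in range(periods))

# Hold the line on price: $100/period, but only 60% retention
business_as_usual = cltv(100, 0.60, periods=5)   # ~ $230.56
# Concede 10% of revenue, but empathy and service lift retention to 80%
patient_strategy = cltv(90, 0.80, periods=5)     # ~ $302.54
```

Even with 10% less revenue per period, the higher-retention cohort is worth about 30% more over five periods – the arithmetic behind "short-term pain, long-term gain."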

Key takeaway: As we gear up and return to work, we owe it to our stakeholders and customers to ensure we’re infusing clarity into our data collection and communication, we’re revisiting our pre-pandemic data assumptions, and we’re focused on making recovery work for the long run.

What To Do About the ‘Privacy Paradox’ Now that CCPA Has Launched

The following is my debut column for The Drum (US), newly edited by my old Dentsu Aegis Network colleague Ken Hein in the U.S. On the occasion of the enforcement of CCPA and Apple’s IDFA bombshell (see below), I thought I’d dwell a bit on the strange way we humans try to make decisions about online privacy.

It’s already been a long, hot summer for advertisers. In addition to everything else, the CCPA was unleashed on a bewildered public. Plus, Apple announced a major change to its Identifier for Advertisers (IDFA), forcing users to opt in to ad tracking in apps starting this fall and raising the specter of a flurry of GDPR-like consent screens tripping gamers on their way into Animal Crossing.

At the heart of this orgy of opt-ins lies a dark secret: people are not good at making decisions about privacy trade-offs. We just aren’t. Why? Because of a strange phenomenon called the ‘privacy paradox’.

What exactly is the privacy paradox?

Here’s the pickle: when we’re asked if we value our personal data, almost all of us say ‘yes’. Yet our behaviors show otherwise. For example, in 2018, Facebook confronted a flood of bad PR after a public data scandal, yet its revenue grew 30%. We regularly surrender intimate information to platforms such as Google (searches), Facebook (party photos), Amazon (purchases), Stitch Fix (waist sizes) and so on, all without a squeak.

There is ample evidence that we appreciate relevant content: Amazon and Netflix both built a business on trenchant recommendations. After GDPR appeared in Europe, the cost of advertising to consumers who had opted in to targeting actually rose. And anecdotal evidence suggests that prices for comparable ads are about 50-60% lower on Apple’s Safari (which blocks most user-level targeting) than on Google’s Chrome browser (which does not, for now).

What’s a marketer to do? It turns out, there are a lot of theories about the privacy paradox – including one that it doesn’t exist. A detailed overview of the academic literature found 35 explanations packed into 32 studies. These and other briefs can help to point the way.

How do you solve the paradox?

Imagine the following: you arrive on a website or download an app and a pop-up appears saying something like, ‘We’d like to track you so we can make your experience better – yes or no?’ In that moment, you haven’t experienced anything; you just got there. You can’t value a ‘good experience’ because you haven’t had any experience yet.

So the trouble with ‘rational choice theory’, as it’s called, is that we’re usually forced to make decisions without enough information. Our ability to do continual ‘privacy calculus’ is constrained. Common biases that plague privacy decisions include time constraints, lack of information or interest, immediate gratification and a tendency to think we’re ‘giving up’ more data than we actually are.

Four tactics to try

Marketers and advertisers are going to have to master the art of gaining consumer trust. How? Some general guidelines from the research include:

1. Don’t talk about people behind their backs

It turns out that we don’t like this behavior online any more than we do at work or school. Our attitudes toward information sharing depend both on the type of information and the way it’s shared – what social scientists call ‘information flows’.

One study found we are much more comfortable with open, direct so-called ‘first-person sharing’ than we are with covert ‘third-party sharing’. The latter, when disclosed, actually drove down purchase interest by 24%. Conversely, using ‘overt data collection’ can restore interest and rebuild trust.

Bottom line: tell people directly how you are gathering their data.

2. Give a sense of control

Like Janet Jackson, we really want ‘control’. An alarming 81% of respondents to a Pew Research survey confessed they felt they had almost ‘no control’ over companies collecting their data. This feeling is rife in the US. When consumers in the US and the EU were asked if they would opt out of data collection in future, US consumers were 1.5 times more likely to say ‘yes’.

Why? One likely explanation is that, for all its fits and starts, GDPR provides a sense of control. In America, our hodgepodge of legislation and tools does not. People have been shown to share data much more willingly when they believe they can control what they share, even if that control is an illusion.

Bottom line: make customers believe they control the data.

3. Explain the benefits in concrete, positive terms

It’s up to the marketer to describe the privacy value exchange as concretely and positively as they can. The insight here is that concrete benefits might often dominate abstract risks – and that privacy ’threats’ are usually abstract. But stay positive and benefit-focused, since there’s evidence that mentioning risks makes people nervous.

The idea is to give the consumer a sense of the awesomeness of your personalized experience, either in words or pictures. In one study, for example, an ad for a rental company that used a person’s physical location performed better when it explained that the location data was used specifically to surface services not available elsewhere.

Bottom line: paint a happy picture of tangible benefits for sharing data.

4. Remember, people are different

It is often assumed that attitudes to online privacy and ad targeting are demographically determined. Millennials and Gen Z are the cultural paranoids, while Boomers and Gen X are more relaxed. It turns out these attitudes are more a function of our personalities than our demos: they’re a state of mind.

A few years ago, the Advertising Research Foundation released a report on ‘ad receptivity’ that concluded the anti-ad crowd were more likely to be ‘suspicious’ and ‘headstrong’. And a different study, published earlier this year, identified about one-third of the online population as ‘privacy actives’ – more informed and aware. Rather than retreating from data sharing, these ‘actives’ were twice as likely to share their purchase history in exchange for better recommendations.

So, the privacy conversation will be different with different groups, and these groups are likely not segmented by age, gender or income. The ‘actives’ just need more information, and the more the better. The ‘rejecters’ need their suspicions allayed. It’s up to the marketer to figure out which psychographic segment each consumer inhabits.

Bottom line: throw your customer insights and data science teams at the problem.

And remember, you can always try something new. Ask people to share data after you’ve given them something of value. Be explicit. Ask them how they feel. Give them the remote. The human rules still apply: trust is something that is earned, not just given.

Google Experiments Hint at Cookie-Free Future

The following column first appeared in the mighty AdExchanger on June 16, 2020.

In late April, Google announced in-market tests for some of the proposals in its Privacy Sandbox, where the cookie-free web is being born. In a Github post about the tests, Google’s “RTB team” said it wanted to poke at the “viability of … proposals via small-scale real-world experiments conducted by exchanges and bidders.”

Still sketchy and short, the Sandbox proposals are debated in forums such as the W3C’s Privacy and Web Incubator Community Groups and its Improving Web Advertising Business Group. So far, these forums are dominated by highly credentialed, privacy-focused software engineers, not advertising boosters.

That’s why Google’s experiments are so important. They represent a tangible Phase 2 in the rapidly moving rollout of the post-cookie web. And the specific proposals in question point the way toward what that web may actually look like in 2022, when the last holdout – Google’s Chrome browser – finally empties the cookie jar.

In a phrase: It will be very different.

Shepherd of the FLoC

Among four Sandbox proposals singled out for testing, two are most relevant for ad buyers: “Federated Learning of Cohorts” (FLoC) and the colorfully named “Two Uncorrelated Requests, Then Locally-Executed Decision On Victory” (TURTLEDOVE). Both were proposed by Google engineers.

Federated learning is a technique that lets a bunch of different nodes – such as browsers or smartphones – build machine-learning models and upload parameters to a master model without sharing user-level data. In one application, Google used it to train smartphones to predict text messages – you know, the guess-the-next-word feature – while keeping individual texts on the phone.

In the FLoC version, each browser captures data on its users’ behavior: websites she visited, the content of those websites and her actions. That data is used to build a model whose parameters are shared with a master model on a trusted server. In this way, each browser can be put into a cluster (or “flock”) based on its user’s browsing behavior.
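To make the idea tangible, here is a toy sketch of cohort assignment using SimHash, a locality-sensitive hashing technique discussed around the FLoC proposal. This is an illustration only, not Google's actual algorithm; the domains are hypothetical:

```python
# Toy cohort labeling in the spirit of FLoC: collapse a browsing history into
# a short label via SimHash, so similar histories tend to get similar labels.
# Illustration only - not Google's actual implementation.
import hashlib

def flock_label(visited_domains, bits=16):
    """Derive a short, order-independent cohort label from visited domains."""
    tally = [0] * bits
    for domain in visited_domains:
        h = int(hashlib.sha256(domain.encode()).hexdigest(), 16)
        for i in range(bits):
            tally[i] += 1 if (h >> i) & 1 else -1   # each domain votes per bit
    # bits with positive vote totals become 1s in the final label
    value = sum(1 << i for i, t in enumerate(tally) if t > 0)
    return format(value, "04X")

# Hypothetical browsing history; the label depends only on the set of sites
history = ["news.example", "recipes.example", "gardening.example"]
label = flock_label(history)
```

Because each domain contributes independent "votes" per bit, two users with heavily overlapping histories will usually share most label bits; fully identical histories always share the exact label. The real proposal adds privacy constraints (minimum cohort sizes, sensitive-category filtering) that this toy omits.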

Flocks have random labels such as “43A7.” To use them for targeting, an advertiser would have to discover which flocks contain target customers and which do not. Armed with such info, the advertiser could bid appropriately on RTB exchanges when an impression with a particular flock label appears.

Some obvious questions: How many flocks will there be? And how do we decode the labels?

“How many?” is a statistical question with no easy answer. Given the scale of the web, many thousands are feasible without threatening anyone’s privacy. What the flocks mean is more ambiguous. In machine-learning terms, each flock is a cluster, so its definition is opaque. Flock labels could be semi-public information, similar to mobile IDs, shared with the websites we visit. Sites with a large number of visitors could analyze the behavior of individual flocks – perhaps using Google Analytics – and start to see patterns.

In a simple scenario, a retailer might notice high-end suit buyers tend toward flock “22H8,” while sale-priced sweat-suiters lean to “17C9.” If the correlation is strong, bidding strategies could be developed and campaigns be – ahem – tailored to either flock, or both, as the label is exposed in the bid stream.

As the proposal’s author points out, there’s a challenge with sensitive data and with what labels consumers will accept. And companies, publishers and ecosystems with more traffic will see more flocks and behaviors – the data-rich will get richer. It is easy to imagine a thriving market around identifying what flock labels mean. Labels that point to, say, insurance buyers or swing voters could be very valuable to certain parties. Existing data management platforms could become a kind of phone book for FLoCs.

Flying with the TURTLEDOVE

Assuming some version of FLoCs passes into production, flock-level bidding might not be all that much different from audience buying today. After all, no advertiser runs a different campaign for each person; we always deal with aggregates. The biggest difference between 2022 and today is the end of user-level targeting.

Unfortunately, advertisers have come to rely heavily on user-level targeting for results. Those techniques hardest-hit by the end of the cookie will be difficult to replace:

  • Retargeting
  • Frequency capping
  • Exclusion
  • User-level attribution

TURTLEDOVE is an ingenious attempt to enable some form of retargeting and shows how the browser could subsume ad tech. Its main moves are to separate data about behavioral intent (what the user wants) and context (where the user is now); and to run the ad auction inside the browser itself.

As with FLoC, the browser is the sentinel and lockbox, watching what the user does and storing observations locally. Say a user visits Widgets.com. The browser will label that user in a Widgets “interest group,” based on her behaviors and will store that label; it can also pull information from the brand (Widgets Inc.), such as bids, bidding logic, ads.txt sellers and ad units – in short, everything needed to run a campaign.

Later, when that same browser appears on Pub.com (or an ad network), then Pub.com will send contextual data to the browser, which will run an auction and declare a winner. By separating the “interest group” from the context, neither the advertiser nor the publisher learns anything much about the person seeing the ad. At least, that’s the idea.
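The in-browser auction described above can be sketched in a few lines. This is a simplification of the TURTLEDOVE explainer, not its actual API; interest-group names and prices are hypothetical:

```python
# Toy TURTLEDOVE-style local auction: interest-group bids stored earlier by
# advertisers compete against the publisher's contextual bid, and the browser
# picks the winner locally. Names and prices are hypothetical.

def run_local_auction(interest_group_bids, contextual_bid):
    """Return (winning ad label, price) without leaking the user's identity."""
    candidates = dict(interest_group_bids)
    candidates["contextual"] = contextual_bid
    return max(candidates.items(), key=lambda kv: kv[1])

# Stored when the user visited Widgets.com; the contextual bid arrives later
# with the ad request from Pub.com
stored_bids = {"widgets-retargeting": 2.50, "gadgets-prospecting": 1.10}
winner, price = run_local_auction(stored_bids, contextual_bid=0.80)
# -> ("widgets-retargeting", 2.50): the retargeting ad wins, yet Pub.com never
#    learns which interest groups this browser belongs to
```

The key design choice is where the `max()` happens: inside the browser, so the interest-group data and the contextual data are never joined on any server.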

Challenges abound. For example, interest groups aren’t updated in real time (there’s a time lag, for privacy), so retargeting is less timely. Brand safety is difficult to enforce. Complex auctions and logic may be a burden. There will certainly be fewer “interest groups” than there are retargeting options today. How many is enough?

Answering these and other questions is the purpose of the experiment phase.

Two conclusions and a question

Where does this leave us in our attempt to foresee the web of 2022? Some conclusions are clearer than others, at this early stage:

  • Personas: The future is aggregate, not individual. Tactics such as retargeting will have to be designed for larger cohorts, not for individual users. These cohorts will need detailed models of behavior and actual lifetime values. Bids will be based on better models of expected value.
  • Customer data: Lacking third-party data, advertisers need another way to build personas. The answer – as everyone is telling you – is first-party data. Most advertisers are going to need more of it, collected with consent, both pseudonymous and known. They are going to need more partners willing to share special data sets. Otherwise, they’re going to have to be very good at market research, pay a premium and waste a lot of impressions. It’s difficult to see how big publishers and walled ecosystems with large data sets don’t win.

And a final question. Many of the Sandbox proposals rely on a “trusted server” (or brain) to act as coordinator and conductor. This server could hold the keys to wisdom and wealth. Who owns it? Is it Google?

That too may be a message in the Sandbox.