Adding some humanity to big data
August 20, 2013 | Jeff Fraser
“Not everything that can be counted counts, and not everything that counts can be counted.”
Matthew Quint was quoting Albert Einstein when he said that onstage at Marketing's Data-Driven Marketing Conference on Tuesday morning. The Columbia Business School director of global brand leadership was the conference's opening keynote speaker, telling the crowd about the advantages and challenges of integrating big data into the marketing and retail business.
One big challenge? Adding humanity to big data’s cold hard facts.
“It’s not about data and human understanding in separate places, but about how they’re intermingling,” Quint said. “Sometimes data is only valuable with a human interpretation on top of it – what the data reveals and what insights come from a human analysis of it – but also sometimes we miss things, as humans, with our gut instincts, understanding and anecdotes.”
He broke data integration down into three main steps. First, companies learn to use small data, collected in isolated units and applied to limited projects. Small data analysis (surveys, loyalty programs and sales records) has been in use by most businesses for decades. Next comes "bigger data," an intermediate step where businesses begin to collect tens or hundreds of thousands of data points across a patchwork of consumer touchpoints. That's where most businesses are today.
The final "big data" stage promises billions of datapoints across a network that encompasses every interaction consumers have with the business, and integrates data analytics directly into business practice. Most businesses haven't reached this stage and still have a long way to go.
What are the roadblocks? According to Quint's research at Columbia, 91% of marketers believe data-driven marketing leads to brand success, but 29% said they have too little data to make it happen. Thirty-nine per cent said their data isn't collected quickly or frequently enough to drive marketing, and 42% said their data wasn't specific enough to the individuals they wanted to reach. A little more than half said data collected by the company isn't shared with them.
But to whet our appetites, Quint gave a few examples of companies using innovative data initiatives to understand how their brands are perceived. One of Quint's colleagues worked on data-mining for automotive vertical Edmunds.com, analyzing millions of posts to determine the most common terms used to describe three compact sedans: the Toyota Corolla, Nissan Sentra and Honda Civic. The network analysis was able to identify a number of distinct associations for each of the vehicles – for instance the Corolla was associated with "college" while the Sentra was associated with "mom" and "daughter."
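Quint didn't walk through the mechanics, but the core of a term-association analysis like the Edmunds one can be sketched in a few lines of Python. The tiny corpus, stopword list and `top_terms` helper below are invented for illustration, not taken from the actual study:

```python
from collections import Counter

# Hypothetical mini-corpus standing in for millions of forum posts per model.
posts = {
    "Corolla": ["my first car in college", "cheap college car", "reliable college commuter"],
    "Sentra": ["bought one for my mom", "my daughter drives a sentra", "mom loves it"],
    "Civic": ["fun to drive", "great fuel economy", "fun and sporty"],
}

# Common words that carry no brand association.
STOPWORDS = {"my", "a", "in", "for", "one", "it", "to", "and", "the", "loves", "drives"}

def top_terms(texts, n=2):
    """Count non-stopword terms across a model's posts; return the most common."""
    counts = Counter(
        word
        for text in texts
        for word in text.lower().split()
        if word not in STOPWORDS
    )
    return [term for term, _ in counts.most_common(n)]

# Map each model to its most distinctive vocabulary, e.g. Corolla -> "college".
associations = {model: top_terms(texts) for model, texts in posts.items()}
```

A production version would add stemming, co-occurrence weighting and a graph layer to get a true network analysis, but the idea is the same: the words people reach for reveal how they mentally file each brand.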
In another case study, Cadillac used data on “brand-switching” – pairings of consecutively purchased car brands – to determine whether consumers grouped their brand among low-end American makes or luxury imports. By following the data and tweaking its marketing campaign, Cadillac was able to lift its perception as a luxury brand, and disassociate itself from the low-end American car market.
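The brand-switching idea reduces to counting which brands appear next to each other in consumers' purchase histories. A minimal sketch, with purchase histories and brand names invented for illustration:

```python
from collections import Counter

# Hypothetical purchase histories: each list is one household's cars, in order.
histories = [
    ["Chevrolet", "Cadillac"],
    ["Cadillac", "BMW"],
    ["BMW", "Cadillac"],
    ["Ford", "Chevrolet"],
    ["Cadillac", "Lexus"],
]

def switch_pairs(histories):
    """Count each (previous brand, next brand) pair of consecutive purchases."""
    pairs = Counter()
    for history in histories:
        for prev, nxt in zip(history, history[1:]):
            pairs[(prev, nxt)] += 1
    return pairs

pairs = switch_pairs(histories)

# The brands consumers switch to and from Cadillac hint at its perceived
# peer group: mostly luxury imports here, not low-end domestic makes.
cadillac_neighbors = (
    {b for (a, b) in pairs if a == "Cadillac"}
    | {a for (a, b) in pairs if b == "Cadillac"}
)
```

If the neighbours skew toward BMW and Lexus rather than entry-level domestic brands, the data says consumers already group Cadillac with the luxury set, and marketing can be tuned to reinforce that grouping.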