Big Data Caution…from G.K. Chesterton

“The real trouble with this world of ours is not that it is an unreasonable world, nor even that it is a reasonable one. The commonest kind of trouble is that it is nearly reasonable, but not quite. Life is not an illogicality; yet it is a trap for logicians. It looks just a little more mathematical and regular than it is; its exactitude is obvious, but its inexactitude is hidden; its wildness lies in wait.”
…G.K. Chesterton, “Orthodoxy”

It was with great interest that I ran across this passage the other day. And it got me thinking about the world of big data today.

The red flag for data abuse comes when people cede their human initiative and let data take over. Listen to how people discuss “big data” and you’ll start to sense that their vision is to have data run the world. I suppose in a corporate bureaucracy this provides perfect cover for a mistake. (“The data said to do it” or perhaps “The data scientists said it would work!”)

The Search for Meaning in Big Data

Big Data has been around long enough now that it’s clear that analyzing big data can deliver some sweet benefits. What’s less discussed are the challenges companies face in finding those benefits.

A Key Challenge: Irrelevant Data. My friend Shahin Khan recently tweeted about one of the key challenges with big data.

@ShahinKhan The ratio of relevant data to irrelevant data will asymptotically approach zero. #BigData
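One way to picture the tweet’s claim is a toy Python sketch. The growth rates below are purely hypothetical – chosen only to show the shape of the curve: if the relevant signal grows linearly while total data collected doubles every year, the ratio collapses toward zero.

```python
# Toy illustration of the tweet's claim; both growth rates are assumptions.
for year in range(0, 21, 5):
    relevant = 1_000 * (year + 1)  # relevant signal: assumed linear growth
    total = 1_000 * 2 ** year      # all data collected: assumed annual doubling
    print(f"year {year:2d}: relevant/total = {relevant / total:.6f}")
```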


Big Data. Big Promise. Big Caution.

Big data claims to be the new salvation for all businesses. Because, we’re told, big data will discover amazing new truths. Time will tell.

But in the meantime, most big promises should also be accompanied by big cautions. Which ones are most important as we approach big data? Recently, on the Financial Times website, Tim Harford wrote a blog post on the topic: “Big Data: Are We Making a Big Mistake?” It is one of the few really thoughtful big data discussions we’ve come across in a while.

Using Response-Measured Advertising in an Omni-Channel World

I’ve spent my advertising career in the most immediately measurable of TV disciplines…direct response television (DRTV). Through that career I’ve seen the tremendous economic power that DRTV offers. Used in the right situation, DRTV delivers far more economic impact (including brand value) than traditional TV.

At the same time…in contradictions we find truth. And here’s the response contradiction.

Response measurements are exceptionally powerful at helping make campaigns more effective.

But if response becomes your ONLY focus, campaigns become less effective.

How’s that happen? We must remember that even the best metrics (response, audiences, targeting, etc.) can never measure the total impact of a TV campaign. They are helpful guides but don’t tell the entire story.

So it’s important to respect the numbers for the extraordinary help they offer as we make media dollars go further (up to 4x further). And it’s important to respect that response numbers are only one window into the impact of our work.

This reality doesn’t only apply to DRTV. It applies to online ads (especially), direct mail, catalogs, search, and many more areas where we are able to measure response.

Here’s a recent article I wrote called “Seeing the Forest Despite the Trees” (link here) that appeared in Response Magazine’s December 2013 edition. It digs deeper into how to work with response-measured media in the extraordinarily profitable market you enter when your product is sold through the omni-channel world of phone, web, and retail store.

It’s no surprise to find DR marketers obsessed with response to the exclusion of all other reality. But it has been a surprise to find that experienced audience-measured advertisers are also quick to lose sight of the fact that response measurements are indicators – not the whole story.

It’s surprising because many of these advertisers have lived their entire careers in a world where they had NO measurement of response and where impact is projected by guys in the back room with pointy hats and crystal balls reading Nielsen reports. (For clarity: I do love audience numbers. But while there’s tremendous learning to be found in audience measurement, projecting sales impact based on audience remains an area for alchemists.)

So embrace response measurement for what it is: an extraordinary tool that can help us spend client media money far more efficiently. And then let’s use that measurement to drive campaigns where the total impact surprises us all.

Copyright 2014 – Doug Garnett – All Rights Reserved.

Does Your Ad Agency Learn from Hard Results?

It’s fundamentally human. The feedback we receive (intentional or unintentional) shapes our actions in the future.

Unfortunately, this reality has spawned a pile of cheap phrases, like the dully bureaucratic “teachable moments.”

But there’s a critical reality: we all respond to the feedback we receive as we take our next actions. And as companies and agencies, we need to think deeply about the feedback loops that shape our teams’ future actions.

Consider a Truly Objective Feedback Loop…in a Machine Shop.


Don’t Test Whispers

Key to marketing success is a disciplined approach to testing ideas and actions. After all, marketing writing and consulting are filled with tremendously attractive and detailed theories about action “X” causing result “Y”. Yet all these theories were developed from specific experiences under specific circumstances. So there’s no guarantee that applying them in your world will create the same result.

So we should test, test, test. And yet…testing experience shows that far more things are tested than are found to conclusively help or hurt. Why? One quite common testing error is to “test whispers” – small changes that simply can’t have a large enough impact to drive measurable change.
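To put a rough number on “can’t have a large enough impact,” here’s a quick power-analysis sketch in Python. The base response rate and lifts are hypothetical (the post doesn’t give numbers), but the standard two-proportion sample-size formula shows how fast the required test size explodes as the change shrinks:

```python
# Rough power analysis: observations needed per test cell to detect a lift
# in a response rate. The rates below are hypothetical, for illustration only.
from math import sqrt
from statistics import NormalDist

def sample_size_per_cell(base_rate, lift, alpha=0.05, power=0.8):
    """Approximate n per cell for a two-proportion z-test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 at alpha = 0.05
    z_b = NormalDist().inv_cdf(power)          # 0.84 at 80% power
    p1, p2 = base_rate, base_rate + lift
    p_bar = (p1 + p2) / 2
    n = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
          + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) / lift) ** 2
    return round(n)

print(sample_size_per_cell(0.02, 0.010))  # a real change (+1.0 point): ~3,800 per cell
print(sample_size_per_cell(0.02, 0.001))  # a whisper (+0.1 point): ~315,000 per cell
```

A whisper one-tenth the size needs roughly eighty times the audience to measure – which is why so many tests come back “inconclusive.”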

I once watched Rubbermaid test whispers in focus groups where a series of five statements of brand differentiation were evaluated. But rather than vary the statements with ideas that were truly significant to consumers, the statements traded off tiny wording changes. (I found it ironically enjoyable to watch the focus group participants quite frankly explain that all the statements said the same thing.)

An Axiom for New Media: Big Numbers Are NOT the Same as Meaningful Numbers

It seems like everything we hear about the new media world is based on big numbers. Hundreds of millions of these and bazillions of those – all delivered with mega-pico-tetra zillions of impressions. Why do we keep falling for big numbers?

I have this theory that we all have an instinctive built-in “adjustment” we apply to sales or promotional numbers. It goes like this: “They say it will save me thousands of dollars. I bet they’re overstating. But if it still saves me hundreds, I’m happy. So I’ll buy it.” Unfortunately, once the numbers are big enough, our instinctive adjustment isn’t enough – but we use it anyway.

The yell & sell infomercial guys figured this out long ago. In yell & sell, they often make the numbers so huge that even after we adjust the numbers, they’re still impressive. And the truth is, manipulation with these numbers sells a lot of product. (There’s probably an interesting dissertation for someone in figuring out the differences between categories where we adjust by 20% and others where we adjust by an order of magnitude as in my example.)

New media evangelists have picked up this yell & sell gambit and draped it with the credibility of being “measured ROI”, by having the numbers come from a research firm, or by having them “audited”.

Here’s a great Lady Gaga one I heard at a Google presentation: “Lady Gaga posted a music video and got 95 million views in a year. Just think about it, only 500,000 people are watching MTV at any point in time.”
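Put those two numbers in the same units and the comparison falls apart. Here’s a back-of-envelope Python sketch (my arithmetic, not Google’s; the four-minute video length is an assumption):

```python
# Back-of-envelope: annual viewer-minutes behind each "big number".
VIDEO_MINUTES = 4                    # assumed length of the music video
gaga = 95_000_000 * VIDEO_MINUTES    # 95M views of one video in a year
mtv = 500_000 * 60 * 24 * 365        # 500k concurrent viewers, all year long
print(f"Lady Gaga: {gaga:,} viewer-minutes/year")  # 380,000,000
print(f"MTV:       {mtv:,} viewer-minutes/year")   # 262,800,000,000
print(f"MTV delivers roughly {mtv / gaga:.0f}x more")
```

Even if every assumption here is off by a factor of two, the “huge” number turns out to be the small one.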

Reading the Fossil Record: Why Mobile Retail Tracking Can’t Replace Focus Group Research

There was a post on Retail Wire this morning that pondered whether retailers will need traditional research once mobile tracking is “in place”. The question is interesting because it reveals a very common flaw in how people think about research.

Developing conclusions from mobile data is the equivalent of scientists reading the fossil record. When I was a kid, scientists had been observing the fossil record for hundreds of years. So they really thought they knew the truth: dinosaurs were reptiles, they had reptilian skin, they were cold-blooded, they lived isolated lives, and modern-day lizards are their direct descendants.

Fast forward to 2011. I’m no paleontologist. But it’s my understanding that fossil prognosticators now believe that some (many) dinosaurs had feathers, that some (many) were herd animals, that they were fast-moving, that some lived in family-based units, and that birds essentially evolved from dinosaurs.

The original scientists weren’t bad at their jobs. In fact, they were brilliant. The problem lay in the limits of the observed data. They created solid, grand theories from the observed facts they knew.

The Key to Observed Behavior is What You Can’t Observe. Paleontologists erred in their theories because there were thousands, even millions, of fossil facts they couldn’t see – fossils that hadn’t yet been discovered or analyzed.

Mobile data puts us in a similar spot. Ethnographic observers are in a similar bind, as are direct marketers who rely purely on response. No matter how hard we work, observational research misses more data about human consumers than it captures. And without that data we mislead ourselves into error.

What’s fascinating is that as we create grand unified retail theories from this data, behavioral data becomes a type of departmental Rorschach test. Your company is likely to project onto the research the things that help individual careers. Or it may project the results of your latest session with a highly paid consultant. What’s least likely is that it finds actionable consumer truths.

Wise companies will continue to rely first and foremost on data that helps us see motivation, because motivation is the key to changing profit in big ways. Among modern research methods, it’s not just mobile that lacks insight into motivation. True “ethnographic research” is purely observational and is quite weak at discovering things that drive sales. (Perhaps that’s why so many firms claim to do ethnographic research but really do in-home one-on-one interviews.)

To get to motivation, you have to use qualitative research of some form. It has to be executed by professionals. And it has to be interpreted with the greatest care to avoid theoretical jumps like the errors noted above. But somehow, I find the challenges in qualitative data far more evident, while the errors in things like mobile data are dramatically more insidious.

At the same time, I’m not suggesting we ignore the mobile opportunity! Mobile data offers some fun and interesting bits of learning about store organization. But mobile data is limited and, even considering only in-store behavior, I’d probably get considerably more value from Paco Underhill-style teams of in-store observers.

Copyright 2011 – Doug Garnett