Human beings love to categorize stuff and put it in mental “boxes.” It’s a skill that served our primitive ancestors well by allowing them to make the kinds of quick decisions they needed to make in order to find their dinner…and avoid being something else’s dinner. “That fuzzy critter will kill and eat me,” our (great X 5000) grandfather Zog may have thought while checking out the wildlife milling around him on the prehistoric savannah, “But that one is good to eat. I will avoid the one that will kill me and whack the one that is good to eat upside his head. Yum! Here comes dinner!” Splat!
Putting aside for a moment that “upside his head” probably wasn’t in Zog’s vocabulary (we’re taking some poetic license here), Zog’s ability to quickly differentiate between harmful and tasty animals was pretty useful. If the only critter category that Zog knew was “critter,” he would have been just as likely to rile a saber-toothed tiger with a whack upside the head as he would have slain a tasty Hypolagus beremendensis (prehistoric rabbit…sheesh!) with the same head-whacking action. It’s not the kind of mistake one gets to make twice. Know what you’re whackin’ and you get to pass on your genes. Whack wildly and end up as a snack.
Over the centuries, the human ability and propensity to classify developed even further. People were classified into groups by other people based on a whole variety of factors: where they lived, what they ate (or didn’t), what gods they believed in, or whether or not they knew how to make truly wicked ale. Curious folk all over the globe advanced civilization by classifying animals, plants, stones, weather, and even the stars themselves. Plato, called by many one of the smartest dudes who ever lived, gained his fame by developing his Theory of Forms, which helped us separate what was “real” from what was “unreal,” a theory which dominated thought for centuries, if not millennia. More recently (1735), Swede Carl Linnaeus more or less created modern biology by publishing Systema Naturae, a book which went far beyond Zog’s thinking by classifying every animal and plant he could get his mitts on. And if Russian Dmitri Mendeleev hadn’t taken the time to come up with that bane of every high school chemistry student’s existence (the Periodic Table) which classifies all the elements, we’d probably still be mixing random stuff together and hoping to make gold instead of living the Digital Age.
Humans like classifying stuff because it helps us make sense of the world. And as the world has become more and more complicated, humans responded by creating narrower and narrower classifications for just about everything, including the kind of work that we do. The incredible gains in productivity brought about by the Industrial Revolution happened because inventors and business people realized that by breaking down the process of building something complicated into a series of really small, easy-to-do steps, anybody (even children!) could be taught to do a single task in the creation of that complicated gizmo. String together enough folks doing small, simple tasks into a line (let’s call it an “assembly line,” OK?), hook everything up to a big steam-powered motor, and you’ve got yourself a factory churning out complicated gizmos by the trainload! Workers who made stuff were no longer classified as “craftsmen”; now they could be happy being “the guy who sticks the hoogeewhatsit on the third thingamabob!”
Outside of the factory, other jobs were becoming more specialized, especially as mass production led to mass media and mass markets. While at one time being a farmer meant having to know how to do everything from birthing a calf to fixing a plow to predicting the weather, the Industrial Age brought specialization to the masses.
Creating mass media was one of the activities that really drove specialization, especially time-based media. Sure, you could still be a “photographer” or a “writer” or an “artist” who created media one object at a time…if you wanted to starve. But if you really wanted to strike it rich creating media you needed to create mass media like film, television, radio, newspapers, or magazines. Making and selling things one at a time was for chumps: the big moolah came from making something that could reach millions of people.
Easier said than done, right? The biggest hurdle was that creating mass media required industrial-sized production. Any palooka could pick up a movie camera and shoot some home flicks, but if you wanted to be a mogul who made blockbusters you needed a cast of thousands, a crew of dozens of specialists who knew how to handle one tiny part of the process really well, and a big distribution network of warehouses, trucks, and theaters. When TV came along it became possible to produce on a smaller scale, but you still needed a big studio staffed by lots of people who were good at doing one thing and a transmission infrastructure that cost big bucks and took specialists to keep running.
Of course, the Age of Mass Media birthed the Age of Advertising as manufacturers saw the potential of reaching a mass audience for their gizmos. Advertising could promote their gizmos over the airwaves, urge newly affluent consumers to buy as they motored along the newly-built highway system, and grab people’s attention with cheeky ads in newspapers and magazines that told them to buy more gizmos or become social outcasts.
As you older readers certainly know, creating advertising in the Age of Mass Media wasn’t a solitary pursuit unless you were stuck in the Hell called “local advertising” (shudder). Making ads for mass media required account people to manage clients, media folks to figure out where to put the ads, and, above all, creative types who really, really knew how to make the kinda ads that were going to grab consumers by the ankles and shake ‘em until all the money fell from their pockets.
But alas, each medium was different and required different skills. Print required copywriters who could write words that pulled people away from their favorite gossip columns so that they could learn about how their husbands would leave them unless they decided to smell better by using a certain new Wonder Product! Copywriters worked with print designers who were masters at the art of using static imagery and design to grab and hold on to the fickle consumer’s attention. Not being a visual medium, radio required writers who could squeeze a compelling message into a few seconds as well as voice talent who could bring it to life and audio engineers who knew how to make it sound good. Nobody did this alone…at least not for long.
Television…well, kids, if you did television ads then you knew you were in the big time. TV had sound. It had motion. It had compelling messages mouthed by beautiful (or at least arresting) people. TV ads packed more persuasion per second than any medium that had come before. But persuasion came with a price: TV advertising required the services of a team of specialists who excelled at their one piece of the production guided by a Creative Director. In comparison to the specialists, the Creative Director didn’t really “do” anything in particular except for keeping everyone’s eye on the Concept he (or, rarely, she) had Created. The result was the ability to sell more gizmos to more people than anyone had ever thought possible just a couple of decades before. It may have been, as John Wanamaker said, that “half [your] advertising was wasted,” but it reached so many people that it didn’t really seem to matter which half.
And this is how it went…until that pesky “Internet” came along.
The Internet (and mostly the World Wide Web, but that’s changing) presented a problem for the old creative types. Because it was a digital medium and, as such, could contain any other media that could be digitized, it was like nothing anyone had ever seen before. Worse yet, it was interactive and could respond to actions (both direct and indirect) taken by the people using it. Pretty soon it was big…really big…encompassing the whole world, accessible to anyone who had an internet connection no matter where they were, demolishing the very concepts of “location” and “market.” Worse yet, anyone who had access could potentially publish stuff on it, opening the doors wide to an entire world of competitors. Oh, and the scariest thing? It allowed people to talk to other people…any time and any place! Heck, they could even talk back to the publishers and the advertisers. Consumers? Talking back? The horror!
Clearly this was a whole new ballgame.
At first, most advertisers looked at the web through the smudgy lenses of their old specialties. Print folks saw it as another place to do print-like stuff, only without the resolution and heft of paper or the control of being able to place an ad in a certain place on the page or in a publication. TV folks saw it as something kinda like TV but smaller, more restrictive, less likely to be seen by their moms, and a hell of a lot less sexy than TV. Radio folks…well, they didn’t really know what to make of it. After all, who’d want to listen to stuff on a computer?
At first nobody knew what to do. But gradually folks began to catch on to the unique capabilities of the new medium. First came a rudimentary understanding of interactivity and the fact that a click on one “page” could send you to another. The banner ad was born. Then came an understanding that it was possible to do simple animation, and along came the banner ad that included motion. People then began to understand that code could be used to control the browser, and the dreaded pop-up (and pop-under) ad started cluttering up our screens. Then, as the technology progressed, so did the ads: “viral” video, screen overlays, advergames…you name it and someone was trying to use it to grab the attention of the ever-elusive online consumer.
But, with a few rare exceptions, most of these formats had problems. The biggest problem was that the people creating them were, in many instances, still doing so with the mindset of the mass-media advertiser. In mass media, the best ads are those that grab our attention, the ones that keep us glued to our seats or keep us gazing at the images imagining our next new car or tropical vacation. Mass media advertising, because it inserts itself in the linear stream of media, could only work by interrupting us and making us watch.
Online, things don’t work that way. Bopping from “publication” to “publication” takes only a click. We don’t have to read or watch stuff in order. We can skip around, stop, go away, or just graze on what holds our interest for a few milliseconds. The consumer is in control. Interrupting this control seems, well, kinda wrong. People don’t like it. They install ad blockers. They develop “banner blindness” and don’t even notice ads anymore. They can’t and won’t be interrupted.
But then Google came along with paid search and everything changed.
Search ads don’t interrupt. They don’t get in the way. In fact, if they’re done right, they actually help us do what we came to Google to do rather than blocking us and demanding that we listen to them. They’re contextual, highly targetable, and tied to the content we’re reading. As an advertiser you only pay for them when they accomplish their mission by driving someone to your site…and those clicks are highly measurable, packed with data about who’s doing the clicking. And we can use that data to make decisions about what ads consumers see, creating, for the first time, a customized stream of advertising based on consumer likes, dislikes, and behaviors. It makes old-school “blast it out to gazillions of people and hope that it sticks” TV advertising look positively Paleolithic. Zog would probably approve.
Data. It’s what truly makes online advertising different and more effective and measurable than anything that’s come before. If Mr. Wanamaker were alive today, not only would he know which half of his advertising was wasted, but he’d be able to use that knowledge to re-target those ads to the people on whom they wouldn’t be wasted, or even modify the ads so that they resonate with the half that didn’t get them the first time.
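To make the “which half is wasted” point concrete, here’s a minimal sketch of the kind of decision the click data enables. Everything here is illustrative: the variant names, the counts, and the epsilon-greedy selection rule are hypothetical stand-ins for what a real ad server would do at far larger scale.

```python
import random

# Hypothetical click counts per ad variant -- the measurement
# Wanamaker never had. Each variant tracks impressions and clicks.
ad_stats = {
    "headline_a": {"shown": 1000, "clicks": 12},
    "headline_b": {"shown": 1000, "clicks": 38},
}

def click_rate(stats):
    """Observed click-through rate for one variant."""
    return stats["clicks"] / stats["shown"] if stats["shown"] else 0.0

def choose_ad(stats_by_ad, explore=0.1):
    """Epsilon-greedy selection: usually serve the best-performing
    variant, but occasionally serve another so its stats keep
    improving and the 'wasted half' keeps shrinking."""
    if random.random() < explore:
        return random.choice(list(stats_by_ad))
    return max(stats_by_ad, key=lambda ad: click_rate(stats_by_ad[ad]))
```

With data like this, the underperforming variant isn’t a mystery to be shrugged at; it’s a measured fact you can act on, either by serving it less or rewriting it.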
This is a revolution. And revolutions are always accompanied by a new way of thinking. And therein lies the problem.
Remember Zog and his useful ability to classify tasty animals and dangerous animals? The human ability to classify is both a survival trait and a curse, because if our classifications don’t change to address new realities what was once a useful trait becomes something that stands in the way of our evolution. If we only know that there are animals to be eaten or animals that can eat us, it might never occur to us that there are animals we can ride, animals we can befriend to protect us, or animals that can produce stuff that we like to eat (milk, for example) that don’t require us to bonk them on the head in order to get what we want.
In many ways, advertising today faces the same problem. Our old classifications of media as either “static” (print, outdoor) or “time-based” (radio, TV) don’t make much sense in a world where all of them can be in the same “container.” If our perception of media is that it should be one way, from broadcaster to passive recipient, it’s impossible to see the potential that two-way communication via social media could offer a brand that wants to really engage with its customers. If we still think of media as something done to us rather than something we can interact with, then we miss out on the possibilities that come from offering communications that change with user actions, let users play with different possibilities, or hand them control of their own experience.
But all isn’t lost: today, it’s pretty clear that creative folks are starting to “get” it. But there’s one big part we haven’t talked about yet: data.
The ability to gather data about consumers is probably the single greatest advantage that online advertising offers, and, paradoxically, the one that advertisers and marketers are having the hardest time dealing with.
In a recent study of Chief Marketing Officers by Korn Ferry, only 1 in 4 were able to quantify the value of their advertising over the past year, and barely more than a third were able to prove the short-term impact of their marketing efforts. Barely half of B2B marketers even agree that the financial value of their efforts is clear to their organizations! Worse yet, another study by PwC found that rather than working to figure out how to make things better, CEOs see technology advances as “disruptive,” and two-thirds of them admit that their marketing people are not “well prepared” to meet the challenges brought about by these disruptive technologies.
The problem is this: while much about the way we advertise has changed to meet the new realities of what the internet allows us to do, in many respects the activities that we classify as “creative” haven’t changed all that much, especially when it comes to dealing with data. Where ol’ Zog saw tasty animals and dangerous animals, many of us still see “creatives” and “numbers people” when we look at our co-workers in the ad biz. Creatives are the “right brain” types: visual, quirky, and, yes, highly creative. Numbers people, on the other hand, are “left brain” types: logical, methodical, good with math. And never the twain shall meet. But they should.
The digital environment’s ability to serve as a “metamedium” that can contain a whole range of media that used to be separated (still images, video, sound, and text) has already forced those who create for the medium to rethink their old media categories in order to understand what happens when media come together in one place. The addition of interactivity and connectivity has forced us to re-think the paradigm of “advertising” as an interrupt-driven form of expression “broadcast” in a one-to-many format designed to insert itself into our consciousness by getting in the way of what we really want to read or watch or see or listen to. We’ve had to move from “talk to” to “talk with” and from “interrupt” to “engage.” And we’ve done a pretty decent job as the medium has matured. But we’re still not there yet. There’s one piece that needs to be incorporated into the creative mix: data.
To truly take advantage of digital media, “creativity” has to be re-imagined as a process that includes, not excludes, data. It’s no longer sufficient to craft a singular killer headline, design a stunning graphic, create linear experiences that tell a single story from a single viewpoint, or even build an interactive experience bounded by its own internal logic. Instead, “creativity” now must include the one piece that we never had before: data that reveals knowledge about who we’re communicating with. Only then can we develop interactive communications that are able to adapt to external conditions, user behavior, physical location, and even factors such as supply, demand, and pricing fluctuations. Rather than work to craft things that are self-contained, tweaked to perfection, and, well, predictable, we need to begin to embrace the idea that the job of the creative is to create possibilities that play out based on the data in which they swim online.
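What might “creating possibilities rather than finished artifacts” look like in practice? Here’s a tiny sketch, with entirely hypothetical field names (`city`, `temperature`, `returning_visitor`): the creative writes the rules and the copy fragments, and the viewer’s data decides which version actually renders.

```python
# A minimal sketch of "creative as possibilities": instead of one fixed
# headline, the creative defines variants that data selects among at
# serve time. All context fields here are illustrative assumptions.

def render_headline(context):
    """Pick a headline variant based on whatever we know about the viewer."""
    if context.get("returning_visitor"):
        return "Welcome back! Your saved items are waiting."
    if context.get("temperature", 70) > 85:
        return "Beat the heat in {}.".format(context.get("city", "your city"))
    return "Discover something new today."

# Different viewers, different data, different creative -- same "ad".
print(render_headline({"city": "Phoenix", "temperature": 95}))
print(render_headline({"returning_visitor": True}))
```

The creative act here isn’t writing one perfect line; it’s designing the space of lines and the conditions under which each one appears.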
This kind of paradigm shift isn’t easy. We’re not used to working across creative and business disciplines. We’re not used to using both sides of our brains. We’ve been categorized and specialized into highly-defined and circumscribed roles and been taught that mastering every nuance of that role is the key to being successful. With a few quaint exceptions, the “craftsman” has been replaced by the “specialist” who is able to do one part of the bigger task exceptionally well before moving it along the assembly line, literal or metaphorical, to the next specialist until the entire complex task is completed. Only a few at the top see “the big picture.” Everyone else just knows their one small part.
Does this mean that everyone who creates needs to be good at everything? No: very few humans possess whatever mojo it is that allows them to master wide-ranging disciplines simultaneously. We all can’t be Leonardo or Elon Musk or Dean Kamen. But it could be that by re-thinking how we classify what “work” and “creativity” and “data” mean to us that we can all come a lot closer than we think.
Right now, many of us are at the stage ol’ Zog was at the beginning of this essay. We’ve developed classification systems that allow us to cope with the world. This is an ad. That is a web site. But just as Zog’s classification system of “good to eat/might eat me” limited his ability to imagine other possibilities for the animals he saw all around him, our classification systems, born of a lifetime of experience with media that behaved in a predictable way, limit our ability to see the true range of possibilities that can become apparent once we open up our own internal classification systems and imagine digital media in a different way.
The Industrial Age created a world of specialists with tightly defined roles due to the realities and limitations of the physical world. Mass production of complex gizmos required legions of workers who operated much like the machines they made. If you’re going to make millions of gizmos in a reliable way, you can’t have a factory full of craftspeople creating each one: you need a factory full of specialists who can do one small thing really well. Creating mass media required moving away from an auteur who could do everything to a model where a legion of specialists who were masters of singular tasks (and singular technologies) worked together to create the final product. Along the way, “creativity” got classified within some fairly rigid parameters often specified by the particular medium or technology that the “creative” used in their work.
All this is changing in the Digital Age. Automation in manufacturing is eliminating the need for assembly line workers. As a medium for distributing content, the Internet has leapfrogged over the old methods of delivering content that required enormous physical infrastructures. The kind of functionality that used to require huge (and expensive) production studios can now be tucked under your arm in the form of a laptop or even held in your hand as a tablet. And code now allows us to move beyond the creation of dumb, static, self-contained digital artifacts to create forms of expression that can utilize data to adapt to those who interact with them. All these changes are here, now, ready to be utilized to do things we never before thought were possible. But only if we’re able to redefine what “creativity” really means.
The merging of the “real” and “virtual” worlds that continues to take place as the Digital Age moves onward will continue to redefine pretty much every assumption we’ve had about how the world works, how we communicate, how we interact with each other, and, ultimately, how we create. And while nobody can truly know where things are going, it seems clear that moving beyond the rigid definitions and classification systems that arose during the Industrial Age is one major step towards the future. Giving up on dualities such as “right brain” vs. “left brain,” “creative person” vs. “numbers person,” and “craftsman” vs. “manufacturer” seems to be a vital element in our evolution. Moving beyond “eat” vs. “be eaten” will open up a whole new world of possibility.
And if we can’t? Bonk!