Social Media Utopia

Posted by Sean Carton | March 31, 2015 | 9:15am

In 1516, Sir Thomas More published Utopia, a fictional account of an ideal society located on an island in the Atlantic Ocean. While it was originally intended to be a work of political philosophy (its full Latin title, De optimo rei publicae statu deque nova insula Utopia, translates literally into English as On the Best State of a Republic and on the New Island of Utopia), over the years since its publication, “utopia” has come to mean any ideal society or land.

But More knew that perfection was beyond the grasp of humanity (in this world, at least) and so named his land by combining two Greek words, οὐ (“not”) and τόπος (“place”), into “utopia”: literally, “no place.” As wonderful as his fictional land sounded, it really didn’t exist, anywhere or any-when.

For most of human history, “inhabiting” a place that didn’t physically exist was impossible. One might visit nonexistent lands through literature (and, much later, film), but when you were there, you were there by yourself.

Two-way electronic media changed all that. By allowing humans to communicate instantaneously across vast distances using electrons traveling at near light-speed, it began to radically alter our notions of space (and, later, time). While the pre-electronic world required humans to physically come together to communicate in real time, electronic communications didn’t require such proximity. Someone at the other end of the telegraph (and later the radio, telephone, and Internet) is just as “close” to us from a communications standpoint as if they were standing right in front of us. And while the fidelity of their presence has greatly improved as technological developments allowed us to advance from the click-clack of Morse code to the cutting-edge virtual reality and telepresence of today, even the earliest technology allowed us to leap across space and, as asynchronous forms of digital communication were developed, across time as well.

Strangely, however, even as technological developments in communication dissolved the distances between us, they didn’t completely eliminate the notion that our disembodied voices and words existed in some place. Perhaps it’s just a limitation of millions of years of evolution, during which we learned to communicate with each other face to face, but as the notion of “real” space dissolved, the idea that when we were on the line we were somewhere began to take hold. If our conversations weren’t taking place in the real world, they were taking place in a virtual world, a world that came to be known as “cyberspace.”

While cyberpunk author William Gibson is widely credited with popularizing the term “cyberspace,” it was Electronic Frontier Foundation co-founder John Perry Barlow who first connected it with what we now call “the Internet.” In “Crime and Puzzlement,” his 1990 essay announcing the founding of the EFF, Barlow describes cyberspace in a way that fit the technology of the time:

In this silent world, all conversation is typed. To enter it, one forsakes both body and place and becomes a thing of words alone. You can see what your neighbors are saying (or recently said), but not what either they or their physical surroundings look like. Town meetings are continuous and discussions rage on everything from sexual kinks to depreciation schedules. Whether by one telephonic tendril or millions, they are all connected to one another. Collectively, they form what their inhabitants call the Net. It extends across that immense region of electron states, microwaves, magnetic fields, light pulses and thought which sci-fi writer William Gibson named Cyberspace.

Sound familiar? While the technology of 1990 didn’t allow for the kind of multimedia we’ve all come to know and love/loathe today (not easily, at least), Barlow’s description of “Cyberspace” sounds pretty much like what we call social media today: a “place” where we leave our bodies and can become anything we want (even if what we want is just to show ourselves in the best light), where we can participate in an endless variety of discussions on an endless number of topics (check out reddit if you don’t believe the “endless” part of this sentence), and where we can connect to one person or millions (now billions) all over the world.

If you think about how we talk about the Internet today, just about every metaphor we use to describe it involves a spatial element of one kind or another.

We “go online” to “visit” web sites or “hang out” on Facebook. Hackers “break in” to computers, “viruses” can “infect” us, and software “bots” commanded from the other side of the world can take over our machines.

We store our documents “in the cloud” and “attend” Webinars. We “share” intangible artifacts we “find” online and are often willing to pay (real) money for virtual “gems” or “coins” to “continue” to the “next level” in the games we play on our smartphones or tablets while we wait for one thing or another in the real, annoyingly synchronous, inconveniently tangible, world.

For those of us who grew up in the pre-Cyberspace era, the lines between the “real” and the “virtual” were usually pretty clear, if for no other reason than that the same technology that allowed us to “enter” the virtual world often served as a barrier to it, as anyone who sweated over establishing a TCP/IP connection to the Internet back in the 1990s (or who’s spent hours wrestling with a wonky Wi-Fi connection today) probably remembers with a shudder. But as technology has improved, the lines between the real world and Cyberspace have become increasingly blurry as we’ve moved from the old days of “booting up and logging on” to the always-on world we inhabit today. If you’re one of the 182 million-plus people in the US who own a smartphone, you’re connected to the ‘Net in one way or another as long as that phone is on, whether you know it or not.

Cyberspace has become so ubiquitous that we’ve stopped noticing it, even if we can still remember the time before it existed.

But for those who grew up never knowing a world where Cyberspace wasn’t a click away, the lines between the real and the virtual are even blurrier, becoming so indistinct as to be irrelevant. While the pre-Internet generation knew distinct boundaries between home, work, school, and their social lives, for those who grew up (or are still growing up) in a world where the cyber world was just as much a part of their lives as the real world, those boundaries have disappeared. A small in-person flare-up at school can quickly become a raging wildfire in the world of social media as “friends” are drawn in and the reach of the dispute expands beyond the boundaries of the school and out into cyberspace. In a world where 95% of teens are online, 81% use social media, and the majority use smartphones to stay connected to social media 24/7, the difference between cyberspace and the real world becomes virtually irrelevant, sometimes with incredibly damaging real-world consequences.

Of course, not everything that happens in social media is bad, just as what happens in the “real world” isn’t all bad, either. But as the lines between the real and the virtual continue to become hazier for the Always-On Generation born after 1990 (a birth year that would have made them no more than 5 years old when America Online first established a foothold on the Internet), it’s becoming a lot clearer that the “real world” (online or not) is in for some big changes.

It’s already pretty well established that our Digital Age move to cyberspace altered the media and entertainment landscape beyond recognition. Anything that can be digitized and moved to Cyberspace has been moved, and new technologies such as virtual reality (e.g. Oculus Rift), augmented reality (e.g. Microsoft’s upcoming HoloLens), and 3D scanning (e.g. Structure’s apps) are bringing more of the real world into the virtual.

But it may be the technologies that allow us to enter the netherworld of Cyberspace whenever and wherever we want that have the greatest impact in the long term. Between the near-ubiquity of smartphones and the coming ubiquity of tablets, we are (in the US, at least) easily less than half a decade away from a world where everyone can connect to each other whenever and wherever they want, tap into any form of media on demand, and provide a never-ending stream of data about themselves (and the things that make up their world).

If you’re in the marketing and communications business and don’t believe this is coming, you’d better become a believer. Soon.

The tectonic shifts we’ve seen in the past decade and a half have mainly been a result of the actions of the pre-Cyberspace generation, many of whom had to be dragged kicking and screaming onto the ‘Net. We’ve barely begun to feel the impact of the Always-On Generation. But if what we’re just now beginning to see from research into the media consumption habits and online behavior of younger people is any indication, we’re in for a wild ride:

  1. More than 60% of 13- to 24-year-olds would be more likely to buy a product endorsed by a YouTube celebrity (a “YouTuber”) than by a television or film star.
  2. 90% of 12- to 24-year-olds turn to YouTube for music.
  3. 60% of 18- to 24-year-olds watch TV and video on their smartphones.
  4. The younger someone is, the more likely they are to watch time-shifted TV (and skip commercials).
  5. Only 37% of teens (14-17) watch television with anyone else in their home.

It’s easy to look at these numbers (and others we’ve mentioned in this article) and reduce them to simple observations about the media consumption habits of young people, but to do so misses the bigger point. Yes, it is getting harder for marketers to reach younger consumers. Yes, the media landscape is only going to grow more confusing and chaotic. Yes, our ideas of “advertising” have to evolve away from the interrupt-driven models we’ve clung to for over 100 years. And yes, “the kids” love the social media.

But that’s the obvious stuff. What’s harder to see (though it’s becoming clearer with every passing year) is that we’re in for some major cultural changes that go far beyond media consumption habits to potentially fundamental changes in how we live.

The real changes aren’t going to come from skipping commercials or being able to stream audio wherever we want, but rather from the fact that where we are will matter less and less as technological developments continue to blur the lines between the real and the virtual. What will happen to our culture when we’re all experiencing our own versions of reality mediated by our own personal technologies? What will happen when community online becomes more important than the community around us? What will happen when the last remaining walls between work, play, school, and life come down? How will humans cope with a 24/7, always-connected world where we’re potentially never alone?

Since the beginning of human history, place has mattered. For the first time, we’re looking at a world where technology allows us to live, work, play, and learn anywhere: literally, in “no place.” What will life be like in Utopia? We’ll probably find out sooner than we think.

For more idfive marketing ideas, sign up for our monthly whitepaper.