My talk is called Beyond the Filter Bubble. I'm gonna explain what every word in that means at some point, so it doesn't need to magically make sense.
Before we delve into any of the specific details, we're basically gonna talk about four things today. We're gonna talk about the relationship that we have with media right now. How this interacts with human nature, which is always kind of a weird phrase, but I do get more specific than human nature, I promise. What this means for communication, and then, finally, what to do with all of the stuff I've just said, 'cause it's gonna get medium-heavy.
But, first, it feels like I should give a little introduction to how I ended up doing this talk and where the idea came from, because it took me a while to decide what I wanted to speak about this year and to come up with something that actually felt important to me. First off, hey, I'm Jon.
[Audience] Hi, Jon.
A real welcoming group. I love it here. I work as a strategic planner, as Stevie just said. What that basically means is I spend a lot of time researching human behavior, branding, trends, and what opportunities exist through media platforms to try to get people to believe things about stuff they may or may not want. That's really helpful from an advertising perspective, maybe not super helpful from a consumer perspective, but I like to believe I make the world a better place in my own little way.
The genesis of this talk really comes down to the feeling I had on the morning of November 9th, 2016. I had gone to sleep earlier than I expected to, not in the best mood. I woke up, I made myself a cup of coffee. And then, as I finished my cup of coffee, I put my head in my hands, because this happened. This was a shocking moment for a lot of people, for a lot of different reasons. I'm not attempting to gloss over the people who had much more serious responses to this specific moment and what it meant to them, nor am I going to spend this entire time bashing anyone's politics, because even if I don't agree with you, that's not technically what I'm here to talk about.
I'm here to talk about the fact that we all freaked out after this happened. We saw that Donald Trump got elected, and people didn't really see it coming. People weren't really 100% prepared for this being the reality, this being the thing that was going to happen. And in large part, we all freaked out because of this. This is the fivethirtyeight.com prediction map from November 8th, before the US election happened. Nate Silver, the guy behind FiveThirtyEight, came to international renown by very accurately predicting both of Barack Obama's election wins. What he did was a very detailed form of poll analysis, not just looking at national polling, but looking at regional polls and exit polls and congressional polls to determine how they may impact what was going to happen in the future.
This guy was essentially seen as the arbiter of truth when it came to data, telling you what was going to happen from a political perspective. And he thought there was a 71.4% chance Hillary Clinton was going to be the next president of the United States. When that didn't happen, and everyone was kind of done freaking out, there were a couple of interesting responses in the broader media sphere that really opened my eyes to a couple of trends.
One of the big ones was that the Pew Research Center, which is one of my favorite things in the world, did a bunch of research about the concept of fake news. This was a phrase that people probably hadn't said very many times before the lead-up to the most recent American election, but it's been a constant conversation back and forth. 64% of American adults said that made-up news has caused a great deal of confusion about basic facts and current events. At the same time, about one in four people said they had either intentionally or unintentionally shared completely fabricated news. They were implicating themselves in the problem.
The thing I found most interesting about this is that it was inherently baked into the idea that people were connecting and spreading this information via social media. This was part of the spread vector for this. You can picture it almost like that map in Outbreak, with Ebola slowly spreading across the United States. And there were some interesting responses.
Our friends at Facebook decided that they were going to start being a little more transparent about the trending news that popped up in their side panel. Instead of just having a title, they would have a title, a little bit of an explanation of what the story was, and then a source, so you could kind of vet, is this something I want to be engaged in? But even that didn't necessarily seem like enough, because a couple of months down the road, in February, Zuckerberg did something actually somewhat unprecedented. He wrote what felt like a 10,000-word manifesto about Facebook's place in the world, what it was attempting to accomplish, and what it was planning to bring to people, specifically calling out among the, I think six, priorities, the idea of creating informed communities where people have access to information and can connect with each other respectfully, and also building civically engaged communities, so people can really understand the politics of the world they're operating in and who they're speaking to.
What I found specifically fascinating about this is this came after about a decade of insisting that Facebook didn't have a responsibility to worry about these things or do these things. But the response to this whole concept of fake news and not being able to trust the media was so intense that it actually required this kind of a long-view statement of leadership, about how this was going to be a big priority moving forward. This is when it hit me. This wasn't really a story about an election. It was a story about everything. It was a story about how people learn about things, how people connect with information, how we predict what's going to happen in the future and how we communicate our understanding of reality to each other. And then I got really excited and started writing the outline to this talk. All of that was actually just the introduction.
Anyways, the first important thing we need to talk about. Everything about media is changing all at once. That's probably going to be the least shocking thing you hear today, considering that's been the headline in every media conference for the last decade or so.
There was this fun thing that happened once every news organization in the world started putting their product online for free. They suddenly stopped making as much money selling print versions of it with advertising in it. It was all very fascinating. The specific thing I wanna talk about right now is the shift to bigger platforms and bigger impact within those platforms. As of a couple weeks ago, according to Facebook themselves, they have two billion monthly active users. That's a really big, round number, and I don't like being the "wow, big round number, let's be shocked" guy. But that's actually kind of terrifying in terms of scale, because it's a number that you wouldn't think we were going to reach any time soon. But, for the sake of context, let's actually walk through what that means.
There's about 7 1/2 billion people on the planet, currently, give or take. About 3.8 billion of them are online. I recognize that's a symbol for wi-fi; I'm not saying they all have wi-fi, it's just the closest thing we have to a universal internet symbol. Of those 3.8 billion people on the internet, 2.8 billion of them are connected to some form of social media, which was a staggering thing when I first saw it. 2.8 billion people on this planet are connected to some form of information gathering and sharing, where they can present their own spin, their own information, their own attitude to other people. That's just overall. What's more interesting is out of the 4.9 billion people on the planet who have a mobile phone, not a smartphone, a mobile phone of any kind, 2.5 billion of them are using mobile social media. The scale of this is something that I can't really overstate, which is why you get a lot of fun, panicky headlines like "Social media is taking over the world." But social media's not really taking over the world any more than print media took over the world, or oral histories took over the world. It's just a rapidly maturing platform, and it's a platform that's very quickly becoming standard.
What's interesting is that as it becomes the information platform of choice, there's inherent changes to the way people interact with the world, interact with information, and understand each other. Now, when I say "information platform of choice," I'm picking words very carefully, because I don't necessarily mean data, I don't necessarily mean entertainment. I'm speaking about people's broader understanding of the planet, and to get a better sense of how people are actually engaging with information, I decided the best thing I could use as a placeholder for information was news. Recognizing that not all information is news, when you think about people absorbing new information, starting to understand the world around them, news is probably the common denominator.
It's also really interesting to watch as a case study, because it's kind of on fire right now, so they're trying some really interesting stuff. In order to understand what was going on, I decided I would survey about 700 Canadians. Out of those 700 Canadians, when asked "What is your number one source of news," 35% of them said a website of some kind. I didn't ask them individually, so I couldn't get really specific, but website was where they landed. 26% said social media was their number one source of news. That was more than I expected on an across-the-board perspective. 18% said TV was number one. Radio was 10%, newspapers 9%, which is kind of depressing, 'cause I was hoping newspaper would beat radio. Magazines were 2%, which isn't shocking, because very few people's number one source of news is something they get once a week or once a month.
But where this got interesting was when I split it out by age group. When I look at the vaguely-defined millennial demographic of 18-to-34-year-olds who are killing industries left and right, if you trust Buzzfeed and every other magazine known to man, 37% of them said social media was their number one source of news. 34% said a website, 13% said TV. When you look at the 55-plus demographic, who are likely to be the parents of people in this 18-to-34 demographic, 33% said TV was their number one source of news. 30% said website, and 11% said social media. This was fascinating to me because there's a massive divide in the types of stories you tell on each of these platforms.
On TV, everything is a linear narrative. There's a big, exciting explosion graphic, and then a serious-looking anchor comes in and tells you, "This is the thing that happened today, "and when the thing happened, "this is the other thing that happened." And then there's another big explosion graphic, and then they cut to a commercial, and they come back in with a commentator to talk about the thing that happened earlier with the exploding graphic. You know what I'm talking about.
In social media and on a website, you have an individual piece of information, generally cordoned off. There might be a couple of videos in there, but it's not necessarily presented in the same linear manner, so the understanding is inherently different. What's really interesting to me here, though, is in 2015, Parse.ly reported that Facebook passed Google as the biggest referrer to news sites. So, even those people who are saying they get their news from a website, there's a really good chance they got to that website through social media. Even when website news starts in social, the lens people are engaging it with, the way they're understanding it, is shaped by the context in which they see it. And the context in which you see things in social media is actually kind of fascinating.
Now, I went with mobile because I'm your standard frustrating North American millennial, and I spend a lot of time on my mobile device. Enough that I get prompted by people to put it away and be a human being. But looking at screenshots of two of the biggest platforms, Twitter and Facebook, you'll notice a couple of things about the presentation. One, fairly straightforward, simple, really headline-driven presentation of almost anything. It was really hard finding completely innocuous screenshots where it wasn't sharing anything personal about any of my friends or people I follow, so I apologize for the ads that you're seeing right now.
But you'll notice the presentation shows you, on Twitter, you can see A, who's seeing it, B, how many people responded to this, how many people retweeted it, how many people just decided they like it. And you can see that glorious little blue check mark that says "We take these people seriously, "and so should you." On Facebook, the information's even more interesting. You'll see, and I blanked out the names, but right up front, you get to see which of your friends like a brand, or a media platform, or a story. You get that added context of the number of people who like this specific thing, the number of people who've commented on it. All of these things shape your interpretation of that information, and shape how relevant and valid you might think it is, which inherently changes how you interpret the world.
If someone writes a story about the coming climate change apocalypse, and you see it's scrolling through one of your feeds, and the image is boring and the headline is kinda dry, you don't get nearly as excited as people did about the most recent piece that basically made it sound like we're all doomed, that every media outlet talked about for, like, four days. Remember, ice shelf falling off? We're all going to boil to death, something? That shift in presentation really changes your interpretation of the moment and your interpretation of the message.
Now, this is interesting because, spoiler alert, your brain is not good at the new media landscape. That's okay, though. Mine isn't, either. There are very few people who are the magic unicorns who can actually look objectively at the way these things are presented, and how they're engaging with them, and ensure that they're actually understanding only the intentionally-presented or meaningfully-presented information put out there.
This is one of my favorite images in the world. It's available on Wikipedia. It is the Cognitive Bias Codex. It is a giant, exhaustive list of literally everything wrong with the way your brain interprets information. Which, as someone who spends a lot of time debating and discussing human behavior, is one of the funnest things to do. It's like being in a high school debate class, except, instead of arguing with someone's reasoning, you're actually arguing with their capacity to reason. If you can get them to avoid taking it personally, it's a really great way to win arguments. But I'm gonna point out two specific things, and yes, I have spent a lot of time making sure I got that zoom motion nice and comfortable. You're not gonna use Keynote without having a couple of nice zoomy-zooms. Thank you.
One of the big things that we need to think about is the way our brains are naturally wired. We're drawn to details that confirm our own existing beliefs. There's a whole bunch of different ways that we fall into this specific space, but another thing that we need to consider is that we tend to imagine that things and people we're comfortable and familiar with are superior to things that we're not. There are some really major cognitive biases built into the way your brain reads the universe that social media really, really benefits from.
One of the big ones is confirmation bias, because the information you're getting is generally aligned to the social circle you have, and the patterns you've already expressed, the interests you already have. You're gonna hear a lot of things that you already believe, so that's gonna be a nice, supportive feeling. You're gonna get that little dopamine rush of, "I was right!" And then you're kinda gonna move forward.
Authority bias is an issue here, not just based on the brands and sources that information comes from, but now we have the added layer of the algorithms and platforms serving you this information, which provide an authority bias of their own. You know that there's a team of people 5,000 times smarter than you sitting somewhere in Menlo Park slowly building a system to serve us the most relevant things that they possibly can. You're gonna trust that the things you see have some real value and validity to them. I mean, I can keep going about subjective validation, anecdotal fallacy, in-group bias, which is actually a big one.
Generally, it's the idea that people in your group are superior to people outside of your group. This is not shocking, and it's also probably to blame for the vast majority of horrible things in human history. But the important thing to get across is that no matter what you make, social media is going to shape how it's perceived and how it's seen. Now we need to talk about what this new reality looks like. Again, pretty much everything we just talked about was kind of background to get here, but we are gonna get somewhere where this is more applicable and less, look at me talk about what's wrong with human brains.
One of the things that's really nice is going back to that survey, going back to that information. People still validate information on fairly reasonable grounds. Out of those 700 Canadians, I think it was actually 701, when asked what makes them trust a source the most, 36% of them said transparency was the number one thing that made them trust information, trust news, trust a brand. 29% said it was the history, which is really great if you're publishing The Globe and Mail or the New York Times. It's a little bit harder if you're a media startup or a website that started last week.
But, knowing that transparency and history are what makes it real to about 65% of people is actually really heartening. It's really good to know that there's a rational basis on which people judge information. Until you remember most of those people are still getting out-of-context news from social media, and that it's really quite difficult to verify something that you've never actually read. This was one of the big findings and takeaways that shaped how I consider content operating in our current world.
This is a quote from the Washington Post, and I'm gonna read it verbatim, because it's so staggering that I don't think I could give it proper words. "According to a new study by computer scientists at Columbia University and the French National Institute, 59% of links shared on social media have never actually been clicked." In other words, most people appear to retweet news without ever reading it. Now, I don't think anyone's shocked by this behavior. I'm imagining pretty much everyone in this room is at least somewhat active on social media, and has generally noticed that you'll see someone retweet an article with kind of an "I told you so" caption, not realizing it completely disproves their long-held point.
What's interesting is the behavior behind this, though, this idea that people are sharing for the validation and benefits of sharing without necessarily even understanding what they're communicating out there. We talked earlier about how one in four Americans in, I believe it was December 2016, admitted to having shared, intentionally or unintentionally, fake information. And we all had our moment of, "Why in the world would someone do that?" But what we're getting at right here is, people are very often sharing information without even being aware of what the information is.
Your information ecosystem is built by people who are kind of just passing the buck and hoping someone figures it out later, which is mildly worrisome, to say the least. Why are people sharing stuff they didn't read? Well, the scariest part is, they probably actually think they read it. Felix Salmon, who is part of the Nieman Journalism Lab and a fairly established public intellectual in the journalism space, talked about the headline being the central unit of news this past year. His exact words were, "Even with the best-crafted headline in the world, for every person who clicks on it, there are hundreds, if not thousands, who see it, digest it, and simply move on. People get their news from headlines now in a way they never did in the past, because they see so many of those headlines on Twitter and on Facebook."
This is one of those things that staggered me for a moment, even though I'm completely guilty of this exact same behavior. Literally accessing your information mostly through digital means, often through feed-based digital means, requires a lot of sorting and parsing. You end up reading 20, 30 headlines for every article you actually click through and dedicate time reading. You'll do the same thing on a website. You'll read every single menu item, and then pick the one that seems interesting. But all of those things you read are still in your head. All of them start influencing the way you see the world, and the way you interact with information. When you read a headline that, "X did Y at place Z," you're going to keep that information in your head regardless of whether or not the article completely disproves it.
This concept of people being influenced by headlines they read in articles they don't read is actually really important from a content design standpoint, 'cause we've now reached a world where people think it's acceptable to be educated by reading the incredibly short summation that comes before the slightly longer introductory summation that comes before the entire piece of information.
Now, to make matters worse, just in case I haven't made matters bad enough yet, headlines, issues, and topics are heavily divided by worldview. This is an image that I actually picked up from the Laboratory for Social Machines at the MIT Media Lab. It was an analysis of Twitter conversation, again, leading up to the most recent US election. I don't mean to keep talking about it, it's just one of the most fascinating case studies for people having no idea what other people are thinking in modern history.
What I found fascinating about this is not only did Clinton supporters and Trump supporters not actually speak to each other at all, they actually spoke about completely different things in completely different ways. So, racial issues, pretty much exclusively spoken about by Clinton supporters. Immigration, two totally separate conversations. One group of people talking about a clear path for people to come and contribute to a society that they're engaged in and excited about living in, another group of people worried about wall-building. What was interesting is, these groups didn't overlap. They didn't speak to each other and they didn't share information. There were no shared facts linking these two groups.
Which brings me to this guy. Several years ago, Eli Pariser wrote a book called The Filter Bubble: How The New Personalized Web Is Changing What We Read and How We Think. This touches on a lot of the points I raised earlier. The key point of this book was essentially that the filter bubble is the idea that personalization tools from companies like Facebook and Google have isolated us from opposing viewpoints, leading conservatives and liberals to feel like they occupy separate realities. This is probably one of those sentences that feels so true to everyone here that I don't really need to add much to it, but it's been fascinating watching, over the last year, Eli Pariser become famous as the guy who explains how we got here, by talking about his theory and talking about the filter bubble.
Talking about this idea that we all live in our own little separate information ecosystem, tailored to our own wants and needs and interests, and it protects us from ever needing to consider what other people are thinking. Effectively, we've entered this age of information segregation. I know segregation is a really heavy word, like, I'm violently aware that it's a really heavy word, but I think it's a valid one, 'cause we actually have reached a point where facts are both separate and unequal.
You have different groups of people who, based on different information from different sources, believe completely different things with 100% absolute certainty, which is minorly terrifying. Which brings me to another point which I think gets left to the side in this entire conversation. The Overton window. Now, some of you are really big on civics and politics, and, I don't know, maybe libertarian media think-tanks. The Overton window is a concept named for Joseph Overton of the Mackinac Center for Public Policy, and this particular description comes from his colleague there, Joseph Lehman. "At any one time, some group of adjacent policies along the freedom spectrum fall into a window of political possibility. Policies inside the window are politically acceptable," meaning officeholders believe they can support the policies and survive the next election. That was political-consultant-speak for, there's a window of acceptable stuff that a politician can say.
A politician can get up on a stage and, in Canada, say, "You know, we have socialized healthcare, and that's great, but we should have socialized dental care." People wouldn't immediately freak out if they said that. But if a politician walked on stage and said, "We have socialized healthcare in Canada, and that's why, in our next election, I'm gonna be building everyone robot bodies," people would have a moment of, "That's too far from the window of acceptable discourse, and now I think you're crazy."
The reason this is kind of broken down along the freedom axis is, the Mackinac Center is actually pretty darn libertarian, so everything's on a freedom axis. It's just an inherent thing with their whole shebang. But anyways, this is Joseph P. Overton. I love the fact that his name is Joseph P. Overton, 'cause if you were gonna come up with a name that sounds like a political strategist's name, it would be Joseph P. Overton. Also, kinda reminds me of Alex P. Keaton from Family Ties, so I'm picturing the exact same tie-wearing 17-year-old changing the world. Great part in his hair, too.
Overton was an incredibly smart guy, and this idea of the Overton window has actually become more and more interesting as you've seen narratives kind of diverge, and as you've seen people step more and more out of the norm. This idea that there's this central point which is kind of like today's normal that everyone shares, and all of our conversations happen in a window around this point is actually hugely influential, and really interesting. This idea that, when you are having a conversation with someone, you're operating in the same consensus reality, and you're all limited by the same box. And if people step outside of that box, they're probably a little untrustworthy. You're a little concerned that what they're saying might be dangerous, or might be scary. The box of acceptable conversations is kind of an important thing.
But what's interesting is that theory's based entirely on there being a singular audience. It's a politics-driven theory, so it's entirely about the idea that the voting public considers this single window of things that are possible or not possible, things that are real or not real, things that are acceptable or unacceptable to say or think or suggest. We no longer have one audience. I hope it's been clear through everything I've talked about that we have multiple audiences. Politics is easy to pick on 'cause you can break it down into two neat groups, but in other conversations, you might have 30 audiences. You might have 300.
One thing I learned in the last year, watching a lot of very active, engaged groups of people who are pushing for social change in Toronto, is that the number of stakeholders in a room who agree on everything is almost always zero. You can generally get blocs of people together who agree on some subjects, but not all, and so you have that need to create a shared point for conversation. What I'm saying is, effectively, if we don't have one audience, then we can't really have one window of acceptable discourse. So we've got filter bubbles and Overton windows, and because I like creating words by mashing together other people's ideas, I'm calling them filter windows. I made a handy definition, because I like handy definitions.
The sciencey-sounding handy definition is, different sociocultural groups, given different and contrasting information diets, end up having an incredibly narrow range of overlap in ideas they consider acceptable for discussion when they engage with someone from outside their group. I realize that's probably a little too detailed, so the short version is, why it's impossible to have a conversation that is both substantive and polite with someone you disagree with on a "political" issue in 2017.
I threw political in air quotes because we have this weird habit as a society of pretending that anything we like to fight about is political, rather than ethical or moral. Most political issues actually have nothing to do with politics. But that's a completely other talk and a completely other discussion. The reason I think filter windows are interesting as a subject is it kind of shows you the space for debate and discussion and collaboration. Let's use political parties as an example. Political parties are a fun, safe example. No one ever gets offended when you discuss political parties. It'll be fun.
If we break these two parties down into an Us and Them, and you can be Us or Them, it doesn't matter. You pick. There's always a range of overlap to some extent. Political parties might be diametrically opposed, but they both, at the very least, believe that political parties should exist. There's this little space of overlap where they can have discourse and conversation. They probably believe in the rule of law to some degree. They probably believe that legislation is a great way to improve or change society. There's some space for conversation, there. Where it gets interesting is when you move out of people operating in the same system, and move to people, oh, I actually forgot that this was the next slide.
So, moving forward from everything I said except for the last sentence and a half, patriotism is another great example. There's so much consensus comfort with the concept of patriotism, but there's always going to be things that fit outside the acceptable realm of conversation. I mean, one thing I noticed during the entire Canada 150 thing, which continues going on, is people are very comfortable celebrating the country they're from, but different groups of people are comfortable in different ways.
I personally wasn't comfortable with any broad, exciting celebrations of Canada and Canadianness without acknowledging the ongoing horrible treatment of indigenous Canadians. That made me uncomfortable. Whereas, for other people, any time I brought that up and tried to acknowledge it, it would make them intensely uncomfortable. They suddenly wouldn't be able to have a conversation with me, because I'd stepped out of their acceptability window of things to discuss. And that's on patriotism, which generally, if people are in a country and like it, is normally considered a pretty safe topic.
Where it gets really interesting is when you have difficult discussions or difficult subjects that you're creating content or having discussions about. Racism is a big one, and I promise I'm not gonna try to litigate racism in the middle of this presentation. That's a little heavy for Tuesday morning. But one of the things I find completely fascinating is two very reasonable, educated people can find it impossible to have a discussion of racism, because one of them may believe and have been raised to believe that discussion of race is inherently racist. That any time you bring it up, you're making people uncomfortable, and it's wrong, and you shouldn't do it, and why are we being so divisive by having this conversation? Whereas another group of people, really I'm kind of describing myself with this second group, might think that everything's kind of a little racist, and if we talk about it, it won't be so bad, and maybe we can be aware of it and start working on some of these things.
What's interesting is that it's not only on heavy issues that we run into these challenges. You can have this with literature. If you grew up reading nothing but Harry Potter, and then transitioned directly into Fifty Shades of Grey, you're gonna have a really hard time having a conversation about the merits of literature with someone who's only ever read the Russian masters. Even trivial conversations can do it, and I have this one pretty often 'cause I'm kind of a huge fan: "Is Riverdale good?" is a conversation where you could end up in a completely different world than the person you're speaking to.
You're operating in a completely different filter window, so there's no point of overlap, 'cause they don't understand the majesty of CW teen soap operas. Completely off the cuff, I will say I think The OC is the unspoken classic of our time. It doesn't get the credit it deserves. But there are a surprising number of people who just would not want to talk with me about TV after that. We apparently have no overlap. The point I'm trying to get at, here, is the more we live and learn in our own separate realities, our own little windows, our own limitations of acceptability, the harder it is for us to work across them. The harder it is for us to collaborate, to work together to build something meaningful. And as we're getting to more divided bubbles and windows of acceptability and sources of information, it gets really hard to collaborate, which, I mean, I'm gonna get a little preachy here.
Every good thing that ever happened in human history happened because a whole bunch of people worked together to make something awesome happen. All the people in this room are here because a group of hard-working people collaborated to make it happen. It wasn't a singular random moment of a singular random genius. And so, every time I see something that divides people and makes it harder for them to work together, I get inherently uncomfortable about what we're cutting ourselves off from having in the future. End preachy moment, I promise this is the last one.
What I'd like to talk about now is what to actually do with all of this stuff I've said, 'cause I realize I ranged from heavy, to super heavy, to weirdly academic, to kind of heavy again. What to do with creating content for now is a really interesting question, and it's one I think about all the time. A big part of my job is actually developing creative briefings, and getting a group of people into the room, saying, "We need content that does this job. Here's the stuff we need to consider."
The biggest thing I've taken away from this subject, and taken away from digging into it, is you can't assume shared context anymore, which sounds really small, but it's actually minorly terrifying. For all of human history, you've been able to assume shared context within a community. If you lived in a village 1000 years ago, you would assume everyone in the village would know who the current king or warlord or impending threat would be. I mean, even 10 years ago, you'd assume everyone would have a basic understanding of the major facts that are happening in the political universe that you operate in, or in the consumer universe you operate in. That's no longer the case, and the best example I've seen of that actually happened during the last Super Bowl.
This company that I'd never heard of before called 84 Lumber, because apparently you advertise lumber during the Super Bowl now, put together this beautiful ad with this really compelling, heart-wrenching story. You saw a mother and child, and you were led to assume they were in Mexico, escaping to a better life. You saw them huddled around a fire, you saw them traveling in terrible weather, piled in the back of a truck with 100 other people. And then at the end of this commercial, you saw them see this giant wall. You saw them come up to this huge wall, and then, when you went to the website and continued watching, there was a door in the wall, and they got to walk in.
This was fascinating for me, because this was the most politically divisive thing I've seen that everyone thought was on their side. When I saw this, my immediate interpretation was, "Oh, this is a pro-immigration message. They're pointing out how cruel it is to lock people out of opportunity if this is a thing they really want and they're searching for." And if you were in my specific social media bubble, there were probably 150 other people coming to that exact same conclusion, and being impressed by this bold, if somewhat vague stance by this lumber company that, again, none of us had ever heard of before. In an entirely different internet bubble, there was a group of people talking about how great this was, because they were finally putting together a story that was showing that, if you're willing to come through the door and do things the right way, the current policy works.
To them, this was actually an anti-immigration message, and it was the exact same ad that everyone else was watching. I'm not talking about this 'cause I think this was a brilliantly done example. I'm talking about it because I found it fascinating that someone managed to spend millions upon millions of dollars in the most saturated media moment in the world saying something that literally everyone thought was on their side. Now, unfortunately, this company is located in an extremely red state, so they had to put out two or three different press releases explaining, "No, we're actually super pro-wall. Go, wall, go. Yay, wall."
But the lesson here is that we're operating in a time where you really have two options. You can either create without context, or you can embed context into what you're saying. There was a way to make that specific ad and have it be incredibly clear what you were trying to say, and what side you were on, for lack of a better phrase. There was also an opportunity to create something that didn't necessarily need context to make sense. These are two things to consider in content development, when you tend to have really divided audiences and really divided perceptual windows. You need to create something that either works for everyone, or explains itself enough that it works for everyone. The other recommendation that I would make is communicating in fractals, and this is the nerdiest thing I'm gonna say today. I apologize profusely.
The lazy definition I'll give you is that a fractal is a graph or curve where zooming in on any specific area shows the same statistical behavior as the overarching whole. I got a GIF, so it makes slightly more sense. As you continue zooming in, you'll notice that this fractal keeps having the same shape, and it's kinda trippy, and it seems like the kind of thing that would end up in the visualizer of iTunes when I was in high school. The point I'm trying to make is that there's a possibility to make content that tells the same story in the headline, in the introductory paragraph, in the image, and in the total of the page. It makes it harder to make incredibly complicated content, but it also makes content 2017-attention-span-proof, when people are probably just gonna read the headline and scroll down until there's something to click, anyways. Depressing, but helpful.
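If you'd rather see that self-similarity in code than in a GIF, here's a minimal sketch, purely an editorial illustration of the general idea rather than anything from the talk itself, using the classic middle-thirds Cantor set. It builds the set step by step and checks that zooming into the left third and rescaling by three gives you back the whole thing, one construction step earlier:

```python
def cantor(depth):
    """Return the intervals of the Cantor set after `depth` middle-third removals."""
    intervals = [(0.0, 1.0)]
    for _ in range(depth):
        nxt = []
        for a, b in intervals:
            third = (b - a) / 3
            nxt.append((a, a + third))      # keep the left third
            nxt.append((b - third, b))      # keep the right third
        intervals = nxt
    return intervals

# Self-similarity check: take the part of the depth-4 set that lives in
# [0, 1/3], rescale it by 3, and you recover the full depth-3 set.
whole = cantor(3)
left_third = [(a, b) for a, b in cantor(4) if b <= 1 / 3]
rescaled = [(round(a * 3, 9), round(b * 3, 9)) for a, b in left_third]
assert rescaled == [(round(a, 9), round(b, 9)) for a, b in whole]
```

That's the "communicating in fractals" point in miniature: the zoomed-in piece carries the same shape as the whole, just like a headline that tells the same story as the full page.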
Another consideration here is really leveraging social reinforcement. When I say social reinforcement rather than social sharing or social spreading, I'm trying to make a really specific point, 'cause we don't necessarily want to encourage blind retweets. That's not a great way to, A, make people think that someone has really engaged with a specific piece of content, or B, actually get people to engage with the specific piece of content. A company that does this incredibly well, actually, is Medium. This will probably be the only time someone says something really nice about Medium in the next hour, so I hope they're listening.
This is an article from The Ringer, which is a thing I read obsessively 'cause I really care about basketball. You'll notice on one side of their Medium page you can do the like, share, Twitter, Facebook, bookmark thing. The thing I find interesting is that you can actually highlight a line in the story, and when other people come see it, they will see that you highlighted this line and thought it was important. So other people could show up and see that Jon thought this specific line about "Canada could be looking at its biggest basketball star ever" was worth marking. I wasn't kidding, I love basketball.
What's interesting is that they also have a baked-in response feature, so you can write a response to a specific paragraph or article, and say to someone, "I thought this was great." But you can also post it as your own piece of content. You can highlight a paragraph that you wanna argue with, hit Respond, and then create your own page where you are fighting with that specific paragraph. It'll be linkable from that page. You can share it. But again, it's bringing people into your reaction and your social capital tied to that specific content. And then, if you really wanna be a dork, you can share it on Twitter with a little capture of the specific thing that you thought was interesting and engaging, and driving home that you have your personal connection to this story and to this specific argument.
I thought this was really, really fascinating as an example of leveraging social reinforcement, rather than just kind of blind liking and clicking and retweeting, 'cause it specifically drives home that there were things you cared about, and things you specifically connected to, and it gets people to engage with that internal content in a way that might actually push them to click through and read a thing, which, as we've learned, only 40% of people actually do. The short version of this is dark social, which is my favorite cool-sounding content term ever: it's literally sharing stuff through messaging and DMs and emails, rather than public social channels.
Personal recommendations, content sharing, personal subscriptions, they'll cut through a lot better than, "Hey, your friend also liked this!" This next one is probably the most boring and obvious thing, but: back it up. If you're gonna say something, embed enough information in it that people can tell whether or not you're basing it on something.
This is an article I read last week, written by Sarah Kendzior, who's a really, really great critic of authoritarians, and apparently, now, the American president, so that's an interesting thing. I am not gonna run through the article again, but I am gonna talk about some specific things that were really interesting. She mentions that the president vaguely tweeted that he gave the country away. There's a link there. And then you click, and it turns out, actually, that's a really loose interpretation of what he said. He argued that, well, not argued, I don't know if you can call that an argument. But he essentially said that Hillary Clinton would have given the country away.
It was nice being able to click through and see what that statement was based on, and understand that I might disagree with the interpretation, but at least I know it was based on something. Similarly, you can find out that, apparently, Russians are suspects in nuclear site hackings, "according to sources," and I can't even read that without bursting into laughter. But it's interesting to see even news media moving toward embedding links in its stories, when you consider the decade-long resistance to that once-standard practice, a practice that's now oddly dying because it's harder to embed links when everything lives on a mobile device, even though it used to be just the way people engaged on the internet.
Baking the proof into your story, whether it's your actions, hard data, or anecdotal reinforcement, is really, really valuable. It creates something that includes enough facts that even people from a completely different filter window will have some context for what you're saying, and it roots your message in some level of acceptability for them.
The last big option is creating a message that you can't really disagree with, and I think this is possibly the best example of that I've ever seen. This is a Citi Bike stand in New York. At the time, pretty much every bank was the most hated kind of organization that could possibly be. You guys may remember, like, 2008, 2010, wasn't great. Banking crisis, houses, et cetera. Instead of making a series of ads talking about how great they were, and having people with different information judge Citibank differently based on the way they approached it, they threw a bunch of money at the city of New York to fund the building of a bike program. You could rent a bike, you could ride a bike around.
They literally created a new public utility with their giant pile of money rather than start a debate or message specifically. They did something so inarguably good for everybody that no one could actually get mad at them about it or misinterpret it. It was actually interesting, because you saw people attempting to misinterpret it and turn it into a negative thing, and it was actually really hard to do with a straight face. I mean, giving away access to a giant bike system isn't really something you can say is net-negative.
The point I'm trying to make here is that attaching your message to the common good, if done well, helps find those little windows of overlap in the area between filter windows. We talked about political parties and how they all believe in the rule of law, they all believe in X, they all believe in Y. This is one of those for broader segments of humans. You can thread the needle and find the specific thing you can do or the specific thing you can say that is within the acceptability window to a broad enough group of people.
And that's it for me. Thank you, thank you.