July 21-22, 2021 / Online

Fight Bias with Content Strategy

Written by David Dylan Thomas

March 1, 2021

Transcript

Hello and welcome to Fight Bias with Content Strategy. My name is David Dylan Thomas. Today we're gonna walk through using mental shortcuts for good instead of evil. We can talk about how to use them for evil if you'd like, but that's a different talk.

So, my name is David Dylan Thomas, and I am a content strategy advocate at Think Company. It's an experience design firm. And for the purposes of this talk I am also the host and creator of The Cognitive Bias Podcast. And I'll talk to you a little bit about how I came to be hosting this podcast.

So, a while ago, I saw a talk by Iris Bohnet, called Gender Equality by Design, and it was an amazing talk, and it gets really into this notion that a lot of the implicit bias that we see around us, whether it's gender bias or racial bias, really just comes down to pattern recognition.

So, an example might be you're hiring a web developer. And when I say the words web developer, the image that comes unbidden to your brain might be skinny white dude. And this isn't because you actually think skinny white dudes are better programmers. It's just that a pattern has been established from movies or television that you've seen or offices that you've worked in. And if you see a name at the top of a resume that doesn't fit that pattern, you might start to give that resume the side eye. And again, not consciously, but unconsciously. And when I realized that something as horrible as gender bias or racial bias could come down to something as simple, and dare I say human, as pattern recognition, I decided I needed to learn everything I possibly could about cognitive bias. And so I did.

This is a page from the RationalWiki about cognitive biases. There's well over a hundred on this page. And I realized pretty quickly I was not gonna learn this all in a day. So I just picked one a day and learned about it, and then moved on to the next one and the next one. And this turned me into the guy who wouldn't shut up about cognitive bias, and my friends eventually were like, "Dave, please, just get a podcast." So that's what I did.

So, it's worth answering the question, what is cognitive bias? At the end of the day, it's a series of shortcuts your mind is making. We have to make something like a trillion decisions a day. Even right now I'm making decisions about how fast to talk, what to do with my hands, where to look. And if I thought carefully about every single one of those decisions, I'd never get anything done. So most of our lives are lived on autopilot, and generally that's a good thing. However, that autopilot sometimes leads us into error, and we call those errors cognitive biases.

So, here's a fun one. It's called illusion of control, and the way you'll see it is you might be playing a game that involves rolling a die. And if you need a really high number, you'll roll that die really hard. If you need a low number, you might roll it really gently. And we know that how hard we throw the die makes absolutely no difference, but we like to think we have control in situations where we don't have any control, and so we embody that desire through the way that we throw the die.

A much more harmful cognitive bias is called confirmation bias, and you've probably heard of it. It's this idea that once you get an idea in your head, you will look for evidence to confirm that idea, and if anything comes along that doesn't confirm that idea, you'll just say fake news and move on. One of the most powerful instances of this came during the Iraq War. So originally, the idea was we needed to go in there because Saddam Hussein had weapons of mass destruction. We need to get him before he gets us. It's a very plausible, convincing argument. But when we got in there, turns out not so much with weapons of mass destruction. So, even though within a year the president of the United States, the person who had said there's definitely weapons of mass destruction there, had said, "No, there's not weapons of mass destruction there," fast forward to 2015 and still about 53% of Republicans believed that there were weapons of mass destruction there, and 36% of Democrats. And if Fox News, a news source that was going to confirm that bias, was where you got your news, it was something around 51%. So, no matter what, this is an extremely powerful bias to combat, and we are gonna come back to it later.

Now, part of the reason these biases are difficult to combat is that you may not even know you have them. There's even a bias called the bias blind spot, which basically says you think you don't have any biases but you're damn sure everybody else does. Another part of the problem, and the reason it's hard to see them, is that 95% of cognition happens below the conscious level. Now, I'm actually working on a book about all of this called "Design for Cognitive Bias", and in doing the research I went to double-check that number. I originally thought it was 90%, but it's 95%. It's even worse than I thought. So, the most honest answer you can give if someone asks you why you did something is, "How the hell should I know?" And here's the thing. Even if you do know that you've got the bias, the odds are you're gonna do it anyway.

So, there's a bias called anchoring, and basically, the way it goes is I could ask everyone watching to write down the last two numbers of your Social Security number, and then, totally unrelated, I could say, "Hey, we're all gonna bid on this bottle of wine." Those of you who wrote down lower numbers are gonna bid lower. Those of you who wrote down higher numbers are gonna bid higher. It's anchoring. It's a thing. Now, here's the thing. I could tell you about anchoring before we begin the experiment and say, "Hey, there's this thing called anchoring. You're gonna do a thing. Don't do it." You'll still do it. But it gets worse. I could say, "Hey, there's this thing called anchoring, and I will pay you cash money not to do it." Still do it.

Now, as designers, content strategists, people who make things, why do we need to care about cognitive bias? Because a lot of our job comes down to helping people make decisions. And there's a concept called choice architecture which we really need to pay attention to. A good example of choice architecture is if you go try to buy produce at a grocery store. Common wisdom is to not pick from the top because the grocer is gonna put the oldest stock there because that's the stock they're most trying to get rid of and they know people are lazy, they'll reach for what's right in front of them, and they'll take that. So that situation has been architected to benefit the grocer. It could just as easily be architected to benefit the customer and put the freshest stuff right at the top, where it's easy to reach. But that's the idea. The way you design the experience will influence the choices that people make.

So, think about the decisions your user needs to make. And now think about how people make decisions. And I mean real people, not rational people, because the rational user is kind of a myth. Remember, your user is making most of their decisions, 95% of the time, without even realizing they're making the decision. So, there are content and design choices that we can make that can keep harmful cognitive biases at bay, or even use them for good, and that's what I wanna talk about today.

So, let's go back to that example of that skinny white dude. As it turns out, if you have two identical resumes and the only difference between them is the name at the top, then in a male-dominated field the resume with the male name will go up the chain and the resume with the female name will just stay on the pile. And researchers have seen this again and again and again. But here's the thing. Why do you need that information? What about the name is helping you, the hiring manager, decide who to hire? Think of it as a signal-to-noise problem. The signal, the thing you need to be paying attention to, is the qualifications, the experience. The noise, the thing that isn't actually helping you and may be hurting you, is the gender, the race, or whatever you're reading into the gender and the race in that name.

So, the City of Philadelphia actually did some blind hiring for a web developer role, and they discovered a couple of things. One, even in the high-tech world of web development, the best way to anonymize a resume is to print it out, physically, and have an intern who has no stake in the hiring process take a marker and just redact it like a CIA document. So, even once you've done that and you've found a resume that you liked, the next thing you would do naturally is go to their GitHub profile to figure out, like, what their skillset is. What do you think happens the second they go to GitHub? All of the personal information shows up and ruins the experiment. So, clever people that they are, they wrote a Chrome plugin that would anonymize that information as soon as it loaded. Thank god for structured content. And just to complete the circle, they took that code and they put it back on GitHub. It's there right now if you ever wanna use it for anonymized hiring.
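Just to make that concrete, here's a rough sketch of what a content script like that might look like. This is not the City of Philadelphia's actual plugin, and the selectors are made-up placeholders rather than GitHub's real markup; it's just the general shape of "find the identifying elements and blank them out as the page loads."

```typescript
// Minimal sketch of a content script that redacts identifying details on a
// profile page as it loads. Selectors are illustrative placeholders, not the
// real ones GitHub uses or the ones the Philadelphia plugin used.
const REDACTED = "[redacted]";
const identifyingSelectors = [".profile-name", ".profile-photo", ".profile-bio"];

function redact(root: ParentNode): void {
  for (const selector of identifyingSelectors) {
    root.querySelectorAll(selector).forEach((el) => {
      if (el instanceof HTMLImageElement) {
        el.src = ""; // blank out avatars
      } else {
        el.textContent = REDACTED; // replace names, bios, and other identifying text
      }
    });
  }
}

// Redact whatever is already on the page, then keep watching, because profile
// pages load a lot of their content asynchronously.
redact(document);
new MutationObserver(() => redact(document)).observe(document.body, {
  childList: true,
  subtree: true,
});
```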

So, this isn't just about helping humans make hiring decisions. So, Amazon had a hiring bot that turned out to be really, really sexist. It was only recommending men. So much so that if it saw the name of a women's college on a resume, it would downvote it. So, they tried to figure out how it became so sexist, and they looked at the data that they had used to train the AI. And the data they used was the last 10 years of resumes, which seems reasonable enough, but guess what all those resumes had in common. It was mostly guys. So the AI took one look at that, said, "Gee, it sure must like dudes," and then just kept recommending dudes.
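To see how that happens mechanically, here's a deliberately tiny, made-up sketch, nothing like Amazon's actual system, of a scorer that learns word weights from historical hiring decisions. Because the historical "hired" pile skews male, a phrase like "women's college" picks up a negative weight even though nobody ever wrote a sexist rule.

```typescript
// Toy illustration only: score resumes by how often each phrase appeared in
// previously hired vs. rejected resumes. The data below is invented and
// skewed on purpose, the way ten years of mostly-male hires would be.
type Example = { words: string[]; hired: boolean };

const history: Example[] = [
  { words: ["java", "chess club"], hired: true },
  { words: ["python", "rugby"], hired: true },
  { words: ["java", "rugby"], hired: true },
  { words: ["python", "women's college"], hired: false },
  { words: ["java", "women's college"], hired: false },
];

// weight(word) = P(word | hired) - P(word | rejected): a crude learned signal
function learnWeights(data: Example[]): Map<string, number> {
  const hired = data.filter((d) => d.hired);
  const rejected = data.filter((d) => !d.hired);
  const freq = (group: Example[], word: string) =>
    group.filter((d) => d.words.includes(word)).length / Math.max(group.length, 1);
  const weights = new Map<string, number>();
  for (const d of data)
    for (const word of d.words)
      weights.set(word, freq(hired, word) - freq(rejected, word));
  return weights;
}

const wordWeights = learnWeights(history);
const score = (resume: string[]) =>
  resume.reduce((sum, word) => sum + (wordWeights.get(word) ?? 0), 0);

// Identical skills, but one resume mentions a women's college: it scores lower,
// purely because of the pattern in the historical data.
console.log(score(["java", "chess club"]));      // positive
console.log(score(["java", "women's college"])); // negative
```

The bias isn't in the code; it's in the history the code faithfully learned from.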

Now, if the job of the AI was to detect bias in Amazon's hiring, it would be a rousing success. You'd say, "Oh, okay. We found where the holes are. We need to patch that up." But they were relying on it to recommend who to hire, so I feel like, and I've said this before, I really believe that we should start lying to AI. We have this image in our head that what we need to do with AI is show it the world as it is and then it will make good predictions. If you show it the world as it is, it's a very racist, sexist world. It's gonna make racist, sexist predictions. So we need to lie to it and say, "Hey, oh, yeah. Black people are way more likely to get housing loans," and then let it make those predictions.

Now, this can play in the reverse. So, Amazon Go is a store where you just walk in, get your stuff, and leave. You don't have to check out because Amazon is automagically deducting all of this stuff from your account. So, this had an interesting side effect for Ashlee Clark Thompson, who wrote a story for CNET called "In Amazon Go, no one thinks I'm stealing". There is a phenomenon known as shopping while black, which manifests as hyperaggressive customer service. "Can I help you? No, really, can I help you? No, really, can I help you?" So, there's no one in an Amazon Go store to do that, so she experienced, as a black woman, a very freeing shopping experience, where she felt unburdened of all of that. So, by removing one design element, namely the cashiers, they actually created a less biased experience for this customer.

We're also seeing this play out in the criminal justice system. So, San Francisco last year tried to implement what they call blind charging. So, the idea is that a DA, a district attorney, has lots of leeway in deciding who they will or will not charge, and not surprisingly, black people were being charged far more often than their white counterparts. So they said, "Well, why do you need any of that information in the report, the crime report you're looking at, when you decide who to charge? Let's remove the race. Let's remove the gender. Let's remove even the location of the crime so you only know what happened." So not the personal information of who it happened to or the personal information of the person who did it. Just what happened, and use that to make your decision. Now, it's a little early yet, and we haven't heard how this is playing out, whether it's actually reducing that racial disparity in charging or not, but I hope so.

Another concept I wanna talk about is called cognitive fluency. And this is the idea that if something looks like it's gonna be easy to read, we assume that whatever it's talking about is also gonna be easy to do. And by the same token, if something looks like it's gonna be hard to read, we assume that whatever it's talking about is also gonna be hard to do. So, I love pancakes. I've been making pancakes a lot lately. This is a recipe for pancakes. And the text is kind of clumped together. It's kinda small. And I might glance at this, and before I read a word, I might come to the conclusion that, "You know what, I bet making pancakes is hard. I don't think I'm gonna make pancakes." This recipe has, like, big pictures and small little bursts of text, and it could be literally the same text, but glancing at it I might think to myself, "I bet pancakes are easy to make. I think I'm gonna make pancakes." A two-minute video? Forget about it. We're making pancakes.

Now, if we think about decisions people need to make around, say, public transportation, on my left I see a schedule that's supposed to help me get from the suburbs to downtown Philadelphia. And I might glance at that and think, "There is no way. This is impossible. This is gonna be way too difficult. I'm just gonna drive." On the right is an app that's supposed to tell me when trains are coming. And again, I haven't read a single word yet, but as soon as I glance at that, I think, "I bet public transportation's pretty easy to use." I'm much more likely to use it. Here's another way this comes up. If we were voting and raising hands, I would say, "Hey, raise your hands if you think Rosa Parks was born in 1912." And then I would say, "Raise your hands if you think Rosa Parks was born in 1914." If we were doing this all in person, the odds are, and we'll give you a sec to vote, but the odds are most of you raised your hand for 1914. Now, first off, both answers are wrong. She was born in 1913. But the reason most people, when shown this, pick 1914 is that the statement is easier to read, and if something is easier to read, we assume it is more true.

But it gets worse. If something rhymes, we actually think it's more true. And this has consequences. So, what's going on here is the idea that our minds love things that are easy to process. We love certainty, and we hate, hate uncertainty. That's one thing I've learned by looking at bias after bias after bias. They're all shortcuts to certainty. Things that are easy to process feel more certain. Things that are easy to remember feel more certain. If I asked you what you got for your fifth birthday and I said, "You got a truck, right," you'd be like, "Oh, I don't know. I can't remember that well. I don't know if that sounds right." If I were to ask you what you had for breakfast this morning, you'd be able to remember that pretty clearly and be pretty certain about the answer to that. It's easy to remember. It's easy to process. Things that rhyme are easier to remember and easier to process. Things that are clear, in big fonts and plain language, are easier to process, and that's why they feel more true.

Now, this becomes important when it comes to things that we actually need to believe are true. There is a crisis in this country around African Americans believing health information that comes from the government. In 2002, when responding to the statement, "The government usually tells the truth about major health issues, like HIV/AIDS," only 37% of African Americans agreed with that statement. By 2016 that had dropped to 18%. Now, we could do a whole other talk about why there are legit reasons African Americans have concerns about health information that comes from the government, but this is information that could save lives. Needless to say, this is even more true now in the age of COVID.

So, if it needs to rhyme, if it needs to be in big, bold fonts, if it needs to be in plain language and have pictograms, so be it. Now, in the process of writing the book my editor thankfully challenged me on this and said, "Well, that's nice in theory, but has this ever actually worked?" So, I did some digging, and I found out that yeah, it actually does. So, this is from one study. Pregnant smokers and ex-smokers who received a specially designed intervention with materials written at the third grade reading level were more likely to achieve abstinence during pregnancy and six weeks postpartum. So basically, you have women who were smoking during pregnancy and after, and when they were given instructions written at an easier-to-process reading level, they did better.

Similarly, there was a plain language, pictogram-based intervention to help people with dosing instructions. These were caregivers, and they were helping give medicine to people. And adherence went up. So people actually took the drugs, which is a huge problem in medicine. Even if you can get someone the medicine, the likelihood of them actually adhering to the regimen is not great. And dosing errors went down. So, dosing errors went down and adherence went up when the information, the instructions, the education was in this pictogram-based, plain language format.

Now, you might say, "Dave, that's great for plain language and pictograms and blah blah blah, but rhyming, really? Is rhyming really gonna make a difference? Really?" So, as it turns out, there was a thing called Click It or Ticket. And this was a program that started out with a legislative arm, where they made it so that you could get a ticket for not buckling your seatbelt. And that worked pretty well. More people buckled their seat belts, mostly in the older age range. For younger people, though, it wasn't as effective. So they introduced Click It or Ticket, and they found that national belt use among young men and women ages 16 to 24 moved from 65% to 72% and from 73% to 80%, respectively. Now, just to put that in real human terms: for every percentage point increase in seatbelt use, 270 lives are saved. So if you do the math, it comes out to roughly 4,000 lives saved through, in part, rhyming.
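One way to read that back-of-the-envelope math, using the figures above: (72% - 65%) + (80% - 73%) = 14 percentage points, and 14 x 270 lives per point is about 3,780, or roughly 4,000 lives.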

So, it's silly, but it works. There are, in fact, Federal Plain Language Guidelines, established under the Plain Writing Act of 2010, I believe, that say, "Look, if you're a website that's receiving federal funds to provide a service, you need to talk about that service in plain language." 18F, among others, has fantastic plain language guidelines that you should check out some time. Another weirdness around how we process these things is price differential. So, if you see a sale price on a website, usually what you see is the sale price right next to the original price, with the original price kinda slashed out. However, if you move that original price further away horizontally, it looks like a better deal even if it's the exact same amount. And this is even weirder because it only works if you move them further apart horizontally. Vertical distance doesn't make any difference. We are very weird people.

Another easy-to-process thing is name pronunciation. So, there's actually a name pronunciation effect. If a name is easier to pronounce, and easier to pronounce is culturally contextual, right, but if I'm in an environment where Smith is easier to pronounce, that's the name that's gonna get an advantage. And this extends to things like voting preferences and job preferences. In fact, they did a study at law firms, and they noticed that the further up you went in the chain of command at a law firm, the easier the names were to pronounce. This even affects stocks. So if you've got an IPO, and the name of your company is easier to pronounce, it generally does better. In fact, even if the little ticker abbreviation is easier to pronounce, it will do better. So, people will invest real money in things that are easier to process.

I wanna talk a little bit about the bandwagon effect, and it's pretty much what you're guessing. If a bunch of people are doing something, it seems like you should be doing it too. A great example is an experiment where you show someone this card and say, "Okay. A, B, or C. Which of those lines is the most like exhibit one?" And if it was just you and me, you would take one look at that and say, "Well, A, obviously." But it's not just you and me. I put you in a room with seven other people and they all go first, and they all say B. So by the time I get to you, you say, "B?" And when they ask people afterwards, they'll be like, "Yeah, I wasn't sure if I didn't understand the question, or maybe everybody else knew something I didn't know." One interesting thing about the bandwagon effect is if you have even one other person in the room who says A, you now have the confidence to say A. So, this is why if you're doing any kind of exercises with clients or colleagues where you need honest opinions and honest feedback, you need to make sure that the dissenters have line of sight to each other or that nobody has line of sight to each other. So, if you're doing, let's say, a retrospective and you want everyone to write down what went well, and you just say, "Hey, everybody, raise your hands and tell me what you think went well," then if the most powerful person in the room says, "This thing went well," everybody else who doesn't think it went well might not say anything, because it's the most powerful person in the room. But if everybody writes down on a sticky what they think went well, and then you start putting that on the board and sorting it, you get a more honest opinion. And if two people had unpopular opinions, they'll know each other had unpopular opinions. They'll know they're not alone. They might be more willing to speak up.

So, the framing effect is, for my money, the most dangerous bias in the world. It starts out innocent enough. So, let's say I'm in a store and I see a sign that says, "Beef, 95% lean," and right next to it is a sign that says, "Beef, 5% fat." Which beef do you think people are gonna buy? It's the same thing, but I've framed it in a way that makes you think differently about the decision. Now, this is all well and good when we're just talking about beef, but what if I were to say, "Should we go to war in April, or should we go to war in May?" And as it turns out, wars have been started over less. What did I do there? We're no longer talking about whether or not we should be going to war in the first place.

Now, as it turns out, if you are bilingual or speak more than one language, you have a kind of weapon against the framing effect. If you think about the decision in your non-native language, you are far less likely to fall for the trap. I speak a little bit of French, so if I were thinking about the beef decision, I might say, let's see, boeuf, beef, that's boeuf, that's a lot of vowels. 95%, that's... maybe. Ah, I don't know. And by the time I've done all that thinking, I see right through the illusion. Part of what's happening is... "Thinking, Fast and Slow" by Daniel Kahneman is kind of the seminal work on this. And the thing he points out about cognitive bias is that when you're thinking quickly, that's when you're most likely to fall for these traps. If you can slow down your thinking, and thinking in your non-native language slows down your thinking, you are far less likely to fall for a lot of these biases.

Now, as it turns out, you can actually use some of the framing effect for good. So, there's an experiment where I'd show an image, this image, and I'd ask an audience who sees this image, "Should this person drive this car?" And what you'll get is a policy discussion. And some people will say, "Oh, you know, old people are bad at everything. Don't let them drive." And other people will say, "Oh, that's ageist. What are you talking about? Let people do what they want." What I will learn by the end of that conversation is basically who's on what side. Now, I can show this same image to a different audience and ask, "How might this person drive this car," and what I'll get is a design discussion. And some people might say, "Oh, we could change the shape of the dashboard or we could move the steering wheel." And what I'll learn by the end of that conversation is several ways that person might be able to drive that car. And all I did was change a couple of words there, but it changed the frame of the conversation. I could even go one better and say, "How might we do a better job of moving people around," because that's why the person was in the car in the first place, because they were here and they wanted to be there. And if I frame it this way, now things like public transportation are on the table.

And this comes into effect when you think about things like student evaluations of teachers, which is another area where women get the short end of the stick. So, when it comes to these evaluations, women are generally judged more harshly than their male counterparts. So, the University of Iowa said, "Okay, we're gonna put a couple paragraphs in at the beginning of the survey that we send out to students that are basically gonna say just that. Look, this is an area where, traditionally, women are judged more harshly. So when you're making your evaluation, make sure you're evaluating on core concepts and not things like appearance." And the students who got that little intervention, that frame, judged women and the courses that women taught more positively than students who did not get that framing.

I wanna finish by talking about our biases, and these are the most dangerous ones because we don't necessarily know they're there. One of them is notational bias. So, one way to think about notational bias is sheet music, Western-style sheet music. I grew up playing the saxophone, and so I thought this was how all music is expressed and there's no music in the world that you can't express this way, which turns out to be not true at all. There's tons of Asian and African music for which this is completely inadequate. So if you make this the default, it's easy to erase lots and lots of culture.

Something more common for folks like us is to think about when we're asking for personal information. If I'm creating a form like this and in my head there's only two genders, it is very easy for me to bring that bias into the way I design that form, and in doing so erase god knows how many identities. Now, while we're on the topic of asking for personal information. So, there's a bias called the self-serving bias, and the way it works is if something goes well, that's my doing. If something goes poorly, that's on you. And when we interact with computers, that usually holds. If I'm doing some kind of transaction on a computer and something goes wrong, I'll blame the computer. If it goes right, I'll credit myself. Unless I've given the computer a lot of personal information. The more personal information I give the computer, the more likely I am to credit the computer if something goes right and blame myself if something goes wrong.

So, for this and many other reasons, we need to be very careful and thoughtful about when and how often we decide to ask for personal information, because every time we do it we are potentially engendering an unhealthy relationship between people and their technology. This one always breaks my heart. Until 1986, The New York Times prohibited the use of Ms. as an honorific for women. So, the way it would work would be at the first mention of a woman's name you would say her full name, and at every subsequent mention you would say either Miss last name or Mrs. last name. And the pattern, and we know how important patterns are, the pattern this would set up is that the most important thing to know about a woman is whether or not she's married, and then, you know, maybe her last name. And no such restriction, obviously, existed for men, who were using Mr., which tells you nothing about marital status. So, when we think about editorial guides and guidelines, when we think about structured content, because that's essentially what this is, we need to be very careful of the choices we make, because you can easily scale a bias. Think about how many articles went out with this pattern being repeated over and over and over again because it was a rule that had been instantiated.

"Language doesn't just describe reality. It shapes it." Mbiyimoh Ghogomu said this, and I think it's very true. And in fact, it's kind of legally true. So, back in the day you had Vice President Dick Cheney, and when he came into power, he basically was asking his lawyers how he could get away with stuff. So, they were very smart, and they said, "Look, you are the vice president, "which means you are in the executive branch. "But you cast the tie-breaking vote in the Senate, "which means you're also in the legislative branch. "But wait a minute. "How can you be in both "the executive branch and the legislative branch? "That can't be true. "So, if you can't be in both, maybe you're in neither. "And if you're in neither, "maybe you don't have to follow the rules of either." And he didn't.

A similar thing happens these days with big tech companies. When it suits them, Facebook says, "Hey, we're a publisher. We're awesome. New York Times, come publish with us. We know what you're like." But when someone points out there are rules for publishers, Facebook says, "Whoa, I don't know what you heard, buddy. We're a platform."

There are tools out there to help us think about language and use it properly. So, Radical Copyeditor is a really good resource that helps you think about how to write for and about people who usually don't get a lot of say in how they're written about. Another one is Textio, which is really good at helping you write job listings that are more inclusive, because not everybody wants to be a rock star or a ninja. Another thing I wanna point to is called evidentiality. And the way it works is English uses verb tense in a pretty basic way. It pretty much just tells you when something happened. Bob went to the store. Bob is going to the store. Bob will go to the store. Some languages, Turkish for example, actually do a little more heavy lifting with their verb tense, and it will tell you how you know something happened. So, there's one verb tense for, "I personally saw Bob go to the store." There's a whole other verb tense for, "Somebody told me Bob went to the store," and then a whole other verb tense for, like, "I read on the internet that Bob went to the store." The point is, I can't tell you Bob went to the store without also telling you how I know Bob went to the store.

Now, think about how society would change if you were on the hook, for everything you posted and everything you said, for how you knew what you were saying. And just imagine how, for some reason, that might be helpful in this day and age. So, English, sadly, does not have this, but I think there are ways to creatively think about introducing it.

So, this is an article that came out back when the first trailer went out for the new 007 movie. Now, remember how I said if something's easy to read, it's easier to believe? The reverse is true. If something is harder to read, it's harder to believe. So, the first paragraph in this article is just, you know, confirmable facts. This is the name of the movie. This is who's in it. The second paragraph is about a rumor about why the first director was fired, and it is harder to read and therefore harder to believe. And it should be. It's a rumor.
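If you wanted to bake that idea into structured content, here's one hypothetical sketch of what it might look like. None of this comes from the talk, and the field names are invented; the point is just that the model won't let you store the "what happened" without the "how we know."

```typescript
// Hypothetical content model: a claim can't be stored without saying how we
// know it. Field names and values are illustrative, not from any real system.
type Evidence = "witnessed" | "told-by-someone" | "read-somewhere" | "rumor";

interface Claim {
  statement: string;
  evidence: Evidence; // required, like an evidential verb tense
  source?: string;    // optional pointer to where it came from
}

const claims: Claim[] = [
  { statement: "Bob went to the store", evidence: "witnessed" },
  { statement: "The first director was fired over a dispute", evidence: "rumor" },
];

// A template could then render low-evidence claims in a way that is literally
// harder to read, the way the 007 article styled its rumor paragraph.
const styleFor = (c: Claim) => (c.evidence === "rumor" ? "small, low-contrast" : "clear, bold");
claims.forEach((c) => console.log(c.statement, "->", styleFor(c)));
```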

So, I challenge us to start thinking creatively about ways we can introduce evidentiality into our communications.

Told you we were gonna come back to this one. So, for a very long time I had a misconception about what the scientific method was. I used to think the idea was you had a notion about how the world worked, you tested that, and if you got a good result, you'd have a whole bunch of other people test it, and if they got the same result, we'd say, "Yay, there's a law. Let's move on to the next hypothesis." Not really how it works.

I talked to some actual scientists, and they clued me in. It's more like this. I have an idea about how the world works, and I test that. And if I get a good result, you test it, and somebody else does the same, and so on, and if we all get the same result, great. I get to spend the rest of forever trying to prove myself wrong. I have to ask myself, "If I'm wrong, what else might be true?" and then go and try to prove that. That is a much more rigorous process, which is much closer to the actual scientific method, and the scientific method was created to fight confirmation bias.

Now, as designers it is very easy for us to leave good design on the table because we've fallen in love with an idea we never test. Let me show you how easy. Let's say we're gonna play a game with the computer, and the computer is gonna give us this image and say, "Put whatever number you want where that question mark is," and the computer will tell you if it fits the pattern. Put as many numbers into that question mark as you want. And when you're sure you have an answer, tell the computer what you think the pattern is. If you're like me, you start out trying this. And the computer says, "Congratulations. That fits the pattern. Would you like to try another number?" And if you're like me, you say, "Nah, I got this. Hold my beer. The answer is: it's even numbers." And the computer says, "No." And the reason the computer says no is because I didn't try this. The pattern is not even numbers. The pattern is that every number is higher than the number that came before it, which is a much more elegant solution, and not for nothing, way easier to code. But I never got to that more elegant solution because I was so in love with my even numbers idea. And woe to us if we come up with a design we love, or god forbid a design our client loves or our boss loves, before we've had a chance to really look and see if we missed something better.
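Here's a tiny sketch of the two rules in that game, assuming the sequence on the slide is something like 2, 4, 6. It isn't code from the talk; it just shows why a test like 2, 4, 6 feels like confirmation when it actually fits both rules, and why you only learn something by trying a number that would break your own hypothesis.

```typescript
// My hypothesis: the numbers just have to be even.
const allEven = (nums: number[]): boolean => nums.every((n) => n % 2 === 0);

// The actual pattern: each number is higher than the one before it.
const strictlyIncreasing = (nums: number[]): boolean =>
  nums.every((n, i) => i === 0 || n > nums[i - 1]);

console.log(allEven([2, 4, 6]), strictlyIncreasing([2, 4, 6])); // true, true: fits both, proves nothing
console.log(allEven([2, 4, 5]), strictlyIncreasing([2, 4, 5])); // false, true: the test I never ran
```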

Now, there's a great method for making sure we don't do that, and it's something that the military uses and journalists use, which, how often do you get to say that? It's called red team, blue team. The idea is you have a blue team, and they're gonna go through and really develop the idea, do the research, get themselves maybe to a wireframe or almost-prototype stage. But then the red team is gonna come in for one day, and the red team's job is to go to war with the blue team. They're there to look for every false premise, every assumption the blue team didn't realize they were making, every potential for harm that they didn't see because they were so in love with their idea. And what I like about this approach is that it's fairly economical. I don't have to go to my COO and say, "Okay, we have to spin up two teams for every project now, and they've gotta check each other's work all the time." No, I need one team for one day, and it's gonna be a lot less likely that we put something harmful out into the world.

Another great approach for this is called speculative design, and it's kind of like "Black Mirror". I don't know how many of you watch "Black Mirror", but it's basically "The Twilight Zone" for tech. You get some idea for some near-future tech and you see how it would play out if real human beings got to use it, which is usually terrible. It is my assertion that anybody who's working on a new technology should, by law, have to write a Black Mirror episode about it. So, this is an actual job, speculative design, thinking about these possibilities. So, Superflux went to the United Arab Emirates, who were trying to figure out, "What should we do about energy going forward? Should we continue with fossil fuels? Should we investigate renewables?" And one of the things that Superflux did was to say, "Okay, let's figure out what your air quality would be like 10 years out, 15 years out, 20 years out if you continue down the path of fossil fuels." But they didn't just figure it out. They bottled it and made them breathe it. And by the time you get to the end of that timeline, it's unbreathable. So, correlation or causation, at the end of the conference where they did this experiment, the United Arab Emirates announced they were going to invest $150 billion, I believe, in renewables.

The final bias I wanna talk about is called déformation professionnelle. Told you I speak French. And it's the idea that you see the whole world through the lens of your job. Now, in the workaholic society we live in, that might seem like a good idea, until it's not. So, the paparazzi who ran Princess Di off the road that night probably thought they were doing a good job. Technically speaking, they were doing an amazing job. They were getting really hard-to-get photos that were gonna fetch them a really high price. But what they weren't doing a good job of was being human beings.

When one of the former police commissioners of Philadelphia first got the job, he asked his officers, "What do you think your job is?" And many of them said, "To enforce the law." And I think we can look around today and see a lot of them still believe that. But he said, "Okay, that seems like a reasonable answer, but what if I were to tell you your job is to protect civil rights?" And again, I think we can say that if that were the answer, we'd be seeing a very different world today. But that's how important it is, the way you define your job. Their job was way harder than they thought it was, because protecting civil rights encompasses enforcing the law, but it is a much higher standard, one that forces you to treat people with dignity. You can't just shoot them.

I've had this slide in the deck for two, three years now, and it has obviously become more important than ever. And I still get very emotional when I get to it. But it has made me only firmer in my conviction that how we define our jobs is crucial. And I will submit that our jobs are harder than we think. Our jobs are not simply to make cool shit. No, we need to define our jobs in a way that allows us to be more human to each other.

So, people are working on this. Mule Design's Mike Monteiro produced "Design Ethics", a little handbook that's basically a Hippocratic Oath, a "First, do no harm," for designers. The Design Justice Network is doing amazing work in this area. Their 10 principles are a must-read. But just to point out the first two principles. "We use design to sustain, heal, and empower our communities, as well as to seek liberation from exploitative and oppressive systems." That is a hell of a first line. "Two, we center the voices of those who are directly impacted by the outcome of the design process." How often do we think about that second one when we're actually doing our day-to-day work? And Erika Hall has put this, I think, beautifully. She talks about the difference between user-centered design, which is what we think we're practicing, and shareholder-centered design, which sadly is what we're usually practicing.

The Markkula Center for Applied Ethics has pointed out that, "Hey, we've been thinking about ethics for a really long time here. Let's take advantage of the work of thousands of years of people thinking about gnarly problems like this," and they've put out good guidelines for the questions to ask yourself. Another version of this that I love is the Tarot Cards of Tech, an interactive deck where you can go through and click on different cards for provocative questions like, "How might cultural habits change how your product is used, and how might your product change cultural habits?" Imagine if Twitter had asked itself this before it launched. Another great one is 52 UX Cards to Discover Cognitive Biases. And what I like about this particular offering is that the cards come with exercises you can use to really get the most out of thinking about the biases that are being introduced. Humane by Design is a beautiful site that's almost like an ethical design system. It has great little interactions that give you clear guidelines, like promoting awareness, so that if someone's using your tech, they know how often they're using it. Make that easy to find out so they don't just use it all the time without realizing. We see this in the world of software development.

The Never Again pledge came about when data scientists and people who work with big datasets were being asked to do some really unethical stuff with those datasets, for example, around things like immigration. And they pointed out all these times in history when data scientists were asked to do some horrible stuff, because this is not new, and they said, "We are not going to do that." And people started putting their foot down. We even saw this with Google's Project Maven, which was a battlefield AI project. A lot of engineers there said, "We did not get into this business to build weapons," and basically said they were gonna walk. And Google backed down. They walked away from a $250 million military contract. Then they turned right around and did Dragonfly, which is censored search in China, and that battle is still being fought.

"We must rapidly begin the shift "from a thing-oriented society to a person-oriented society. "When machines and computers, "profit motives and property rights "are considered more important than people, "the giant triplets of racism, materialism, and militarism "are incapable of being conquered." Now, this is not some software guru at some TED Talk. This is Dr. Martin Luther King, Jr. 50 years ago he saw this, and it's only more true today.

So, the question I'll ask all of us is, how can we define our jobs in a way that allows us to be more human to each other? Thank you.

If anyone wants to grab virtual coffee and chat about these things, I've got a link here. I'll throw it up in Slack. And like I said, I've got a book coming out this summer, it should come out in August, called "Design for Cognitive Bias". And if you wanna learn more about it, I'll throw a link about that in the Slack too. Thank you.
