The Shape of Mis- and Disinformation

"...To understand the many forms of misinformation and disinformation on social media, we recently spoke with Claire Wardle, the executive director of First Draft, a nonprofit news-literacy and fact-checking outfit based at Harvard University’s Kennedy School, for Slate’s tech podcast If Then. We discussed how fake news spreads on different platforms, where it’s coming from, and how journalists might think—or rethink—their role in covering it. The interview has been edited and condensed for clarity.

"April Glaser: I want to start with definitions, because you’re an academic. I want to know what we should mean by misinformation and disinformation, because the problem has often been reduced down to the term fake news—or as Facebook calls it, “false news”—and I want to know what are the different types of not-accurate information out there, and how do you define the space?

"Claire Wardle: You’re right, I do love definitions, and that’s partly because this space is really complex, and so that term F-asterisk-asterisk-asterisk news is just used as this throwaway term, and we’re not actually clear what we mean. Misinformation is false information, but the people who share it don’t realize that it’s false. So that’s my mum resharing an image of a shark during a hurricane that she doesn’t understand is not real. Disinformation is also false information, but it’s actually shared by people who know it’s false, and they’re trying to do harm.

"I also talk about a distinction called “malinformation,” which is actually genuine imagery or content that is used to cause harm, so that could be, for example, leaked emails. They are genuine emails, but they are shared to cause harm. It could be revenge porn. It could be an old image from a previous breaking news event that recirculates that is a genuine image. The reason that those definitions are needed, I think, is because this is a complex space, and that one term does not reflect that complexity.

"Glaser: As journalists, the concept of “F-asterisk-asterisk-asterisk news,” as you put it, or “false news,” or “not quite correct information,” is kind of really offensive to us because our whole job is to not write false news, right? If we do, we’ll be fired. And so I and Will, we have a bias against false reporting. That’s what we do. With that in mind, what would you say the role is of journalists in fighting disinformation? Is my job to debunk fake news now, or is it to report on all the other tragedies in the world, and how do I deal with the fact that this is obviously something that I’m opposed to?

"You’re absolutely right, and that’s why people go, “Oh, you’re stupid not using that term.” The reason I refuse to use the term is because it’s actually being used against the news industry. It’s been weaponized, and it’s being used by, particularly, politicians around the world to describe any type of information they don’t like. And when it’s being used and targeted against—I’m just going to use the term the mainstream media, which does have professional standards, it does have fact-checkers, it does have multiple pairs of eyes looking at things, it does have a corrections policy, then it’s really important that we’re distinctive when we use these terms.

"So we need to talk to audiences increasingly about “What do you think when I use the term fake news?” Increasingly, the numbers are going up, because people go, “I’ll tell you what’s fake news, Claire. It’s the mainstream media.” So we’ve got a problem here: We in the journalism space are using a term that’s not being used by the audience.

To your question about whether it’s your job to debunk it—that’s also a really, really important question, because ultimately, 20 years ago, journalists never really had to deal with the stuff that wasn’t true. It ended up on the cutting-room floor. It was the thing where people said, well, we deal in truth. We’re journalists. But the problem now is that audiences can get the same information as journalists—for example, during a breaking news event—and they want help navigating that information ecosystem. They are looking to journalists to say “Is that post that’s going viral on Twitter … is that true? Is that video on Facebook … is that true? Can you help me navigate that?”

So what that means is that newsrooms have increasingly had to play a role in helping audiences navigate this space.

"Glaser: Facebook and YouTube have kind of had this flattening effect, where any random person that wants to—a conspiracy theorist or a grassroots journalist, even, which I would say is a good thing in many ways—they can have mass influence as broad as a verified news organization, right? And platforms have certainly played a role in the disinformation crisis that we’re in now where people often don’t know if what they’re seeing is true or not. What do you think the platforms can be doing going forward?

"So a big part of this is that the iPhone was created in 2007. A lot of these social networks were created in 2004, 2005. Our brains are trying to catch up with a new type of information. So, to your point, April, you’re right. Every post on Facebook looks identical. Whether it’s wrong, the New York Times, whether it’s from Slate, whether it’s from a crazy conspiracy blogger, they look identical on Facebook or YouTube.

"And so our brains are desperately looking for heuristics, which are these mental shortcuts we need to help us make sense of information. So in an age of gatekeepers, where we used to go to a newsstand and give somebody some money, and buy a newspaper, our brain would be like, “I’m about to consume some news.” Or at 7 p.m., we’d hear the bongs, and it was like, “Bong, bong, bong, this is the news,” and our brain would be like, “This is the news.” Now, news is a mixture of the fact that a friend is hungover, another friend is pregnant, and there’s just been a new chemical weapons attack in Syria, and my brain does not know how to make sense of all that information in a feed where everything looks the same.

"So, you’re right, we’ve had a flattening kind of move in terms of our information ecosystem, but also visually, everything looks the same, and we haven’t made up for that. And I hope in 30 years’ time, we’ll look back and be like, “Whew, do you remember 2018 when everything looked the same?” Because I think we’ll look back at this period and say we weren’t helping ourselves. Our brains are struggling, but the amount of information we consume every single day on these tiny smartphone screens, we just haven’t thought about how can we help our brains.

"Glaser: Right. It’s hard to know what’s the top-shelf information, and what’s the well, right?

"Exactly. Yeah.

"Will Oremus: Tell us a little bit about the negative effects of misinformation. What’s the worst that can happen?

"Let’s go back to that shark example. Somebody photoshops a shark into an image from a highway, and [they] say this is from the latest hurricane. That’s not great, but I would hope many people now might just say, “Oh, there’s no problem with that.” The bigger problem is when we have misinformation or, more frequently, disinformation where people are deliberately trying to sow confusion, and actually use divisions, particularly social and cultural divisions, to pit people against each other, or to simply confuse people to such an extent they don’t know what to believe.

"That’s a real challenge I see, even something like this podcast, I think, am I actually flagging something that makes people say, “I don’t even know who to trust anymore, Claire”? And so, what we’re seeing potentially is people turning inwards to people that they know, and that’s what we’re seeing in places like India, with closed messaging apps like WhatsApp. When people don’t trust the mainstream media, and they don’t trust Facebook, or Twitter, or these big social media platforms, they are retreating into these closed messaging spaces because partly they don’t know who to trust, and they’re worried that if they’re online, somebody’s going to abuse them or harass them.

"And so what that means is that, as a society, we’re kind of becoming smaller, even though we have all this information we can access. So I think this question of by even discussing it, are we driving down trust? I think we have to be very careful about saying to people, “There’s more information than ever. You can find valid, accurate information. You need to be a little bit wary, but please don’t lose trust in everything,” because I think that’s ultimately the end goal for those people who are trying to sow disinformation.

"Glaser: You brought up something that is really fascinating, and that is the difference between closed ecosystems where disinformation spreads versus open ecosystems. And you mentioned WhatsApp, which is more popular in Brazil and India versus Facebook, which is an open system that’s more popular in the United States. WhatsApp circulates messages in groups of, I think, 256 people maximum, and disinformation has been spreading like wildfire through WhatsApp. Can you break apart the differences between how disinformation spreads or even how it percolates in these two different spaces?

"Yeah, and this is really pertinent at the moment. I think lots of people have been reading about the problems in India where people have actually lost their lives. It’s actually leading to murders and people on the street protesting. And in many ways, you hear people say, “Yes, because social media, like WhatsApp …” WhatsApp shouldn’t actually be described as a social media platform. It’s a closed messaging app, and you’re right to say people can be in groups of up to 256. The average size of a group tends to be about six.

"So what that means is you have people hit very many of these small groups. We are used to thinking in a broadcast way, so whether it’s radio, or whether it’s Twitter, we think, well, there’s one message, and with social media, it can go everywhere. Or radio allows it to go everywhere. With WhatsApp, we don’t have that same mechanism, but it’s jumping very, very quickly between trusted peers.

"So on Twitter, you might say, “Well, that guy’s an idiot. I don’t trust him.” In WhatsApp, you’re more likely to be in a group with your best friends, or family, or co-workers, which means, again, going back to that conversation about our brains … our brains are looking for those heuristics. So actually, if you get forwarded a piece of information from your best friend, your brain’s like, “Don’t really need to check that out because Martha’s always right.”

"So the problem we have with WhatsApp is we need to understand networked environments. In the same way as when everybody talks about Obama in 2008, he understood that, so he had millions of people individually having parties in their house and fundraising individually. He understood that power. When we think about WhatsApp, if you’re trying to slow down misinformation on WhatsApp, you can’t just send out one message that will hit everybody. You need to have ambassadors in every one of those millions of groups say, “P.S., I’ve just debunked this.”

So that is a huge challenge, and within the journalism and communication space, we never had anything like that. We’re kind of back to the era of people in their living rooms, and—I’m British—in their pubs, having conversations within small groups. That’s the challenge with WhatsApp. It’s a very different beast to the social platforms we’ve had before.

Glaser: And this disinformation kind of gets seeded on WhatsApp, then, right? So there might be a piece of disinformation that somebody receives, and then they just spread it.

"Yeah, absolutely. And so, for example, in India, there’s one particular political party that really understands WhatsApp, and they have 1,500 operatives starting lots of small Facebook groups, and then even more are started. And so they understand that networked environment. They understand that they need to create these small groups with tight people, because they are trusted nodes, and that’s why I say we need to kind of understand the networked society, because it basically mirrors society much more closely than a Facebook or Twitter. We don’t normally stand with a megaphone and talk to thousands of people like we can with Twitter. WhatsApp is like our own little rumor mills.

"Because that’s the other thing to remember: As humans, we can’t really stop this because we’ve always loved gossip. We’ve always loved rumors. Because if somebody tells me right now that a best friend is having an affair, am I going to verify it? Probably not. I’m going to pick up the phone and talk to somebody else about it. So when we have those issues with WhatsApp, that’s what’s happening. It’s taking advantage that, as humans, we connect with other humans through information at an emotional level—not necessarily a rational level. And that, in a place like India, where people are actually— … They’ve come to the internet much more recently.

"They haven’t necessarily understood how you need to check information, how everything you see online isn’t true, but at the same time remember, a lot of times they’re getting information from trusted friends, which means their brains are much likelier to say, “Oh, OK. I’m not even going to check.”

"Oremus: To me, that’s a fascinating point. And for one thing, it’s a good reminder that misinformation is not just a newfangled phenomenon, right? Technology is enabling it, and algorithms that make stuff go viral are maybe making it worse. But we’ve probably always had misinformation in just human networks, word-of-mouth networks. And Twitter, it feels toxic or scary sometimes because strangers can yell at you, but you’re highlighting that there’s a downside to a site where strangers can’t yell at you and tell you that you’re wrong, or correct your information easily.

"But I wanted to ask you about the standards that the different social media platforms are using to determine what content is allowed on their platform and what isn’t. Historically, platforms like Facebook and Twitter said, “Look, it’s not our job to decide what people can say. We just are connecting people, and they’ll say whatever they want to say.” Now they’re accepting more responsibilities, but they’re trying to make these distinctions, because obviously they don’t want to be in the business of regulating everything everyone says. So you get stuff like Mark Zuckerberg at Facebook saying that he doesn’t take down posts that are denying the Holocaust because maybe he can’t discern the intent. Maybe people really believe that, and so they shouldn’t be banned just for saying things that are wrong, because we all get stuff wrong. You also hear a standard … Facebook has a new policy where if there’s misinformation that could lead to physical violence in the real world, then they’ll take that down, but they won’t take down other types of misinformation necessarily.

Where do you land on what the right standards are, and how are the platforms doing in terms of finding the line between speech they should allow and speech they should not allow?

"Great question, and, again, I love to think about how historians will think about this era because from the moment the platform started, they recognized that if they had to take responsibility for the content that was on their platforms, they were going to be in trouble, because they were going to have to hire thousands and thousands and thousands of moderators to make really difficult decisions, because anybody who works in the publishing business … talk to any journalist, they’ll tell you, these decisions are really, really hard. So what they’ve tried to say is we are not a publisher. We are simply a communications platform. And now what you see is this kind of frustrating wrangle where people go, “Zuckerberg, just admit that you’re actually a publisher,” and he’s like, “No, we’re just a communications platform.” And he goes round and round and round. The truth is they are a hybrid. We’ve never had anything like a Twitter, or a Facebook, or a YouTube, and so what the problem is— … We haven’t caught up in terms of regulation or legal frameworks. So we’re in this tussle.

"And so what we’re seeing is that evolution. So at the very beginning, they wanted to make absolutely no editorial judgments whatsoever, and I think one of the first cases where they took down content had been a beheading I think in Syria, and actually Facebook said, “OK, we will take this down,” or Twitter did. And that was almost the first stage.

"And then they started saying, “OK, we’re going to take down pornography and extremist content,” because that’s actually illegal speech. And so when we had speech that people agreed was illegal, they could stand behind that, because they said there’s a legal framework.

"What they hate is this middle ground, this gray area, and they’ve been trying to hold back and say, “We don’t want to have to make decisions.” The minute they said this week or last week when they said, “In Sri Lanka, we’re going to have this pilot to say anything that may lead to violence, we will take down.” Now, this is a huge step, and I actually think a step that they need to take, and it’s come about because of pressure, because we’ve seen violence in places like Sri Lanka and India, and they’re under huge pressure both from regulators and civil society to do something. But I think that just shows a natural progression that they’re going to have to own up to the fact that they are publishers, and they’re going to have to actually hire lots of moderators.

"And I think even this time last year, there were a lot of discussions about Facebook Live, and we were seeing people committing suicide. We were seeing drive-bys. … It was awful, and everyone was like, “You can’t possibly let this happen on your platform, Facebook,” and Mark Zuckerberg said, “We’re going to hire 10,000 moderators. We’re going to put A.I. on this to help us alert us to the signals,” and lo and behold, it is happening, but to a much, much lower level.

"The truth is they, I think, are just going to have to hire incredibly high numbers of moderators across the world who speak local languages, have local context, and I think that’s just what they’re going to have to accept. And if they don’t, we will see very, I think, impressive regulation, which they don’t want either. So I think they’re just going to have to throw a lot of money at this problem.

"Glaser: That might affect their massive profit margins, but …

"It’s so massive, I think there can be a bit of a dent."
