A video shared by President Donald Trump, edited to make it appear that presidential candidate Joe Biden was endorsing Trump's re-election during a campaign rally Saturday, was deemed manipulated content by Twitter, a first for the social media company. Facebook, however, did nothing to flag the video as false content.
Biden had stumbled over some words and the video stopped short of including his correction. The video is the latest cheap fake to raise controversy in recent weeks.
University of Michigan School of Information Professor Clifford Lampe explains cheap fakes and the difficulty in getting the platforms to police them.
Deepfake, cheap fake, dumb fake: all terms for misinformation. Can we start with definitions of these?
Lampe: We’ve certainly heard a lot of these terms recently. A deepfake is a special class of false information: you use machine learning or artificial intelligence to map one person’s facial features and overlay them onto another face. So I could, for instance, basically steal your face. The computer uses advanced computational tools to create very realistic fake content that makes it appear a person said something they wouldn’t normally say. That’s very different from a cheap fake or dumb fake, which are two names for the same thing. Those are where you just use common editing practices to make a video that’s misleading.
What is an example of a cheap fake?
Lampe: A classic example is a video that went around a few months ago of Speaker of the House Nancy Pelosi giving a speech. The makers of the video just slowed the frame rate down a little bit. She was standing at the podium, and slowing the video down didn’t change her movements at all, but it made it sound like she was speaking in a slurred fashion. That’s an example of a very simple editing tool you can use to make it sound like a person is in a state they’re not in.
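To show just how cheap this kind of edit is, here is a minimal sketch of the playback-speed arithmetic behind it. The 75% speed factor and the file names are assumptions for illustration, not details confirmed by this interview:

```python
# Sketch of the "slowed playback" cheap fake described above.
# Assumptions: a ~75% playback speed and hypothetical file names;
# this is illustrative only, not the actual edit used.

def slowed_duration(duration_s: float, speed: float) -> float:
    """Duration of a clip after playback at `speed` (speed < 1 slows it)."""
    return duration_s / speed

# The equivalent one-line ffmpeg edit (hypothetical filenames):
#   ffmpeg -i speech.mp4 -vf "setpts=PTS/0.75" -af "atempo=0.75" slowed.mp4

print(slowed_duration(60, 0.75))  # a 60-second clip stretches to 80.0 seconds
```

No special tooling is required beyond a free video editor or a single command, which is exactly what makes this class of fake "cheap."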
The other cheap fake we saw recently, also involving Speaker Pelosi, came during the State of the Union, when she ripped up her copy of the speech. A military man had been reunited with his family during the speech, and the video was edited to make it look like Pelosi ripped up her speech just as that man was reunited with his family. So even though those events happened an hour or more apart in real time, it looked like they were happening simultaneously.
And that’s the cheapness of it. It’s a super easy edit. The fake part of it is that you’re creating a narrative that’s intentionally misleading.
That’s another really common, super easy cheap fake: you remove a clip of video from its full context, and it makes it sound like a person said something they didn’t say. For another example, former Vice President Joe Biden was giving a talk about corruption in Ukraine. Some conservative commentators took a portion of the video where it sounds like he’s admitting to influencing campaigns in Ukraine. But the full video explains in much more detail and context what’s going on. The short clip, though, is very convincing. Now, it’s not a deepfake. Vice President Biden did say those things, but they’re edited in such a way as to create a narrative that’s not true.
Journalists for decades have actually had to work very hard not to produce cheap fakes accidentally. A big part of journalistic integrity and professional journalistic practice is to be careful when they create video so they’re not creating misleading narratives. But of course, we’re in a new media environment where there’s hyperpartisan news sources that are invested in shaping a narrative that’s not necessarily a neutral point of view.
When it comes to deepfakes, we have some people working on technology to help spot them. What is happening to monitor cheap fakes?
Lampe: The groups that are most effective at monitoring these are the fact-check organizations, FactCheck.org and PolitiFact and Snopes, all of which are working overtime to try to sort out what is real versus not real in the current media environment. A lot of this ends up being done by amateur detectives: if something looks too good to be true, or looks like it can’t possibly be true, they’ll go back, find the original and do side-by-side comparisons themselves.
But at this point, I think, we’re entirely too dependent on those kind of amateur sleuths in the media environment to determine what’s fake and what’s not. The platforms, Facebook and Twitter and groups like that, have mostly washed their hands of this. They’re not willing to take a strong role. Twitter, for instance, with the Pelosi State of the Union video, very explicitly said no part of that information is fake: all those things happened, maybe not in that sequence. So they refused to take that video down, because they didn’t see it as a fake video.
Some of these groups like Snopes are offering solutions that are dependent on me going to their sites. Platforms aren’t taking them down. So it’s really pretty ineffective, isn’t it?
Lampe: Pretty much. In fact, a lot of my conservative friends, as an example, have given up on Snopes. They find that a lot of the fact-checking by such sites has a liberal-leaning bias. And a lot of people don’t seek fact-checking at all, because the goal of these fake news stories and deepfake images and videos isn’t to inform. It’s to persuade and to create an emotional state.
One of the most important parts of disinformation, and misinformation more broadly, is that it’s not necessarily about the information itself.
This sort of thing is not new; it’s historical. The media environment has always been kind of a partisan mess. Our third presidential election was Thomas Jefferson versus John Adams, and dozens of newspapers existed in every major city in the United States at that point. They were very openly hyperpartisan, Federalists versus Democratic-Republicans at the time, and the papers would print straight-up lies or innuendo about the candidates.
There were all sorts of fake stories about Thomas Jefferson and his time in France: being too close to the French aristocracy, wanting a French king as part of the American system. His opponent, John Adams, said Jefferson had sold out to the British and was somebody in league with the devil. Any story you can imagine was out there; it makes our current environment actually look kind of tame. The other time we’ve seen this level of hyperpartisan divide in the United States was right before the Civil War. The elections right before the war were also rife with really strong hyperpartisan news stories.
Andrew Jackson, for instance, came into the White House as a populist in the decades before the Civil War and was hugely hated by a lot of the establishment. There were tons of news stories attacking him and his wife, who historians now think suffered from undiagnosed mental health problems. She died shortly after the election, amid all the stress of the news that was coming out about her and the really personal accusations being made about the Jackson family.
So this has been part of our environment. In the early 20th century, journalistic practice became much more consolidated and much more professionalized. Instead of dozens of newspapers per city, we typically had a few, and schools of journalism emerged around that same time. The whole practice became much more professional, so we lost that hyperpartisanship for a while.
But with the rise of social media, it appears we’re back in an environment of hyperpartisan media production.
As you say, we came to this idea that news coverage should be objective. Can we get to that place where people say enough is enough? Do you think it will police itself eventually?
Lampe: I don’t think it will police itself, partly because this content plays on identity and emotion. Social media were invented in an environment where they were intended to foster interpersonal relationships.
If you look at all the reaction buttons on Facebook, it’s like and love and sad and laughing. And all of these are very emotional responses. It was not designed to be a civic debate platform. And it obviously is not a civil debate platform. Same with Twitter.
We don’t have any mechanisms in there to slow down thinking. Instead, we do exactly what you shouldn’t do when it comes to political deliberation. We trigger those emotional responses. Depending on the groups that you follow, your own kind of homophilous social network, and the memes that you get shown, it’s very easy to create an environment where you’re not exposed to alternative viewpoints, or, if you are, it’s to a caricature of alternative viewpoints presented by people who think like you.
So, unfortunately, I don’t know that it will police itself, because it feels too darn good for people. You feel righteous when your tribe is shown to be right. When I see Nancy Pelosi slurring, that reaffirms my identity as somebody who’s a smart, good person who would never side with that enemy tribe. My tribe is good. The other tribe is bad. And that’s where we are right now. You can see the results in Pew Research studies: we’re as partisanly divided as we were at the time of the Civil War, far more than we’ve been at any point in at least the past 70 years.
Is some policy solution likely?
Lampe: I don’t think it is likely. No, there are two policy barriers in place, both of which are good in their own right but present obstacles we have to think about. The first is the First Amendment. We have a very strong First Amendment tradition in this country. Most other liberal democracies look at us like we’re a little bit fanatical about this, but it is such a strong part of our identity as Americans that free speech is sacrosanct.
That makes it very hard to regulate speech, rightfully so. Now, there are limitations to the First Amendment, laws against things like libel and slander, and deception has typically been a limit on free speech rights. But we still haven’t thought through what that means in a social media environment.
The other big barrier we have is Section 230 of the Communications Decency Act of 1996. The act was largely about protecting people online, but one section of it indemnifies large platforms.
That was the act that said Comcast couldn’t be sued for the packets of internet traffic it carried on its lines. Well, that protection all got transferred to social media platforms. So YouTube, Facebook, Twitter, Instagram, all of these platforms, are not legally liable for any of the things carried on their platforms. This has been challenged many, many times in court and, so far, Section 230 has been bulletproof.
Let’s bring this down to the individual. What can I do to make myself savvy?
Lampe: I think it’s tempting to believe there’s a special crew of people who are particularly susceptible to cheap fakes or deepfakes or false content. But the consistent research on the broader internet shows us that everybody at some point shares bad information and gets activated by emotional content as opposed to rational content. So the first thing everybody can do is look inward when sharing a piece of content. If you feel especially happy or angry or anything when you’re sharing content, at least double-check it. Why are you having such a strong emotional reaction to the content you’re sharing? That strong reaction is likely a sign the information triggered emotional, early-stage thinking in you, not more rational thinking. So be self-aware about why you’re sharing and what kinds of information you’re sharing. Other things you can do, of course, are the very common, tried-and-true media literacy things. One is to check your sources. Is it a reputable source?
Does it appear there’s an agenda behind why this thing was written? Who wins and who loses? Follow the money. When you’re thinking about the news source, ask, “Why would they post this particular thing if I’m only hearing about it from a relatively minor source, as opposed to a much larger source or multiple sources all telling me the same thing?” Being suspicious, unfortunately, of all the things you’re seeing is a state I think we have to be in when there’s so much deception out there.
And then like I said earlier, if it’s too good to be true, if it confirms a bias you hold or belief that you think you want to share with people, you should be a little suspicious of that as well. So always think again, why is this story being shared? And can I confirm this across multiple sources at the same time?
And then I think the other thing that you can do is read the better news articles and better news sources on the side of the partisan divide that you don’t particularly ascribe to; have a broad ecology of news.
You’ve said recently that one of the reasons cheap fakes are dangerous is the way they are being distributed now: not by media sources but by social media. And we have politicians who are distributing them.
Lampe: That’s right. In terms of who’s distributing cheap fakes and deepfakes, I think in the 2016 election we saw a lot of this coming out of adversarial countries. Russia, for instance, was very much tied to the Internet Research Agency, and a lot of content produced across the pond made its way into the United States through various deceptive means. I think that has shifted. It’s still very much part of our media environment, but domestic campaigns have also realized how effective those operations can be and are starting to take advantage of them, really adopting these deepfake and cheap fake tools and distributing them.
The other thing that makes the social media environment very different from what we’ve seen before is micro-targeting. That’s the real magic of social media. Creating a fake video is super easy, but without any means of distribution it wouldn’t go very far. With micro-targeting, I can find demographics of people and buy access to them.
We know famously of Cambridge Analytica doing this during the 2016 election, but there are now dozens, if not hundreds, of companies that do this at a very deep level.
Misinformation and disinformation, and the cheap fakes and deepfakes, are meant to create a type of apathy. You inoculate your true believers against contradicting information, and you discourage everyone else from even trying to seek information. The environment I think people are very purposely trying to create is one where truth doesn’t matter anymore. And when truth doesn’t matter, what you do depends on your identity.