
Act Now to Head Off Looming ‘Deepfakes’ Disasters

Facebook’s dilemma over the doctored Pelosi video shows the need for new standards before it’s too late. Here’s a start.

U.S. military personnel use smartphones to take photographs and video footage of U.S. President Donald Trump arriving for his Memorial Day address aboard the amphibious assault ship USS Wasp at the U.S. naval base in Yokosuka, Kanagawa Prefecture, Japan. (Photographer: Kiyoshi Ota/Bloomberg)

(Bloomberg Opinion) -- Parody and satire are a legitimate part of public debate. Doctored videos are all over the place. Some are funny. Some are meant to make a point.

Arguments of this kind are being made to support Facebook’s decision not to take down the doctored video of House Speaker Nancy Pelosi, in which she was made to appear drunk or otherwise impaired. 

One version, posted on the Facebook page Politics WatchDog, was viewed more than two million times in just a few days. There is no question that many viewers thought the video was real.

Acknowledging that it was fake, Facebook said, “We don’t have a policy that stipulates that the information you post on Facebook must be true.”  

Rather than deleting the Pelosi video, Facebook said that it would append a small informational box to it, reading, in part, “before sharing this content, you might want to know that there is additional reporting on this,” and linking to two fact-checking sites that found it to be fake. Facebook also said that it would “heavily reduce” the appearance of the video in people’s news feeds.

Facebook is admittedly in a tough position in striking the right balance, but these steps are an inadequate response to a growing danger.

Also last week, the responsibilities of Facebook, Twitter and other social-media providers were put in sharp relief by an announcement, both creepy and amazing, from engineers at Samsung. They have made major progress in the production of “deepfakes”: faked videos of human beings, alive or dead, that appear real. The new approach can take very few images -- potentially just one -- and produce “highly realistic and personalized talking head models.”

Before long, any public figure, and indeed anyone who has ever been photographed, could be made to appear to say and do anything at all. If Russia continues to seek to undermine the electoral process in the U.S., it will have (and may already have) a powerful tool.

To respond to the evident risks, here is a proposal, meant tentatively and as an invitation for discussion:

Facebook, Twitter, YouTube and others should not allow their platforms to be used to display and spread doctored videos or deepfakes that portray people in a false and negative light and so are injurious to their reputations — unless reasonable observers would be able to tell that those videos are satires or parodies, or are not real.

The proposal grows directly out of libel law, which (simply stated) imposes liability on those who make false statements of fact that injure people’s reputations. Those who make a doctored video or a deepfake might well produce something libelous, or very close to it.

It is true that in New York Times Co. v. Sullivan, decided in 1964, the Supreme Court ruled that the First Amendment imposes restrictions on libel actions brought by public figures, who must show that the speaker knew that the statement was false, or acted with “reckless disregard” of whether it was true or false.

But with doctored videos or deepfakes, the court’s standard is met. Those who design such material certainly know that what they are producing is not real.

To be sure, many doctored videos are legitimate exercises in creativity and political commentary, including satire, humor and ridicule. The same is true of the coming deepfakes.

If videos show the Beatles playing Taylor Swift songs, or Joe Biden and Bernie Sanders dressed in Nazi uniforms and looking like Adolf Hitler, let freedom ring. In such cases, reasonable viewers would not think that they are watching something real.

It is true that the key terms in my proposal -- “false and negative light” and “injurious to their reputations” -- are vague. In some cases, their application would require an exercise of judgment. We could easily imagine serious disputes.

In that light, some social-media platforms, including Facebook, might reject the whole idea and conclude that it’s better to inform people that a video isn’t real, rather than to take it down altogether.

But if a newspaper prints a libelous statement, it’s not enough for it to append a note saying, “This isn’t true.” Many readers will accept the libel, not the note. If the newspaper wants to avoid liability, it will not publish the statement in the first place.

In a sense, doctored videos and deepfakes are even worse than purely verbal libels. Viewers of convincing images might continue to think, in some part of their mind, that what they saw captured reality.

That’s especially damaging if we are speaking of current or aspiring political leaders. Such libels do not merely injure the person at whom they are aimed; they injure the democratic process itself. This is true whether the victim is Nancy Pelosi, Elizabeth Warren, Lindsey Graham or Donald Trump.

It’s right to emphasize that Facebook is a social-media provider, not a newspaper. But it is also a private company, not a government. Because the First Amendment applies only to government, Facebook, like all private providers, has a lot of room to maneuver. It is free to provide safeguards against unambiguously harmful speech.

To their credit, Facebook and other social-media providers have proved willing to rethink their practices. With respect to doctored videos and deepfakes meant to undermine or destroy people’s reputations, it’s crucial to get ahead of what is bound to become a serious threat to individuals and self-government alike.

I have served as an occasional adviser to Facebook, but not on the issues discussed in this column.


This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Cass R. Sunstein is a Bloomberg Opinion columnist. He is the author of “The Cost-Benefit Revolution” and a co-author of “Nudge: Improving Decisions About Health, Wealth and Happiness.”
