
Facebook’s Laudable Deepfake Ban Doesn’t Go Far Enough

The company’s announced crackdown on doctored media is a welcome step forward, but its new safety filter has big holes.


(Bloomberg Opinion) -- Facebook says that it is banning “deepfakes,” those high-tech doctored videos and audio clips that are essentially indistinguishable from the real thing.

That’s excellent news — an important step in the right direction. But the company didn’t go quite far enough, and important questions remain.

Policing deepfakes isn’t simple. As Facebook pointed out in its announcement this week, media can be manipulated for benign reasons, for example to make video sharper and audio clearer. Some forms of manipulation are clearly meant as jokes, satires, parodies or political statements — as, for example, when a rock star or politician is depicted as a giant. That’s not Facebook’s concern.

Facebook says that it will remove “misleading manipulated media” only if two conditions are met:

  • “It has been edited or synthesized — beyond adjustments for clarity or quality — in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.”
  • “It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.”

Those conditions are meant to be precisely tailored to Facebook’s concern: Use of new or emerging technologies to mislead the average person into thinking that someone said something that they never said.

Facebook’s announcement also makes it clear that even if a video is not removed under the new policy, other safeguards might be triggered. If, for example, a video contains graphic violence or nudity, it will be taken down. And if it is determined to be false by independent third-party fact-checkers, those who see it or share it will see a warning informing them that it is false. Its distribution will also be greatly reduced in Facebook’s News Feed.

The new approach is a major step in the right direction, but two problems remain.

The first is that even if a deepfake is involved, the policy does not apply if it depicts deeds rather than words. Suppose that artificial intelligence is used to show a political candidate working with terrorists, engaging in sexual harassment, beating up a small child or using heroin.

Nothing in the new policy would address those depictions. That’s a serious gap.

The second problem is that the prohibition is limited to products of artificial intelligence or machine learning. But why?

Suppose that videos are altered in other ways — for example, by slowing them down so as to make someone appear drunk or drugged, as in the case of an infamous doctored video of Nancy Pelosi.

Or suppose that a series of videos, directed against a candidate for governor, is produced not with artificial intelligence or machine learning, but nonetheless in such a way as to run afoul of the first condition; that is, the videos have been edited or synthesized so as to make the average person think that the candidate said words that she did not actually say. What matters is not the particular technology used to deceive people, but whether unacceptable deception has occurred.

Facebook must fear that a broader prohibition would create a tough line-drawing problem. In its public explanation, it also noted that if it “simply removed all manipulated videos flagged by fact-checkers as false,” the videos would remain available elsewhere online. By labeling them as false, the company said, “We’re providing people with important information and context.” Facebook seems to think that removal does less good, on balance, than a clear warning: “False.”

Maybe so, but in the context of deepfakes, Facebook has now concluded that removal is better than a warning. In terms of human psychology, that’s almost certainly the right conclusion. If you actually see someone saying or doing something, some part of your brain will think that they said or did it, even if you’ve been explicitly told that they didn’t.

There’s room for improvement, then, in Facebook’s new policy; the prohibition ought to be expanded. But steps in the right direction should be applauded. Better is good.

Disclosure: Over the past five years, I have served on several occasions as an informal adviser to Facebook.

To contact the editor responsible for this story: Jonathan Landman at jlandman4@bloomberg.net

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Cass R. Sunstein is a Bloomberg Opinion columnist. He is the author of “The Cost-Benefit Revolution” and a co-author of “Nudge: Improving Decisions About Health, Wealth and Happiness.”

©2020 Bloomberg L.P.