How Deepfakes Make Disinformation More Real Than Ever
An attendee uses his smartphone to record a facial-recognition demonstration on himself in MWC Shanghai exhibition in Shanghai, China. (Photographer: Qilai Shen/Bloomberg)


(Bloomberg) -- One video shows Barack Obama using an obscenity to refer to U.S. President Donald Trump. Another features a different former president, Richard Nixon, performing a comedy routine. But neither video is real: The first was created by filmmaker Jordan Peele, the second by Jigsaw, a technology incubator within Alphabet Inc. Both are examples of deepfakes, videos or audio recordings that use artificial intelligence to make someone appear to do or say something they didn’t. The technology is a few years old and getting better. So far, it’s mostly been used to create phony pornography, but many worry that it has the potential to disrupt politics and business. Researchers at New York University have called deepfakes a “menace on the horizon,” with the “potential to erode what remains of public trust in democratic institutions.” With U.S. elections approaching, Facebook is tightening its policy.

1. How are deepfakes made?

The name, which originated with an early practitioner who posted on Reddit under the handle “deepfakes,” appears to nod to deep learning, a subset of machine learning that uses layers of artificial neural networks to train computers to perform a task. For a deepfake video, a program is typically fed high-quality images of a target’s face, which it then seamlessly swaps onto someone else’s face in a video. Deepfake audio uses genuine recordings to train computers to talk like a specific person. Similar machine-learning techniques can train computers to write fake text. A video that’s been slowed down, sped up or edited to deceive -- such as a recent clip of House Speaker Nancy Pelosi appearing to slur her words -- isn’t typically considered a deepfake and is sometimes called a shallowfake.
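A common face-swap architecture trains one shared encoder with a separate decoder per person; swapping a face amounts to encoding person A’s frame and decoding it with person B’s decoder. The toy numpy sketch below illustrates only that shared-encoder/two-decoder idea: the “faces” are random vectors and each “network” is a single linear map fit by least squares, stand-ins for the deep convolutional networks real systems use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend 8-dimensional "face images" for persons A and B (invented data).
faces_a = rng.normal(size=(100, 8))
faces_b = rng.normal(size=(100, 8))

# A shared "encoder": projects any face to a 4-dimensional latent code.
encoder = rng.normal(size=(8, 4))

def encode(x):
    return x @ encoder

# Per-person "decoders": least-squares maps from latent codes back to that
# person's faces -- a stand-in for training one decoder network per identity.
dec_a, *_ = np.linalg.lstsq(encode(faces_a), faces_a, rcond=None)
dec_b, *_ = np.linalg.lstsq(encode(faces_b), faces_b, rcond=None)

# The swap: encode a frame of person A, then decode with B's decoder,
# yielding "A's expression rendered as B".
frame_a = faces_a[0]
swapped = encode(frame_a) @ dec_b

print(swapped.shape)  # (8,)
```

The point of the shared encoder is that both decoders learn to reconstruct faces from the same latent representation, so a code extracted from one person’s frame is meaningful input to the other person’s decoder.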

2. How are deepfakes being used?

Deeptrace, an Amsterdam-based company that detects and monitors deepfakes, concluded in a September report that 96% of deepfakes found online were phony pornography, in which women’s faces are mapped onto porn stars’ bodies to depict sex acts that never took place. Actresses from Western countries and South Korean K-pop singers were among the most frequently targeted. In early 2019, criminals persuaded an executive of a U.K.-based company to wire them $243,000 by using a deepfake of his boss’s voice on the telephone. In Malaysia, supporters of a government minister accused of appearing in a video of homosexual sex, which is illegal there, said the footage was a deepfake, but experts found no evidence it had been manipulated, according to Deeptrace.

3. What are the worries?

The fear is that deepfakes could destroy reputations, sway an election, artificially boost a stock price and even set off unrest. Imagine falsified videos depicting a presidential candidate molesting children, a police chief inciting violence against a minority group, or soldiers committing war crimes. High-profile individuals such as politicians and business leaders are especially at risk, given how many recordings of them are in the public domain. For ordinary people, especially women, the technology makes revenge porn possible even if no actual naked photo or video exists. Once a video goes viral on the internet, it’s almost impossible to contain. An additional concern is that awareness of deepfakes will make it harder to discern truth. In early 2019, Gabon’s military launched an unsuccessful coup against President Ali Bongo after his critics labeled a New Year’s video address a deepfake amid uncertainty regarding his health. Again, an analysis didn’t find signs that the video had been manipulated, according to Deeptrace. The existence of deepfakes could also make it easier for people who truly are caught on tape doing or saying objectionable things to claim the evidence against them is bogus.

4. What are authorities doing about deepfakes?

Virginia amended its law banning non-consensual pornography to include deepfakes, and Texas criminalized deepfakes that interfere with elections. California banned the distribution of doctored audio or video of a political candidate within 60 days of an election, with exceptions for satire and news coverage. The state also gives victims of non-consensual pornography the legal basis to sue for damages. Amid concerns that disinformation will play a role in the U.S. presidential election in 2020, as it did in 2016, a member of the U.S. Congress proposed a bill that would require anyone creating a deepfake to label it as altered content.

5. And social media?

Facebook said this year it would begin removing deepfakes if the content met both these criteria:

  • It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t readily apparent and would likely mislead someone into thinking that a person said words that they didn’t actually say.
  • It is made by artificial intelligence or machine learning and is meant to appear to be authentic.

The policy won’t apply to parodies or satire. Facebook was criticized last year for not removing the Pelosi video and said afterward it had moved too slowly to curtail its reach. In November, Google said it was “clarifying” its ad policy to highlight an existing prohibition on deepfakes. Twitter said it plans a new policy on deepfakes ahead of the 2020 election.

6. Apart from pornographers, who’s making fakes?

Researchers and companies are working to improve deepfake technology. Researchers at Carnegie Mellon University created a system that can transfer characteristics, such as facial expressions, from a video of one person to a synthesized image of another. China’s Baidu and a handful of startups including Lyrebird and iSpeech offer voice cloning, which could be used for human-machine interfaces. Deepfakes have created something of an ethical conundrum for researchers, who are working to improve, and broaden access to, a technology that so far has been used primarily for malicious purposes. One justification is that the technology would eventually leak out anyway; another is that the same technology can be used to detect deepfakes.

7. How can deepfakes be detected?

Machine learning can be used to flag head and face movements in deepfakes that don’t match those of the real person. Algorithms can also be trained to spot telltale changes in the patterns left by the recording process. The U.S. Defense Department, along with Google and Facebook, is providing funding and data sets to assist researchers working on detection.
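In the simplest terms, a detector reduces each clip to measurable artifact statistics and learns a decision boundary separating real from synthesized examples. The sketch below is purely illustrative: the single “noise-residual score” per clip is invented data, and the trained model is replaced by a threshold fit midway between the class means.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented artifact scores: genuine clips cluster low, synthesized clips
# high. Real detectors extract many such features from pixels and audio.
real_scores = rng.normal(loc=0.2, scale=0.05, size=200)
fake_scores = rng.normal(loc=0.5, scale=0.05, size=200)

# A threshold midway between the class means -- a stand-in for a trained
# classifier's decision boundary.
threshold = (real_scores.mean() + fake_scores.mean()) / 2

def is_fake(score):
    return score > threshold

# Balanced accuracy on the labeled examples.
accuracy = (np.mean([not is_fake(s) for s in real_scores])
            + np.mean([is_fake(s) for s in fake_scores])) / 2
print(round(accuracy, 2))
```

The catch in practice is that the two score distributions overlap far more than in this toy setup, and generators are trained specifically to shrink the gap, which is why detection remains an arms race.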

8. Are there benevolent uses?

Yes. Scottish firm CereProc created a digital voice for a radio host who lost his voice to a medical condition, and voice cloning could serve an educational purpose by recreating the sound of historical figures. CereProc created a version of the last address written by President John F. Kennedy, who was assassinated before delivering it. The U.K.-based company Synthesia created a deepfake video in which soccer legend David Beckham appears to speak in nine languages while appealing for efforts to end malaria. Deepfakes also have potential uses in entertainment, as with a YouTube video inserting actor Nicolas Cage into various movie and television roles.


©2020 Bloomberg L.P.
