The ‘Deep Fake’ Threat

(The Bloomberg View) -- Pondering a strange new technology, in 1859 Oliver Wendell Holmes wrote: “The very things which an artist would leave out, or render imperfectly, the photograph takes infinite care with, and so makes its illusions perfect.” That’s becoming a bigger problem than he might’ve guessed.

Illusions have long thrived on the internet, of course, from doctored photos to fake news. But a newly sophisticated variety is worth paying attention to. Sometimes called “deep fakes,” they’re videos made with the help of artificial intelligence that appear genuine but depict events or speech that never happened. Without precautions, they could prove highly disruptive — to people and politics alike.

In their simplest form, these forgeries are fairly easy to produce. An aspiring video-faker can simply feed images of a given person to a machine-learning algorithm, which in turn learns to overlay that person’s features onto the body of someone in another video, thereby creating a simulacrum of the original person doing things she hasn’t done. More advanced iterations are even more potent: One widely circulated example shows former President Barack Obama delivering an entirely fabricated speech.
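For the technically curious, the recipe behind the simplest face-swap forgeries is a pair of autoencoders that share a single encoder: the shared half learns generic facial pose and expression, while each of two decoders learns to render one specific person. Swap decoders at playback time, and person A’s performance comes out wearing person B’s face. The sketch below illustrates the idea in PyTorch; it is a toy under stated assumptions (64-pixel crops, small networks, random stand-in data), not any real tool’s implementation.

```python
# Minimal sketch of the shared-encoder / dual-decoder "face swap" idea.
# Illustrative only: real systems add face detection and alignment, much
# larger models, adversarial losses and frame blending. Sizes are assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256),  # shared latent code
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

params = (list(encoder.parameters()) +
          list(decoder_a.parameters()) + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

# Stand-ins for aligned 64x64 face crops of persons A and B.
faces_a = torch.rand(16, 3, 64, 64)
faces_b = torch.rand(16, 3, 64, 64)

for step in range(100):  # real training runs for many thousands of steps
    opt.zero_grad()
    # Each decoder learns to reconstruct its own person from the shared code.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a) +
            loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()

# The "swap": encode person A's expression, render it with B's decoder.
with torch.no_grad():
    fake_b = decoder_b(encoder(faces_a))
```

The shared encoder is the key design choice: because it must serve both identities, it is pushed toward representing what the faces have in common (pose, lighting, expression), leaving identity-specific texture to whichever decoder renders the final frame.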

It’s an impressive bit of voodoo. But it’s being put to some devious uses. Porn, naturally, was an early one: Specialized apps can now superimpose anyone’s face on a relevant performer’s body with unnerving ease. Crooks and creeps will no doubt find such a tool appealing, whether for embarrassing an ex, extorting strangers or simply sowing havoc.

Down the road, though, more serious abuses loom. Fake news, spread through social media, has already roiled political races; in some cases it has led to violence. Because people tend to lend more credence to videos — and can be easily misled by them — that problem could get worse. Hoax films of officials making divisive statements, acting corruptly or otherwise behaving badly may become familiar elements of political campaigns.

In a similar vein, intelligence agencies might use deep fakes to blackmail politicians, sway elections, spread propaganda, or exacerbate ethnic and religious tensions. Terrorists, too, will likely see the appeal. Human-rights groups worry that such videos will make it much harder to hold abusive officials to account.

Fakes also threaten to erode trust more broadly. Video evidence has become a pillar of the criminal-justice system, for instance, precisely because film seems like such a reliable record of someone’s speech or actions. Deep fakes could feasibly render such evidence suspect. In his uncanny way, President Donald Trump foreshadowed this problem last year, when he began privately musing that the “Access Hollywood” tape that surfaced during the 2016 campaign — in which he boasted of assaulting women — may have been faked. It wasn’t, but expect plenty of defendants to try the same ploy.

In theory, many of these dangers can be addressed through the normal legal system. Blackmail using a fake video would presumably be just as unlawful as the traditional kind. Yet other abuses may be legally ambiguous or, indeed, protected speech. 

An overriding challenge, then, will be identifying and calling out these forgeries, if that’s possible. Technologists will need to build on fledgling efforts to detect and flag manipulated videos automatically. Congress, as a start, should fund basic research on the topic. And platform companies, which bear ultimate responsibility for the content they serve up, must come to grips with the legal and commercial implications of this threat, while working to educate the public.
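To make that detection task concrete: much of the fledgling work treats it as a classification problem, training a network on frames from known-genuine and known-forged videos and flagging clips whose frames score as manipulated. Below is a minimal, hypothetical PyTorch sketch of that pattern; the backbone choice, crop size and flagging threshold are all assumptions, and the fine-tuning step on labeled data is omitted.

```python
# Sketch of one common automated-detection approach: a per-frame binary
# classifier over face crops. Illustrative only; deployed systems also use
# temporal cues, compression artifacts and provenance signals.
import torch
import torch.nn as nn
from torchvision import models

# Assumption: a standard ResNet backbone, re-headed to emit a single
# real-vs-fake score per frame. In practice you would start from
# ImageNet-pretrained weights and fine-tune on labeled real/fake frames.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)

def score_frames(frames: torch.Tensor) -> torch.Tensor:
    """frames: (N, 3, 224, 224) normalized face crops -> per-frame fake probability."""
    model.eval()
    with torch.no_grad():
        return torch.sigmoid(model(frames)).squeeze(1)

# A video is flagged if enough of its frames look manipulated.
frames = torch.rand(8, 3, 224, 224)    # stand-in for extracted face crops
if score_frames(frames).mean() > 0.5:  # threshold is an assumption
    print("flag for review")
```

Averaging per-frame scores is the crudest possible aggregation; it is used here only to show where a platform’s flagging rule would plug in.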

As for the rest of us? In the short term, at least, society will need to adapt to an unsettling new reality — one where the line between real and fake is blurred, seeing isn’t necessarily believing, and the illusions get more perfect by the day.

©2018 Bloomberg L.P.