
Facebook and Google Are Guilty of a Failure to Take Ownership

After the New Zealand massacre, will governments begin to regulate how social media oversees content?

(Bloomberg Businessweek) -- Evil travels freely on the internet. It flickers before us in plain sight. Yet, despite the livestreamed horror of March 15 in Christchurch, New Zealand, enacting social media reform will take a massive regulatory effort, one that requires the cooperation of the tech giants, governments, and consumers.

In a perfect world, YouTube’s and Facebook’s algorithms would race through a video as soon as someone tried to upload it. If the machines recognized troubling content or a video that had already been banned, the images would never reach the public.
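In code, that ideal looks deceptively simple. The sketch below is for illustration only; every name in it is hypothetical, and neither company has described its systems in these terms. Production screening relies on perceptual video fingerprints and trained classifiers rather than an exact file hash, but the basic gate is the same: compare the upload against a record of content that has already been banned.

```python
# A minimal sketch of an upload-time check, assuming a hypothetical
# database of fingerprints for previously banned videos. Real systems
# use perceptual fingerprints and machine-learned classifiers, not an
# exact byte-for-byte hash of the file.
import hashlib

# Hypothetical store of fingerprints for content that has already been banned.
BANNED_FINGERPRINTS: set[str] = set()

def fingerprint(video_bytes: bytes) -> str:
    """Exact-match fingerprint: changing a single byte changes the result."""
    return hashlib.sha256(video_bytes).hexdigest()

def screen_upload(video_bytes: bytes) -> bool:
    """Return True if the upload should be blocked before it reaches the public."""
    return fingerprint(video_bytes) in BANNED_FINGERPRINTS
```

The catch is that an exact match catches only identical copies, which is precisely where the Christchurch video exposed the system's limits.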

So what happened in New Zealand? Alleged gunman Brenton Tarrant had an audience of only about 200 people during the 17 minutes he broadcast his attack on the Al Noor mosque in Christchurch. Yet, though Facebook Inc. took the video off Tarrant’s page 12 minutes after the livestream ended, hundreds of thousands of clones of the footage were produced and circulated on the internet. The video was reposted on Twitter, where it auto-played on the timelines of unsuspecting users; it appeared in Reddit’s infamous “Watch People Die” forum, which is exactly what its name says it is; and, of course, it showed up on YouTube, the world’s leading video-hosting site. Facebook managed to stop 1.2 million copies from circulating on its site—but 300,000 still got through. At its peak, the video was uploaded once per second, YouTube Chief Product Officer Neal Mohan told the Washington Post.

Google’s and YouTube’s AI censors work well for videos they’ve had time to “learn” to recognize, such as previously collected terrorist propaganda or child pornography cataloged by police officers. To a certain extent, the algorithms may spot permutations of that footage. They still struggle, however, with what’s going on in clips they’ve never seen, according to Rasty Turek, chief executive officer of Pex, a startup that helps companies identify videos infringing on copyright. Even if the platforms identify fresh material as objectionable, the people who post it can use simple tricks like changing the size of the clip, speeding it up or slowing it down, or simply flipping it on its side to fool the algorithms.

Google and Facebook have spent years trying to find ways to stop problematic videos from appearing on their websites. YouTube’s Content ID tool, which has been around for more than a decade, gives copyright owners such as film studios the ability to claim content as their own, get paid for it, and have bootlegged copies deleted. Similar technology has been used to identify illegal or undesirable content. But when a new video circulates, it can be copied, altered in a minor way, and re-uploaded, slipping past the censors. “It’s whack-a-mole,” Turek says. “People are genius at these things. They will figure out how to get something past it.”
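To see why altered re-uploads slip through, consider a toy version of the fingerprinting idea, sketched here in Python purely for illustration; it is not Content ID, and the numbers are arbitrary. A frame is shrunk to an 8-by-8 grid of block averages, and each cell is recorded as brighter or darker than the frame's average. Merely mirroring the frame, one of the simple tricks described above, yields a different fingerprint, so an exact lookup no longer matches.

```python
# A toy illustration (not YouTube's Content ID): an "average hash" of a
# single video frame, and how mirroring the frame changes the fingerprint
# so that an exact lookup no longer matches.
import numpy as np

def average_hash(frame: np.ndarray, size: int = 8) -> str:
    """Shrink a grayscale frame to a size x size grid of block means and
    record which cells are brighter than average as a bit string."""
    h, w = frame.shape
    blocks = (
        frame[:h - h % size, :w - w % size]
        .reshape(size, (h - h % size) // size, size, (w - w % size) // size)
        .mean(axis=(1, 3))
    )
    bits = (blocks > blocks.mean()).astype(int).flatten()
    return "".join(map(str, bits))

rng = np.random.default_rng(0)
frame = rng.random((720, 1280))            # stand-in for one real video frame
original = average_hash(frame)
mirrored = average_hash(np.fliplr(frame))  # one of the "simple tricks" above
print(original == mirrored)                # False: the naive fingerprint misses the copy
```

A matching system therefore has to hunt for near-duplicates under every plausible transformation, which is the whack-a-mole Turek describes.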

At congressional hearings in the U.S. over the past two years, executives from Facebook and YouTube said they were investing heavily in artificial intelligence that would be able to find and block violent and graphic videos before anyone saw them. But if all you have is a split second, how do you distinguish fact from fiction, and freedom of expression from murderous reality? Getting from AI’s idealistic intentions “to understanding the context of a video is like a frog understanding a grocery store,” Turek says.

By the point in the livestream when Tarrant left his car and entered the mosque, it was too late to save lives. But could social media platforms have stopped him long before that? Perhaps, if they’d recognized the warning signs in Tarrant’s online behavior. Two days before the assault, he posted multiple photographs on Twitter of assault-style firearms and ammunition, marked with telltale writing. Among the inscriptions were the names of accused mass murderers: Alexandre Bissonnette, convicted of killing six Muslims at a mosque in Canada; Luca Traini, a neo-Nazi convicted of shooting six African immigrants in Italy; and Josué Estébanez, who killed a teenager who was protesting fascism in Spain. Tarrant made multiple mentions of “14,” a reference to the 14-word-long white supremacist slogan. Also on view was a bulletproof vest covered with symbols commonly used by neo-Nazis: a Celtic cross and the black sun—or Sonnenrad—patch.

Even before posting photos of his weapons, Tarrant spent weeks tweeting out racist videos, calling immigrants “invaders,” and claiming white people were being subjected to genocide. He also posted a link to an 87-page manifesto to Twitter hours before the shooting. None of this behavior raised red flags at Twitter. His profile remained active until the murder spree in New Zealand.

Tarrant is an Australian citizen, but in the U.S., where Twitter is based, his tweets would have been largely protected by the First Amendment to the U.S. Constitution. The rifle photos, however, could press against the boundary of inciting violence. “Putting the name of a mass shooter on your rifle is pretty much the same as yelling ‘fire’ in a crowded theater,” wrote Robert Evans, who reports about right-wing extremism for the online investigative journalism site Bellingcat. “If you’re posting pictures of your rifle with the names of other mass shooters on it, [Twitter] should look into where that guy is posting from and inform law enforcement.”

Following the attack, Twitter removed Tarrant’s profile. By then, however, users on YouTube had already preserved it and uploaded videos that scrolled through his account, which included links to the manifesto. YouTube also became home to numerous re-uploaded versions of the original Facebook Live broadcast. Internet service providers in New Zealand took matters into their own hands, blocking a website popular with white supremacists where the video was spreading. “It’s an unprecedented action,” tweeted Jason Paris, CEO of Vodafone New Zealand Ltd., when questioned about whether the internet service provider was engaging in censorship. “However, terrorism won’t get any oxygen from Vodafone.”

The world’s democracies have been reluctant to remove content from the web, though, fearful of mirroring the policies of more restrictive regimes, such as China. “While the intent may be to protect citizens from bad stuff, rather than to regulate or coerce, progressives would have a hard time squaring their love of freedom with shutting off access to things like YouTube or Facebook,” says Ari Ezra Waldman, a professor at New York Law School who studies technology regulation. Hate speech is banned in Australia—though penalties are relatively light, mostly resulting in apologies. But in New Zealand the Department of Internal Affairs declared that sharing the March 15 video was a crime. Legislators also quickly proposed a ban on semi-automatic rifles like those used in the attack. And, in the wake of the massacre, politicians have called for regulation of social media companies, which have been highly resistant to policing their own content.

In the U.S., despite a slew of congressional hearings, lawmakers have been reluctant to impose restrictions on technology companies. Some have tried to put together a federal online privacy bill, but to no avail. A bill that would have restricted online political advertising lost steam when its Republican sponsor, Arizona Senator John McCain, died. Congress did pass a law that makes online platforms bear some responsibility for activity on their sites—but limited it to sex trafficking.

If the U.S. doesn’t find a way to regulate internet content and establish watchdogs, could the American tech giants be forced to do so by overseas pressure? The European Union has imposed fines on Google and others for antitrust issues. Last year it enacted the General Data Protection Regulation, which includes “the right to be forgotten,” requiring companies to delete data on any individual who asks them to. Data collectors also need to get consent from people before storing their information. Some U.S. companies have already complied with the European rules worldwide. Could the EU likewise force U.S. companies to increase management of the content that reaches its market of 500 million people? The scale of the undertaking might just make compliance universal.

Immediately after news of the massacre broke, lawmakers in Europe were calling for regulation. “We’ve heard a lot today about taking back control,” Tom Watson, deputy leader of the U.K.’s Labour Party, said on the radio. “Well, today the big social media platforms lost control. They failed the victims of that terrorist atrocity. They failed to show any decency and responsibility. And we can’t go on like this. Today must be the day when good people commit to take back control from the wicked, ignorant oligarchs of Silicon Valley.” Sajid Javid, the U.K. home secretary, also expressed frustration with Facebook, Google, and Twitter for allowing extremist content. “Take some ownership,” he tweeted on March 15. “Enough is enough.”

“At some point, we will have to regulate,” said Frans Timmermans, a Dutch politician who serves as the first vice president of the European Commission, at the World Policy Forum on March 18. “The first task of any public authority is to protect its citizens—and if we see you as a threat to our citizens, we will regulate. And if you don’t work with us, we will probably regulate badly.”

The imagery that populated Tarrant’s posts would be difficult to find in Germany. After World War II the country banned the symbols of “unconstitutional organizations”: swastikas, the Aryan fist, the Iron Cross, and others. In 2017 it passed the Network Enforcement Act, which fines social media platforms that fail to remove illegal content, including hate speech, defamation, and calls for violence. “By far, the largest numbers of people who work to remove hateful comments for Facebook are Germans working in Germany, far more than there are in the U.S., even though the U.S. is a far larger country,” says Henry Fernandez, a senior fellow at the Center for American Progress who studies the role of technology companies in radicalization. Europe is far more willing to regulate content online because of its history with fascism. “European interpretations of the importance of free speech are simply different,” says Waldman of New York Law School. “The cultural memory is very much tied to the horrors of World War II.”

In February, Germany’s federal cartel office—which oversees antitrust issues—imposed “far-reaching restrictions” on the way Facebook and its WhatsApp and Instagram subsidiaries collect, combine, and deploy user data. The country’s policing of social media giants has caught the attention of EU regulators. “We study it with the same great interest because of the two legs, both that Facebook is a dominant player in this market but also how they interpret privacy rules,” EU Competition Commissioner Margrethe Vestager told Bloomberg in early February.

Both Waldman and Fernandez believe European countries are more likely than the U.S. to regulate radical and violent content online. In New Zealand, Prime Minister Jacinda Ardern focused on Facebook in front of Parliament on March 19, saying, “They are the publisher, not just the postman. There cannot be a case of all profit, no responsibility.” Still, in the U.S., threats to the bottom line have proved more effective at curbing platforms that promote white supremacy than lobbying Congress has. Companies such as Procter & Gamble Co. or Walt Disney Co., which spend hundreds of millions of dollars a year on YouTube alone, have pulled ads when criminal or terrorist content popped up on the platform. “The companies are driven first and foremost by a profit motive,” Fernandez says. The algorithms they use “do not prioritize curbing hate online. Profit first.”

That situation may change with a new generation of U.S. legislators. “As younger people are being elected,” Fernandez says, “they are engaging aggressively on social media as a way to communicate. I would anticipate they would then come with a different understanding of what regulation might look like over time.” The question is: What can social media companies do right now to prevent the next massacre?

To contact the editor responsible for this story: Howard Chua-Eoan at hchuaeoan@bloomberg.net

©2019 Bloomberg L.P.