Google AI Struggles to Keep Mosque Shooting Clip Off YouTube

A man is silhouetted as he checks a mobile device while standing against an illuminated wall bearing YouTube Inc.'s logo in this arranged photograph in London, U.K. (Photographer: Chris Ratcliffe/Bloomberg)

(Bloomberg) -- YouTube has tried to keep violent and hateful videos off its service for years. The Google unit hired thousands of human moderators and put some of the best minds in artificial intelligence on the problem.

On Thursday, those efforts were no match for a gunman who used social media to broadcast his killing spree at a New Zealand mosque, or for the legions of online posters who tricked YouTube’s software into spreading the attacker’s video.

When the rampage was streamed live on Facebook, police alerted the social network, which took the video down. But by then it had been captured by others, who re-posted it on YouTube.

Google said it’s “working vigilantly to remove any violent footage” and had deleted the video thousands of times by Friday afternoon. Yet many hours after the original event, the video could still be found, an unnerving reminder of how far giant internet companies have to go to understand and control the information shared on their services.

"Once content has been determined to be illegal, extremist or a violation of their terms of service, there is absolutely no reason why, within a relatively short period of time, this content can’t be eliminated automatically at the point of upload," said Hany Farid, a computer science professor at the University of California at Berkeley’s School of Information and a senior adviser to the Counter Extremism Project. "We’ve had the technology to do this for years."

For years, YouTube has worked to block certain videos from ever showing up on its site. One tool, called Content ID, has been around for more than a decade. It gives copyright owners such as film studios the ability to claim content as their own, get paid for it, and have bootlegged copies deleted. Similar technology has been used to blacklist other illegal or undesirable content, including child pornography and terrorist propaganda videos.
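
Google has not published Content ID's internals, but the broad approach, fingerprinting known footage and comparing new uploads against a blacklist of those fingerprints, can be sketched. The minimal Python sketch below uses a toy average-hash fingerprint; the function names, hash scheme and matching threshold are illustrative assumptions, not YouTube's actual system.

```python
# Toy sketch of fingerprint-based blacklisting, loosely in the spirit of
# systems like Content ID. The hash scheme, threshold and names below are
# illustrative assumptions, not YouTube's implementation.

def average_hash(frame):
    """Fingerprint a grayscale frame (a 2D list of 0-255 ints) as a bit string.

    Each pixel becomes 1 if it is brighter than the frame's mean, else 0.
    """
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

def hamming_distance(h1, h2):
    """Count the bits that differ between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def is_blacklisted(frame, blacklist, max_distance=4):
    """Flag an upload whose fingerprint is near any blacklisted fingerprint."""
    h = average_hash(frame)
    return any(hamming_distance(h, bad) <= max_distance for bad in blacklist)

# A previously identified bad frame goes on the blacklist...
bad_frame = [[10, 200, 30, 40], [50, 60, 220, 80],
             [90, 100, 110, 240], [130, 140, 150, 160]]
blacklist = {average_hash(bad_frame)}

# ...and a near-identical re-upload (slightly re-encoded) is still caught.
reupload = [[12, 198, 33, 41], [52, 61, 219, 79],
            [88, 102, 111, 238], [131, 139, 152, 161]]
print(is_blacklisted(reupload, blacklist))  # True
```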

About five years ago, Google revealed it was using AI techniques such as machine learning and image recognition to improve many of its services. The technology was applied to YouTube. In early 2017, 8 percent of the videos flagged and removed for violent extremism were taken down with fewer than 10 views. After YouTube introduced a flagging system powered by machine learning in June 2017, more than half of the videos pulled for violent extremism had fewer than 10 views, the company reported in a blog post.

Google executives have testified multiple times in front of the U.S. Congress on the topic of violent and extremist videos being spread through YouTube. The repeated message: YouTube is getting better, sharpening its algorithms and hiring more people to deal with the problem. Google is widely seen as the best-equipped company to deal with this problem because of its AI prowess.

So why couldn’t Google stop a single video that is clearly extreme and violent from being reposted on YouTube?

“There are so many ways to trick computers,” said Rasty Turek, chief executive officer of Pex, a startup that builds a competing technology to YouTube’s Content ID. “It’s whack-a-mole.”

Making minor changes to a video, such as putting a frame around it or flipping it on its side, can throw off software that’s been trained to identify troubling images, Turek said.
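
The toy fingerprint sketched above makes the weakness concrete: mirroring the frame leaves every pixel value intact but reorders them, scrambling the hash enough to defeat the match. Again, this illustrates the general failure mode, not YouTube's real matcher.

```python
# Continuing the toy sketch above: a horizontal flip preserves the pixel
# values but reorders them, so the naive fingerprint no longer lines up
# with the blacklisted one.

def flip_horizontal(frame):
    """Mirror each row of a 2D frame left to right."""
    return [list(reversed(row)) for row in frame]

flipped = flip_horizontal(bad_frame)
distance = hamming_distance(average_hash(bad_frame), average_hash(flipped))

print(distance)                            # 6, above the max_distance of 4
print(is_blacklisted(flipped, blacklist))  # False: the same footage slips through
```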

The other major problem is live streaming, which by its very nature doesn’t allow AI software to analyze a whole video before the clip is uploaded. Clever posters can take an existing video they know YouTube will block and stream it live second by second -- essentially rebroadcasting it online to get around Google’s software. By the time YouTube recognizes what’s going on, the video has already been playing for 30 seconds or a minute, regardless of how good the algorithm is, Turek said.

“Live stream slows this down to a human level,” he said. It’s a problem YouTube, Facebook, Pex and other companies working in the space are struggling with, he added.
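
To see why the buffering matters, consider a toy simulation of chunk-by-chunk moderation. The window size, chunk labels and function names here are assumptions for illustration, not any platform's real pipeline; but the latency floor is the same in principle: the matcher cannot act until enough of the stream has already aired.

```python
# Illustrative simulation of the live-stream loophole: a matcher that
# fingerprints the stream in 10-second windows cannot flag anything until
# a full window has been buffered.

def matches_blacklist(segment):
    """Stand-in for a real fingerprint lookup (see the sketch above)."""
    return "known_bad" in segment

def moderate_live_stream(chunks, window=10):
    """Return the second at which a rebroadcast stream gets blocked.

    `chunks` is a list of one-second segments. The matcher only runs once
    `window` seconds are buffered, so the earliest possible block still
    comes after the footage has aired.
    """
    buffer = []
    for second, chunk in enumerate(chunks, start=1):
        buffer.append(chunk)
        if len(buffer) >= window and matches_blacklist(buffer[-window:]):
            return second
    return None  # the stream ended without a match

stream = ["benign"] * 5 + ["known_bad"] * 30  # bad footage starts at second 6
print(moderate_live_stream(stream))  # 10: viewers saw the clip before the block
```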

This rebroadcasting trick is a particular problem for YouTube’s approach to blacklisting videos that break its rules. Once the company identifies a problematic video, it puts the clip on a blacklist. Its AI-powered software is then trained to automatically recognize the clip and block it if someone else tries to upload it to the site again.

It still takes a while for the AI software to be trained before it can spot other copies. And by definition, the video has to exist online before YouTube can set this machine-learning process in motion. And that’s before people start slicing the offending content into short live-streamed clips.
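
That bootstrap problem can be sketched with the same toy fingerprint: until the first copy has been identified and blacklisted, the upload check has nothing to match against. The `handle_upload` and `human_review` functions below are hypothetical stand-ins for YouTube's actual flagging and training pipeline.

```python
# Sketch of the bootstrap problem: the blacklist only grows after the
# first copy has been identified, so that first copy always gets through.
# Reuses the toy fingerprint functions defined above.

upload_blacklist = set()

def handle_upload(frame):
    """Block a matching upload, or publish it if nothing matches yet."""
    if is_blacklisted(frame, upload_blacklist):
        return "blocked at upload"
    return "published"

def human_review(frame):
    """Once a violation is confirmed, its fingerprint joins the blacklist."""
    upload_blacklist.add(average_hash(frame))

print(handle_upload(bad_frame))  # "published": the list is still empty
human_review(bad_frame)
print(handle_upload(bad_frame))  # "blocked at upload": now there is a match
```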

Another complicating factor is that edited clips of the shooting video are also being posted by reputable news organizations as part of their coverage of the event. If YouTube were to take down a news report simply because it included a screenshot of the video, press freedom advocates would object.

The New Zealand shooter used social media to gain maximum exposure. He posted on internet forums used by right-wing and anti-Muslim groups, tweeted about his plans and then began the Facebook live stream on his way to carry out the attack.

He posted a manifesto filled with references to internet and alt-right culture, most likely designed to give journalists more material to work with and therefore spread his notoriety further, said Jonas Kaiser, a researcher affiliated with Harvard’s Berkman Klein Center for Internet and Society.

“The patterns seem to be very similar to prior events,” Kaiser said.

To contact the reporters on this story: Gerrit De Vynck in New York at gdevynck@bloomberg.net; Jeremy Kahn in London at jkahn21@bloomberg.net

To contact the editors responsible for this story: Jillian Ward at jward56@bloomberg.net, Alistair Barr, Robin Ajello

©2019 Bloomberg L.P.