
Facebook Says First-Person Christchurch Video Foiled AI System

(Bloomberg) -- Facebook Inc. said it struggled to identify the video of the New Zealand mosque shootings because the gunman used a head-mounted camera, which made it harder for its systems to automatically detect the nature of the video.

“This was a first-person shooter video, one where we have someone using a GoPro helmet with a camera focused from their perspective of shooting,” Neil Potts, Facebook’s public policy director, told British lawmakers Wednesday.

Terror footage from a first-person perspective “was a type of video we had not seen before,” he added. Because of the nature of the video, Facebook’s artificial intelligence -- used to detect and prioritize videos that are likely to contain suicidal or harmful acts -- did not work.

Potts was giving evidence Wednesday to a committee of senior lawmakers in the U.K. as part of a parliamentary inquiry into hate crime. Representatives for Twitter Inc. and Alphabet Inc.’s Google and YouTube also gave evidence.

Social media platforms, such as Facebook, have been facing scrutiny after the shooter accused of killing dozens of people in two mosques in New Zealand live-streamed the murders over the internet. The social media company came under sharp criticism for not taking the video down fast enough and for letting it be circulated and uploaded to other platforms like YouTube.

At congressional hearings in the U.S. over the past two years, executives from Facebook and YouTube said they were investing heavily in artificial intelligence that would be able to find and block violent and graphic videos before anyone saw them. In a blog post following the attack, Facebook said its AI systems rely on many thousands of examples of content to train a system to detect certain types of text, imagery or video.
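To illustrate the general approach Facebook described (training a classifier on labeled examples so it can flag similar content), the following is a minimal, hypothetical sketch in Python using scikit-learn. The sample texts, labels and model choice are placeholders invented for illustration and do not represent Facebook's actual systems, which operate on video and imagery at far larger scale.

    # Illustrative sketch only: a tiny supervised classifier trained on labeled
    # examples. Real detection systems learn from many thousands of labeled
    # samples across text, imagery and video; this toy version uses text alone.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labeled examples (1 = flag for review, 0 = benign).
    texts = [
        "graphic violence shown in a live stream",
        "threatening message aimed at a group",
        "cute puppy playing in the park",
        "recipe for homemade bread",
    ]
    labels = [1, 1, 0, 0]

    # Convert text to features and fit a linear classifier.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)

    # Score new content; items with a high probability are prioritized for review.
    print(model.predict_proba(["live stream showing graphic violence"])[:, 1])

A model trained this way can only recognize patterns that resemble its training data, which is consistent with Potts's point that first-person footage the system had not seen before went undetected.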

Potts was also chastised by the committee’s chair, the Labour Party’s Yvette Cooper, for not knowing the senior officer in charge of counter-terrorism policing in the U.K., Neil Basu.

“We’ve been told by the counter-terrorism chief that social companies don’t report to the police incidents that clearly are breaking the law,” Cooper told Potts. “You may remove it, but you don’t report it.”

Potts responded that he was “not familiar with the person you mentioned, or his statement,” and later apologized for not knowing him. He said, however, that Facebook doesn’t report all crimes to police but does report “imminent threats.”

“These are places where government could be giving us more guidance,” Potts said.

The committee investigating hate crime is separate from the one that recently recommended the British government take tougher measures to keep technology companies like Facebook in check, following a year-long inquiry into fake news and its impact on elections.

Stephen Doughty, a Labour Party lawmaker, directed broad and strongly worded criticism at all three witnesses.

“Your systems are simply not working and quite frankly it’s a cesspit,” he said, referring to the collective platforms’ content. “It feels like your companies don’t give a damn. You give a lot of rhetoric but you don’t take action.”

Marco Pancini, director of public policy for YouTube, responded that “we need to do a better job and we are doing a better job,” adding that since an earlier hearing “we introduced a team that helps us better understand trends of violations of our policies by far-right organisations.”

“That’s all wonderful but they’re clearly not doing a very good job,” Doughty replied.

To contact the reporter on this story: Nate Lanxon in London at nlanxon@bloomberg.net

To contact the editors responsible for this story: Giles Turner at gturner35@bloomberg.net, Emma Ross-Thomas

©2019 Bloomberg L.P.