Facebook’s Live Streaming Is Criticized After Mosque Shooting

(Bloomberg) -- The speed at which a video of the New Zealand mosque shooting spread across social media platforms has demonstrated yet again that tech companies such as Facebook Inc. are still struggling to control content, especially from popular services that offer live streaming of events.

While platforms including Twitter Inc. and YouTube said they moved fast to scrub any content related to the incident from their sites, people reported that the footage was still widely available hours after it was first uploaded to the alleged shooter’s Facebook account. The first-person view of the killings in Christchurch was easily accessible during and after the attack -- as was the suspect’s hate-filled manifesto. Footage was still up on Google’s YouTube almost 12 hours later.

Facebook Chief Executive Officer Mark Zuckerberg has acknowledged the difficulty of policing content from the 2.7 billion users that power Facebook’s wildly profitable advertising engine. The company’s business model depends on showing people posts they’re most apt to have an emotional reaction to, which often has the side effect of amplifying fake news and extremism.

Indeed, the livestream of the murders highlights how technology helped the alleged shooter connect with like-minded people online, with chat sites acting as sounding boards for anti-immigrant ideas. The suspect is said to have posted those views on Twitter and on controversial message boards such as 8chan, and to have published a 74-page manifesto. Even after the major tech companies acted to take down the video, commenters continued to praise the murders online.

When Zuckerberg introduced Facebook’s live streaming feature in 2016, the service was dominated by harmless videos including baby bald eagles and a guy getting a haircut. It didn’t take long, though, before people were streaming police shootings, murders and suicides. The company has deployed technology and human monitors to help find and remove offensive or threatening content. Last year Facebook said it was working on chips designed to more efficiently analyze and filter live video content.

"Once content has been determined to be illegal, extremist or a violation of their terms of service, there is absolutely no reason why, within a relatively short period of time, this content can’t be eliminated automatically at the point of upload," said Hany Farid, senior advisor to the Counter Extremism Project and a computer science professor at the University of California, Berkeley. "We’ve had the technology to do this for years."

Some experts warn there’s really no way to keep the dark side of human behavior off social media platforms.

Zuckerberg “refused to confront the underlying problem of Facebook Live, which is that there is simply no responsible way to moderate a true live streaming service,” said Mary Anne Franks, professor of law at the University of Miami. “Facebook has known from the beginning that its live streaming service had the potential to encourage and amplify the worst of humanity, and it must confront the fact that it has blood on its hands.”

Facebook has 15,000 employees and contractors sifting through posts to take down offensive content. Zuckerberg has said artificial intelligence algorithms, which the company already uses to identify nudity and terrorist content, will eventually handle most of this sorting. But at the moment, even the most sophisticated AI software struggles in categories where context matters.

“Hate speech is one of those areas,” Monika Bickert, Facebook’s head of global policy management, said in a June 2018 interview.

In the U.K., Home Secretary Sajid Javid said in a tweet that YouTube and other platforms needed to do more to stop violent extremist content from being made available.

“Police alerted us to a video on Facebook shortly after the livestream commenced and we quickly removed both the shooter’s Facebook and Instagram accounts and the video,” Facebook said on its Twitter account. “We’re also removing any praise or support for the crime and the shooter or shooters as soon as we’re aware.”

A spokesperson for YouTube said in a statement that “shocking, violent and graphic content has no place on our platforms, and is removed as soon as we become aware of it.”

--With assistance from Jeremy Kahn.

©2019 Bloomberg L.P.