Facebook Takes Action on Hate Speech Amid Whistle-Blower Claims

Meta Platforms Inc. took action in the third quarter against more than 28 million pieces of content on Facebook and Instagram that violated its policies against hate speech.

The vast majority of hateful posts that required action were on Facebook, the company said, noting that the prevalence of hate speech remains well under 1% on both social media platforms. The announcement on Tuesday was part of the company’s quarterly documentation of its efforts to curb offensive content such as nudity, terrorism and hate speech.

It was the first such report Meta has released since a consortium of media organizations published a series of critical articles based on internal documents disclosed by former Facebook product manager-turned-whistle-blower Frances Haugen. The company is battling accusations that it has misled investors and the public about its efforts to fight hate speech and disinformation. It’s also facing questions about how the platform was used to organize the Jan. 6 attack on the U.S. Capitol. 

Facebook prohibits posts that include direct attacks against people on the basis of their race, country of origin, religion, sexual orientation and other sensitive attributes. To catch hate speech, the company uses artificial intelligence to scan images and text that appear to violate its policies, removing them automatically or sending them to human reviewers for a final decision.
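For readers unfamiliar with how such pipelines are typically structured, the sketch below shows a generic two-stage flow: an automated classifier scores a post, high-confidence violations are removed automatically, and borderline cases are routed to human review. The model, thresholds and labels here are illustrative assumptions for this article, not Meta's actual system.

```python
# Illustrative sketch only: a generic two-stage moderation flow.
# Thresholds, the scoring stub and action labels are hypothetical.
from dataclasses import dataclass

REMOVE_THRESHOLD = 0.95   # auto-remove above this score (assumed value)
REVIEW_THRESHOLD = 0.60   # queue for human review above this score (assumed value)

@dataclass
class Post:
    post_id: str
    text: str

def score_hate_speech(post: Post) -> float:
    """Placeholder for a trained classifier's probability that the post
    violates hate-speech policy; a real system would call a model here."""
    flagged_terms = {"<placeholder-term-1>", "<placeholder-term-2>"}
    hits = sum(term in post.text.lower() for term in flagged_terms)
    return min(1.0, 0.5 * hits)

def moderate(post: Post) -> str:
    """Return the action taken for a single post."""
    score = score_hate_speech(post)
    if score >= REMOVE_THRESHOLD:
        return "removed_automatically"
    if score >= REVIEW_THRESHOLD:
        return "queued_for_human_review"
    return "no_action"

if __name__ == "__main__":
    print(moderate(Post(post_id="1", text="hello world")))  # -> "no_action"
```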

Haugen’s documents revealed Facebook’s internal estimate that it takes action on less than 5% of hate speech on the platform. The company, which last month changed its corporate name to Meta from Facebook Inc., has said that statistic refers only to hate speech removed automatically and doesn’t include content that’s taken down after human review or that’s demoted.

Meta also reported that it removed 13.6 million pieces of content on Facebook and 3.3 million pieces on Instagram that violated its policy against violence and incitement. The company said it detected most of the contentious posts before they were reported by users. 

Meta Chief Technology Officer Mike Schroepfer said on a call with reporters that the company has deployed more generalized artificial intelligence-powered content moderation systems that can analyze posts for potential violations across multiple categories and in several languages at once.
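Conceptually, a "generalized" system of this kind scores a single post against several policy categories in one pass rather than running a separate model per policy. The snippet below is a minimal sketch of that idea under stated assumptions; the category names, scoring stub and threshold are hypothetical and do not describe Meta's implementation.

```python
# Illustrative sketch only: one multi-label scorer covering several policy
# categories at once. Categories, scores and the threshold are assumptions.
from typing import Dict, List

CATEGORIES = ["hate_speech", "violence_and_incitement", "nudity", "terrorism"]

def score_all_categories(text: str) -> Dict[str, float]:
    """Placeholder for a single multilingual, multi-label model that returns
    one violation score per policy category."""
    return {category: 0.0 for category in CATEGORIES}  # stub scores

def flag_violations(text: str, threshold: float = 0.8) -> List[str]:
    """Return every category whose score crosses the (assumed) threshold."""
    scores = score_all_categories(text)
    return [category for category, score in scores.items() if score >= threshold]

if __name__ == "__main__":
    print(flag_violations("example post"))  # -> [] with the stub scorer
```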

“The problems we are dealing with are always evolving and so is the way we approach them,” he said. 

©2021 Bloomberg L.P.