Facebook Fight Against Racists, Drugs Falters During Covid-19

(Bloomberg) --

Josh Eibelman was scrolling through his Facebook News Feed earlier this month when a post saying “Free Palestine” caught his eye. The post had been shared to a pro-Israel group that Eibelman belonged to but rarely visited. When he clicked through to the group’s page to see what was going on, he was surprised to see its membership was ballooning, and that it was suddenly full of anti-Semitic posts.

“It seemed like a very coordinated effort,” said Eibelman, 21, a college senior from the Boston area.

Eibelman reported the posts using Facebook’s automated reporting system, and was surprised again at the response. “They said they’re not able to prioritize this report because of coronavirus,” he said. Eibelman left the group, but the posts remained up for days before the group was switched from public to private earlier this week.

Facebook and Instagram users are finding that the company’s efforts to combat Covid-19 misinformation mean that other types of inappropriate posts are being overlooked due to a lack of resources. “We have fewer people available to review reports right now,” Facebook said in an automated reply to Eibelman. The response acknowledged “this could be disappointing” and suggested Eibelman handle the issue in other ways, like proactively hiding the posts or unfollowing the Page.

Facebook Inc. is instead prioritizing posts that could cause “imminent harm,” including those touting unproven Covid-19 cures, as well as content related to self-harm or terrorism. The company said it took down hundreds of thousands of such posts in March. It added warning labels to another 40 million posts that contained misleading information about the virus but were not necessarily a direct threat to user safety.

Facebook Chief Executive Officer Mark Zuckerberg acknowledged in late March that the company’s efforts to fight Covid-19 misinformation, along with staffing issues because many content moderators were unable to work remotely, would create this kind of problem. He said the worst types of coronavirus misinformation, like advisories to drink bleach as a cure, were “obviously going to create imminent harm,” while posts that might be missed, like political misinformation, wouldn’t hurt anybody. “That is just in a completely different class of content than the back-and-forth accusations a candidate might make in an election,” Zuckerberg said.

But even harmful posts are facing review delays. Eric Feinberg, vice president for content moderation at the Coalition for a Safer Web, has reported multiple instances of illegal drugs for sale on Instagram. After reporting one user who was offering oxycodone hydrochloride tablets, codeine syrup and other medications, along with masks and sanitizers, he received the same response: “We have fewer people available to review reports because of the coronavirus (Covid-19) pandemic, so we’re only able to review content with the most potential for harm.” Instagram suggested muting or blocking the posts instead -- a solution unlikely to make the site safer. “Their auto filters are not working,” Feinberg said.

A Facebook spokesperson confirmed that while some posts aren’t a priority for reviewers, they might be reviewed eventually.

“When people report content to us, we are now letting them know that we will prioritize those instances that have the greatest potential to harm our community so we can keep our reviewers safe by having them stay home,” the spokesperson said. “With fewer reviewers available, we’ve increased the use of AI to proactively remove content that violates our community standards.”

Facebook has always struggled to quickly review and remove some types of controversial content on its services, even without the distraction of a global pandemic. The company is moving much of its content-review process to automated software programs, and in areas like child pornography and terrorism propaganda, almost all violating posts are removed automatically. More nuanced areas, like hate speech and misinformation, have been harder for Facebook to handle, in part because it’s not always clear what violates a company rule and what doesn’t. Since the 2016 election, Facebook has hired thousands of people to work on content moderation and other security problems; it currently has 15,000 moderators.

Image-based content has been especially troublesome, frustrating users like Feinberg, who regularly reports posts from drug sellers and terrorist groups. And in its coronavirus response, Facebook is focusing most of its efforts on content going viral, a strategy likely to miss harmful image-based posts on Instagram.

“It makes people like me feel sort of abandoned by these platforms, especially when you don’t have a lot of other avenues of communication, when everyone has to be at home,” Eibelman said.

©2020 Bloomberg L.P.