Facebook, Instagram Pulled Millions of Posts for Violations

Facebook said that 80% of the hate speech it removed from the service in the third quarter was detected by its software systems.

The Facebook-owned Instagram logo is displayed on an Apple iPhone in this arranged photograph. (Photographer: Andrew Harrer/Bloomberg)

(Bloomberg) -- Facebook Inc. removed tens of millions of user posts in the past six months for violating its terms of service regarding issues like child pornography, drug sales and terrorism. Millions more were removed from Instagram.

That’s according to a report released Wednesday by Facebook that details how the social media company enforces its own content policies. The report, which is published every six months and for the first time includes data from Instagram, said that Facebook identifies most of the content it removes automatically using its own software algorithms.

The numbers provide a reminder of the scale at which Facebook operates. Following the report’s release, Chief Executive Officer Mark Zuckerberg said that the company -- the world’s biggest social network -- gets unfairly criticized for reporting large takedown numbers, but that the disclosures actually show Facebook is taking these problems more seriously than its competitors are.

Some people look at the amount of content Facebook takes down “and come to the conclusion that because we’re reporting big numbers that that must mean that so much more harmful content is happening on our services than others,” he said. “What it says, if anything, is that we’re working harder to identify this and take action on it and be transparent about that than what any others are.”

Some highlights from the report:

  • Facebook removed 11.6 million pieces of content related to child pornography in the quarter ended in September. Facebook says its algorithms identified 99% of that content. Instagram removed another 754,000 pieces of content, with an automatic detection rate of just under 95%. By comparison, in the first quarter, Facebook removed just 5.8 million pieces of content related to child porn or exploitation.
  • Facebook removed 4.4 million pieces of content related to drug sales in the third quarter, and another 2.3 million related to firearm sales. That was up from 841,000 and 609,000 pieces, respectively, six months earlier.
  • Facebook said that 80% of the hate speech it removed from the service in the third quarter was detected by its software systems. That’s up from 68% of the hate speech removed in the first three months of the year.
  • Terrorism content is slightly harder to identify on Instagram than on Facebook. Facebook proactively identified 98.5% of all terrorism content, including 99% of content related to al-Qaeda and ISIS. Instagram removed 92.2% of terrorism content using software algorithms.
  • Facebook also said it removed 1.7 billion fake accounts in the third quarter -- 500 million fewer than it took down in the first quarter, when it eliminated a record 2.2 billion. The company says the decline is due to better preventative measures that block “millions of attempts to create fake accounts” every day. “Because we are blocking more attempts to create fake, abusive accounts before they are even created, there are fewer for us to disable,” Facebook explained in the report.

To contact the reporter on this story: Kurt Wagner in San Francisco at kwagner71@bloomberg.net

To contact the editors responsible for this story: Jillian Ward at jward56@bloomberg.net, Molly Schuetz

©2019 Bloomberg L.P.