Facebook and Twitter Can’t Police What Gets Posted

I wouldn’t want to work at a social media company right now. With the spotlight on insurrection planning, conspiracy theories and otherwise harmful content, Facebook, Twitter and the rest will face renewed pressure to clean up their act. Yet no matter what they try, all I can see are obstacles.

My own experience with content moderation has left me deeply skeptical of the companies’ motives. I once declined to work on an artificial intelligence project at Google that was supposed to parse YouTube’s famously toxic comments: The amount of money devoted to the effort was so small, particularly in comparison with the $1.65 billion Google had paid to acquire YouTube, that I concluded the project was either unserious or expected to fail. I had a similar experience with an anti-harassment project at Twitter: The person who tried to hire me quit shortly after we spoke.

Since then, the problem has only gotten worse, largely by design. At most social media companies, content moderation consists of two components: a flagging system that depends on users or AI, and a judging system in which humans consult established policies. To be censored, a piece of content typically needs to be both flagged and found in violation. This leaves three ways that questionable content can get through: It can be flagged but not a violation, a violation but not flagged, and neither flagged nor considered a violation.
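For readers who think in code, that gate can be reduced to a single boolean condition. Below is a minimal sketch, in Python, of the flag-and-judge pipeline and the three ways content slips past it; the Post class and its fields are hypothetical stand-ins for illustration, not any platform’s actual moderation interface.

```python
# Minimal sketch of the two-stage moderation gate described above.
# Names (Post, flagged, violates_policy, should_remove) are hypothetical,
# chosen only to model the logic, not any company's real system.

from dataclasses import dataclass


@dataclass
class Post:
    text: str
    flagged: bool          # flagged by users or an AI classifier
    violates_policy: bool  # judged against written policy by a human reviewer


def should_remove(post: Post) -> bool:
    # Removal requires BOTH a flag and a finding that policy was violated.
    return post.flagged and post.violates_policy


# The three ways questionable content gets through the gate:
examples = [
    Post("flagged, but no rule against it", flagged=True, violates_policy=False),
    Post("violates policy, but nobody flagged it", flagged=False, violates_policy=True),
    Post("neither flagged nor against policy", flagged=False, violates_policy=False),
]

for post in examples:
    # All three print "removed: False" -- each one stays up.
    print(f"{post.text!r} -> removed: {should_remove(post)}")
```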

Plenty falls through these cracks. People who create and spread toxic content spend countless hours figuring out how to avoid getting flagged by people and AI, often by ensuring it reaches only those users who don’t see it as problematic. The companies’ policies also miss a lot of bad stuff: Only recently, for example, did Facebook decide to remove misinformation about vaccines. And sometimes the policies themselves are objectionable: TikTok has reportedly suppressed videos showing poor, fat or ugly people, and has been accused of removing ads featuring women of color.

Time and again, the companies have vowed to do better. In 2018, Facebook’s Mark Zuckerberg told Congress that AI would solve the problem. More recently, Facebook introduced its Oversight Board, a purportedly independent group of experts who, at their last meeting, considered a whopping five cases questioning the company’s content moderation decisions — a pittance compared with the firehose of content that Facebook serves its users every day. And last month, Twitter introduced Birdwatch, which essentially asks users to write public notes providing context for misleading content, rather than merely flagging it. So what happens if the notes are objectionable?

In short, for a while AI was covering for the inevitable failure of user moderation, and now official or outsourced moderation is supposed to be covering for the inevitable failure of AI. None of these is up to the task, and events such as the Capitol riot should put an end to the era of plausibly denying responsibility. At some point these companies need to come clean: Moderation isn’t working, nor will it.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Cathy O’Neil is a Bloomberg Opinion columnist. She is a mathematician who has worked as a professor, hedge-fund analyst and data scientist. She founded ORCAA, an algorithmic auditing company, and is the author of “Weapons of Math Destruction.”

©2021 Bloomberg L.P.