Facebook's Oversight Board Is a Convenient Flak-Catcher

The easiest way to avoid taking responsibility for an unpopular decision is to blame an irrefutable, irrational authority. For example: “I’d love to help you move this weekend, but my wife won’t let me,” or “We’d love to reinstate Donald Trump’s Facebook account, but it’s up to the Facebook Oversight Board.”

Facebook’s Oversight Board was established last year as a “Supreme Court” for people to challenge the company’s content moderation decisions. It recently announced that it had overturned four content removal rulings initially made by the platform. 

In one case, Facebook removed a post from a user in Myanmar that included a widely shared image of a Syrian toddler who drowned attempting to reach Europe, along with a pejorative comment about Muslims. While the post might be offensive, the board found that it did not advocate hatred or intentionally incite any form of imminent harm, and ordered Facebook to restore it. In another instance, the board upheld Facebook’s decision to remove a post about Azerbaijan that included a racial slur.

The cases illustrate the difficulty of maintaining a content moderation policy that satisfies two billion users distributed across the planet.

Instead of highlighting just how impossible this effort is, the recent actions only raise expectations that tech companies should take responsibility for behavior that’s schemed up on their platforms. On CNBC, former SEC Commissioner Laura Unger compared the run on GameStop to the Jan. 6 riots at the Capitol, attributing both events to a social media-driven frenzy.

Perhaps anticipating regulatory attention, the platforms took the initiative to shut things down. Last week, the online messaging platform Discord banned the r/WallStreetBets chat server at the height of the GameStop short-squeezing frenzy for “hateful and discriminatory content” — a vague accusation that can be applied to any sizable forum on the internet. Reddit briefly closed the r/WallStreetBets subreddit, and Facebook disabled a discussion group called Robinhood Stock Traders.

It’s tempting to demand that social media companies solve the societal problems amplified on their platforms, but we might be overestimating their capabilities. In 2018, data analytics firm Cambridge Analytica was accused of exploiting Facebook user data to manipulate voters. The allegations led to mass outrage, a Netflix documentary and the formation of Facebook’s Oversight Board. But after a three-year investigation, Britain’s Information Commissioner’s Office found no evidence of election interference and concluded that Cambridge Analytica’s impact had been greatly exaggerated.

One obstacle in the debate over social media’s responsibility for problems that supposedly never existed before has been the lack of historical parallels. But consider the role banks have played in the drug wars of the past 50 years. There wasn’t much enthusiasm for laws to stop money laundering until the 1970s, when illicit drug use escalated and a War on Drugs was declared. Over the next decade, banks reluctantly conformed to reporting requirements as the media engaged in a public pressure campaign to name and shame offending institutions.

Even though international banks played a key role in the illicit flow of money, financial policing failed to control a problem that was endemic. Still, law enforcement agencies didn’t simply outsource their jobs to the banks and walk away. Regulators provided specific rules and guidelines for identifying suspicious activity.

Expecting Facebook to solve issues like hate speech and domestic terrorism is like expecting financial institutions to solve drug trafficking. In both cases, the companies take the blame for an institutional failure that happened elsewhere.

While social media platforms may not enjoy their role as a whipping boy, they are somewhat motivated to maintain the illusion of having significant influence over user behavior. They are in the business of advertising, after all. If Facebook can’t steer users away from conspiracy theories, how can the platform be trusted to display a targeted ad for your business?

A more honest assessment might point out the limited efficacy of content moderation. Four years ago, Mark Zuckerberg dismissed the allegation that fake news influenced the presidential election as a “pretty crazy idea.” “Voters make decisions based on their lived experience,” he continued.

Facebook has since taken up the task of fact-checking Covid-19 posts and preemptively blocking problematic news stories. It’s not clear whether something prompted Zuckerberg to change his mind or whether he simply caved to public pressure.

But no matter how much problematic content Facebook removes, it cannot restore the loss of trust in mass media or educate citizens on how to distinguish fact from fabulism. A social media company can’t be expected to protect election integrity, prevent civil unrest and ensure the orderly function of the markets.

So is it any wonder that Facebook wants an Oversight Board to catch the flak for stuff it can’t control, the same way Congress and the media and the SEC want to use tech companies as their own flak catcher?

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Elaine Ou is a Bloomberg Opinion columnist. She is a blockchain engineer at Global Financial Access in San Francisco. Previously she was a lecturer in the electrical and information engineering department at the University of Sydney.

©2021 Bloomberg L.P.
