It’s Time to Grant Content Moderators Full Employment

(Bloomberg Businessweek) -- Content moderation has made headlines lately as Twitter cracked down on several of President Donald Trump’s tweets regarding mail-in ballots and protests against police misconduct. Facebook, meanwhile, has refrained from taking action on contentious presidential posts.

These decisions by the social media giants are exceptional, as they involve top executives groping for ways to handle a singular user who fires off his incendiary missives from 1600 Pennsylvania Avenue. Ordinary content moderation—the process of deciding what stays online and what gets removed—looks very different, although it is no less important. Moderators generally evaluate suspect posts that have been reported by users or flagged by automated systems.

Without this workaday activity, Twitter, Facebook, and YouTube would be inundated not just by spam but by personal bullying, neo-Nazi screeds, terrorist beheadings, child sexual abuse, and other harmful content. Despite the centrality of routine content moderation, however, major social media companies have marginalized the people who do this work, outsourcing the vast majority of it to third-party vendors in the U.S., India, the Philippines, and elsewhere. The vendors’ employees toil in front of computer workstations in ordinary-looking office buildings, but their wages and benefits typically don’t come close to those of Silicon Valley workers.

Facebook, the largest social media platform, with 2.5 billion users and 2019 revenue of $71 billion, provides an apt case study. The overwhelming majority of Facebook’s 15,000 moderators work for the kind of vendors that run customer-service call centers and back-office billing systems.

Three problems stem from this cost-saving strategy, according to a new report published by the Center for Business and Human Rights at New York University. First, the moderators’ peripheral status undercuts their access to adequate counseling and medical care for the psychological side effects of continual exposure to toxic online content. Second, the frequently chaotic outsourced environments in which moderators work impinge on their decision-making. And third, in parts of the world distant from Silicon Valley, the marginalization of content review has led social media companies to pay inadequate attention to how their platforms have been misused to stoke ethnic and religious hatred.

There’s increasing recognition that content review can take a mental toll. In May, Facebook settled a class-action lawsuit filed on behalf of more than 10,000 current and former U.S. moderators who alleged that “as a result of constant and unmitigated exposure to highly toxic and extremely disturbing images,” they had suffered “significant psychological trauma and/or post-traumatic stress disorder.” Without admitting wrongdoing, Facebook agreed to pay $52 million to cover the plaintiffs’ medical costs and attorneys’ fees—a deal that works out to an average payment of only several thousand dollars per person. In addition, the company agreed to ensure that its third-party vendors will provide more access to mental health coaching from licensed professionals.

A more direct, if more expensive, way of making sure moderators get the care they need would be to bring them on board as full Facebook employees, with Silicon Valley salaries and benefits. Workers at the company’s Menlo Park headquarters have access to free on-site counseling at an in-house facility offering “individual therapy, psychiatry, groups, and classes,” according to the company website. All full-time Facebook employees have coverage for mental health therapy through the company’s several medical insurance programs.

The second shortcoming of outsourcing is a tendency toward unsettled workplaces and distracted employees. Online tech magazine the Verge has described harrowing conditions at several U.S. moderation sites. Debrynna Garrett worked in Tampa, Fla., for Cognizant, a former Facebook vendor. In interviews, she and other former moderators describe a work environment marked by almost continual disputes with quality auditors, who also work for the third-party vendors and double-check moderators’ decisions. These clashes, which turned on whether the platform’s “community standards” had been violated, led to loud arguments, physical confrontations, and occasionally even a fistfight, says Garrett, who worked for Cognizant until March 2020. “It is incredibly distracting to have that going on day after day,” she says. Garrett is a plaintiff in a separate suit against Facebook and Cognizant that is pending in federal court in Tampa.

Facebook and Cognizant deny liability in the Tampa case. A Cognizant spokesperson says that the company “strives to create a safe and empowering workplace for its employees around the world. Like any large employer, Cognizant routinely and professionally responds to and addresses general workplace and personnel issues in its facilities.” The company announced in October that it would exit the moderation business because the activity no longer fits with its “strategic vision.”

Further serious harms have spread, at least partly as a result of Facebook’s failure to ensure adequate moderation in non-Western countries experiencing varying degrees of turmoil. In these places, the platform and its affiliated messaging service, WhatsApp, have become important means of communication and advocacy, as well as vehicles to incite hatred and, in some instances, violence. Myanmar is the most notorious example. In March 2018, United Nations investigators found that hate speech on Facebook’s platform played a “determining role” in the military-led ethnic cleansing of Myanmar’s Rohingya Muslim minority.

In May, Facebook released human rights assessments it commissioned from outside analysts. In its look at Sri Lanka, the consulting firm Article One Advisors concluded that in 2018, “the Facebook platform contributed to spreading rumors and hate speech, which may have led to ‘offline’ violence.” Last year’s bloody Easter bombings in Sri Lanka followed Islamist Facebook messages calling for attacks against non-Muslims.

Facebook says that it has learned hard lessons in at-risk countries. The company has created a Strategic Response team, improved technology that can pick out hate speech, and changed policies to facilitate removal of misinformation and incendiary rumors. It has also added moderators to focus on countries such as Myanmar and Sri Lanka.

But Facebook is retaining its outsourcing model. Company executives emphasize the advantages of having vendors that are able to hire moderators around the globe who speak local languages and work in all time zones—and can be shifted flexibly to concentrate on the latest potential crisis.

All these advantages, however, can be achieved with full-fledged employees, including those based around the world but supervised directly by Facebook. Content moderation would improve if the people doing it received the pay, benefits, stature, and oversight available to regular social media employees. This would require a significant investment by Facebook and its rivals. The added outlays would reflect the true cost of doing business responsibly as a global social media platform.
 
Barrett is the deputy director of the NYU Stern Center for Business and Human Rights.

©2020 Bloomberg L.P.