
What Social Media Needs Is More Humans


(Bloomberg Opinion) -- Rare is the week that doesn’t bring some new controversy over someone or something being banned from Twitter or Facebook for being too offensive. (Latest: a Led Zeppelin album cover.) As regular readers know, I prefer more speech to less speech, but this column isn’t about what content rules private companies should enforce. Today I’m wearing my fair-process hat. These mighty controversies over kicking users off social media would be mightily reduced if there were a better process for making the decisions.

And I have one. I can summarize my proposal this way: Human at the front end, human in the middle, human at the back end. Each phase has two rules. Here’s how things would work.

“Human at the front end”:

Rule 1 — Human complaint. No initial action should be taken against a user except in response to a complaint. The complaint must come from a human being, not an algorithm. The human being, moreover, must not be an employee or associate of the provider itself. In this way, any sense that the provider is targeting a particular point of view is avoided at the outset. (The complaint should also cite a particular offending post or set of postings rather than a general objection.)

Rule 2 — Notice of specific charges. If, after reviewing the complaint, the provider decides to move forward, it must provide the accused user with a statement of the specific content that potentially violates the rules in question.(1)

“Human in the middle”:

Rule 3 — Human judges. Actual human beings must decide whether to proceed with the suspension or banning process, and other, separate human beings must be assigned to judge the merits of the case.

Rule 4 — Some kind of hearing.(2) The accused user must have the opportunity to defend the questionable content before a human judge (or, better, two human judges). During the process of adjudication, there will be no suspension of the user’s ability to post and no deletion of any posts.(3)

If both human judges agree that the user should be sanctioned, they should send their recommendation to a third judge, not involved in the case, for approval and action.

“Human at the back end”:

Rule 5 — Human judgment. The judgment delivered to the user should be signed by a human being (but see below), and must give clear information about whom to contact if the user seeks to appeal. (That is, there must be something other than a general email from a department.)

Rule 6 — Specific conclusions. A judgment that involves suspension of an account or other restrictions must state with specificity which content constituted the violation and how. There should be an end to conclusory messages about having run afoul of the provider’s policies.(4)

*

What’s the advantage of all this?  For one thing, when users deal with people rather than a faceless monolith, they’re more likely to believe that their concerns are being taken seriously. When one’s account is being suspended, this is no small thing.

More important, these rules would slow things down by making both the complaint process and the adjudication process time-consuming. A user who’s upset might decide not to bother; judges might decide that a particular offense is too minuscule to be worth the bother. This would be to the good, because more speech would remain untouched.

At the same time, none of this would eliminate algorithms from the process. The human judges would use them, not only to collect data on the accused’s previous posts — is this a pattern or a one-off? — but also to compare the mendacity or viciousness of the posts with those of other users, particularly those of different political persuasions. (The difference in politics will be easy for the software to detect.) But the algorithms would be tools that human beings use to resolve particular complaints, not crawlers set loose on the servers to find violations on their own.

As to the judges themselves, the provider would do well to task people of varying political stripes with this role. And if the provider’s own employees are exercised over the politics of some judges, the provider should think twice about whether to restrict content at all (other than for harassment or personal attacks). After all, if the employees can’t accept the possibility that other employees disagree with them, it’s hard to imagine how they could possibly be fair judges of content.

You might reasonably object that letting the accused know the identity of the judges might open the judges themselves to harassment and perhaps doxxing. Fair point. But the provider can make harassment of the judges, by any user, a violation per se, much as a court can punish contempt. And the analogy is not unimportant. Trial judges, after all, manage to sign their names to decisions that do far more than simply kick people off a website.

Still, you might think that it’s too much to ask the provider to disclose the names of those responsible for a particular ban. Certainly the proposal can be modified to keep the identities of the human judges secret, as long as the accused still has a way of contacting them directly, as opposed to sending a note to an amorphous department.

But the prospect is worrisome. I oppose, deeply, the exercise of anonymous power. There is a certain confidence in the rightness of one’s views that stems from signing one’s name to what one has to say. Judges do it. So do columnists. It’s a way of standing behind one’s views, no matter how controversial.

And it’s very human.

That’s why if you’re bold enough to decide who gets to speak and who doesn’t, you should have the courage to sign your name.

*

(1) One possibility at this point is to allow the user to avoid the proceeding entirely and emerge with a clean slate simply by deleting the content in question.

(2) With apologies to the brilliant Judge Henry J. Friendly, whose classic article “Some Kind of Hearing” lends Rule 4 its title, and the 60th anniversary of whose appointment to the federal bench we celebrate this year.

(3) An exception can be made if the user is dilatory or uncooperative during the hearing process.

(4) Some might argue for a third back-end rule that would allow for proceedings against users who file multiple complaints that lead to no sanctions. The idea would be to discourage litigious but frivolous plaintiffs from hunting for possible violations to complain about. The user subject to such a proceeding would also be entitled to some kind of hearing. Nevertheless, I think a rule of this kind would be a bad idea. For one thing, it might swiftly tax the provider’s human resources. For another, only rarely does a court punish the litigious pro se plaintiff. Most important, the rule wouldn’t be necessary: the humans who review complaints would soon get a sense of which users filed a steady stream of them, and would likely take those complaints less seriously.

To contact the editor responsible for this story: Michael Newman at mnewman43@bloomberg.net

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Stephen L. Carter is a Bloomberg Opinion columnist. He is a professor of law at Yale University and was a clerk to U.S. Supreme Court Justice Thurgood Marshall. His novels include “The Emperor of Ocean Park,” and his latest nonfiction book is “Invisible: The Forgotten Story of the Black Woman Lawyer Who Took Down America's Most Powerful Mobster.”

©2019 Bloomberg L.P.