The Free Speech Debate About Social Media Is Broken

The U.S. Supreme Court is strongly committed to the “marketplace of ideas.” It tends to believe, in the words of Justice Louis Brandeis, that the remedy for falsehoods and fallacies is “more speech, not enforced silence.”

If you believe that, you might also believe that if people lie about Covid-19, the 2020 presidential election, a politician, a journalist, a neighbor — or you or me — nothing can be done. Sure, you can answer with “counterspeech”: the truth. And that’s it.

The problem is that, in many cases, counterspeech is ineffective. Lies lodge in the human mind. They are like cockroaches: You can’t quite get rid of them.

This psychological reality raises serious questions about current constitutional understandings and also about the current practices of social media platforms, including Facebook, YouTube and Twitter, in trying to stop falsehoods. Ironically, those understandings, and those practices, may themselves be based on a mistake of fact — something like misinformation.

In United States v. Alvarez, decided in 2012, the Supreme Court appeared to rule that lies and lying are protected by the First Amendment. The court struck down a provision of the Stolen Valor Act, which made it a federal crime to claim, falsely, that you had won the Congressional Medal of Honor. According to the court, that provision was unconstitutional; the government cannot punish that lie.

As the court put it: “A Government-created database could list Congressional Medal of Honor winners. Were a database accessible through the Internet, it would be easy to verify and expose false claims.” In a nutshell: The right remedy for lies is more speech, not enforced silence.

The Alvarez case involved a boastful lie about oneself, and it is not entirely clear how it applies to vicious lies about others, or to lies about health, safety and elections. In limited circumstances, the justices have allowed civil actions for defamation, even when a public figure is involved. But in general, the court has been reluctant to allow any kind of “truth police.” Social media providers, prominently including Facebook, have felt the same way.

But the broad protection of lies, defended by reference to the marketplace of ideas, rests on an inadequate understanding of human psychology.

Whenever someone tells us something, our initial inclination is to believe it. If we are told that it is going to snow tomorrow, that the local sports team won today’s football game, or that a good friend just had a heart attack, we tend to think we have been told the truth.

Sure, you might not believe a source that you have learned to distrust. But most of the time, we assume that what we hear is true.

It’s called “truth bias,” and it extends more broadly than that. Suppose you hear something that you really know to be false. Or suppose that right after you are told something, you are explicitly told, “That was a lie!” For example, the falsehood might be that a vaccine doesn’t work, that a corporate executive engaged in sexual harassment, that an aspiring politician was once a member of the Communist Party, or that a prominent sociologist is a cocaine addict.

Even if you are informed that what you have heard is false — a joke or an act of malice — you are likely to have a lingering sense that it is true, or at least that it might be true. That impression can last a long time. It will probably create a cloud of suspicion, fear or doubt. It can easily affect your behavior.

It might lead you to fear or dislike someone, or to believe there’s something wrong with that person, even if there really isn’t. You might think that on balance, a statement is probably false.

But “probably false” doesn’t mean “definitely false.” It means “maybe true.”

University of Pennsylvania psychologist Paul Rozin has undertaken some fascinating experiments that help explain what’s going on here. In one of his experiments, people were asked to put sugar from a commercial sugar package into two similar brown bottles. Then people were given two labels, “sugar” and “sodium cyanide,” and were asked to put them on the two bottles, however they liked.

After having done that, people were reluctant to take sugar from the bottle labeled “sodium cyanide” — even though they themselves had affixed the label! When the label “cyanide” is seen on a bottle, people don’t want to use what’s inside it, even if they know, for a fact, that it’s only sugar. That helps explain why lies and falsehoods are so corrosive; some part of us believes them, even if we know we shouldn’t.

Lies and falsehoods, including conspiracy theories, often have a lasting harmful effect, long after they have been successfully debunked. That conclusion has strong implications in practice. It suggests, for example, that social media providers should not be at all confident that corrections, labels, warnings and clarifications will undo the effects of lies.

A far more effective approach would be to reduce the likelihood that the most harmful lies circulate in the first place: not necessarily by removing them, but by reducing their salience and visibility (in, for example, Facebook’s News Feed). Facebook should be doing more of that. And when serious harms are inevitable, taking lies down, or not allowing them up at all, would of course be more effective still.

No one — least of all public officials — should assume the role of a Ministry of Truth. But informed by psychological research, some social media providers have improved their policies for dealing with lies about Covid-19 — sometimes by taking them down.

That’s strong medicine, usually to be avoided. But when there’s a serious threat to health or safety, or to democracy itself, it’s just what the doctor ordered.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Cass R. Sunstein is a Bloomberg Opinion columnist. He is the author of “Too Much Information” and a co-author of “Nudge: Improving Decisions About Health, Wealth and Happiness.”

©2021 Bloomberg L.P.