The Big Question: How Should Internet Speech Be Regulated?

This is one of a series of interviews by Bloomberg Opinion columnists on how to solve today’s most pressing policy challenges. It has been condensed and edited.

Romesh Ratnesar: In the wake of the Jan. 6 attack on the U.S. Capitol, Twitter permanently banned Donald Trump from its platform and suspended the accounts of many others accused of inciting violence. Several other companies, including Facebook, Google and Apple, took similar steps. As a former Facebook employee, government official and author of the book “Terms of Disservice: How Silicon Valley is Destructive by Design,” you’ve been critical of the way social-media companies have handled the challenge of moderating content and combating misinformation. Do you think they made the right decision in “de-platforming” Trump and others?

Dipayan Ghosh, co-director, Digital Platforms & Democracy Project, Harvard Kennedy School: I think it was the right decision for Twitter to suspend Trump’s account in perpetuity. Other companies seem to be following that lead, including Facebook. The main outstanding decision is whether Facebook will permanently de-platform Trump; it has referred the case to its third-party oversight board to decide, on the basis of the content he’s pushed out and his behavior in the real world. I think this is a case where the company probably could have come to this kind of decision much earlier, but it is what it is. Going forward, what I would hope to see is companies making more of these kinds of decisions and holding political leaders accountable and responsible for the things they say and do.

The biggest companies out there — the Facebooks and Googles and other companies that have a huge financial infrastructure behind them — know that this is a critical decision for their business and for their business models. Their biggest market revenue-wise is the United States, so it makes complete sense for these companies, viewed through a purely commercial lens, to be very careful about these kinds of decisions and refer them to third parties. Ultimately, Facebook and Google have become too important in our media ecosystem not to deserve a more rigorous form of media regulation. I think it’s time, and it may be coming very soon.

RR: Are there any potentially negative consequences from this kind of de-platforming? What about the risk of pushing extremist voices onto platforms where they might have a smaller audience, but are actually harder to track and monitor? 

DG: I think there is some risk in pushing extremists off the mainstream social media platforms and onto platforms like Parler. But there is also the benefit of extremists not having the capacity to engage and group together on Facebook and Google and Twitter. If you’re someone who wants a more regulated information and media diet, then you should be happy about this. We don’t necessarily want to see extremism and hateful conduct and incitement to violence. But all that said, as extremists and people in very far-right or far-left political factions move to smaller platforms, it’s likely that they’re going to become even more violent and more hateful, and those sentiments could manifest in the real world in some way. I think as a society we need to improve our capacity to listen to people and respect people, and as a government we need the capacity to understand what’s happening on any given information network.

RR: Some critics of the technology industry, including German Chancellor Angela Merkel, have expressed concern about unelected, unaccountable companies making these decisions of massive importance related to free expression. She and others think that this power shouldn’t be in the hands of the tech companies, and instead should be the product of a more public and transparent process. 

DG: I think there is some validity to that argument. As a global society, we don’t necessarily want executives at a company like Facebook or Twitter or YouTube to decide what should be exposed to the mainstream and what all of society has the capacity to see. The reason is that these are private companies that are presumably acting in their corporate interests. The moment we have pure corporate interests defining what people can or cannot see, we will necessarily see impingements on freedom of speech and freedom of political expression. I do think that it was right for Twitter and the other companies to de-platform Trump. But the sentiment underlying Merkel’s comments is, I think, quite sound. So what do we do about it? Well, I think these companies need to increasingly shift the oversight over these kinds of decisions to external third parties that have the public interest in mind and command public trust. And we should also think about regulating the practice of shifting decision-making responsibility to outside third parties, and perhaps imposing liability on companies for negative impacts that emerge on the rest of society.

RR: Would that include revoking Section 230 of the Communications Decency Act, which gives social media companies protection from liability for content on their platforms?

DG: I think revoking Section 230 would be better than leaving it as is. And doing so wouldn’t necessarily mean that all of a sudden, all the content in the world would come down. Instead, companies would start to become more careful and concerned about the emergence of hateful content, disinformation and misinformation — I think companies would develop systems to automatically detect and suppress that kind of content. Again, I’m talking about content that incites violence, that incites hatred, that incites voter suppression. These are the kinds of things that companies would become liable for. I don’t think that would be a bad thing. We would still see the vast majority of public discourse over social media take place in a free and open exchange of ideas, and simply have companies that are designed to cut out the clearly offensive and illegal content.

A lot of people ask, what about the fact that small companies — like Parler — won’t necessarily have the capacity to conduct such content moderation at scale? Well, we can have a graded approach, where larger companies bear greater liability for clear violations, or a startup exception to these kinds of regulations, where companies with, let’s say, under 50 million users are not subject to the same liability.

RR: You served in President Barack Obama’s administration before joining Facebook, at a time when there was a lot more enthusiasm about the potential for close cooperation between the U.S. government and Silicon Valley. Do you think that enthusiasm was misplaced?

DG: I was not at all concerned about the relationship between the Obama administration, or the whole of government, and companies like Google or Facebook. The Obama administration was quite transparent in its dealings. I can’t think of any case in all of my work in technology and economic policy during the administration where any decision was really swayed by the interests or advocacy of a particular company. I think we have to acknowledge, of course, that 2015, 2016, and even 2017 were a long way off from where we are today. The negative sentiment around companies like Facebook simply was not there — and certainly not in the first Obama term, which would have been the time when we could have pushed rigorous sorts of regulations. There wasn’t really a strong feeling that regulating the industry was a significant priority, I think. All that being said, the Obama administration did actually launch a comprehensive legislative proposal on digital privacy. And it came out in a place where it was attacked by both industry and public advocates — which we thought was a good thing, in the sense that it reflected a measured approach to bringing about a comprehensive baseline privacy law in the United States. But it was something that Congress decided not to touch.

RR: Did your experiences at Facebook make you think differently about how the government should approach its relationship with the technology sector?

DG: I saw Facebook as a great opportunity to learn about how the industry works, and to be at the cutting edge of how one of the most important Internet companies deals with matters of privacy. Facebook might be the company that most implicates consumer privacy by default, because of the nature of its business. Looking back on my time there, particularly in regard to how data is used to target political advertising at audience segments and how that impacts public opinion, I think the company could have done better. Looking forward, Facebook should give consumers more authority and more autonomy over the ways their data is collected, how it’s used, and the kinds of advertising and content that they will be subjected to. Instead of all of those processes being unilateral decisions undertaken by the company in the corporate interest, there should be more autonomy and decision-making power handed over to users. I think that’s the core issue that I have with the business model. And that really applies not only to Facebook, but also to Google and other technology firms today.

RR: Obviously it would be ideal if the companies could undertake these reforms themselves. Assuming they don’t or remain reluctant to do so, what kind of new regulatory framework is needed to make sure those changes are put in place?

DG: We need to look at the business model of this industry as a machine. It’s a machine that collects lots of data on an uninhibited basis, uses algorithms to curate content and to target ads and profile people, and engages in this aggressive platform-growth push that very often makes use of questionable corporate development decisions. And it’s that business model — whether you look at Facebook, or Google or any other dominant Big Tech firm — that instigates these negative externalities. They include the disinformation problem where the Internet Research Agency in Russia decides to engage targeted audiences in the U.S. with specific conspiratorial content, or the spread of hate speech, or the spread of misinformation concerning Covid-19. What we need is a set of regulations that contains those negative externalities and throws them into the waste bin. And to do that, I think we can impose greater accountability over offending content through Section 230 reform. Where companies collect lots of data, we need better privacy laws that give consumers rights over what companies can collect. We need consumers to have rights over retention, deletion, access and transparency. When it comes to these algorithms, we need transparency over how they work. And finally, to address platform-growth behavior, we need better policies to protect competition in the digital economy. If we can push privacy, transparency, competition and perhaps Section 230 reform, I think we’ll have a much fairer and more democratic set of media regulations to set us on the right path. 

RR: So what’s the bottom line? How can policy makers protect against and prevent the abuse of these platforms, while still preserving the principles of an open internet?

DG: I think we’ve come to a stage where our new-media ecosystem is driving damage and causing harm to society. And so we need to rethink media regulation entirely. To truly correct the harms that the Internet has engendered and spread, we need to think about the business model underlying the Internet as it interfaces with consumers. We need to think about how that business model works and how it implicates public interests, including democratic interests. And where we find the gaps that cause those kinds of harms, we need to close them — through privacy laws, through better competition and through transparency. Those are the kinds of things that I hope Congress and the new administration can think about in the way forward.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Romesh Ratnesar writes editorials on education, economic opportunity and work for Bloomberg Opinion. He was deputy editor of Bloomberg Businessweek and an editor and foreign correspondent for Time. He has served in the State Department, and is author of “Tear Down This Wall.”

©2021 Bloomberg L.P.