Facebook, Twitter, Google CEOs Split Over Social Media’s Shield

Zuckerberg Supports Section 230 Reform Ahead of House Hearing

The leaders of the three most popular social media platforms are at odds over the thorniest public policy question they face: who’s responsible for policing the content that appears on their pages.

At issue is a decades-old law that shields social media companies from liability for content posted by users. The heads of Facebook Inc., Alphabet Inc.’s Google and Twitter Inc. are all slated to appear before a House panel Thursday to testify about the spread of false information that contributed to the deadly Jan. 6 Capitol attack.

The executives outlined their positions in prepared remarks ahead of the hearing. Facebook Chief Executive Officer Mark Zuckerberg supports reforming the measure, known as Section 230 of the Communications Decency Act, while Google’s Sundar Pichai remains averse to any changes to the legal shield. Twitter CEO Jack Dorsey defended the company’s handling of misinformation.

Zuckerberg called for making liability protection for internet platforms conditional on having systems in place for identifying and removing unlawful material.

The liability shield “would benefit from thoughtful changes to make it work better for people, but identifying a way forward is challenging given the chorus of people arguing -- sometimes for contradictory reasons -- that the law is doing more harm than good,” Zuckerberg said in his written testimony.

He added that platforms “should not be held liable if a particular piece of content evades its detection -- that would be impractical for platforms with billions of posts per day.” Under Zuckerberg’s proposal, a third party would determine whether a company’s systems are adequate to handle that volume.

Google CEO Pichai signaled that he is opposed to any changes to the law. Reforming it or repealing it altogether “would have unintended consequences -- harming both free expression and the ability of platforms to take responsible action to protect users in the face of constantly evolving challenges,” he said in his written testimony.

Instead, Pichai wants companies to focus on “developing content policies that are clear and accessible,” such as notifying users if their work is removed and giving them ways to appeal such decisions.

Twitter’s Dorsey touted the company’s decisions to apply labels to misleading posts about vaccines and the election. Twitter has permanently banned former President Donald Trump and is soliciting feedback on how to handle world leaders who violate its rules, while Facebook is awaiting a verdict from its oversight board after kicking Trump off its platform. Google suspended Trump’s YouTube channel in the aftermath of the deadly attack on the Capitol.

Dorsey warned that “content moderation in isolation is not scalable, and simply removing content fails to meet the challenges of the modern Internet.” Twitter is experimenting with new approaches to crowdsourcing the policing of speech online, including a project called Birdwatch, which lets users add notes to tweets that are misleading or inaccurate.

“Every day, millions of people around the world Tweet hundreds of millions of Tweets, with one set of rules that applies to everyone and every Tweet,” Dorsey said. “We built our policies primarily around the promotion and protection of three fundamental human rights -- freedom of expression, safety, and privacy.”

Zuckerberg, Pichai and Dorsey are set to testify before the U.S. House Committee on Energy and Commerce at noon Thursday.

Following the Jan. 6 riots, there’s been growing bipartisan interest in holding tech companies accountable for certain hate speech and extremist content on their platforms.

Politicians on both sides of the aisle have proposed bills that would weaken Section 230 to encourage the platforms to change their content moderation practices. Democratic senators, led by Senator Mark Warner of Virginia, introduced the SAFE TECH Act that would hold companies liable for content violating laws pertaining to civil rights, international human rights, antitrust and stalking, harassment or intimidation.

And a bipartisan bill -- the PACT Act -- from Democratic Senator Brian Schatz of Hawaii and Republican Senator John Thune of South Dakota would require large tech companies to remove content within four days if notified by a court order that the content is illegal.

President Joe Biden has said he is interested in revoking Section 230, arguing that internet platforms have failed to curb misinformation responsibly. Former President Donald Trump also called for revoking Section 230, citing unfounded accusations that social media platforms censor conservative viewpoints.

©2021 Bloomberg L.P.