Facebook, Twitter, YouTube Keep Misinformation in Check, For Now

Tech platforms faced an unprecedented challenge this week: prevent misinformation from spreading, and tamp down efforts to undermine the U.S. election as it unfolded.

Facebook Inc., Twitter Inc. and Google’s YouTube failed that test in the last presidential contest, when foreign actors mounted campaigns to influence voting. Over the last four years, the tech giants have created dozens of policies, rules and products to fight misinformation and fake accounts.

Those policies have held up so far. It may take months before we know for sure how much misinformation circulated online this year; most of the foreign interference efforts from 2016 weren’t identified until much later. But Tuesday's election—which spilled over into Wednesday—went off without a major, viral misinformation meltdown, an obvious win for platforms hell-bent on preventing one.

Of course, conspiracy theories did crop up, even if they failed to gain traction. False narratives included the idea that poll workers in Pennsylvania were trying to steal the vote, and that workers in Arizona invalidated ballots by asking people to fill them out with Sharpie pens. But such posts drew shares and likes only in the thousands, not the millions, according to data from Zignal Labs.

The greatest test could be yet to come. Here’s how the three key platforms have performed:

Twitter
Jack Dorsey's short-form blogging platform aggressively policed President Trump's account in the run-up to Election Day—and took stronger action than other social-media companies once votes had been cast. Twitter hid or labeled six separate tweets from the president starting early Wednesday morning.

In the first instance, Trump claimed Democrats were trying to “steal” the election. Twitter hid the post within minutes. In addition to adding a warning label, it limited likes and comments, and if users wanted to share it, they had to add their own commentary.

Another Trump post on Wednesday, in which he claimed Democratic-controlled states had “surprise ballot dumps,” took longer to label but was eventually tagged as well. Both posts violated Twitter’s rules against undermining election results. Biden has tweeted fewer times than the president since Tuesday night, and none of his posts have been flagged or labeled.

In one post Twitter did not hide, Trump claimed a “big WIN!”—even though not all votes had been counted. Twitter has a policy against candidates claiming victory prematurely, but did not flag this tweet because the claim was too vague, according to a spokesman.

Even though it restricted tweets from Trump decrying fraud and declaring victory, the company did not limit a live video in which the president said many of the same things. A spokesman said that simply sharing the video, from a speech early Wednesday morning, wasn’t a rules violation. Since many news channels carried Trump’s comments live and also shared them, Twitter didn’t want to police that video, the spokesman added.

The result was a reminder of how confusing such policies can be, and that companies ultimately have discretion in enforcing their somewhat vaguely worded rules.
Kurt Wagner

Facebook
Facebook was never going to be able to root out all the misleading or dangerous content going viral on its platform. The company recognized that early, and came up with policies that would affect all posts—not just the stuff that violates rules.

Everything on Facebook that was about the election—including all posts with the word “vote” in them—got a label directing people to the company’s information center with legitimate information from non-partisan sources. Banners at the top of Facebook and Instagram told people to vote, to stay in line if they were at a polling place, and to recognize, after polls closed, that a winner hadn’t been called.

Facebook telegraphed clearly to users in the weeks before the election that a winner wasn’t likely to be decided right away, which lessened the surprise Tuesday night. And Facebook's post-election ban on political ads proved prescient: because the company doesn’t fact-check political ads, they could have been used to spread misleading claims about the results.

It’s impossible to know how much these efforts calmed the waters. And it’s hard to congratulate Facebook for taking action, when the company's mechanics—with virality that rewards shock-and-rage posts—caused the problem in the first place. There's also a cost to taking a neutral labeling approach to all content: People can still share and react to electoral lies.

The same Trump posts that Twitter obscured and restricted from being shared have been seen thousands of times on Facebook.

The company’s biggest test is yet to come. Facebook Chief Executive Officer Mark Zuckerberg has said many times that he expects violence to erupt around the election results, and that Facebook would remove any content advocating it. The company hasn’t given much insight into what calls for violence it has already removed, and we can’t see much going viral. But it will be crucial for the company to stay vigilant in the coming days.
Sarah Frier

YouTube
Before the election, YouTube faced less heat than the other major social networks. That’s in part because Trump is less active on Google’s video service. YouTube has also spent the last couple of years overhauling its system to promote more official news outlets and bury conspiracy theories.

The service has come through the election relatively well so far, although there have been a few snafus. On Tuesday, several YouTube accounts livestreaming fake results jumped to the top of search results on the site before YouTube snuffed them out. YouTube says it takes down videos that mislead voters or incite violence, and applies information labels to videos questioning mail-in ballots—clips that spread widely in the months before Election Day. The company won't say how many videos it has pulled.

Many clips questioning the election results haven't gained much traction on the service. Still, a few videos promoting baseless theories did circulate. An anchor for One America News Network, a pro-Trump cable outlet, opened a YouTube clip posted Wednesday morning by saying: “President Trump won four more years in office last night.” YouTube pulled ads from the video and added a label beneath it that read: “Results may not be final.” But YouTube didn't remove the clip, because it didn't "materially discourage voting." Several other YouTube clips on Wednesday accusing Democrats of stealing the election remained on the site.

On Wednesday afternoon, Trump posted a trio of clips from a press conference broadcast on Fox News. One featured his son, Eric, declaring victory in Pennsylvania before the state was called. YouTube applied the same label it added to the OANN video, pointing viewers to official election results from the Associated Press on Google.
Mark Bergen

©2020 Bloomberg L.P.