
The Twitter Purge Solves Nothing

What looks like a major purge by Twitter is more like a public relations onslaught.

Twitter Inc. logos are arranged for a photograph in Washington, D.C., U.S. (Photographer: Andrew Harrer/Bloomberg)

(Bloomberg Opinion) -- I lost 120 Twitter followers overnight. President Donald Trump lost 340,000, the New York Times 732,000, former President Barack Obama 3 million or so. Even Twitter's chief executive officer, Jack Dorsey, lost 200,000 in his company's much-hyped crackdown on dubious accounts. But what looks like a major purge is more like a public relations onslaught, as Twitter and Facebook try to outdo each other at showing they care about the health of the social network conversation.

Top Twitter lawyer Vijaya Gadde had alerted users to the purge beforehand in a blog post, explaining that most of the accounts the company was targeting weren’t bots. They were mostly set up by real people, she wrote, “but we cannot confirm that the original person who opened the account still has control and access to it.” To confirm ownership, Twitter asks the presumed account owners to solve a captcha or reset their password. Accounts whose owners don’t respond get “locked”; after a month, they stop counting toward Twitter’s total user numbers. Now, they no longer pad follower counts, either.

The interesting part here is how Twitter determines that there’s something wrong with an account. According to Gadde’s post, the trigger is usually a sudden change in an account’s behavior. Suddenly, it might start tweeting “a large volume of unsolicited replies or mentions” or “misleading links.” The same behavior in a new account also sets off alarm bells: Twitter’s algorithms identify the account as potentially “spammy or automated” and “challenges” its owner, for example by asking her to confirm a phone number. 

Twitter reports a large increase in the number of accounts challenged in this way – from slightly more than 2.5 million in September to 10 million in May. Given that Twitter had 336 million monthly active users in the first quarter of 2018, that looks like a large number – but only until one looks at Facebook’s recent report purporting to document similar activity.

In May, Facebook announced that it had taken down 583 million fake accounts in the first quarter of 2018, down from 694 million in the fourth quarter of 2017. That’s about 27 percent of Facebook’s monthly active users in the first quarter. But of course Facebook didn’t decimate its user base – that would have sent the stock price tumbling. The company explained that it had killed the fake accounts just as malicious actors tried to register them. The idea is that Facebook’s user base is not inflated (it contains only 3 to 4 percent fake accounts), but it would have been bloated with fakes had it not been for algorithms that, the company says, detected 98.5 percent of the fakes before users reported them.

Facebook’s criteria for spotting fake accounts are similar to Twitter’s: repeated posting of the same content, sudden increases in the number of messages sent, and other activity patterns. Both Twitter and Facebook also have systems to stop automatic account registration.

The problem is that, on the scale on which the social networks operate, even a very high detection rate still allows millions of fake accounts to be added every month. Of the 583 million fake accounts Facebook removed in the first three months of this year, algorithms spotted 98.5 percent. That means users flagged the rest: the other 1.5 percent, or about 8.7 million accounts. Facebook has no idea how many went unreported. In a 2017 paper, a team of Canadian researchers showed that an internet-of-things botnet’s requests to create accounts on Instagram, owned by Facebook, were successful in 18 percent of cases. The detection technology may work better now, but there’s still no way for the social networks to know exactly how well they’re doing at the cops-and-robbers game. At any rate, the market for fake followers and likes is still thriving, as a simple search will reveal to anyone interested.
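
For readers who want to check the arithmetic, here is a minimal back-of-envelope sketch in Python. It uses only the figures cited above; the variable names, and the assumption of one million registration attempts, are illustrative rather than anything Facebook or Twitter has disclosed.

    # Back-of-envelope check of the Q1 2018 figures cited above.
    removed_total = 583_000_000   # fake accounts Facebook says it removed
    algo_share = 0.985            # share its algorithms caught before user reports
    flagged_by_users = removed_total * (1 - algo_share)
    print(f"Flagged by users: {flagged_by_users:,.0f}")  # roughly 8.7 million

    # The removed total says nothing about accounts that were never caught.
    # Purely illustrative: if a botnet succeeds in 18 percent of registration
    # attempts (the rate the Canadian researchers measured on Instagram in 2017),
    # every million attempts yields about 180,000 accounts that slip through.
    attempts = 1_000_000          # hypothetical number of registration attempts
    success_rate = 0.18
    print(f"Slipping through per million attempts: {attempts * success_rate:,.0f}")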

The automatic detection of fake or hijacked accounts is a flourishing academic field because there’s demand from the social network companies, which are willing to devote significant resources to this work – and even to do it manually where algorithms fail. Facebook, for example, admits that its technology is better at detecting nudity than hate speech, which is flagged algorithmically in just 38 percent of the cases before users report it. It’s better to spend heavily on detection than to face public outrage and regulatory scrutiny in the wake of fake news and election interference scandals.

Obviously, no police force can prevent or punish 100 percent of crimes. The social networks are increasingly making their policing efforts public so that users, and society in general, might begin to think about them in these terms: They do what they can but some bad stuff just can’t be prevented.

That, however, is a false framing. Technically, nothing is stopping Twitter and Facebook from setting up an identification procedure that would make automated registration impossible – but they aren’t doing it. Twitter has started requiring a phone number or email confirmation upon signup, but both can easily be automated. Proper identification doesn’t necessarily mean requiring a government-issued ID; the resources now spent on detection could be redirected toward identification technology. Such a move, however, could trigger further attacks on the social networks for collecting too much data about their users.

As they try to navigate between the nuisances of spam and fake news on the one hand, and privacy concerns on the other, social network companies can only step up the public relations activity around their fake-fighting efforts. In the process, they do their best not to hurt the user numbers their investors follow religiously. Does this approach improve the health of the social network conversation? My answer, so far, is a resounding no. Your experience might be different. To find out, say something combative on Twitter and see what happens.


To contact the editor responsible for this story: Jonathan Landman at jlandman4@bloomberg.net

©2018 Bloomberg L.P.