Facebook’s VIP ‘Whitelist’ Reveals Two Big Problems

Facebook has a couple of big problems when it comes to filtering out the often misleading and dangerous stuff that users post on the social network. First, its artificial intelligence doesn’t work. Second, the company doesn’t want to admit this, because hiring humans to do proper moderation would undermine its business model. The combination should have legislators and shareholders very worried.

An investigation by Jeff Horwitz at The Wall Street Journal has shed new light on Facebook’s duplicity: Even as executives publicly claimed that their automated moderation was applying the same rules to all users, the company was actually giving special treatment to celebrities and politicians. Such “whitelisted” accounts were handled by humans, who allowed inflammatory and misleading posts that algorithms otherwise would have censored — including a call to violence from then-President Donald Trump.

Why the two-tiered approach? For one, it seems Facebook recognizes that its algorithms are glitchy, and it’s fine with foisting them upon regular users but doesn’t want to aggravate influencers, who might complain loudly and publicly. Beyond that, and perhaps more important, incendiary posts by famous people generate a lot of engagement, and hence advertising revenue. The whitelist was a way of quietly addressing these issues while maintaining the fiction that the AI was actually working.

To some extent, Facebook merely reflects the broader application of technology in society. In many realms, the elite get the human touch while the rest get the algorithm. People with Ivy League backgrounds find jobs through their friends, while others must contend with hiring algorithms that funnel them into less prestigious positions, even if they know their stuff. Applicants to top-tier colleges get personal interviews, while other schools use enrollment algorithms that systematically reduce scholarship awards.

Yet Facebook has a particular motivation for insisting that its automated process can deal with everyone equally. As long as people (and authorities, and Facebook’s own oversight board) believe it, the company can keep portraying the worst posts as unfortunate anomalies rather than drivers of profitability, and avoid hiring the much larger and very expensive army of humans that would be required to moderate content responsibly. It’s hard to see how, if Facebook came clean and took on the costs of addressing its real problems, it could support a $1 trillion-plus market valuation.

Having been exposed, Facebook might now learn to hide its problems better — for example, by not employing people to identify them in writing. Ideally, legislators will recognize the threat presented by a business model that encourages and capitalizes on the worst behavior of the most influential users. And shareholders — in Facebook and other tech companies — will stop trusting AI and start demanding more evidence that it’s both safe and effective.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Cathy O’Neil is a Bloomberg Opinion columnist. She is a mathematician who has worked as a professor, hedge-fund analyst and data scientist. She founded ORCAA, an algorithmic auditing company, and is the author of “Weapons of Math Destruction.”

©2021 Bloomberg L.P.