Can Antitrust Law Rein In Facebook’s Data-Mining Profit Machine?
BloombergQuint Opinion
Facebook engaged in an elaborate bait and switch on user data: Privacy disappeared when competition did. Laws governing competition could change that.
Facebook is always in the news, but not always for edifying reasons. Recently, the Trump administration sued Facebook for targeting housing ads to people in a discriminatory manner. In 2018, a great deal of coverage focused on how the political consulting firm Cambridge Analytica misappropriated thousands of Facebook data points on as many as 87 million users and may have used that data in efforts to influence the 2016 U.S. presidential election. Meanwhile, publishing executives continue to gripe about Facebook practices that have brought much of their industry to death’s door.
As bad news piles up, some politicians and critics have begun to call for a breakup of big tech companies.
But thus far, a step-by-step explanation of how Facebook’s business model works, and the economic arguments relevant to a breakup, have largely been missing.
My paper, “The Antitrust Case Against Facebook,” details the quintessential market power fact-patterns that standard economics and competition law have made familiar.
Facebook’s ability to inflict harm today on both consumers and news publishers rests heavily, though not exclusively, on a single practice that Facebook engages in: surveillance.
Facebook not only tracks users while they are on its site; it also follows them after they leave Facebook itself. This singular ability to surveil customers—across millions of independent and sometimes competitive businesses—is the source of Facebook’s unusually high profit margins in digital advertising, as well as its ability to inflict harm on market participants. Facebook could not get away with this when it faced real competition.
It is important to understand what Facebook did to establish its dominant market position. For ten years, privacy (not surveillance) was Facebook’s proclaimed competitive advantage. This always included the specific promise not to track users across the Internet. But in 2014, after Facebook locked in the market and competitors exited, its leadership abruptly changed course on tracking. To conduct the tracking, it leveraged an extensive surveillance framework that it had spent years building while deflecting concern that it might have been building up capacities for precisely this purpose.
Facebook’s monopolistic position means that the familiar economic meaning of the word “price” undergoes a subtle transformation. When one signs up for a “free” Facebook account, the “price” is what the user agrees to in Facebook’s terms and conditions.
When you use any free media service, somewhere in the fine print, you are made to understand that advertising will be a part of that bargain.
Today, though, if you use Facebook, you also agree that Facebook will “track” you. That is, Facebook will monitor what you research, read, shop for, purchase, or even input across millions of sites and mobile applications on the Internet. According to a recent Wall Street Journal investigation, agreeing to this tracking includes allowing Facebook to ingest the most sensitive and personal details that you enter on unrelated mobile applications.
For example, when you enter your heart rate or information about your ovulation cycle into one of the many popular health apps, these apps share that data with Facebook almost immediately. For those who use an Android phone, the surveillance goes deeper in a different direction. Facebook apparently scrapes a record of your phone calls, effectively rendering use of the Facebook social network the equivalent of having a pen register on your device.
If the people would not allow a government to track its citizens, why is their free market producing the same outcome?
Levels of user privacy weren’t only relevant to Facebook’s market entry; they were at the crux of the Facebook-user bargain from 2004 through 2014. Facebook consistently promised consumers it would not track their digital footprints off of Facebook itself. Facebook tried to renege on this promise in 2007, but the market had enough competition, and enough consumer choice, to thwart the attempt: consumer pushback forced Facebook’s retreat. Then, when the company launched its “like” buttons in 2010, consumers were concerned that Facebook might leverage the buttons to conduct surveillance. Facebook placated consumers by assuring them that “like” buttons would not be used for tracking. Facebook’s conduct from 2004 through 2012 provides the benchmark of quality—at least with respect to commercial surveillance—that the restraining forces of competition demanded.
Competition didn’t only restrain Facebook’s ability to track users. It restrained every social network from trying to engage in this behaviour.
Speaking on behalf of then-competitor MySpace, owned by News Corp., one analyst commented on this dynamic: “News Corp. and Fox recognise the importance of allowing people to be alone with their friends, so they do not feel like they are being looked at by Big Brother. They understand how many competitors they have nipping at their heels right now, so they are doing everything they can not to alienate users.”
While competition ensured that the market produced something that was good for consumers, the exit of competition greenlit a change in conduct by the sole surviving firm. By early 2014, dozens of rivals that initially competed with Facebook had effectively exited the market. In June of 2014, rival Google announced it would shut down its competitive social network, ceding the social network market to Facebook.
For Facebook, the network effects of more than a billion users on a closed-communications protocol further locked in the market in its favor. These circumstances—the exit of competition and the lock-in of consumers—finally allowed Facebook to get consumers to agree to something they had resisted from the beginning. Almost simultaneously with Google’s exit, Facebook announced (also in June of 2014) that it would begin to track users’ behaviour on websites and apps across the Internet and use the data gleaned from such surveillance to target and influence consumers. Shortly thereafter, it started tracking non-users too, using the “like” buttons and other software licenses to do so.
It was only after an historic public offering, the acquisition of more than a billion users who had been promised privacy, and the exit of competitors, that Facebook was finally able to compel consumers to consent to this state of affairs.
It is important to note that Facebook’s ability to get people to agree to this level of surveillance is also the basis of Facebook’s ability to harm consumers and publishers.
Surveillance translates into influence: as is now well known, information gathered about people can be used to try to influence election outcomes. But information gleaned from surveillance can also be used against people in other ways. If Facebook is monitoring usage of the health apps on your mobile phone, and therefore knows intimate details about your health, it can steer housing or job ads away from you based on this knowledge.
Facebook’s policy of tracking users across the Internet hurts competition, too. Facebook tracks what a publication’s readers are reading, and then turns around and uses that information to undercut that very publication in advertising markets. For example, if Facebook tracks that you are reading an article about foreign policy on The Economist’s website, it can turn around and sell ads directed at you at a lower cost than The Economist can (since The Economist has to pay writers to create the content). This cross-site tracking is the underlying mechanism that explains why Facebook and Google capture almost every incremental dollar that enters the digital advertising market. Meanwhile, establishment media organisations like The Guardian are now asking readers for donations, while start-ups like BuzzFeed have resorted to layoffs.
The problem is that people may not have chosen Facebook as the market winner had Facebook been honest about its intentions.
For more than 10 years, Facebook induced users to choose Facebook over other social networks by promising high levels of privacy. But, as I catalogue in the paper, many of those promises were false, even at the time. For example, the company was getting users to choose Facebook based on promises that it would not track them, but at the same time filed a patent to do precisely that.
When Facebook toyed with tracking consumers in 2007 but consumer push-back forced it to retreat, consumers became suspicious of the company’s long-term privacy intentions. To assuage users, Facebook promised it would let users vote on future privacy changes. “We’re making it so that we can’t just put in a new terms of service without everyone’s permission,” said Facebook founder and CEO Mark Zuckerberg. “We think these changes will increase the bonding and trust users place in the service.” But after Facebook further locked in the market, it revoked users’ ability to vote on privacy changes.
Facebook’s course of misleading conduct resulted in precisely the type of harm that antitrust law concerns itself with: the exit of rivals and the subsequent extraction of monopoly rents at the expense of consumer welfare. While deceptive conduct is not usually the concern of antitrust law, it can be a form of anticompetitive behaviour in markets that exhibit strong direct network effects. Misleading, deceptive, or otherwise unethical conduct at the early stages can induce market participants to choose the firm that they think increases their welfare, but which actually locks in the market for that firm to their detriment. Under the Sherman Act, the principal U.S. antitrust statute, it is illegal for companies to willfully acquire monopoly power in a market by engaging in behaviour that is anticompetitive—including behaviour that is “unethical” or “deceptive.”
While politicians and regulators grapple with how to make sense of consumer privacy concerns and the power of Big Tech, competition principles explain why Facebook can today extract this exchange from a democratic people—and how it might be stopped from doing so. It is time for regulators, judges, or public policy to remedy this momentous case of bait and switch.
This article was originally published on the Institute for New Economic Thinking’s website.
Dina Srinivasan is a former technology entrepreneur and advertising executive. Srinivasan holds a J.D. from Yale Law School, where she studied law and economics and was an Olin Fellow with the Kauffman Program in Law, Economics and Entrepreneurship.
The views expressed here are those of the author and do not necessarily represent the views of BloombergQuint or its editorial team.