Computer Algorithms Need to Know What 'Fair' Means

(Bloomberg View) -- If humans are going to entrust big decisions to computers, how can they ensure that those computers act in humanity’s best interests? Amazingly, given the increasing power and pervasiveness of algorithms, it’s a question that researchers are only beginning to answer.

Algorithms specialize in prediction: which way financial markets will move, who is likely to pay back a loan or commit a crime, what kinds of news and ads will attract a particular person. In doing so, they also shape the future. Computers that identify patterns in stock trading can cause flash crashes. Criminal risk scores can turn people into criminals. Facebook’s news feeds keep people engaged, but also promote outrage and even catalyze violence.

Humans have developed surprisingly few tools to deal with such negative externalities, or even to recognize them. We’re only just beginning to understand, for example, how facial recognition technology tends to misidentify minorities -- a real problem if police are using it to search for suspects. The standard procedure is to set algorithms loose on people without checking for flaws, and often with little or no mechanism for appeal.

So how can we make sure that algorithms act fairly? To start with, we need to define “fair.” At a recent conference at New York University, researchers explored various statistical definitions. Should a risk profiler, for example, treat all racial groups equally, regardless of their other differences? Should it acknowledge differences, but focus on achieving similar error rates across groups? Should it correct for previous wrongs? Do some definitions seem good in the short term but have negative longer-term repercussions?
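
To make the first two of those questions concrete, here is a minimal sketch -- my own illustration, not something presented at the conference -- that computes, for two hypothetical groups, the quantities they ask about: how often each group is flagged by a risk profiler, and how often the profiler errs on each group. Every number in it is invented.

```python
# A minimal sketch contrasting two statistical fairness criteria on made-up
# risk-profiler output. Groups, outcome rates and flag rates are all invented.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                                  # two demographic groups, 0 and 1
actual = rng.random(n) < np.where(group == 0, 0.30, 0.45)      # hypothetical true outcome rates
predicted = rng.random(n) < np.where(group == 0, 0.28, 0.50)   # hypothetical risk-score flags

for g in (0, 1):
    mask = group == g
    flag_rate = predicted[mask].mean()          # "treat groups equally": are flag rates equal?
    fpr = predicted[mask & ~actual].mean()      # "similar error rates": false positive rate...
    fnr = (~predicted)[mask & actual].mean()    # ...and false negative rate, per group
    print(f"group {g}: flagged {flag_rate:.2%}, FPR {fpr:.2%}, FNR {fnr:.2%}")
```

When the underlying outcome rates differ between groups, a profiler can generally satisfy one of these criteria while failing the other, which is why the choice of definition matters.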

In one promising paper, UC Berkeley computer scientist Moritz Hardt and colleagues set up a model to explore the effects of different definitions of fairness on lending and credit scores. They find that in some cases, a process designed to protect certain minority groups can actually do harm in the longer run. Specifically, if the algorithm focuses on providing two demographic groups with equal rates of credit, the one with less ability to pay will default in higher numbers, resulting in credit-score downgrades that leave it worse off over time. If, by contrast, the algorithm seeks solely to maximize profit without regard to demographics, the group that starts off at a disadvantage will remain disadvantaged.

The right balance appears to be somewhere in the middle. The paper finds that if members of the disadvantaged group are given loans at rates higher than in the maximum profit scenario, but lower than in the forced-equality scenario, they broadly improve their credit scores. This comes at some short-term cost to the lender, but is likely beneficial in the longer term as society as a whole becomes better off.
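
A rough sense of that trade-off can be had from a toy calculation. The sketch below is my own illustration, not the model from the paper: it invents a score distribution for the disadvantaged group, assumes repayment becomes likelier as scores rise, and asks how the group’s average credit score would change after one round of lending at various approval rates. With these made-up numbers, the group’s expected score gain rises and then falls as the approval rate increases, echoing the paper’s point that lending too little forgoes gains while lending too much drags the group down through defaults.

```python
# A toy illustration (not the authors' model) of how the approval rate chosen
# for a disadvantaged group affects that group's average credit score one
# lending round later. Every number -- the score distribution, the repayment
# model, the score changes -- is invented.
import numpy as np

rng = np.random.default_rng(0)

scores = rng.normal(620, 60, 100_000).clip(300, 850)   # hypothetical group score distribution
p_repay = np.clip((scores - 300) / 550, 0, 1)          # crude assumption: higher score, likelier repayment
GAIN, LOSS = 20, -40                                   # assumed score change on repayment vs. default

for approval_rate in (0.10, 0.25, 0.40, 0.60, 0.80):
    cutoff = np.quantile(scores, 1 - approval_rate)    # lend to the top slice of the group by score
    approved = scores >= cutoff
    # Expected score change, averaged over everyone in the group (loan or no loan).
    change = np.where(approved, GAIN * p_repay + LOSS * (1 - p_repay), 0).mean()
    print(f"approval rate {approval_rate:.0%}: expected mean score change {change:+.2f}")
```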

History supports this conclusion. After the introduction of the Equal Credit Opportunity Act of 1974, which was aimed at combating widespread discrimination against women and minorities, lenders increasingly relied on relatively objective criteria such as credit scores. The long-term result: Women, who were well behind men in financial terms in 1974, now have higher FICO scores than men on average.

The challenge, then, is to get tech giants and others to recognize -- and take responsibility for -- the effects their algorithms can have on society. It won’t be easy, given the revenues that the likes of Google and Facebook can generate by finding the most effective ways of getting people to look at ads. It will require political leverage and a willingness to focus on relatively abstract, long-term problems. This isn’t something for which our political system is well equipped. Let’s hope it won’t take a societal flash crash to convey a sense of urgency.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Cathy O'Neil is a mathematician who has worked as a professor, hedge-fund analyst and data scientist. She founded ORCAA, an algorithmic auditing company, and is the author of "Weapons of Math Destruction."

To contact the author of this story: Cathy O'Neil at cathy.oneil@gmail.com.

To contact the editor responsible for this story: Mark Whitehouse at mwhitehouse1@bloomberg.net.

For more columns from Bloomberg View, visit http://www.bloomberg.com/view.

©2018 Bloomberg L.P.