Get the Racism Out of Health Care Algorithms

(Bloomberg Opinion) -- Machine learning algorithms have quietly seeped into the world of health care, to the point that automated systems sometimes make life-or-death decisions. The trend seems inevitable as medicine becomes more complex and costly. But there are downsides.

This week, for example, researchers reported substantial racial bias in an algorithm that decides which patients need extra care to avoid costly emergency room visits. That may seem surprising, given that the algorithm didn’t take race into consideration at all. But it did rely on historical data, and racism is embedded in that history, along with the fallible human assumptions that inevitably shape how algorithms are built and how their output is interpreted.

Government and private insurance programs are increasingly adopting algorithms and artificial intelligence to predict our future health-care needs. Last spring, at a conference, Harvard Law professor Jonathan Zittrain compared artificial intelligence to asbestos. “It turns out that it’s all over the place, even though at no point did you explicitly install it,” he said. And by the time we recognize any potential dangers, it’s hard to remove.

In the paper on the bias, published in this week’s issue of Science, the researchers wrote that they had access to additional data, including the self-reported race of the people in the database. They found that patients who identified as black were much less likely than white patients of similar health status to be included in the group targeted for extra care.

The system was created with good intentions, said lead author Ziad Obermeyer, a health policy professor at the University of California, Berkeley. It was adopted in conjunction with the Affordable Care Act to direct medical attention to those most in need, with the aim of both sparing patients suffering and saving money.

The race problem isn’t rooted in the algorithm itself, but in the way people have used it. The algorithm was predicting not future health status but future health costs, using data on people’s past health costs.

But patients who identify as black have historically received less health care than patients of equal health status who identify as white. The system was therefore less likely to flag black patients as eligible for extra preventive care, simply because less had been spent on their health care in the past. The crux of the problem was in the assumption that health needs were equal to health costs.
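
To make the proxy problem concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is invented for the example — the simulated data, feature names and model are assumptions, not the algorithm the researchers audited. It shows how, when past cost is the training label, a group whose care has historically been underfunded ends up flagged for extra help less often, even at identical levels of illness.

```python
# Purely illustrative sketch with simulated data -- not the audited algorithm.
# It shows how training a model to predict *cost* rather than health need
# can under-flag patients whose care has historically been underfunded.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 20_000
illness = rng.gamma(2.0, 1.0, n)             # true health need (never seen directly by the model)
group = rng.integers(0, 2, n)                # 1 = group whose care was historically underfunded
spend = np.where(group == 1, 0.7, 1.0)       # assumption: 30% less spent at equal illness
cost = illness * spend * rng.lognormal(0.0, 0.2, n)   # past spending -- the training label
visits = rng.poisson(3.0 * illness * spend)  # past utilization, also depressed for group 1
labs = illness + rng.normal(0.0, 0.5, n)     # noisy clinical signal
X = np.column_stack([visits, labs])          # note: race/group is never used as a feature

model = LinearRegression().fit(X, cost)      # label = cost, the proxy for need
risk = model.predict(X)
flagged = risk >= np.quantile(risk, 0.9)     # top 10% of scores get extra care

for g in (0, 1):
    m = group == g
    print(f"group {g}: flagged {flagged[m].mean():.1%}, mean illness {illness[m].mean():.2f}")
# Despite roughly equal mean illness, group 1 is flagged noticeably less often,
# because lower historical spending pulls down both the label and the utilization feature.
```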

This same sort of bias has been found in algorithms used in the criminal justice system. There, too, the problem stems from the fact that the tools don’t predict what people think they predict: though advertised as predicting future crime, they actually predict future arrests. And a growing body of evidence shows that racial bias affects who gets stopped by police, who gets arrested and whose charges are more likely to be dropped.

In a commentary accompanying the paper, Princeton African American Studies professor Ruha Benjamin illustrated the problem with a stark hypothetical built around the real historical figure Henrietta Lacks. In the real story, Lacks came to Johns Hopkins Hospital in the 1950s with symptoms of cervical cancer. She was sent to what was known as the Negro ward, where her care was cheaper. She ultimately died of the disease.

In the hypothetical case, with a machine in charge, the same bias would be encoded: the algorithm would use cost as a proxy for health and misread her low past health-care spending as a sign of low need. “On the basis of those results, she would be discharged, her health would deteriorate, and by the time she returns, the cancer has spread and she dies,” Benjamin wrote.

Benjamin wrote that she’s concerned that most algorithms used in health care, housing and employment aren’t transparent, so researchers may never be able to detect what could be substantial racial biases. Obermeyer, the study’s lead author, said the case his team examined was unusual in that the algorithm was public, along with the patient data and the additional information on race and health conditions.

He was able to fix the algorithm so that it measured health rather than health costs – and as a result the number of self-identified black patients deemed eligible for additional care doubled.
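
The shape of that repair can be sketched the same way, again with invented data rather than the study’s actual code: keep the same features, but train the model on a direct measure of health (here the simulated illness score, standing in for something like a count of active chronic conditions) instead of dollars spent, so that lower past spending no longer drags scores down.

```python
# Sketch of the repair, on the same simulated population as above (still
# illustrative -- not the authors' code): relabel the model with a health
# measure instead of cost.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 20_000
illness = rng.gamma(2.0, 1.0, n)             # stand-in for active chronic conditions
group = rng.integers(0, 2, n)
spend = np.where(group == 1, 0.7, 1.0)
visits = rng.poisson(3.0 * illness * spend)
labs = illness + rng.normal(0.0, 0.5, n)
X = np.column_stack([visits, labs])

health_model = LinearRegression().fit(X, illness)   # label = health, not cost
risk = health_model.predict(X)
flagged = risk >= np.quantile(risk, 0.9)

for g in (0, 1):
    m = group == g
    print(f"group {g}: flagged {flagged[m].mean():.1%} under the health label")
# The clinical signal now drives the score, so the gap between the groups
# shrinks sharply (though it need not vanish while biased utilization remains a feature).
```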

Getting rid of algorithms altogether isn’t practical, and could cause more harm than good. In a 2017 commentary for the New England Journal of Medicine, Obermeyer wrote that medicine had become far too complex for the human mind to handle without the help of machines.

One thing we could work to improve is transparency. The complexity of the human body is nothing compared with the complexity of health-care billing codes, which have spawned an industry of experts in using them to maximize billing. Something as simple as weighing a patient during an office visit, or, more ominously, prescribing opioids, can drive up billing, as physician and former drug executive Mike Magee writes in his book, “Code Blue: Inside America’s Medical Industrial Complex.”

In 2016, when I wrote about a crime-prediction algorithm already in use in Philadelphia, one of its creators, University of Pennsylvania professor Richard Berk, worried that people would put too much faith in its predictions. The danger in mixing medicine and algorithms may likewise lie in the trust algorithms engender, creating a false picture of simplicity, efficiency and objectivity that papers over a system that’s inherently convoluted, overpriced and unfair.

To contact the editor responsible for this story: Sarah Green Carmichael at sgreencarmic@bloomberg.net

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Faye Flam is a Bloomberg Opinion columnist. She has written for the Economist, the New York Times, the Washington Post, Psychology Today, Science and other publications. She has a degree in geophysics from the California Institute of Technology.

©2019 Bloomberg L.P.