(Bloomberg Businessweek) -- Hospitals collect all manner of data on patients, from doctors’ notes to test results to measurements such as pulse and blood pressure. Doctors have long known that such data points can be leading indicators of potentially fatal medical emergencies. If physicians were able to analyze the data to identify when serious deterioration starts, they could save lives. Now computers have started doing just that.
Researchers are using artificial intelligence algorithms to comb through the records of patients who suffered, say, sepsis or lung failure. The software examines data points from hours or even days before the onset of a crisis to see which combinations of factors might have predicted a fatal condition. The algorithm trains itself to model the warning signs. “It’s the idea that you could recognize these risk points or the point of tipping toward the cascade,” says Eric Horvitz, a research director at Microsoft Corp. who, together with scientists from the University of Washington, is studying AI methods for early detection of conditions such as heart, lung, and kidney failure. They’re using 10 years of data from 80,000 patients.
Several universities, startups, and tech companies are also trying to devise early prediction systems. Google in May published a research paper on an AI system it built to gather vast amounts of patients’ medical data—almost 50 billion pieces—and provide a score of how likely a patient is to die soon.
A key target of several studies is sepsis, an often fatal complication of infection. According to the Centers for Disease Control and Prevention, 1 in 3 patients who die in hospitals have sepsis. It occurs when a massive immune response to fight an infection instead triggers inflammation throughout the body, causing tissue damage and organ failure. Many hospitals have added so-called Rapid Response Teams to quickly react, but by the time a team is rushed in, it’s often too late. “Cases are preventable if you could actually intervene at the right time,” says Suchi Saria, a computer science, math, and health policy professor at Johns Hopkins University.
Saria, who lost a nephew to sepsis, and several co-researchers looked at whether they could create a machine learning model to predict septic shock. They had the model examine 54 readily available medical measurements and determine which were most predictive of septic shock. The software then tracked those measures to see what conditions were present at the onset of septic shock. From that, the researchers were able to build a scoring system that performed better at identifying at-risk patients than other commonly used measures.
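The idea of a scoring system built from routine measurements can be sketched in a few lines of code. This is a toy illustration, not the Johns Hopkins model: the five vital signs, their baseline values, and the weights below are all invented for the example, whereas the published work draws on 54 measurements with learned coefficients.

```python
# Toy sketch: a logistic risk score over a handful of vital signs.
# All feature names, baselines, and weights are invented for illustration.
import math

# Invented weights: positive values mean a higher reading raises risk.
WEIGHTS = {
    "heart_rate": 0.03,    # beats/min above baseline
    "resp_rate": 0.08,     # breaths/min above baseline
    "temperature": 0.5,    # degrees C above baseline
    "systolic_bp": -0.04,  # mmHg; falling blood pressure raises risk
    "lactate": 0.6,        # mmol/L; elevated lactate raises risk
}
BASELINES = {
    "heart_rate": 75.0,
    "resp_rate": 14.0,
    "temperature": 37.0,
    "systolic_bp": 120.0,
    "lactate": 1.0,
}
INTERCEPT = -2.0  # invented; sets the baseline risk level

def risk_score(vitals: dict) -> float:
    """Map raw measurements to a 0-1 risk value via a logistic function."""
    z = INTERCEPT + sum(
        WEIGHTS[k] * (vitals[k] - BASELINES[k]) for k in WEIGHTS
    )
    return 1.0 / (1.0 + math.exp(-z))

stable = {"heart_rate": 72, "resp_rate": 14, "temperature": 36.9,
          "systolic_bp": 118, "lactate": 0.9}
deteriorating = {"heart_rate": 118, "resp_rate": 26, "temperature": 38.8,
                 "systolic_bp": 88, "lactate": 3.5}

print(round(risk_score(stable), 3))
print(round(risk_score(deteriorating), 3))
```

The point of the sketch is only that many small, individually unremarkable deviations from baseline can combine into a single number a clinician can act on.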
Saria and her team further refined their software, compensating for missing data and making sure doctors were notified only when the algorithm reached the right level of certainty; too many alarms might cause clinicians to ignore the alerts. They gave the software to about 300 doctors at a Johns Hopkins hospital for testing. In its first year, nurses and doctors caught cases much earlier and calls to the Rapid Response Team dropped 75 percent.
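The trade-off described here, alerting early without drowning clinicians in false alarms, can be illustrated with a small threshold-calibration sketch. The scores, outcomes, and precision target below are made up; this is one generic way to pick an alert cutoff, not the team's actual method.

```python
# Toy sketch of calibrating an alert threshold: among candidate cutoffs,
# keep the lowest one whose alerts achieve a target precision (the
# fraction of alerts that correspond to true cases), so the system
# alerts as early as possible without causing alarm fatigue.
# The scored examples and target below are invented for illustration.

def pick_threshold(scored, target_precision=0.5):
    """scored: list of (risk_score, had_event) pairs, had_event in {0, 1}.
    Returns the lowest observed score usable as a threshold whose
    alerts meet target_precision, or None if no threshold qualifies."""
    for t in sorted({s for s, _ in scored}):  # ascending: lowest first
        alerts = [(s, y) for s, y in scored if s >= t]
        if not alerts:
            continue
        precision = sum(y for _, y in alerts) / len(alerts)
        if precision >= target_precision:
            return t
    return None

scored = [
    (0.05, 0), (0.10, 0), (0.20, 0), (0.30, 0), (0.40, 1),
    (0.55, 0), (0.60, 1), (0.70, 1), (0.80, 1), (0.95, 1),
]
print(pick_threshold(scored, target_precision=0.8))
```

Raising the precision target pushes the threshold up, which means fewer but more trustworthy alerts; lowering it catches cases earlier at the cost of more false alarms.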
Dallas’s Parkland Hospital has been running clinical trials on machine learning technology that tries to find patients who aren’t in intensive care but are at risk of a heart or respiratory emergency. The results of the trials, which have not yet been published, indicate the system is faster than doctors or nurses at identifying deteriorating patients, the hospital says. The system is good enough that in 90 percent of alert cases a clinician took action, according to Ruben Amarasingham, chief executive officer of Pieces Technologies Inc., Parkland’s for-profit arm.
Horvitz also trained as a medical doctor at Stanford University before joining Microsoft as an AI scientist. During his internal medicine rotation as a student, he recalls, he was talking to a 48-year-old patient. The patient was telling jokes and going through his medical history, describing recurring chest pain. Suddenly, the patient’s chest pain became acute, and in a flash the medical team was rushing him to the cardiac ICU. The patient died as Horvitz and the others were pumping his chest and manually providing respiration.
“You never forget that sort of thing,” Horvitz says. “One second you are talking to the patient about their life and their history and then you’re rolling the patient down because they’re not in the right place at the hospital.” He’s hoping this sort of predictive work might give more patients a better chance.
A big gap still exists between promising research and clinical success. To be effective, the new sources of information must be well integrated into doctors’ and nurses’ daily workflow and provide predictions accurate enough that hospitals and insurance companies can base expensive care decisions on them. Also, because doctors are using these programs to make life-and-death decisions, they want what they call explainable AI, meaning systems that go beyond a simple warning to offer an explanation of why a patient has been flagged. This is difficult for some AI technologies. “There’s a lot of concern with these types of systems when they are black boxes that can’t explain themselves,” Amarasingham says. “So a big focus for us is how do we explain how the system has come to that conclusion in clinical terms.”
©2018 Bloomberg L.P.