Science Gets Messy in Frantic Dash to Publish Covid-19 Research
On April 22, researchers at Northwell Health, a major hospital system in the New York area, reported a stunning 88% death rate among Covid-19 patients on ventilators. The number was published in the prestigious Journal of the American Medical Association and made headlines across the country.
Two days later, the journal issued a clarification amid a flurry of harsh criticism from scientists on Twitter. The abstract replaced the frightening 88% figure with a far more reassuring metric: Only 24.5% of coronavirus patients on ventilators at Northwell Health had died so far, the new version said. Yet none of the underlying mortality data in the study had changed.
What happened? And which number is right? As it turns out, both are. The higher percentage is based on a tiny number of patients who either died or recovered within days. The lower percentage adjusts the death figures to include everyone on ventilators who was still alive and battling the virus at the time the study ended. Many of them may not survive, but some may.
“The numbers in the article were accurate,” says Karina Davidson, senior vice president of research at Northwell Health, referring to the original mortality calculations. “There was so much misinterpretation” of the 88% mortality rate that Northwell opted for clarification.
The Northwell experience demonstrates how early research released in the middle of a health crisis can result in more confusion than clarification. In the push for medical answers, even top researchers and journals are having trouble getting the balance right. Their usually cautious and codified world of scientific research has suddenly morphed into a dizzying race to the finish line in the age of Covid-19. Medical studies are being pumped out faster than ever before. And public officials are struggling to interpret this gusher of data to make quick decisions that could affect the health and lives of billions of people around the world.
The Northwell Health study is one of the latest in a dramatic streak of confusing reports. Just days after JAMA clarified Northwell’s ventilator death rate, drug maker Gilead Sciences Inc. issued a press release stoking excitement that it was nearing a breakthrough with its experimental treatment remdesivir, a drug seen as holding great promise for an early Covid-19 treatment. Hours after the company announced preliminary test results, Anthony Fauci, the National Institute of Allergy and Infectious Diseases director, touted a U.S. government study at a White House event, saying the drug had met its targets.
Fauci’s comments buoyed a stock-market rally propelled earlier by Gilead’s press release. But there was just one problem: On the very same day, The Lancet, a prestigious journal, published results from a small Chinese study that showed very different and less-promising results.
“It’s not just a single trial that’s going to hold the truth here,” said John Norrie, a professor of medical statistics and trial methodology at the University of Edinburgh. “The bottom line is, you can’t rely on press releases,” he said, which often contain a fraction of the information that’s in a full study.
Signs of the medical-data frenzy are everywhere. In recent weeks, numerous studies screening for antibodies — a sign of past illness — found widely varying rates of infection in the population. The studies have been criticized by other scientists for their methodologies, including placing too much confidence in tests that may produce false positives.
Stanford University researchers, who conducted one of the studies, also were criticized by doctors and scientists for relying on Facebook to seek volunteers, which may have skewed the findings by attracting people who previously had symptoms. On April 30, the authors posted an updated paper, more clearly laying out the study limitations and including more data on control samples, presenting a more-conservative estimate for infection prevalence rather than a range.
On March 18, the Centers for Disease Control and Prevention put out early data suggesting Covid-19 hits younger people more often than had been previously believed. Deep in the report it became clear the data were incomplete and researchers didn’t know the ages of 42% of those infected.
Weak Studies, Big Impact
And then there was the worldwide firestorm over hydroxychloroquine, fanned by President Donald Trump’s tweets and public backing. It was ignited, in large part, by a small French study whose methods, including the lack of a good control group, have since been widely criticized by the medical and research community. The publisher and medical society that run the International Journal of Antimicrobial Agents have commissioned an independent review to determine whether concerns about the study have merit, according to an April 11 statement.
“There are hundreds of articles out there, but the quality is quite poor” in many, said Lauren Westafer, an emergency-room doctor and assistant professor at the University of Massachusetts Medical School-Baystate, speaking broadly about falling standards during the pandemic. “It’s problematic to make conclusions based on that kind of data,” added Westafer, who has a podcast about dissecting medical science. “The methods and quality are so deeply flawed.”
The acceleration of medical data is partly thanks to the rise of preprint services such as bioRxiv and medRxiv. The two sites often post drafts that haven’t been reviewed by other scientists or published in medical journals.
MedRxiv, launched less than a year ago, has been particularly inundated with coronavirus reports. While 225 papers were posted in January, when the virus was just emerging in China, the April total hit 1,500 — 1,000 of which are related to the virus, said John Inglis, executive director of Cold Spring Harbor Laboratory Press, which started and manages the two archives.
The websites were developed originally to help researchers share their preliminary work and refine it before publication. But since the coronavirus emerged, the general public has increasingly been digesting the information, often after it is tweeted out on social media or highlighted in media reports.
“We are acutely aware of the fact that there are many kinds of audiences now for what is appearing on our preprint services, and that is a new phenomenon,” Inglis said. “Most of the time, the audience for this preprint is a professional one, but the epidemic has opened them up to the public gaze.”
The organization has tried to emphasize that the findings are preliminary and shouldn’t be considered a guide to any type of medical care. Still, it’s impossible to know what impact they are having.
“What we haven’t yet developed is an infrastructure that is equipped to deal with this volume,” Inglis said. “Here is this massive information, and people need guidance on what is important and reliable and what is well done and what isn’t well done,” he said. “Everyone is struggling to handle the volume.”
Even when research is careful and well vetted, the frenzy to get out new results can trip up researchers. Before it ran in the Journal of the American Medical Association, Northwell Health’s patient study went through multiple revisions. One goal of the study was to report a series of unusual symptoms that could help other hospitals better identify Covid-19 cases. The Northwell researchers worked tirelessly, in some cases holding conference calls at 2 a.m., the only time overworked doctors were available. One key finding: Only 31% of its Covid-19 patients had a fever at the time they were admitted.
“We were motivated to get information and results out as quickly as we could to help others manage these patients,” Davidson said.
At some point during the review process, JAMA asked Northwell to add the mortality-rate figures, Davidson said. After the report was published, though, the journal said in a statement that its editors asked the Northwell authors to clarify the data, resulting in the revision. Davidson said Northwell initiated the clarification.
Typically, researchers would wait for more-complete data before putting out mortality rates in such a study. But in this case, with the need to publish Covid-19 data quickly, Northwell researchers didn’t yet know the fate of most patients. Of the 1,151 put on ventilators as of April 4, the outcome for only 320 was known at the time of publication: 38 were discharged alive and 282 died — that’s an 88% death rate.
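The gap between the two headline numbers comes down to which denominator you choose. A minimal sketch of that arithmetic, using only the patient counts reported in the study (the variable names are illustrative, not from the paper):

```python
# Northwell ventilator data as of April 4: 1,151 patients ventilated;
# outcomes known for only 320 of them (38 discharged alive, 282 died).
ventilated = 1151
died = 282
discharged = 38
resolved = died + discharged  # 320 patients whose outcome was known

# Rate among resolved cases only -- the 88% figure in the original abstract.
rate_resolved = died / resolved

# Rate among all ventilated patients, treating those still hospitalized
# as survivors so far -- the 24.5% figure in the clarified abstract.
rate_all = died / ventilated

print(f"{rate_resolved:.1%} among resolved cases")        # 88.1%
print(f"{rate_all:.1%} among all ventilated patients")    # 24.5%
```

Neither calculation is wrong; the first ignores patients whose outcomes were still unknown, while the second counts them all as survivors for now, which is why the true rate was expected to settle somewhere in between as follow-up data arrived.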
The rate was more or less consistent with some reports from China and elsewhere among Covid-19 patients requiring ventilation. Still, the release sparked backlash.
“It’s irresponsible to publish this kind of misleading mortality data prematurely,” Robert Dickson, a critical-care doctor at the University of Michigan’s medical center, wrote in a tweet the day after the study was released. “The take-away: ‘88% mortality among ventilated patients!’ — is absolutely going to inform goals-of-care discussions around the world. This sloppiness has consequences. It’s maddening.”
Northwell authors published their clarification the next day. They removed the 88% figure from the paper’s abstract and added new language to the conclusion that noted doctors expect the ventilator mortality rate to decline as more complete results roll in.
Northwell said it expects to have 30-day follow-up data providing a clearer picture on all the ventilated patients in the study sometime in May.
©2020 Bloomberg L.P.