
Advice to Researchers: Admit What You Don’t Know

(Bloomberg Opinion) -- Research in science or medicine or economics is most valuable when it is unbiased, with researchers honestly reporting the limitations of their results. It’s a lot less valuable if it exaggerates what’s known, claiming excessive certainty or precision, in an effort to win an argument. That happens a lot, of course — researchers are only human.

But where does the problem occur most? For more than a decade, economist Charles Manski of Northwestern University has been studying the issue, which he refers to as the “Lure of Incredible Certitude.” In a recent article, he suggests that it’s most prevalent in his own profession, economics. The trouble seems to stem from an intense desire to make strong claims about matters relevant to policy, even when there’s really no good evidence to back them up.

Some examples are more spectacular than others. In a 2015 report that received broad media attention, economists from the Copenhagen Consensus Center claimed that pursuing the Paris climate goals would return less than $1 in benefits for every $1 spent. In contrast, they suggested, reducing barriers to world trade would return an astonishing $2,011 for each $1 spent. You may wonder about the assumptions required to come up with such a number. It sounds implausibly large, and it is impossibly precise; the figure, one might suspect, was made so to draw attention and hype the study’s impact.

Similarly unwarranted certainty routinely comes from more reputable sources. For example, the Organization for Economic Cooperation and Development makes forecasts of things like gross domestic product and unemployment that state only a single number, say 2.78 percent, without giving any information on how accurate it expects the prediction to be. How confident is it that the true figure won’t turn out to be 2.77 percent, or 1.85 percent? Given that historical analyses of such forecasts find they’re often off by one to two percentage points, the second and especially the third digits look pretty meaningless.
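
To see why those trailing digits carry no information, consider a back-of-the-envelope exercise. The Python sketch below uses a made-up series of past forecast errors of roughly the one-to-two-point magnitude just cited, and a crude two-standard-deviation rule; neither the data nor the rule is the OECD’s actual method, but together they show the kind of interval such errors support.

    # A minimal sketch with hypothetical data: turning a point forecast
    # into the interval its own track record implies.
    import statistics

    # Made-up past errors (forecast minus outcome), in percentage points,
    # sized to match the one-to-two-point misses found historically.
    past_errors = [1.3, -0.8, 1.9, -1.5, 0.6, -1.1, 1.7, -0.4]

    point_forecast = 2.78  # the kind of lone number the OECD reports

    # A rough plus-or-minus two standard deviations band.
    spread = 2 * statistics.stdev(past_errors)
    low, high = point_forecast - spread, point_forecast + spread

    print(f"Point forecast: {point_forecast:.2f}%")
    print(f"Rough interval: {low:.1f}% to {high:.1f}%")  # about 0.1% to 5.4%

An honest report would say something like “2.8 percent, give or take a couple of points,” which makes the hundredths digit look purely decorative.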

Manski reviews other examples, such as published estimates of the costs of proposed legislation made by the Congressional Budget Office. In 2017, for example, the CBO estimated that legislation proposed to replace Obamacare would reduce federal deficits by $337 billion over the 2017-2026 period. Given that the real outcome will depend on the myriad unpredictable responses of states, hospitals, insurers and people, it might be more credible for the CBO to give a range of possible outcomes, perhaps between $250 billion and $450 billion. But that’s not the standard practice.
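
For illustration only, here’s one way such a range could be produced: a small Monte Carlo sketch, again in Python, in which the unpredictable responses are drawn from made-up intervals and the score is reported as a percentile band rather than a point. Every number below is a placeholder, not a CBO figure, and the three components are invented for the example.

    # A minimal sketch with invented numbers: report a budget score as a
    # range by simulating uncertain behavioral responses.
    import random

    random.seed(1)  # reproducible illustration

    def simulated_deficit_reduction():
        # Hypothetical pieces of a ten-year score, in billions of dollars.
        spending_cut = random.uniform(700, 900)     # gross federal savings
        coverage_offset = random.uniform(250, 400)  # states' and insurers' responses
        revenue_loss = random.uniform(100, 200)     # individuals' responses
        return spending_cut - coverage_offset - revenue_loss

    runs = sorted(simulated_deficit_reduction() for _ in range(10_000))
    low = runs[len(runs) // 10]       # 10th percentile
    high = runs[9 * len(runs) // 10]  # 90th percentile

    print(f"Plausible range: ${low:.0f} billion to ${high:.0f} billion")

The output is a band, not a point, and the band, not its midpoint, is the honest summary of what the analysis actually knows.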

Why not? And are there legitimate reasons for downplaying uncertainties? In years of research, Manski reports encountering a number of rationalizations. One common idea is that people generally don’t like uncertainty and tend to make better decisions if it’s ignored. This, he points out, is psychologically naive, as research shows that people actually deal with uncertainty in many different ways.

The real reason for expressing incredible certitude, Manski argues, is rhetorical: strong claims seem more surprising and get more attention, making it tempting for researchers to offer simplistic analyses with unequivocal policy recommendations. The idea resonates with the charge that many economists build models on implausible assumptions to derive surprising results, then conveniently forget to emphasize those assumptions when presenting the supposed policy implications.

Of course, there can be legitimate reasons not to report specific numerical estimates of errors. I asked economist Bill Conerly, who often makes macroeconomic predictions in a column for Forbes, why he doesn’t give any explicit figures for the likely errors in his estimates. He said he doesn’t think he can, and that doing so would itself be misleading. Variations in many influences on the economy can’t realistically be captured with statistics. For example, Conerly asked, “Can you give me a standard error around your own prediction of what Trump will do?” Obviously not.

Conerly replaces a quantitative estimate of uncertainty with clear verbal descriptions emphasizing just how much we don’t know about what might happen and why. That seems sensible — don’t pretend to quantify the unquantifiable, which would only be another form of implying more certainty than is warranted.

Manski’s study goes well beyond economics, exploring the Lure of Incredible Certitude in medical research and other areas of social science, such as criminal justice. In this hyperpartisan era, it’s easy to see how people might be lured into making things seem more certain than they are. But researchers who do it undermine the ultimate value of science. Estimates of uncertainty should be made explicit when they can be. Otherwise, more researchers should act like Conerly, offering insights while being aggressively open about what they don’t know.

To contact the editor responsible for this story: James Greiff at jgreiff@bloomberg.net

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Mark Buchanan, a physicist and science writer, is the author of the book "Forecast: What Physics, Meteorology and the Natural Sciences Can Teach Us About Economics."

©2018 Bloomberg L.P.