
The Forecasting Business Shouldn’t Be This Bad

(Bloomberg Opinion) -- By now, some of you may have noticed that I am none too fond of the average Wall Street forecaster.

This isn't some random prejudice; it's a view that has evolved over long experience, and it is backed up by solid statistical evidence that forecasters are not very good at forecasting.

It bears repeating: 1) almost all forecasting is folly, and 2) forecasting is marketing. However, with a few small tweaks, forecasting could be more useful, or at least more honest. Here are a few suggestions:

No. 1. Share the underlying model’s past performance: We're all familiar with those who trumpet the accuracy of their forecasts. And now, you too can partake of their unique genius for X, for the low, low price of . . . whatever.

But let's restrain our enthusiasm for the anecdotes cited as proof. Now, if the forecaster has an audited track record showing how the prognostications stacked up against reality during the past five years, and can demonstrate that those calls made clients money, that might be worth noticing.

But probably not.

There is a reason the standard disclaimer -- past performance is no guarantee of future results -- is provided. It's designed to protect people (largely from themselves). It serves as a reminder that a good track record may not be repeatable; that those winning outcomes could have been the result of luck or that specific era or some other random element.
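What might such an audit actually check? Here is a minimal sketch in Python; the forecast and realized values are invented purely for illustration, and the two scores (average miss and directional hit rate) are just one reasonable choice among many.

```python
# Hypothetical track-record audit. All forecast and realized values
# are invented for illustration.

forecasts = [2100, 2350, 2600, 2750, 3000]  # year-end index calls
actuals = [2239, 2674, 2507, 3231, 3756]    # what the index actually did

# Mean absolute percentage error: how far off was each call, on average?
mape = sum(abs(f - a) / a for f, a in zip(forecasts, actuals)) / len(actuals)

# Directional hit rate: starting from each prior year's actual close,
# did the forecast at least get the direction of the move right?
hits = sum(
    (f - prev) * (a - prev) > 0
    for f, a, prev in zip(forecasts[1:], actuals[1:], actuals[:-1])
)
hit_rate = hits / (len(actuals) - 1)

print(f"average miss: {mape:.1%}, directional hit rate: {hit_rate:.0%}")
```

Even a crude score like this makes the disclaimer's point: a forecaster can call the direction right most years and still miss the level by double digits.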

No. 2. Acknowledge the unknown variables: Reading a Politico column about Yale economist Ray Fair’s economic models pointing to a Donald Trump re-election blowout in 2020, I was impressed.

Not because of the landslide forecast, but because of the very smart use of caveats: “Fair and other analysts who use economic data and voting history to make predictions also note that a sharp decline in growth and an increase in the unemployment rate by next fall could alter Trump’s fortunes.”

Not locking oneself into any single outcome because things might change is simply common sense. Unfortunately, that characteristic is all too rare among forecasters.

No. 3. Acknowledge inherent biases: Bloomberg Businessweek recently noted how poorly economists do at forecasting recessions. As a group, “they’re more likely to miss recessions than to predict ones that never occur.” That isn't necessarily because the economists are bad at economics; it's basic game theory. The career risk of being wrong is very real. “There’s not much incentive to stick one’s neck out,” as the article noted.

Groupthink tends to push forecasters to try to hide in the middle of the pack, making them more likely to be wrong when the consensus is wrong, but also less likely to suffer negative consequences when that occurs. Everybody being wrong can offer a lot of protection.

Perhaps this helps to explain a recent finding in an International Monetary Fund research paper: During a 22-year period (1992 to 2014), private-sector economists managed to forecast only five recessions out of 153 economic contractions across 63 countries, a hit rate of barely 3 percent.

No. 4. Use errors to make better forecasts: Most of us learn too little from our successes, but we may learn even less from our failures. Investors such as Ray Dalio, business leaders like Jeff Bezos and others have acknowledged failure as a key part of their process.

All models do is take a series of data inputs, sprinkle a little fairy dust on them, and then generate an output. But even if the model does OK, how has the forecaster used its output? Can they make money for clients with it? Alternatively, do they anchor themselves to these predictions, regardless of subsequent data?

There is a specific skill to adjusting to errors and failures in order to improve. It is a skill used too little by economists and financial analysts.
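As a toy example of what error-driven adjustment can look like, here is a sketch of a simple bias correction, a standard technique rather than any particular forecaster's method; all numbers are invented.

```python
# Hypothetical sketch of learning from past errors: subtract the
# forecaster's average historical miss from the next raw forecast.
# All numbers are invented.

past_forecasts = [3.0, 2.8, 3.2, 3.1]  # e.g., GDP growth calls, in percent
past_actuals = [2.3, 2.9, 2.2, 2.5]    # what growth actually was

# Average signed error: positive means a habit of over-forecasting.
avg_error = sum(f - a for f, a in zip(past_forecasts, past_actuals)) / len(past_actuals)

raw_forecast = 3.0
adjusted = raw_forecast - avg_error

print(f"average bias: {avg_error:+.2f} points, adjusted call: {adjusted:.2f}%")
```

The correction itself is trivial; the discipline of recording the misses and looking at them is the part that gets skipped.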

No. 5. Learn from the pros: Perhaps the foremost academic expert on why so many forecasts fail is University of Pennsylvania professor Philip Tetlock. His 2006 book "Expert Political Judgment" studied thousands of forecasts, and came to the conclusion that people simply are not good at making predictions about much of anything.

With one caveat: Buried within Tetlock’s huge dataset of failed forecasts was a surprising subset of superforecasters. Those in this group stood out for their ability to make more accurate predictions than others. They did it in a variety of ways: they incorporated multiple sources of information and used an array of analytical tools. They broke problems down into small, manageable pieces. Perhaps most important, they were not afraid to change their forecasts when the data changed.
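One standard tool for that last habit, revising a probability as evidence arrives, is a Bayesian update. Here is a minimal sketch; the prior and the likelihoods are invented for illustration and are not drawn from Tetlock's data.

```python
# Hypothetical sketch of Bayesian belief updating: revise a forecast
# probability as each new signal arrives. All numbers are invented.

def bayes_update(prior, p_signal_if_true, p_signal_if_false):
    """Return P(hypothesis | signal) given a prior and two likelihoods."""
    numerator = prior * p_signal_if_true
    return numerator / (numerator + (1 - prior) * p_signal_if_false)

p = 0.20  # assumed starting probability of recession within a year

# Each tuple: (chance of seeing this signal if a recession is coming,
#              chance of seeing it if one is not).
signals = [
    (0.7, 0.3),  # e.g., yield-curve inversion
    (0.6, 0.4),  # e.g., weakening leading indicators
    (0.3, 0.6),  # e.g., a strong payrolls report, which cuts the other way
]

for if_true, if_false in signals:
    p = bayes_update(p, if_true, if_false)
    print(f"updated recession probability: {p:.0%}")
```

Note that the third signal pushes the estimate back down. The point is not the particular numbers but the willingness to move in either direction as the data change.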

None of these methodologies are out of reach for economists or other Wall Street seers. They should try them. They certainly couldn’t yield results any worse than those the prognosticators now have on offer.

To contact the editor responsible for this story: James Greiff at jgreiff@bloomberg.net

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Barry Ritholtz is a Bloomberg Opinion columnist. He founded Ritholtz Wealth Management and was chief executive and director of equity research at FusionIQ, a quantitative research firm. He is the author of “Bailout Nation.”

©2019 Bloomberg L.P.