
How Goldman Sachs Lost the World Cup

An artist paints designs onto a large football sculpture in Moscow, Russia. (Photographer: Andrey Rudakov/Bloomberg)

(Bloomberg Opinion) -- Goldman Sachs’ statistical model for the World Cup sounded impressive: The investment bank mined data about the teams and individual players, used artificial intelligence to predict the factors that might affect game scores and simulated 1 million possible evolutions of the tournament. The model was updated as the games unfolded, and it was wrong again and again. It certainly didn't predict that France and Croatia would meet in Sunday's final.
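To give a sense of what “simulating 1 million possible evolutions of the tournament” involves, here is a minimal Monte Carlo sketch in Python. It draws Poisson-distributed goals from made-up team strengths, plays out a toy knockout bracket many times and counts how often each side lifts the trophy. The strengths, the bracket and the choice of eight teams are illustrative assumptions, not Goldman Sachs’ actual inputs or method.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Hypothetical attacking strengths (expected goals per match); purely illustrative.
strength = {"Brazil": 2.1, "Germany": 2.0, "France": 1.9, "Spain": 1.8,
            "Belgium": 1.7, "England": 1.6, "Portugal": 1.5, "Croatia": 1.4}
# A toy quarterfinal bracket, not the real 2018 draw.
bracket = [("Brazil", "Croatia"), ("Germany", "England"),
           ("France", "Portugal"), ("Spain", "Belgium")]

def play(a, b):
    """One knockout match: Poisson goals; a coin flip stands in for extra time."""
    ga, gb = rng.poisson(strength[a]), rng.poisson(strength[b])
    if ga == gb:
        return a if rng.random() < 0.5 else b
    return a if ga > gb else b

def champion():
    """Play quarterfinals, semifinals and the final; return the winner."""
    semifinalists = [play(a, b) for a, b in bracket]
    finalists = [play(semifinalists[0], semifinalists[1]),
                 play(semifinalists[2], semifinalists[3])]
    return play(*finalists)

n = 100_000  # Goldman ran 1,000,000; fewer runs keep this quick.
counts = Counter(champion() for _ in range(n))
for team, wins in counts.most_common():
    print(f"{team:10s} {wins / n:.1%}")
```

Even with a clear favorite, the randomness of the goal draws spreads the championship probability across many teams.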

The failure to accurately predict the outcome of soccer games is a good opportunity to laugh at the hubris of elite bankers, who use similar complex models for investment decisions. Tom Pair, founder of the Upper Left Opportunities Fund, a hedge fund, tweeted recently:

Tom Pair (@TomPair2): UBS ran 10,000 simulations and forecasted Germany to win the World Cup. Goldman Sachs ran 1,000,000 simulations &… twitter.com/i/web/status/1…

Of course, past data don’t always predict the future; Goldman Sachs never tells clients to make decisions solely on the basis of its models’ findings. And in any case, the model only generated probabilities of winning a game and advancing, and no team was given more than an 18.5 percent chance of winning the World Cup. The moral of the story is probably that buzz-generating technologies such as big data and AI don’t necessarily make statistical forecasting more accurate. 

Goldman Sachs ran a less ambitious statistical exercise for the 2014 World Cup. It used only certain team statistics, such as the number of goals scored in the last 10 official international games and the teams’ rankings, as well as variables to account for the teams’ distance from home. The initial simulation had Germany losing 1-2 to Brazil in the semifinal and Brazil beating Argentina 3-1 in the final. It also predicted that Spain, the 2010 champion, would be defeated by Argentina in the other semifinal.
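For illustration only, a team-level model of that 2014 vintage can be sketched in a few lines of Python: a Poisson regression maps features such as recent scoring form and ranking to expected goals. The features, the toy training data and the fixture below are assumptions, not Goldman Sachs’ actual data or specification.

```python
# 2014-style sketch: a Poisson regression on team-level features.
# Feature columns: goals per game over the last 10 internationals, world ranking,
# and a same-continent-as-host flag. All numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import PoissonRegressor

X_train = np.array([[2.1,  3, 1],
                    [1.4, 10, 0],
                    [1.8,  1, 0],
                    [1.1, 19, 1]])
y_train = np.array([3, 1, 2, 1])  # goals the team scored in each historical match

model = PoissonRegressor().fit(X_train, y_train)

# Expected goals for a hypothetical semifinal, using made-up feature rows.
brazil = model.predict([[2.3, 3, 1]])[0]
germany = model.predict([[2.0, 2, 0]])[0]
print(f"expected score: Brazil {brazil:.1f} - {germany:.1f} Germany")
```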

Spain, of course, was knocked out in the group stage, and Brazil lost 1-7 to Germany, which went on to win the Cup. In a post-mortem, Goldman Sachs economists Jan Hatzius and Sven Jari Stehn cited soccer’s “inherent randomness” as a reason for the model’s failure and wrote:

The model predicted a 2-1 victory for Brazil, but the actual result was a 7-1 win for Germany. We regret the miss. But, speaking as Germans, we also note that there are more important things than being right.

For the more elaborate 2018 attempt, Goldman Sachs’ economists fed oodles of data about teams and individual players into four different types of machine-learning models to figure out the statistics’ predictive power. Then they ran simulations to compute the most likely score of each game. The first results of adding player-level variables, such as an athlete’s average ranking on the team and measures of his defensive and offensive abilities, looked encouraging. Thanks to the use of more granular data, made possible by AI, this year’s model should have worked better than the 2014 one.
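What that player-level upgrade might look like, again as a hedged sketch rather than Goldman Sachs’ actual pipeline: a gradient-boosted tree (standing in for the four model families the bank describes) is trained on invented player aggregates, and its goal predictions seed a simulated scoreline. Every feature, figure and team below is an assumption made for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

# Feature columns: average player rating, offensive score, defensive score, team ranking.
X_train = rng.uniform([60, 0.5, 0.5, 1], [95, 3.0, 3.0, 30], size=(200, 4))
# Synthetic target: more goals for sides with better players and attacking scores.
y_train = rng.poisson(0.03 * X_train[:, 0] + 0.5 * X_train[:, 1])

model = GradientBoostingRegressor().fit(X_train, y_train)

# Predicted expected goals for a hypothetical final, then one simulated scoreline.
france = model.predict([[88.0, 2.4, 2.1, 7]])[0]
croatia = model.predict([[84.0, 2.1, 1.9, 20]])[0]
score = rng.poisson([france, croatia])
print(f"simulated final: France {score[0]} - {score[1]} Croatia")
```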

If anything, it worked worse.

The 2014 version managed to put three of the right teams in the semifinals, so it wasn’t that far off. The epic Brazilian meltdown against Germany was a one-off event that will never be forgotten in either soccer-crazed country. There was no way for the computers to get that one right.

For 2018, the investment bank initially had Brazil, France, Germany and Portugal in the semifinals; Brazil was supposed to beat Germany in the final. Of these teams, only France reached the final four.

Goldman Sachs updated the model throughout the tournament. It predicted a Brazil-Spain final on June 29 and Brazil-France on July 4. Its most recent prediction had England and Belgium squaring off for the cup. Both were eliminated in the semifinals.  

Of course, the predictions weren’t million-dollar bets or even promises. “The forecasts remain highly uncertain, even with the fanciest statistical techniques, simply because football is quite an unpredictable game,” wrote Stehn and his colleagues Manav Chaudhary and Nicholas Fawcett. “This is, of course, precisely why the World Cup will be so exciting to watch.”

That last prediction, at least, did come true.

In fairness, Goldman wasn’t the only bank whose sophisticated model couldn’t cope with the complex task. UBS, for example, gave Germany the highest probability of winning, followed by Brazil, Spain and England. Croatia, according to the Swiss bank, had a 4.4 percent chance of reaching the semifinals. Bookmakers and academics working with bookmakers’ odds have done no better.

But Goldman Sachs’ misfire is perhaps the most curious. Modern technology allows a remarkable level of detail in modeling, creating the illusion that more detail means more predictive power. But life can be infinitely more complex than even the most carefully built, AI-powered, exhaustively data-mined model. Soccer, with the many factors that affect game outcomes -- players’ injuries and intra-team conflicts, the refereeing, the weather, coaches’ errors and moments of inspiration -- remains only a tightly regulated game involving a few dozen people. The behavior and performance of big corporations, entire industries and nations are arguably even more difficult to model based on data about the past.

The technological sophistication of today’s models shouldn’t cloud our judgment. Life still defeats our best efforts at locking it into a database.

To contact the editor responsible for this story: Max Berley at mberley@bloomberg.net

©2018 Bloomberg L.P.