Perils of Polling

(Bloomberg) -- Polls used to be seen as the gold standard for assessing politicians, elections and voter concerns. In recent years, polling’s reputation has been tarnished. In the 2017 U.K. election, final polls underestimated the Labour vote and overestimated support for the U.K. Independence Party. Almost every poll in the 2016 U.S. election missed support for Republican Donald Trump, who won the presidency. In 2016, pollsters failed to predict the clear victory of the “leave” camp in the U.K. referendum on whether to stay in the European Union and the rejection of the Colombian peace deal with rebels. In 2015, polls were wrong on outcomes in Israel, the U.K. and Greece. The bungles have undermined the industry’s claim to scientific rigor. Can poll crafters devise a better formula that delivers more accurate results in this no-time-to-spare mobile era?

The Situation

Ahead of the 2018 U.S. midterm elections in November, many people fear that the polls can’t be trusted. While post mortems of the 2016 election noted that national polls correctly predicted that Hillary Clinton would win the total U.S. popular vote, polls at the state level were badly off and underestimated Trump support. Because the U.S. president is ultimately chosen by the Electoral College, which is guided by state results, almost no polls predicted the Trump victory. Pollsters certainly face a range of constraints. In the U.S., a majority of people now live in homes without a landline phone. So to reach a representative group, firms have increased calls to mobile phones, which are now three-quarters of some samples. To do this, pollsters have to dial numbers by hand (U.S. law bans cell phone autodialing) and make more calls, since mobile users tend to screen out unknown callers and fewer will sit through 20 minutes of questions. This isn’t cheap — mobile-phone surveys can cost nearly twice as much — or easy. Pew Research’s response rate on its 1997 polls was 36 percent; it was just 9 percent in 2016. 

The Background

George Gallup, an advertising market researcher, created the first scientific political poll in 1932 for his mother-in-law, who was running to be secretary of state of Iowa. (She won.) He founded the American Institute of Public Opinion, later called Gallup Polls, in 1935. During the 1936 presidential election, the prestigious Literary Digest’s survey tallied millions of returned postcards and found overwhelming support for the challenger, Republican Alfred Landon. Gallup interviewed 50,000 people chosen at random and correctly predicted Democratic President Franklin D. Roosevelt would win re-election. Yet Gallup and other pollsters botched calls on the 1948 presidential race, and the winner, Harry Truman, gleefully waved a newspaper whose survey-based first-edition headline read: “Dewey Defeats Truman.” These polls’ errors included not surveying right up to Election Day, missing people who made last-minute choices. Further refinements in the U.S., including conducting surveys in the evening when more people were home, helped improve accuracy. U.K. polls were overhauled after 1992, when some underestimated the Conservatives’ win over Labour by almost 9 percentage points. This was attributed to “shy Tories” – people who planned to vote Conservative but told pollsters they hadn’t yet made up their minds.

The Argument

The biggest polling firms say they’ve learned from past mistakes. One problem with 2016 U.S. state polls, for example, was that they didn’t adjust their samples for an over-representation of college-educated respondents, who were more likely to favor Clinton. Pollsters say this has been fixed for 2018. But poll models are built on past turnout patterns, and many observers expect voter participation to be higher in the first midterm election of Trump’s presidency, which could throw off some polls’ accuracy. There are also concerns that as expensive phone surveys become less frequent, poll aggregators (sites that average many surveys) will be dominated by less scientific polls and accuracy will suffer. Some pollsters now sign up pools of respondents in advance and offer cash or gift cards as incentives, a practice believed to skew both the sample and the quality of answers. Other firms have turned to cheaper web questionnaires, which have the obvious drawback of restricting the sample to people who are online. Pollsters like to remind people that good surveys always state a margin of error: a plus or minus 3 percentage point margin means a candidate with 48 percent support could really be anywhere from 45 percent to 51 percent, the difference between winning and losing.
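
To make those two ideas concrete, here is a minimal sketch in Python of reweighting a poll so that education groups count in proportion to the electorate rather than the raw sample, and of the standard margin-of-sampling-error formula. All of the numbers, group shares and the sample size below are illustrative assumptions for this example, not data from any actual survey or any pollster’s real model.

import math

def weighted_estimate(support_by_group, group_shares):
    # Combine per-group candidate support using the given group shares as weights.
    return sum(support_by_group[g] * group_shares[g] for g in support_by_group)

def margin_of_error(p, n, z=1.96):
    # 95 percent margin of sampling error for a simple random sample:
    # z * sqrt(p * (1 - p) / n).
    return z * math.sqrt(p * (1 - p) / n)

# Made-up numbers: college graduates back Candidate A more strongly and are
# over-represented in the raw sample, as they were in many 2016 state polls.
support    = {"college": 0.55, "non_college": 0.42}  # share backing Candidate A
sample     = {"college": 0.50, "non_college": 0.50}  # share of survey respondents
electorate = {"college": 0.35, "non_college": 0.65}  # assumed share of actual voters

unweighted = weighted_estimate(support, sample)      # about 48.5 percent
weighted   = weighted_estimate(support, electorate)  # about 46.6 percent
moe        = margin_of_error(weighted, n=1067)       # about 3 percentage points

print(f"Unweighted support: {unweighted:.1%}")
print(f"Education-weighted support: {weighted:.1%}")
print(f"Margin of error (n=1,067): +/- {moe:.1%}")

With these invented inputs, the unweighted sample overstates the candidate’s support by about two percentage points, and a sample of roughly 1,000 respondents yields the familiar plus-or-minus 3-point margin of sampling error described above.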

The Reference Shelf

  • The American Association for Public Opinion Research report on what happened in the 2016 presidential election polls.
  • A preliminary inquiry into what went wrong with the polls’ predictions in the 2015 U.K. election cited poor sampling and “herding” — when pollsters adjust their findings to be in line with what other polls are reporting.
  • Pollsters have long acknowledged that their surveys carry a measure of inaccuracy, which is why they include a margin of sampling error, explained here by the Massachusetts Institute of Technology.
  • The British Polling Council has answers for frequently asked questions.
  • Andrew Kohut, founding director of the Pew Research Center, traced the history of how polls came to play major roles in policymaking and politics.

To contact the editor responsible for this QuickTake: Anne Cronin at acronin14@bloomberg.net

©2018 Bloomberg L.P.