
The Big Question: How Do You Make Polls More Accurate?

Frank Wilkinson: You’ve conducted the famed Iowa Poll, published by the Des Moines Register and rated A+ by FiveThirtyEight, since the 1990s. It’s an Iowa institution. And in your last poll, published the weekend before the election, you found Republican Senator Joni Ernst ahead by 4 points and President Donald Trump ahead by 7 statewide — which is pretty much the way the race turned out, but not what other polls predicted. In fact, polling proved dodgy across the country in 2020, with many surveys missing the mark for a second presidential election in a row.

I actually received an email this week from someone I respect a great deal, and it included the phrase, “polling is broken.” Is that correct?

Ann Selzer: Paraphrasing Shakespeare, “First thing we do, let’s kill all the pollsters.” There are polling practices that don’t work. But there are polling practices, obviously, that do work.

FW: Can you explain the distinction?

AS: I can’t tell you I’ve done an exhaustive look at everything there is to know about every single poll, but I was reading one poll that said the poll sample is weighted for party primary vote history — based on state voter registration and the census. I call it polling backward. If your head is turned around, looking backward, saying “What’s past is prologue, so I’m going to pay attention to the past,” you’re going to miss the freight train that’s coming straight ahead. I assume nothing. It sounds simplistic, and that’s why people get a little concerned and think it can’t possibly work. One day it won’t work and then I’ll be a goat, not golden. I’ll have to pick myself up and carry on.

FW: In this election, it was clear from the beginning that Trump was not trying to persuade voters to switch from one party to the other. He was trying to find non-voters and make them voters.

AS: That had been successful for George W. Bush. So let’s be clear: it’s not an unwise strategy; it has worked in the past. It was just perhaps harder this year.

FW: Given the murky composition of that electorate, why was your poll so accurate?

AS: I think my polls are accurate because I have a method where my data will reveal to me what the future electorate is going to look like. And then as somebody said, I have the guts to just sit there after the poll numbers are released and wait to find out if it’s right.

FW: Meaning that if you get back an outlier poll, you don’t fiddle with it?

AS: Well, how do I know it’s an outlier? I mean, I might know it’s an outlier in relationship to other polls, but I do not know whether it is or is not an accurate reflection of this future electorate. There’s no way for me to know for certain. If we see something that surprises us — and that sometimes happens — we check things. But most of the time we don’t find anything that we think deserves to be messed with.


FW: Isn’t what you’re describing really about skill and expertise in modeling the electorate?

AS: I think it’s a different point of view about whether it’s a good idea to try modeling the electorate. This was cemented in my spine in the 2008 caucuses. I was the outlier. Our final poll said not only that Obama would win with a comfortable margin, but also that for 60% of the people who would show up on the Democratic side, it would be their first caucus. Nobody who is modeling caucus-going would ever put a number like that in their model. It had never happened in the past. It was more like maybe 25% or 35% — somewhere in that range. I had a call from a friend of mine who was high up in the Hillary Clinton campaign, who said, “Look, I’ve trusted your numbers until now, but I’ve knocked on 99 doors and I don’t find this lurking Obama support.”

And I said, “Well, tell me about the doors you knocked on.”

“Oh, we’re talking to former caucus goers and registered Democrats.”

What Obama figured out is that if that were the caucus-going public, he would lose. So he was out creating new caucus goers. And the entrance poll on caucus night said it was something like 57% new caucus goers. Judy Woodruff comes to my office and says, “How did you assume this? Why did you assume this?” And my answer is so simplistic. I assumed nothing. My data told me.

FW: In Iowa this fall, most polls predicted a very close election. You concluded those numbers didn’t represent the actual electorate.

AS: It’s not a matter that I knew or didn’t know. My approach is to stay out of the way of what the data will reveal.

FW: But you still have to decide what an appropriate universe of respondents is, don’t you?

AS: What we do is random digit dialing: we talk to adults 18 and over, and you’re one of two ilks. You are either going to meet our definition of a likely voter and we’re going to continue with you, or you’re not, in which case we’re going to gather a few demographic bits of information — your age, your sex, maybe education, and what county you live in. So that’s about 900 people whom we talk to, and we’re going to weight that to known population parameters so that it looks like the whole population. Then we extract from that the 800 who are likely voters. And if you’re more likely to vote, if you’re older, for example, you’re going to be more prevalent in my likely voter sample. There is known error in who responds to polls and who doesn’t. And we adjust for that at the general population level. We don’t presume to know what’s happening at the likely voter level.
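(For illustration only, here is a minimal Python sketch of the weighting step described above: weight the full adult sample to known population parameters, then extract the likely voters with their weights intact. The age groups, targets, and tiny sample are hypothetical stand-ins, not Selzer & Co.’s actual parameters.)

    import pandas as pd

    # Hypothetical sample of adults reached by random digit dialing.
    sample = pd.DataFrame({
        "age_group":    ["18-34", "35-54", "55+", "55+", "35-54", "18-34"],
        "likely_voter": [False, True, True, True, True, False],
    })

    # Known population shares for each age group (census-style targets, invented here).
    targets = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

    # Weight respondents so the sample's age distribution matches the population.
    observed = sample["age_group"].value_counts(normalize=True)
    sample["weight"] = sample["age_group"].map(lambda g: targets[g] / observed[g])

    # Only now pull out the likely voters, keeping their population-level weights.
    # Older respondents vote more, so they end up more prevalent here; nothing
    # about the shape of the likely-voter electorate itself is assumed.
    likely_voters = sample[sample["likely_voter"]]
    print(likely_voters["weight"].sum() / sample["weight"].sum())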

FW: Most of the public polls in 2020 were off in the same direction. Whether they were state or national, they were a little bit higher, or a lot higher, on the Democratic side. Is part of the problem the response rates and what type of people are agreeing to respond?

AS: Oh yeah. We rely on the kindness of strangers — people who will pick up their phone and find out who it is and continue to talk to us and answer our questions. And, yes, my concern for the industry, and for me personally, maybe more than anything else, is how long that kind of business model can persist.
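(A toy simulation, my illustration rather than anything from the interview, of the problem being described: if the true electorate is split 50/50 but supporters of one candidate are less willing to answer the phone, the raw poll drifts in one direction. The response rates below are invented.)

    import random

    random.seed(0)
    N = 100_000
    # Hypothetical: candidate B's supporters answer pollsters less often.
    RESPONSE_RATE = {"A": 0.06, "B": 0.04}

    responses = []
    for _ in range(N):
        preference = random.choice(["A", "B"])           # true electorate: 50/50
        if random.random() < RESPONSE_RATE[preference]:  # who actually responds
            responses.append(preference)

    share_a = responses.count("A") / len(responses)
    print(f"Poll says A: {share_a:.1%}")  # roughly 60%, far from the true 50%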

FW: How should the industry deal with that?

AS: There is no such thing as the industry at large. It’s a collection of commercial enterprises that have various ways that they market what they do. Why would a newspaper pay the money to do their own polling? Well, because it helps tell a story. Imagine if there were no polls and you’re a reporter, how do you know what’s going on? Who’s getting traction? Who’s failing to get traction? It’s a device to help inform. This idea that it’s become an Olympic sport is a little odd, right?

FW: Yet it seems that inaccuracy is more prevalent than in the past.

AS: Some of that is due to the changing technology. When I first started polling, you typically had one phone number per household. And if we knew your phone number, we pretty much knew where you lived. Those were the days, my friend, the golden years. Then came this, that, and the other technological change. And it got harder and harder. Now with response rates being so low, the deck is stacked against us being able to do this.

FW: Both because of technology and because of the decline of social trust, I assume.

AS: I don’t know what to say about that. Again, you’re talking to me after our poll worked out, right? So I can’t be in a position of saying, “Well, there’s no trust.”

FW: Because you had sufficient trust to reflect what the electorate was in fact going to do.

AS: Correct.

FW: Do you foresee any specific evolution to increase accuracy, or do you think that polling will be a bit of a shot in the dark?

AS: The American Association for Public Opinion Research has a listserv, and there’s a fair amount of conversation on it. I don’t think there will be some unified response. I think the change will happen at the level of the individual pollster deciding to do something different. The problem in 2016 had to do with an insufficient number of high-quality polls in three states (Michigan, Pennsylvania, Wisconsin). So why did that happen? Well, because people decided telephone interviewing was either already dead or would soon be dead. And that the answer was to create very large panels of people who hold up their hands and say, “Poll me.”

The problem becomes: how do you make this group of volunteers look like the future electorate? Some have better approaches than others, but there is some guesswork, and it’s the guessing — even if it is educated guessing — that makes me nervous.

FW: Because the randomness has been removed.

AS: Correct. There’s a thing in social science called the instrument effect, which is that the mere fact of measuring something changes what you’re measuring. If you take your tire pressure, you release some air out of the tire in order to measure it; you change the tire pressure. So if you are a person who has agreed to be polled repeatedly as part of a panel, what does that change? Does it change the way you consume media? Change the conversations you have with people? Does it change the conversations you have with yourself in the middle of the night?

FW: Introducing one more complication to an already complicated task. So, what’s your bottom line on how polls can be made more accurate?  

AS: Other people — and I run across them all the time — say, “Well, what are you doing to anticipate and model the size and the shape of the future electorate?” And I say, “I don’t know how you do that.” Because yes, the best predictor of future behavior is past behavior. But the Selzer Caveat is: “Until there’s change.”

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Francis Wilkinson writes about U.S. politics and domestic policy for Bloomberg Opinion. He was previously executive editor of The Week, a writer for Rolling Stone, a communications consultant and a political media strategist.

©2020 Bloomberg L.P.