A Major Miss in Michigan Puts Polling Under the Microscope

Photo: Harvard U. Polling is more art than science, says John Della Volpe, director of polling at the Institute of Politics at Harvard University. A key to getting it right, he says, is “knowing how to ask questions in the right context.”

Over time, sciences tend to improve. But public-opinion polling seems not to. Before Tuesday’s Democratic presidential primary in Michigan, more than a dozen polls were unanimous in predicting a win for Hillary Clinton, by an average of more than 20 percentage points. But the actual voting favored Bernie Sanders, 50 percent to 48 percent.

Polls did a much better job of predicting the results of Michigan's Republican primary, but many observers still saw the Democratic upset as an embarrassing failure for the industry. In an interview on Wednesday, John Della Volpe, director of polling at the Institute of Politics at Harvard University, explained who actually trains pollsters, whether they are practicing a science or an art, and why they continue to make such high-profile errors. The interview has been edited for clarity and brevity.

Q. What happened in Michigan?

A. I haven’t studied it in depth, but I have some general thoughts on when things typically go wrong.

One is that it’s very difficult to poll an open primary, where independent, unaffiliated voters can choose either a Democratic or a Republican ballot. In this situation you had two contested primaries, and a significant number of independent voters did not make up their minds, perhaps until the final day or even the moment they asked for that ballot.

A second factor is that some pollsters choose to interview only voters who have a history of voting in primaries. It is clear that you can use history to predict some voting behavior. However, what we’re seeing this year are outlier events. Donald Trump and Bernie Sanders are bringing new people into the process — people who may have just registered, or may not have voted in many years.

One of the noteworthy failures in polling in recent history, the defeat of the House majority leader, Eric Cantor, in his Virginia district, involved new groups of voters who were missed by pollsters.

A third factor is how the data are weighted by age. Exit polls showed that nearly 20 percent of those voting in the Michigan Democratic primary were age 18 to 29, well above the expected level of about 15 percent, and that age group voted, 4 to 1, in favor of Bernie Sanders.
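
To make the arithmetic concrete, here is a minimal sketch, in Python, of how the assumed youth share shifts a poll’s topline margin. The 15 percent and 20 percent shares and the 4-to-1 youth preference for Sanders come from the interview; the split among older voters is a hypothetical figure chosen only for illustration.

```python
# Illustrative sketch: how the assumed share of young voters in the
# electorate shifts a poll's topline margin. The 15% vs. 20% youth
# shares and the 4-to-1 youth split for Sanders come from the interview;
# the 42/58 split among older voters is hypothetical, for illustration only.

def topline(youth_share, youth_split, older_split):
    """Return weighted (Sanders, Clinton) support for a given youth share.

    youth_split and older_split are (Sanders, Clinton) percentages
    within each age group.
    """
    older_share = 1.0 - youth_share
    sanders = youth_share * youth_split[0] + older_share * older_split[0]
    clinton = youth_share * youth_split[1] + older_share * older_split[1]
    return sanders, clinton

YOUTH_SPLIT = (80.0, 20.0)   # 4 to 1 for Sanders among voters 18-29
OLDER_SPLIT = (42.0, 58.0)   # hypothetical split among voters 30 and up

for share in (0.15, 0.20):   # pollster assumption vs. exit-poll estimate
    sanders, clinton = topline(share, YOUTH_SPLIT, OLDER_SPLIT)
    print(f"Youth share {share:.0%}: Sanders {sanders:.1f}%, "
          f"Clinton {clinton:.1f}%, margin {sanders - clinton:+.1f} points")
```

Under these assumptions, raising the youth share from 15 percent to 20 percent moves the margin by roughly four points, which accounts for part, but only part, of the gap between the polling average and the actual result.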

It’s almost impossible for me as a pollster to come into a state once or twice, or every two or four years, and think I have an understanding of that state, and that’s what often happens with media polls and university polls.

Q. What about cellphones: Have pollsters yet figured out how to account for the shift away from landlines?

A. There is no proven methodology that says a certain percentage of a poll’s sample should be landline or cellphone. It’s actually less-educated people and people from lower socioeconomic backgrounds who are more likely to live in cellphone-only households.

One television station had Hillary Clinton ahead by significant margins, around 20 or 30 percentage points. I’d be willing to bet that was a 100-percent landline-telephone survey, which is a less-expensive methodology because privacy regulations require cellphone calls to be dialed by hand.
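
The coverage bias he is describing can be sketched with a toy calculation. In the Python snippet below, the share of cellphone-only households and the candidate preferences within each group are assumed numbers chosen only to show the mechanism: a survey that reaches only landline households reports a comfortable Clinton lead even though the fully covered population is essentially tied.

```python
# Illustrative sketch of coverage bias: if cellphone-only households lean
# toward one candidate and a survey reaches only landlines, the topline
# tilts toward the other candidate. Every number here is hypothetical.

CELL_ONLY_SHARE = 0.45                                # assumed share of cellphone-only households
LANDLINE_PREF = {"Clinton": 58.0, "Sanders": 42.0}    # assumed split among landline households
CELL_ONLY_PREF = {"Clinton": 40.0, "Sanders": 60.0}   # assumed split among cellphone-only households

def full_coverage(candidate):
    """Support if both groups were represented in proportion to the population."""
    return ((1 - CELL_ONLY_SHARE) * LANDLINE_PREF[candidate]
            + CELL_ONLY_SHARE * CELL_ONLY_PREF[candidate])

for name in ("Clinton", "Sanders"):
    print(f"{name}: landline-only sample {LANDLINE_PREF[name]:.0f}%, "
          f"full-coverage estimate {full_coverage(name):.1f}%")
```

The numbers are deliberately exaggerated to make the direction of the bias obvious; in practice pollsters try to correct for it by adding cellphone samples or weighting by phone status, at greater cost.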

Q. Is polling an art or a science?

A. It’s a great question. I just wrote a speech about the falling prominence of pollsters in America. I’m much more on the side of the art aspect of it. There is far too much emphasis on the one question of who will win or lose — there’s no shortage of that kind of data. What there is a shortage of is the insights and the context to interpret that data. It’s at least half, if not more than half, art at this stage.

Q. Do aspiring pollsters major in the field in college? Should it be a more rigorously studied subject?

A. It should be, and it’s evolving so quickly. I work with 25 young people at Harvard who care deeply, and who spend a minimum of one — oftentimes up to seven — semesters with me. My goal is: Can I work with them to help them develop a question that creates some insight into a subject that not a lot of people know about? That’s how I judge whether they’re successful. My emphasis is on the art of question-writing, interpretation, and context, and I think that’s where everybody should start: with the intellectual curiosity to put these issues into their proper context.

Q. What about the high levels of societal distrust we see? What effect is that having on polling?

A. I’d argue, and have some data to prove it, that the very act of taking a political survey is itself a political act. If you hate government, and somebody interrupts your dinner and wants to talk to you for 10 minutes about government, you’re likely to hang up.

Going back to the beginning of my career, in Massachusetts in 1990, we had a very polarizing candidate in the Democratic primary named John Silber. There were massive problems with polling in that Democratic primary because he was bringing in a new kind of voter, and his voters were unhappy with the establishment, and they didn’t want to talk about politics. Sound familiar?

I’ve never been surprised by the rise of Donald Trump. Not because I’ve been asking survey questions about Donald Trump, but because I have been asking questions about what keeps you up at night, and about how people think about their future, and that’s the art of knowing how to ask questions in the right context. You need the art of asking additional questions to piece the rest of the puzzle together.

Q. Is polling getting any better?

A. No, we’re getting worse.

Q. Why? The scientific process involves examining failures and getting better, doesn’t it?

A. There aren’t a lot of economic incentives to get it right. A TV station is going to hire a local pollster, and they’re going to poll an election once or twice, and they’re not going to get paid a lot of money. If they’re right, they’re going to get paid, and if they’re wrong, they’re going to get paid. If I make a quality prediction in the stock market, and I do it more than a couple of times, I have a very different lifestyle than if I do it in public-opinion research.

Paul Basken covers university research and its intersection with government policy. He can be found on Twitter @pbasken, or reached by email at paul.basken@chronicle.com.

