01/17/2020

The Science Of Polling In 2020 And Beyond

20:02 minutes

How does the way a poll is conducted influence the answers people give? In this explainer, Pew Research Center’s Courtney Kennedy walks through what to know about survey mode effects.


The Iowa caucus is less than three weeks away, and polls this week show that Vice President Joe Biden has a six-point lead over other primary challengers in the Democratic race. Six days ago, a different poll had Senator Bernie Sanders up by three points in Iowa, and at the start of the year another poll had the candidates tied in the state.

In today’s fast-paced digital culture, it is more difficult than ever to follow and trust political polls. Campaigns, pollsters, and media outlets each say that their numbers are right, but can report different results. Plus, the 2016 election, when the major story was how political polling got it wrong, is still fresh in the public’s mind. SciFri listeners expressed uneasiness about their polling encounters and shared their stories on the Science Friday VoxPop app.

But despite how people may feel about the practice, the numbers suggest that polls are still working. Even as telephone survey response rates have fallen to around 5%, polling accuracy has stayed consistent, according to a new report published by the Pew Research Center. But things get trickier when talking about online polls. 

“The challenge has been getting a sample that’s representative of the population online,” says Doug Rivers, chief scientist with online pollster YouGov. He spoke with Science Friday earlier this week. “You don’t have a device like what’s used in telephone polling in randomly generating telephone numbers as a way of selecting people.”

So how can polling adapt to the way people live now, with texting, social media, and connecting online? And will the public continue to trust the numbers? Ira talks with Courtney Kennedy, director of survey research at the Pew Research Center, about the science of polling in 2020 and beyond.


Can You Trust That Poll? 

With the rise of the internet and DIY online surveys, anyone can conduct a poll, says Courtney Kennedy at Pew Research Center. While the increased access to polling technology has allowed for creativity and new perspectives, it has also made it difficult to judge the legitimacy of a poll. Before Friday’s show, Kennedy gave SciFri three questions you should ask when you’re evaluating a poll:

1. Who created the poll?

“You need to know if there are any conflicts of interests,” Kennedy says. If you see a poll by a campaign or a lobby group, you’ll want to disregard it because these polls are released selectively and may have an agenda behind them, she says. 

“Who did the poll is a key thing, always will be.” 

2. How were people recruited?

“If you told me you went out and interviewed 2,000 people across the country, what’s the source? What database did you actually use to find them?” she says. 

More traditional means of recruiting people include random-digit telephone dialing and polling from a list of registered voters. However, the polling industry has shifted dramatically away from telephone methods and toward digital ones. Online polls pull from a wide array of sources. People may be solicited through pop-up ads, social media, or a survey invitation emailed to a corporate membership list. Other online polls recruit people by mailing notices to a random sample of addresses, while some use an opt-in sample, or what pollsters call a “convenience” sample, of people on the internet, Kennedy says.

You should ask yourself: “Was a poll done with a truly randomized sample of Americans from a list that covered everybody? Or was it done with more ad hoc means?” 

3. Did the pollster account for population representation in the statistics?

Kennedy admits this statistical assessment can be tough to make, but she says you want to ask: Did the pollster weight the data to be representative of race, age, education, gender, geography, and other demographics?

“Good pollsters just list out the variables that they adjusted on to make the poll as representative as possible,” Kennedy says. “You want to see that all the major demographic variables are listed there.” 
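
As a rough illustration of the kind of demographic adjustment Kennedy describes, here is a minimal post-stratification sketch in Python. All of the numbers are made up for the example, not taken from the Pew report or any actual poll: a hypothetical sample with too many college graduates gets down-weighted so the weighted mix matches the population.

```python
# A minimal sketch of demographic weighting (post-stratification), with
# made-up numbers. Each group gets a weight equal to its share of the
# population divided by its share of the sample, so over-represented
# groups count less and under-represented groups count more.

# Hypothetical education mix: share of U.S. adults vs. share of respondents.
population_share = {"college_grad": 0.35, "non_grad": 0.65}
sample_share = {"college_grad": 0.50, "non_grad": 0.50}

weights = {
    group: population_share[group] / sample_share[group]
    for group in population_share
}
print(weights)  # {'college_grad': 0.7, 'non_grad': 1.3}

# Hypothetical candidate support within each group, to show how the
# adjustment moves the topline estimate.
support = {"college_grad": 0.58, "non_grad": 0.44}

unweighted = sum(sample_share[g] * support[g] for g in support)
weighted = sum(population_share[g] * support[g] for g in support)
print(f"unweighted: {unweighted:.1%}  weighted: {weighted:.1%}")
# unweighted: 51.0%  weighted: 48.9%
```

In practice, pollsters adjust on several variables at once (race, age, education, gender, geography and so on), typically with iterative raking rather than a single cross-tabulation, but the principle is the same.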



Segment Guests

Courtney Kennedy

Courtney Kennedy is the director of Survey Research for the Pew Research Center in Washington, D.C.

Douglas Rivers

Douglas Rivers is the Chief Scientist for YouGov, based in New York, New York.

Segment Transcript

IRA FLATOW: This is Science Friday. I’m Ira Flatow. The Iowa caucuses, less than three weeks away. And the polls this week show that Vice President Joe Biden has a six point lead over the other primary challengers in the Democratic race. Six days ago, a different poll had Senator Sanders up by three points in Iowa. And at the start of the year, another poll had the candidates tying in the state.

Of course, these days it feels hard to trust political polls with campaigns, pollsters, and media outlets each saying that they’ve got the numbers right. Add to that the lurking memory of the 2016 election when the major storyline was how political polling got it wrong. No wonder many of our listeners expressed some polling fatigue and uneasiness when sharing their own experience of being polled.

SPEAKER 2: I’ve been called about asking how I would vote. I believe that your vote should be secret, and you should never ever tell anyone how you should vote.

SPEAKER 3: I’ve gotten frequent calls on my landline. I used to talk to them, and I used to give out some information. But I no longer do that. I feel it’s too invasive and too personal.

SPEAKER 4: I’ve been contacted multiple times by in-person pollers. Every time, though, that I’ve asked who’s behind the poll, they say they don’t know or that they can’t tell.

SPEAKER 5: I was contacted by political pollsters via email. And to be honest with you, I thought it might be a spam email. So I did some further investigation because I do want to answer it so they can see where people’s opinions are.

SPEAKER 6: I’ve been contacted by political pollsters via email, and I delete all of those emails.

IRA FLATOW: Thank you for sharing those experiences via this Science Friday VoxPop app. But despite how people may feel about political polling, the numbers suggest that polls are still working. According to a new report published by the Pew Research Center, even as telephone survey response rates have fallen to around just 5%, that survey says that polling accuracy has stayed consistent.

That’s been the story with traditional telephone surveys, at least. Things get a bit trickier as polling has moved online. Earlier this week, we spoke with Doug Rivers, chief scientist with the online pollster YouGov, who says online polling still faces some major hurdles.

DOUG RIVERS: The challenge has been getting a sample that’s representative of the population online. You don’t have a device like what you used in telephone polling of randomly generating telephone numbers as a way of selecting people.

IRA FLATOW: So how can polling adapt to the way people live now, with texting and social media and connecting online? And will the public continue to trust the numbers? Here with me to talk about what polling looks like in the year 2020 and beyond is Courtney Kennedy, director of survey research at Pew Research Center, one of the authors of that recent polling report. Welcome to Science Friday.

COURTNEY KENNEDY: Thank you so much.

IRA FLATOW: That is a fantastic report you folks have put together. It’s just great reading and highly recommended to our listeners. Let me let our listeners weigh in on polling. 844-724-8255, that’s 844-SCI-TALK. You can tweet us @scifri.

Let’s begin at the beginning, Courtney. Give me the ABCs of a typical poll. Take me from the beginning to the end. How’s it done?

COURTNEY KENNEDY: Sure. Ideally, you want to start a poll with what we call a frame. You need some complete list, ideally, that has every single American on it. And so traditionally with polling, as you mentioned, we used to do phone numbers. And there actually is a complete list of all the landline numbers in the US, and there’s a complete list of all the cell phone numbers in the US.

And for decades, it worked really well to use those lists to draw a nationally random sample and contact folks. But as you alluded to, in recent years, a lot of the polling industry has moved online. And as Doug Rivers said, there is no analog online, right? There’s no master list of email addresses or screen IDs or anything like that where we can do that same national random sampling.

But that’s where you begin, with a nationally random sample. You go out, you do your interviews, try to ask the right questions that are neutral and unbiased. And then, one piece of polling that is getting a lot more attention these days is the back end, which is the stuff that we call weighting, where the pollster needs to take their data set of interviews and statistically adjust it to make it as representative as possible of the US population.
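
As a loose illustration of the phone-number frame Kennedy describes, here is a minimal random-digit-dialing sketch. The “working blocks” are invented for the example; real RDD samples are drawn from commercial lists of active area-code and exchange blocks.

```python
import random

# A minimal sketch of random-digit dialing: pick a known working
# 1,000-number block, then append three random digits. The blocks below
# are hypothetical placeholders, not real telephone prefixes.
working_blocks = ["212-555-1", "515-555-0", "608-555-3"]

def draw_rdd_sample(blocks, n, seed=0):
    """Draw n phone numbers uniformly from equally sized blocks."""
    rng = random.Random(seed)
    return [f"{rng.choice(blocks)}{rng.randrange(1000):03d}" for _ in range(n)]

print(draw_rdd_sample(working_blocks, 5))
```

Because every number in the frame has a known, equal chance of selection, this supports the kind of national random sampling that, as Kennedy and Rivers note, has no direct online equivalent.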

IRA FLATOW: So do you go in, then, with a sampling error? If you don’t have a complete random sample, is there some sample bias already built into it?

COURTNEY KENNEDY: Well, we don’t think of it as bias. But there’s definitely sampling error in every survey. And that stems from if you do a census, if you interview everybody in the whole country, you have no sampling error because you talk to everybody. But if you do a subsample, which we all do, you only interview 1,000 or 2,000 people.

By virtue of interviewing a subset of people and not everybody, you automatically have sampling error. And it’s not really a bias, but it means there’s that margin of error. Your one estimate could be roughly three points too high, or it could be three points too low. You’re probably going to be pretty close, but you can’t assume it’s perfect because you didn’t talk to everybody.
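
For readers curious where that “roughly three points” comes from, it matches the standard 95 percent margin of error for a simple random sample at the worst case of an evenly split question. The sketch below just evaluates that textbook formula for samples of 1,000 and 2,000; real polls report somewhat larger margins once weighting and design effects are folded in.

```python
import math

# 95% margin of error for a simple random sample of size n, at p = 0.5
# (the worst case). This ignores weighting and design effects.
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

for n in (1000, 2000):
    print(f"n={n}: +/- {margin_of_error(n):.1%}")
# n=1000: +/- 3.1%
# n=2000: +/- 2.2%
```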

IRA FLATOW: What was behind the inaccurate polling data that we all saw with the 2016 election?

COURTNEY KENNEDY: Sure. A lot of people look back on 2016, and they remember just feeling misled, right? But what actually happened was a bit more nuanced than that. Yes, there were problems with a lot of the state polls, especially in the upper Midwest states that flipped from being consistently Democratic to voting for Trump.

What people miss, what they forget, is that national polling in 2016 was quite good. National pollsters actually had a pretty good year. But to your question, what was off in those state polls? I worked on a committee. We did a comprehensive report looking at this. And we found evidence mostly for two things.

One is that there’s evidence that a substantial share of voters actually were legitimately undecided quite deep into the campaign. You have to remember, in 2016, Hillary Clinton and Donald Trump were two historically unpopular candidates. And so there were folks who, a week or two out from the election, were still trying to figure out whether they were going to vote, and if they voted, whom they were going to vote for.

And typically, voters like that who are undecided, wash out about evenly between the two major party candidates. But in 2016, that’s not what happened. In those critical states, those late deciding voters broke heavily for Trump. I’m talking on the order of 15, 20 percentage points. And so for polling, what that means is if you did your poll in September or October, you were in the field too early to capture that late movement. So that’s part of it.

The other piece is, as I mentioned, weighting. And so that’s when the pollster does a statistical adjustment to make their surveys as representative as possible. And the reason we have to do that is because we’ve just known from decades and decades of polling, some people, some groups, are more likely to do surveys than others.

And in general, older folks, folks who have a college level education, things like that, they tend to be more likely to take polls. And that doesn’t have to be a problem if the pollster anticipates that and fixes it. But in 2016, a lot of the state pollsters had too many college graduates responding to the poll, it’s a very common pattern, but they weren’t fixing it. And in 2016, having a college education was pretty closely associated with voting for Clinton, right?

And so if you did a poll in Wisconsin, you had college graduates more likely to take that poll. You had too many Clinton voters. And we saw poll after poll after poll with this going on, and they had not fixed it. It wasn’t malicious, I don’t think, at all–

IRA FLATOW: But have they fixed it for this year?

COURTNEY KENNEDY: That’s a great question. I wish I could say yeah, they all did. The reality is that some have, and some have not. So you really have to dive into the methods paragraph in their press release to see if they mention that they adjusted on education as part of their weighting.

IRA FLATOW: Let’s go to the phones and get to some of the clips we have from our VoxPop app. Let me go first to Danielle in Houston. Hi, welcome to Science Friday.

DANIELLE: Hello. Hi. I just had a quick question. I’m African-American and college educated, as are most of my friends and family. We vote liberally. We talk about it. It’s a common thing that we never see these polls. We always joke about it. I guess they just don’t poll us.

And the other thing is, is this going to be factored in or are they anticipating some variation in the next election with African-Americans that are probably going to come out more heartily in this next election? Have they adjusted for the fact that either A, we have a deep mistrust that is just generational, and B, we don’t get these polls?

IRA FLATOW: OK, what do you say to Danielle, Courtney?

COURTNEY KENNEDY: Sure. Great question. Well, in general, the chances that anybody is selected to take a poll– you’re talking like roughly 1,000 people out of a nation of over 300 million, right? The chances of being selected for a poll are just tremendously tiny.

But I would say, you do meet people. I think some of the folks who called in earlier talked about their personal experiences. If you have a landline or if you live in a state like Iowa or New Hampshire, there are some things that lead people, I think, to getting more requests than others. So if you live in a state that, frankly, is not considered a battleground, and you don’t have a landline phone, I would say I wouldn’t be shocked, regardless of somebody’s characteristics.

[INTERPOSING VOICES]

IRA FLATOW: You said a landline phone is only 5% of the population. And can you extrapolate accurately from people who are on a landline? Is there a bias on those people who have landlines– why they still have landlines, they may be of a certain character?

COURTNEY KENNEDY: More than 5% have a landline. I think 5% is the share that have a landline but not a cell phone, right? But you’re right, I am skeptical. If I see a poll that’s mostly done interviewing people on landlines, I know from my own experience that that poll is going to skew old and it’s going to skew Caucasian. Now again, pollsters can fix that kind of thing to some extent. But in general, that’s a bad footing to start a poll because of those inherent skews that you get with landlines.

The caller also asked about turnout. That’s a great question. And it’s really interesting because the last presidential election is, typically, the one that pollsters would look to try to anticipate what is the profile of voters going to be this year? But in 2016, it was quite a break from how voters– turnout patterns you saw during the Obama administration. But then in 2018, it was a historically high turnout for a midterm election.

And so I think there is quite a bit of uncertainty. Is 2020 going to look more like 2016, or is it going to look more like 2018 and look like a public reaction to what they’ve been seeing the last three or four years?

IRA FLATOW: Talking with Courtney Kennedy, director of survey research at the Pew Research Center on Science Friday from WNYC Studios.

Lots of interest. Let’s go back to our VoxPop app, and let’s go to– Angela from Portola Valley, California had this to share.

ANGELA: In California, some people want to put on the ballot whether ride-sharing drivers and other employees like that should be paid as salaried employees or as contractors. But this question was very badly worded. And so it was difficult to answer. And I just think that some of the questionnaires need to have better questions. This one kept asking me again and again because it was trying to change my mind about how I would vote.

IRA FLATOW: Good point raised there. How do we separate a good poll? How do we know a good poll, Courtney, from a bad one? What do we look for?

COURTNEY KENNEDY: Well, that’s a great question. I mean, I think that that caller was talking specifically about message testing, which I’ve heard from folks. It is annoying, right? If you get called and are subject to one of those polls, I won’t deny it at all that that can be a pretty rough experience.

So I would just say please know that that doesn’t represent all of polling. There’s a lot of pollsters who– most pollsters are not engaged in that kind of thing, and are really trying to get just an honest read of how Americans are feeling.

It’s not a question of a good poll versus a bad one. I do think from a societal standpoint, the most useful polls are not message testing. They’re trying to get those unbiased reads. In terms of the methodology, what to look for: when my colleagues or I would get this question 10, 20 years ago, it was really easy to answer. We could tick off– here are five things you look for, and you know if a poll is good.

But as our discussion has indicated, and as Doug Rivers mentioned, it’s really not that easy these days. Because you see tremendous variation in the field of polling. If you think about your favorite news outlets, they’re all doing polling a bit differently these days, right? So you still have some who are doing traditional telephone dialing, random digit dial. You’ve got others that have moved online.

And within that online space, you see massive differences. You’ve got online polls that are done with basically just convenience samples, like anybody they can find online to take surveys. But you also have really expensive, rigorous ones done online where they actually recruit offline by sampling addresses and recruiting people to take online surveys.

IRA FLATOW: So how do you tell which is the good one? That you want to hang up on one of them, but you want to stay on the line with the other? Or with online surveys, are they really still trying to figure out how to do them correctly?

COURTNEY KENNEDY: I would say, as a field, we are. Then there’s a couple of things, right? So this issue of who’s doing the poll has come up. And that does matter. So if you’re asked to take a poll, and they will not tell you who sponsored the poll, they should, right? We have professional standards as an industry, and one of them is to be honest with respondents: if they ask, tell them who’s conducting the research.

So frankly, if they don’t say that, I would say, feel free to hang up or click out or what have you. And similarly, you might be in the middle of a poll and one question might not land right with you. But if the whole survey sounds really one-sided, you’re under no obligation to participate in message testing if you don’t want to. Pollsters really need to make a good faith effort to try to get an objective measurement.

IRA FLATOW: Are you, as a pollster, disappointed in the quality in general of polling these days?

COURTNEY KENNEDY: I would say I have a mixed reaction. So it’s important to know that the barriers to quote unquote, being a pollster, have basically been eliminated because of technological changes over the last roughly 10, 15 years. And by that I mean, you can go online, and if you’ve got a few thousand dollars, you can go out and do a national poll, interview 1,000 people.

That doesn’t mean that you’re trained to do that, or that you know how to do the statistics on the back end. But you do see organizations, fly-by-night organizations you’ve never heard of, put out poll numbers. And they get picked up in the media. And sometimes, they’re actually reasonably good, but many times they’re not. And so yeah, I mean, to be honest, that can be frustrating.

IRA FLATOW: All right, we’re going to take a break and get a few more questions. If you’re willing to hang around, Courtney, for a couple more minutes?

COURTNEY KENNEDY: Sure.

IRA FLATOW: OK, we’re going to come back with Courtney Kennedy, director of survey research at the Pew Research Center, one of the authors of the recent polling report we’re talking about. You can see Courtney’s article and her survey at sciencefriday.com/poll. It’s really very in-depth, and it will answer some of the questions we can’t get to in the short period of time we have. But we are going to talk more with Courtney right after the break. 844-724-8255. Stay with us. You can also tweet us @scifri. We’ll be right back.

I’m Ira Flatow. This is Science Friday. We’re talking this hour about the science of modern polling, how methodology has changed through the years, and the challenges now of conducting polling, especially online. My guest is Courtney Kennedy, director of survey research at the Pew Research Center. A couple of tweets that came in are kind of interesting, Courtney.

Aaron writes, there are so many junk calls nowadays. How hard is it to get a representative sample when many people like me just won’t pick up the calls they don’t recognize?

COURTNEY KENNEDY: Yeah, absolutely. Well, we’ve certainly seen that increase the cost of doing a telephone poll. Because as you can imagine, when fewer people answer, that means you’ve got to spend more time paying professional interviewers to dial more numbers to replace the people who aren’t picking up their phones.

And it’s hard to tease out the causality here, but it seems very reasonable to attribute that to the dramatic rise in automated calls. Especially to cell phones, and especially that creepy spoofing where they pretend the number is your number, or some number near you. That’s definitely had a bad effect on polling.

One thing, I think, that’s insinuated in that comment is that you would assume that that means the polls are off. How could the data possibly be any good when that kind of thing is happening in the background?

And one thing that study after study has found that’s surprising is that despite response rates being low, if you are still doing it with random sampling, the data is still pretty good. It’s not perfect. We still have that margin of error. But polls still actually do pretty well, even at those low response rates.

IRA FLATOW: All right, I’m going to leave it there on a high note, Courtney. Thank you very much because I’ve been wondering about polling forever.

COURTNEY KENNEDY: My pleasure.

IRA FLATOW: Now I’m wondering, why do political parties each come up with their own number, as if polling is supposed to be a science, right?

COURTNEY KENNEDY: Yes. They definitely release those selectively, so don’t put too much credence into the polls coming from parties.

IRA FLATOW: That was Courtney Kennedy, director of survey research at the Pew Research Center. If you want to see Courtney’s study, or if you’re distrustful of polling, we have a tip list of questions to ask on whether you should trust a poll, based on Courtney Kennedy’s expertise. It’s on our website at sciencefriday.com/poll. You can also find a link to that Pew Research Center report. All kinds of great stuff up there, and it answered lots of my questions.

Copyright © 2020 Science Friday Initiative. All rights reserved. Science Friday transcripts are produced on a tight deadline by 3Play Media. Fidelity to the original aired/published audio or video file might vary, and text might be updated or amended in the future. For the authoritative record of Science Friday’s programming, please visit the original aired/published recording. For terms of use and more information, visit our policies pages at http://www.sciencefriday.com/about/policies/

Meet the Producers and Host

About Katie Feather

Katie Feather is a former SciFri producer and the proud mother of two cats, Charleigh and Sadie.

About Lauren J. Young

Lauren J. Young was Science Friday’s digital producer. When she’s not shelving books as a library assistant, she’s adding to her impressive Pez dispenser collection.

About Ira Flatow

Ira Flatow is the host and executive producer of Science Friday. His green thumb has revived many an office plant at death’s door.
