10/23/2020

Should We Stop Paying Attention To Election Forecasts?

17:24 minutes

An abstract design of silhouettes of Biden and Trump with a line graph behind them
Credit: Shutterstock/D Peterschmidt

The first “scientific” election poll was conducted in 1936 by George Gallup, who correctly predicted that Franklin D. Roosevelt would win the presidential election. Since Gallup, our appetite for polls and forecasts has only grown, but watching the needle too closely might have some unintended side effects.

Solomon Messing, chief scientist at ACRONYM, a political digital strategy nonprofit, tells us about a study he co-authored that found people are often confused by what forecast numbers mean, and that their confidence in an election’s outcome might depress voter turnout. Sunshine Hillygus, professor of political science and public policy at Duke University, also joins to tell us about the history of polling in the United States.




Segment Guests

Sunshine Hillygus

Sunshine Hillygus is a professor of Political Science and Public Policy at Duke University in Durham, North Carolina.

Solomon Messing

Solomon Messing is chief scientist at ACRONYM and an affiliated researcher at Georgetown in Washington, D.C.

Segment Transcript

IRA FLATOW: This is Science Friday. I’m Ira Flatow. If you’re anything like me, you have been spending a lot of time looking at polling data and election forecasts. But maybe you don’t totally trust them.

RICHELLE: After 2016, I was so badly burned that I no longer put much weight into electoral forecasts. So I watch other sources to make up my mind.

IRA FLATOW: That was Richelle from Bluffton, South Carolina, commenting on our VoxPop app. We had asked you how much weight you put on polls and election forecasts these days. And some of you still have a good deal of confidence in them, but more are skeptical.

ALICIA: I don’t trust election forecasts much since the 2016 election. And I think that that actually illustrates one of the big reasons why, which is that it doesn’t even matter in the US if a candidate wins the popular vote.

STEVE: I don’t always believe election forecasts, because I think some people are too embarrassed to tell pollsters who they’re really voting for.

ARNOLD: No, I don’t put a lot of weight on the forecasts. The forecasts are paid for by people who have an agenda. There’s just not enough neutrality being displayed across the board.

IRA FLATOW: That was Alicia from Amsterdam, Steve from Sacramento, and Arnold from Santa Barbara. As Richelle and Alicia mentioned, after 2016, some people lost faith in the numbers, because most forecasts had given Clinton strong odds, and yet, we know, she lost. Joining me is Sci Fri producer Elah Feder to talk about how we got here– the unintended side effects of election forecasting and why trusting those numbers a little less might be a good thing.

Hi, Elah. Welcome.

ELAH FEDER: Hi, Ira. Thanks for having me.

IRA FLATOW: So what got you interested in the downsides of election forecasting?

ELAH FEDER: So, like a lot of people, I have been visiting 538 a lot recently. For anyone who’s not familiar, 538 is the website that Nate Silver started in 2008. It covers politics. It has a data focus. And it’s most famous for its election forecasts.

IRA FLATOW: Yep, it’s a go-to place for a lot of people.

ELAH FEDER: I cannot stop refreshing it. But this year, I noticed that they’ve completely revamped the way that they present their presidential forecast. So in 2016, at the top of the page, they just gave you the goods right away. They gave you Clinton and Trump’s chances of winning, which is why people were going there. This year, at the very top, there are no numbers, just words.

IRA FLATOW: Yeah, I just pulled this up. And of course, right now, as we record this, on the morning of Wednesday, it says, Biden is favored to win the election, and there are lots of colored maps, mostly blue.

ELAH FEDER: Yeah, you have to scroll down further to get any numbers. And when they give them to you, there’s a lot of text explaining exactly what they mean. And if you’re still feeling lost, 538 even added a cartoon fox with glasses, named Fivey Fox, who pops in with little explanations and tips. Like, Fivey Fox reminds us not to count the underdog out.

So, it’s pretty clear looking at this that 538 is just trying very, very hard to avoid what happened in 2016.

REPORTER 1: Donald Trump has been elected the 45th president of the United States, which leaves us just one question. How the heck did it happen?

REPORTER 2: What went wrong with the numbers? The predictions? The polls that suggested a late surge for Clinton?

REPORTER 3: What did everybody get wrong? I mean, the polls were just wrong.

REPORTER 1: Not just the numbers.

ELAH FEDER: After 2016, a lot of people were very unhappy with the pollsters, saying, hey, you know, we trusted you. You told us Clinton would win. The New York Times had given Clinton an 85% chance of winning. Huffington Post put her chances at 98%. So people have written at length about what exactly went wrong in 2016.

Some of it had to do with the polls and the models themselves. Like, one of the big takeaways was that they needed to better account for education level. But some of the problem was with how people were reading the numbers. 538, for example, they actually gave Trump a roughly 30% chance of winning by the end, which is a nearly one in three chance. Those were decent odds. But I remember talking to some friends before the election, and it seemed like they understood these numbers very differently. They saw 30% for Trump, and read it as a Clinton landslide.

IRA FLATOW: You know what? I think a lot of us thought that.

ELAH FEDER: Yeah, I mean, it was a common misperception, even for people who understood these numbers. I think 538 is trying to account for that this time. Well, first, they’ve updated their model. They talk a bit about what they’ve changed. But they’ve also redesigned this page to try to prevent some of the common misunderstandings. But there’s research suggesting that the problem with forecasting might go a lot deeper, to the very existence of forecasts, and how we behave when we think we know how an election is going to turn out.

IRA FLATOW: Aha, so there is history here.

ELAH FEDER: There always is. We’ve been obsessed with knowing how elections are going to turn out for a long, long time. We’ve been trying to predict them forever. We cannot help ourselves. But the pivotal moment for so-called scientific polling in the United States was in 1936.

SUNSHINE HILLYGUS: The Literary Digest fiasco is a favorite for every public opinion professor to teach in class.

ELAH FEDER: Sunshine Hillygus is a professor of political science and public policy at Duke University.

SUNSHINE HILLYGUS: The Literary Digest had sent out millions of requests and had done a quite adequate job of predicting election results in the past. But in 1936, they sent this out. They got the results back and predicted that FDR would lose in a landslide.

ELAH FEDER: Instead, Franklin Roosevelt won in the most resounding victory in US history– 60% of the popular vote, 98% of the electoral college. Which meant that Literary Digest had botched this spectacularly.

SUNSHINE HILLYGUS: The problem with– even though they had millions of ballots that were returned, the problem is that they sent ballots out to people who were subscribers to Literary Digest. They also supplemented that with car owners. So it turns out, right, that this is not a random sample of the electorate. These tend to be people who were more Republican. And so the results just got it wrong.

IRA FLATOW: But if I remember my history correctly, somebody did get it right, didn’t they?

ELAH FEDER: Indeed, this was a big moment for someone named George Gallup, whose name you probably recognize. Gallup was the inventor of the Gallup poll.

IRA FLATOW: Absolutely. Absolutely.

ELAH FEDER: Gallup introduced what is called quota sampling, essentially looking at the demographic characteristics of the sample as it came in. So in quota sampling, instead of just polling people who own cars, you try to poll across a good range of people– different ages, genders, ethnicities. You’re trying to represent the national demographics. It’s not perfect.
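
To make the idea concrete, here is a minimal Python sketch of quota sampling. Everything in it is invented for illustration (the age groups, their population shares, and the skewed stream of would-be respondents); the point is only that respondents are accepted until each group’s quota is filled, so the final sample mirrors the population even when the people reaching the pollster do not.

```python
import random
from collections import Counter

# Hypothetical quota-sampling sketch: accept respondents only while their
# demographic group's quota (its share of the target population) is unfilled.
population_shares = {"18-29": 0.20, "30-44": 0.25, "45-64": 0.35, "65+": 0.20}
sample_size = 1000
quotas = {group: round(share * sample_size) for group, share in population_shares.items()}

def next_respondent():
    # Stand-in for however people actually reach the pollster; deliberately
    # skewed toward older respondents, like the Literary Digest's car owners.
    return random.choices(list(population_shares), weights=[0.1, 0.2, 0.3, 0.4])[0]

counts = Counter()
while sum(counts.values()) < sample_size:
    group = next_respondent()
    if counts[group] < quotas[group]:  # skip respondents from groups whose quota is already full
        counts[group] += 1

print(counts)  # matches the population's age mix, not the skewed inflow
```

As Hillygus notes next, matching the population’s demographics is no guarantee of matching its opinions, which is the limitation a sketch like this can’t show.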

SUNSHINE HILLYGUS: We know now that quota sampling can also lead us astray. There’s a saying, demography is not destiny. Not all women vote Democratic. If you make a sample that looks like the population across a set of demographics, you can still get the wrong answer.

ELAH FEDER: But in 1936, George Gallup did not get the wrong answer. He got it right. And then he got it right again in 1940 and in 1944. And since Gallup, our appetite for polls and forecasts has only grown. There’s always been this tension between us really wanting to know what’s going to happen, and at the same time, being pretty skeptical that pollsters actually know what they’re talking about. I mean, they don’t always get it right.

And then, in 2008, along comes Nate Silver predicting the results of the presidential and Senate races nearly perfectly, on his website, 538.

SOLOMON MESSING: Because he did so well in 2008, he becomes this sort of figure.

ELAH FEDER: Solomon Messing is chief scientist of Acronym and an affiliated researcher at Georgetown.

SOLOMON MESSING: And a legend kind of develops around 538. And again, in 2012, he predicts the election very, very well. And what happens is, over the years, for each election, the public, journalists, more and more people, start going to 538 to get a sense of what’s happening in the race.

ELAH FEDER: After 2008, not only do you see the rise of Nate Silver’s reputation as this stellar, wunderkind forecaster, you also start to see a change in how polls are communicated to the public. So traditionally, a news outlet would pay for a poll, and then come back and say that, based on this poll, we estimate that X percent of people are going to vote for this candidate, Y percent for that candidate, with some margin of error.

IRA FLATOW: Yeah, that’s how I remember the history of polling.

ELAH FEDER: But there’s something kind of dissatisfying about that kind of information. I mean, in this country, you can get the most votes and still lose an election. So what people really want to know isn’t just the vote share, but who’s going to win. We call that probabilistic forecasting, where they give you the actual chances a given candidate will win. And that’s what Nate Silver does for us. He takes a bunch of polls and calculates those odds.
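
As a rough picture of what that means in practice, here is a toy sketch in Python. It is not 538’s actual model; the poll numbers and the size of the assumed systematic polling error are made up. The idea is just to average several polls, acknowledge that polls can be collectively off, and then ask how often the candidate would clear 50% if the race were replayed many times.

```python
import random
import statistics

# Toy probabilistic forecast for a single two-candidate race (not 538's model).
# The polls and the assumed systematic polling error are invented numbers.
polls = [52.0, 51.0, 53.5, 50.5, 52.5]  # candidate A's share of the two-party vote
avg = statistics.mean(polls)
total_sd = (statistics.stdev(polls) ** 2 + 3.0 ** 2) ** 0.5  # sampling noise + assumed polling error

simulations = 100_000
wins = sum(1 for _ in range(simulations) if random.gauss(avg, total_sd) > 50.0)

print(f"Average vote share: {avg:.1f}%")                # the traditional way to report polls
print(f"Chance of winning:  {wins / simulations:.0%}")  # the probabilistic framing
```

With these made-up numbers, a roughly 52% average vote share comes out to about a 70% chance of winning: both figures describe the same polls, and the difference is purely in how the forecast is communicated.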

SOLOMON MESSING: That forecast is designed to do a bunch of really good things. It provides a number that accounts for the complexity of the electoral college. It attempts to account for polling error.

ELAH FEDER: We love this kind of aggregated, probabilistic polling data. But in August, Solomon and two co-authors published a study suggesting this could have some potentially serious side effects. They were inspired by the 2016 election.

SOLOMON MESSING: Like many of your listeners, probably, we were pretty shocked by Trump’s victory in 2016. 100 million voters stayed home in that election. And we know from anecdotal accounts that some people did so because of this widespread consensus that Clinton was kind of destined to win.

ELAH FEDER: You know, why go out and vote if the outcome of the election is already decided? And so, the researchers wondered just how much could these forecasts affect voter behavior, and did it matter how the numbers were presented. So they ran a study where they told people about a hypothetical Senate race. And they shared the results of some imaginary polling data in one of two ways, either as a vote share or as a chance of winning. So for example, a participant might hear something like, candidate A is predicted to get 55% of the votes, plus or minus a 2% margin of error. Or they might hear that candidate A has an 87% chance of winning.

IRA FLATOW: The first one talks about the number of votes, and the second one talks about the chance of winning.

ELAH FEDER: Exactly. But this is key. Both of those numbers are based on the same data. They’re equally accurate. The difference was in how that information was reported to the study participants, so whether they saw vote share or chances of winning. And those numbers, they sounded pretty different. Like, let’s take some real data from 2016. In the national election, at the same time that 538 was giving Clinton approximately a 70% chance of winning, they also predicted she’d get about 48% of the popular vote. And that’s based on the same model. But those two numbers feel very different.
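
For a concrete sense of how a modest vote-share lead turns into a lopsided-sounding probability, here is a back-of-the-envelope version of that translation. The 4.4-point uncertainty is an assumption chosen so the numbers land near the study’s example; the real point is that the win probability depends on the forecast’s total uncertainty, which is wider than any single poll’s margin of error.

```python
from statistics import NormalDist

# Assumed illustration: an expected vote share of 55% with about 4.4 points of
# overall forecast uncertainty (wider than one poll's 2-point margin of error)
# implies roughly an 87% chance that the realized share lands above 50%.
expected_share = 55.0
forecast_sd = 4.4

p_win = 1 - NormalDist(expected_share, forecast_sd).cdf(50.0)
print(f"Expected vote share: {expected_share:.0f}%")
print(f"Chance of winning:   {p_win:.0%}")  # roughly 87%
```

The 2016 figures don’t map onto this two-candidate sketch directly (48% was Clinton’s share of a multi-candidate popular vote, and the Electoral College adds its own wrinkle), but the underlying translation from an expected share to a win probability is the same.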

IRA FLATOW: Absolutely.

ELAH FEDER: I think part of it is just that one is a bigger number. I think we’re– sometimes, our lizard brains are as simple as that. But another reason might be that people– they’re just used to hearing election stats in a particular way.

SOLOMON MESSING: We’ve had 50 years of experience consuming or covering vote share. And it’s confusing to folks who don’t have a degree in statistics. And even folks who do have a degree in statistics. I probably had too much confidence that Clinton would win in 2016.

ELAH FEDER: And that confusion, it came through in the study.

SOLOMON MESSING: When we ran our study, 40% of the participants seemed to confuse vote share and probability, at least when they were trying to tell us what the vote share and the win probability were likely to be for an election.

ELAH FEDER: So, someone hears that Clinton has a 70% chance of winning and thinks, wow, she’s going to get 70% of the votes. Which, obviously, that would be a landslide. If that’s what that number really meant, which it does not.

IRA FLATOW: And that’s the problem, because that’s what I thought when I went to the site, also. I understand the confusion here.

ELAH FEDER: Yeah. I think it’s a common misunderstanding. And I think, even when you know what these numbers mean, this part of your brain still responds that way. So that’s one problem. But what the researchers really wanted to know was, does any of this actually matter? You know, does seeing a forecast affect voter behavior?

IRA FLATOW: I have to break in. We’ll be back with more on the effects of election forecasting with producer Elah Feder and our guest, Solomon Messing, after this break. I’m Ira Flatow. This is Science Friday from WNYC Studios.

In case you just joined us, we’re back with Sci Fri producer Elah Feder, talking about the effects of election forecasting. Back to you, Elah.

ELAH FEDER: Thanks, Ira. So when we left off, the researchers had found that sometimes people are confused by forecast numbers, and that seeing a probabilistic forecast, where they give you a candidate’s chances of winning, can make us feel more confident about the outcome of an election. But the real question was, does this actually affect voter behavior?

SOLOMON MESSING: We ran another study. And this was an election game.

ELAH FEDER: So this was an online game where people could vote for a team, kind of like a political party, and win some money if their team got the most votes. The catch was that voting had a cost, just like in real life. You know, you’re taking time out of your day, you have to wait in lines, and so on. In the game, you had to pay $1 to vote. And what the researchers found makes a lot of sense.

SOLOMON MESSING: People are less likely to vote when they see a forecast suggesting an electoral blowout. We didn’t see this decline, this equivalent decline, in response to the vote share.

ELAH FEDER: So in this game, seeing a forecast depressed voter turnout, but only when it was a probabilistic forecast. But a game is not real life. So next, the researchers looked at real-world data– historical surveys from the American National Election Studies.

SOLOMON MESSING: They happened to ask this question about whether or not you’re confident that the winner of the upcoming election will win by a lot. And so, when we looked at 2016 and compared it to the last four presidential elections, people were much more confident in 2016 that the winner would win by quite a bit. Those folks were also 2% or 3% less likely to vote.

ELAH FEDER: This one issue isn’t necessarily going to make or break an election. For one, Solomon points out that the people who are obsessively looking at forecasts, they’re probably the kind of people who are going to vote anyway. The question is really whether these forecasts reach other voters. But also, as one of the study’s critics pointed out, this effect of forecasting, it’s most powerful when a candidate has a really big lead. And when that’s the case, something that slightly depresses voter turnout, it’s less likely to change the outcome of the election.

Still, there are places where this problem is taken very seriously. France, Canada, Mexico, and some other countries have passed laws banning the publication of polls in the immediate lead-up to elections. Though some of these laws were subsequently struck down for infringing on free speech, or just weren’t practical in the age of the internet.

So, election forecasts in this country, probably not going away anytime soon. But after 2016, it seems like some people are approaching election forecasts with more skepticism. And that might be a good thing.

IRA FLATOW: And certainly after this report, I am one of those people. Thank you, Elah Feder. Very interesting stuff.

ELAH FEDER: It was fascinating to learn about. Thanks for having me.

IRA FLATOW: Our guests were Sunshine Hillygus, a professor of political science and public policy and director of The Initiative on Survey Methodology at Duke University, and Solomon Messing, chief scientist at Acronym and an affiliated researcher at Georgetown. We contacted Nate Silver for an interview, but we did not hear back by the time of this recording.

Copyright © 2020 Science Friday Initiative. All rights reserved. Science Friday transcripts are produced on a tight deadline by 3Play Media. Fidelity to the original aired/published audio or video file might vary, and text might be updated or amended in the future. For the authoritative record of Science Friday’s programming, please visit the original aired/published recording. For terms of use and more information, visit our policies pages at http://www.sciencefriday.com/about/policies/

Meet the Producers and Host

About Elah Feder

Elah Feder is the former senior producer for podcasts at Science Friday. She produced the Science Diction podcast, and co-hosted and produced the Undiscovered podcast.

About Ira Flatow

Ira Flatow is the host and executive producer of Science Friday. His green thumb has revived many an office plant at death’s door.
