Twitter Polling and Sample Bias: A Case Study

Grade Level: 9-12
Time: 15 min - 1 hr
Subject: Mathematics

Conducting a survey or a poll seems straightforward enough at first pass. Say, for example, you want to know the percentage of people in a population who like peanut butter and jelly sandwiches. You could just ask each member of the population a very simple question, like “Do you like peanut butter and jelly sandwiches? Answer yes or no,” and then record their answers and calculate the percentage that is PB&J-loving. Or, if the population is huge (the entire state of California, for example, or all members of the international league of sandwich eaters), you could ask a manageable number (a sample) of individuals within the population the same question and assume that the percentage of people in your sample who like PB&J is roughly the same as the proportion in the entire population.
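If you'd like to play with that idea, here is a minimal sketch in Python. Every number in it (the population size, the true share of PB&J lovers, and the sample size) is invented purely for illustration; the point is that a random sample's percentage tends to land close to the population's percentage.

import random

random.seed(42)  # so the illustration is repeatable

# Hypothetical population: 1,000,000 sandwich eaters, 62% of whom like PB&J.
POPULATION_SIZE = 1_000_000
TRUE_PROPORTION = 0.62
num_fans = int(POPULATION_SIZE * TRUE_PROPORTION)
population = [True] * num_fans + [False] * (POPULATION_SIZE - num_fans)

# Instead of asking everyone, poll a manageable random sample.
SAMPLE_SIZE = 1_000
sample = random.sample(population, SAMPLE_SIZE)
estimate = sum(sample) / SAMPLE_SIZE

print(f"True population share of PB&J fans: {TRUE_PROPORTION:.1%}")
print(f"Estimate from a random sample of {SAMPLE_SIZE}: {estimate:.1%}")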

Science Friday’s Tongue-Curling Twitter Poll

Inspired by our Science Club challenge to #TakeASample, we asked Science Friday’s Twitter audience of roughly 640,000 (our “population”) whether or not they could curl their tongue, having heard that about two in five people can’t curl their tongues. Just a small fraction of our followers responded, giving us a “sample size” of 1,619. Here are the results:

Before getting carried away and rashly touting to the world that 81% of Science Friday’s Twitter followers (640,000 humans x 0.81 ≈ 518,000 humans!) can curl their tongue, we thought it prudent to ask a few data scientists who specialize in polling and social media data if there was anything we overlooked. Is there a hidden influence or a flaw in our sample design that could bias our results? As it turns out, like any kind of survey or poll, polling in the Twittersphere is just not that simple, and most experts agreed that our Twitter sample was probably marred by bias and therefore unrepresentative of our total Twitter follower population.
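(As a side note, random sampling error alone isn't the big problem here. If we generously assume that our 1,619 respondents had been a simple random sample of our followers, a back-of-the-envelope calculation in Python, using the standard normal approximation for a proportion, gives a margin of error of only about two percentage points. The biases described below are a much bigger worry.)

import math

FOLLOWERS = 640_000   # Science Friday's Twitter audience (the "population")
RESPONDENTS = 1_619   # poll respondents (the "sample")
P_HAT = 0.81          # share of respondents who said they can curl their tongue

# 95% margin of error under the normal approximation for a sample proportion.
margin = 1.96 * math.sqrt(P_HAT * (1 - P_HAT) / RESPONDENTS)
low, high = P_HAT - margin, P_HAT + margin

print(f"Sample estimate: {P_HAT:.1%} +/- {margin:.1%}")
print(f"Extrapolated tongue-curlers: {FOLLOWERS * low:,.0f} to {FOLLOWERS * high:,.0f}")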

 

In no particular order, here’s a handful of reasons why our Twitter poll doesn’t quite pass muster:

People didn’t know what, exactly, we were talking about.

Curl your tongue? How? Side-to-side like a taco or front-to-back like a wave? We didn’t offer a picture or description of what we meant by “curl your tongue,” so it’s possible that some people didn’t respond because they didn’t know what we meant, or they responded incorrectly because they misunderstood what counted as tongue-curling and what didn’t. Some of the replies to our tweet suggest that our question could have been clearer:

People didn’t respond because they were embarrassed or didn’t care.

Twitter followers who can’t curl their tongue may be a bit embarrassed about it, and therefore less likely to respond to a poll question about tongue curling than people who can. Similarly, people who can’t curl their tongue might be less likely to respond simply because they don’t care as much about tongue curling. Both scenarios would sway our results considerably in the pro-curler direction.
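A tiny simulation shows how strongly this kind of nonresponse bias can skew a poll. In the sketch below, every number is made up for illustration: we pretend that 65% of the audience can actually curl their tongue, but that curlers are three times as likely to answer the poll as non-curlers.

import random

random.seed(0)  # repeatable illustration

AUDIENCE = 640_000
TRUE_CURLER_SHARE = 0.65                      # pretend "true" share of curlers
RESPONSE_RATE = {True: 0.006, False: 0.002}   # curlers respond 3x as often

responses = []
for _ in range(AUDIENCE):
    can_curl = random.random() < TRUE_CURLER_SHARE
    if random.random() < RESPONSE_RATE[can_curl]:
        responses.append(can_curl)

observed = sum(responses) / len(responses)
print(f"True share of curlers in the audience: {TRUE_CURLER_SHARE:.0%}")
print(f"Share of curlers among poll respondents: {observed:.0%} "
      f"({len(responses)} responses)")

Even though nothing about the audience changed, the poll reports roughly 85% curlers simply because the non-curlers stayed quiet.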

People chose the answer that made them look good.

People sometimes answer polls dishonestly, especially if the results are public. If for some reason the Twitter followers sampled in this poll thought they would appear cooler or more interesting if they could curl their tongue, they might have lied and claimed that they could.

People just clicked the first answer.

When folks are pressed for time or don’t feel like reading all the choices in a survey, they are more likely to select the first answer provided. With two answers possible, this shouldn’t be an issue, but we can’t rule it out because the two answers provided were presented in the same order to all poll respondents.
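One standard safeguard against this “pick the first option” (primacy) effect is to randomize the order of the answer choices for each respondent, something Twitter’s built-in polls don’t let you do. Here is a minimal sketch of the idea, with hypothetical option labels:

import random

OPTIONS = ["Yes, I can curl my tongue", "No, I can't curl my tongue"]

def present_poll():
    """Return the answer choices in a random order, so any tendency
    to click whatever appears first averages out across respondents."""
    order = OPTIONS[:]
    random.shuffle(order)
    return order

# Each simulated respondent may see a different ordering.
for respondent in range(3):
    print(f"Respondent {respondent + 1} sees: {present_poll()}")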

People couldn’t recall or didn’t feel like finding out the real answer.

It’s possible that respondents who answered this question in a public place couldn’t recall whether or not they’re able to curl their tongue or didn’t want to try it in public or seek a mirror to verify. Survey questions that draw upon memories of past events or that require respondents to perform a physical action (like getting up to test the batteries in a smoke alarm, for example) are less likely to be answered honestly, if at all.

Just by asking the question, we changed people’s answer.

Tongue curling is no longer believed to be a fully genetically determined trait, as it was thought to be at the start of the 20th century, largely because people can teach themselves how to do it with practice. It’s possible that just by asking this question in a Twitter poll, we inspired our followers to give it the old college try, and “non-curlers” converted into “curlers.” In political polling, this can happen when poll creators describe a ballot measure in detail before asking respondents whether they will support a measure they may know little about. In this way, pollsters may change respondents’ opinions just by educating them about ballot measures at the start of the questioning process.

People chose the answer that they thought most people would choose.

Though this is more commonly a source of bias with survey questions that are perceived as having a “right” or a “wrong” answer, it’s possible that our respondents chose the answer they thought most people would choose in order to fit in.

People answered multiple times to change the outcome of our poll.

It’s common enough for one person to have multiple Twitter accounts, so it’s possible that someone wanted to try to sway the results of our survey by responding multiple times. Larger sample sizes help to dilute this effect, but without extraordinary effort, we cannot rule out this source of bias.
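The “dilution” part is easy to check with arithmetic. In the sketch below the numbers are invented: imagine one determined follower casts 25 extra “no” votes from alternate accounts, while honest respondents split 81% “yes.” The bigger the honest sample, the less those 25 stuffed votes move the reported percentage.

HONEST_CURLER_SHARE = 0.81   # assumed share of "yes" among honest respondents
EXTRA_NO_VOTES = 25          # one person voting "no" from 25 alternate accounts

for honest_votes in (100, 1_619, 10_000):
    yes_votes = honest_votes * HONEST_CURLER_SHARE
    reported = yes_votes / (honest_votes + EXTRA_NO_VOTES)
    shift = HONEST_CURLER_SHARE - reported
    print(f"{honest_votes:>6} honest votes: poll reports {reported:.1%} "
          f"(pulled down {shift:.1%} by the repeat voter)")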

When, how, and whom we polled may be fundamentally biased.

People who respond to Twitter polls conducted by Science Friday after 4 p.m. on a Thursday might not represent a typical population of people or even a typical sampling of Twitter users. As a final experiment, we conducted a new Twitter poll at roughly the same time and day of the week as our tongue-curling poll, but this time we looked for an indication that @scifri followers are a biased sampling of the general Twitter audience. We asked our followers whether or not they follow pop icon Katy Perry (@katyperry, 88.2M followers) and/or astrophysicist and science communicator Neil deGrasse Tyson (@neiltyson, 5.14M followers). Here’s what we found:

If poll respondents from @scifri’s Twitter following are an unbiased sample of Twitter users, we would expect that respondents to our poll would follow Katy Perry and Neil deGrasse Tyson at a ratio of about 17 to 1, because there are about 17 times as many followers of Katy Perry as there are of Neil deGrasse Tyson. In other words, if our polled population were an unbiased sample, we would expect more of our followers to follow Katy Perry, since more Twitter users on the whole follow her than follow Neil deGrasse Tyson. Instead, we found the opposite. What do these poll results suggest about our tongue-curling sample, or about Science Friday’s Twitter audience in general? Tweet your ideas to @scifri with the hashtag #TakeASample!
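(In case you want to check that expected 17-to-1 ratio yourself, it falls straight out of the two follower counts quoted above:)

KATY_PERRY_FOLLOWERS = 88_200_000   # @katyperry
NEIL_TYSON_FOLLOWERS = 5_140_000    # @neiltyson

expected_ratio = KATY_PERRY_FOLLOWERS / NEIL_TYSON_FOLLOWERS
print(f"Expected ratio in an unbiased sample of Twitter users: "
      f"about {expected_ratio:.0f} Katy Perry followers for every "
      f"Neil deGrasse Tyson follower")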

 

Special thanks to Mark Dredze, assistant research professor of Computer Science at Johns Hopkins University, and Cliff Lampe, assistant professor in the School of Information at the University of Michigan, for assistance with this article.

Meet the Writer

About Ariel Zych

Ariel Zych is Science Friday’s director of audience. She is a former teacher and scientist who spends her free time making food, watching arthropods, and being outside.
