Our Audience Feedback Survey Was Overrun By Bots. Here Are 5 Lessons We Learned.

When our survey data was skewed by AI, we learned how to filter fake responses from our listeners.

Humanoid robots working with laptops and headsets. Credit: Shutterstock

SciFri Findings is a series that explores how we understand the impact of science journalism, media and programming on our audiences. Sign up for our newsletter to get the latest reports!


Getting feedback from those we serve has always been an integral part of Science Friday’s mission to make science more accessible. Audience research is no different: I am always curious to know what value the work we do provides our audiences. Where can we make things better? How can we have deeper engagement and impact? In June 2023 we launched an audience survey across multiple platforms (radio, social media, newsletters, donors, etc.), informed by in-depth interviews on radio programming conducted in fall 2022. I waited excitedly as the survey went out into the world, hoping 200-300 people would care enough to complete it.

The joy and the surprise as the numbers started trickling, then monsooning, in: 1, then 100, 500, 2,500, all the way up to over 6,800! Our Director of Audience, Ariel Zych, and I started playing with the data. We wanted to use ChatGPT to theme and summarize some of the qualitative responses, and as we copied open responses into ChatGPT we started noticing something odd here and there in the data we were pasting over.

There, in our own raw data, were a damning number of clearly AI-generated responses, shamelessly self-disclosing with answers to open preference questions like “As an AI language model, I do not have personal preferences…” Scrolling further through the data, it became clear: AI bots had struck our study, HARD! We looked at each other and couldn’t help but laugh at the irony. Here we share some lessons learned (and some we had forgotten) after we wiped away our tears and started cleanup.

Tips for Your Next Online Survey

  • Use survey software that has a CAPTCHA: “Completely Automated Public Turing test to tell Computers and Humans Apart,” or CAPTCHA, is a kind of challenge we have all likely seen. These tests differentiate human respondents from bots. Many online survey companies provide CAPTCHA options, but only for paid subscriptions.
  • No CAPTCHA? Trap ’em: You may not have the budget for licensing survey tools with a CAPTCHA feature. Trap questions are an alternative that can provide some coverage against bots. They are used to identify respondents who are not paying attention to survey questions (e.g., someone choosing “Strongly Agree” for every question). A trap question can take many forms, including a question to identify an object in an embedded picture, a prompt to type specific words into a text box, and so on. Once the data is collected, you can filter out any respondents with incorrect answers (see the code sketch after this list). Trap questions protect not only against bots, but also against bad actors such as trolls with an agenda, or people who don’t actually know the product or program but want the cash incentive. By including a small number of trap questions, you can ensure your target audiences are the ones providing you with good data, and eliminate the rest.
    Trap question used as part of Science Friday’s radio programming survey: “We want to make sure that you’re not a bot! Please choose the answer that matches the host of Science Friday: Bill Nye, David Attenborough, Ira Flatow, Neil deGrasse Tyson.” Credit: Nahima Ahmed/Science Friday

    We incorporated a trap question during the design phase of our audience survey. Participants were asked to identify Science Friday’s host, Ira Flatow. The answer choices included only other male science journalists and communicators, so that every option looked plausible, which helped limit the number of bots and bad actors in the data. We used this type of trap question because we wanted to survey existing audiences, who should know the host, rather than new audiences. This one step eliminated almost 20% (n = 1,357) of our sample!

  • More money, more bots, more problems: Cash is king in the survey world. Participants are often rewarded with cash or gift cards for each completed survey, and even the chance at a lottery incentive has been shown to increase response rates for online research. We chose to offer a $50 e-gift card lottery incentive to balance the length of the survey and motivate more audience members to complete it. Money is great, but bigger incentives also give bot creators, bad actors, and trolls more reason to participate for the cash alone. We quickly realized $50 was a lot to offer for a ~12-13 minute survey. It made me think: How can I value participants’ time while still making sure I get the information I need? Next time, we will consider lowering our cash incentive. Perhaps it could have been limited to $25 instead? If that didn’t yield enough participants, maybe a second recruitment wave would be in order. In the future, particularly for audience surveys, we might offer other things of value instead, such as merchandise or free event tickets. Non-cash offers might reduce the number of people interested only in being paid for survey completion, and they can provide value by giving participants tangible materials and/or deeper engagement with your organization.
  • Segment audiences: Whenever feasible, use different UTM or referral links for each recruitment pathway in your surveys. We used different links for each platform (i.e., Twitter, newsletters, donors) to understand where traffic was coming from, look for differences in preferences between audiences, and estimate the possible universe size for our sample. More than half of our respondents came from Facebook, a far higher share than we usually see for surveys. Generally, our radio audiences are the largest source of referrals, so seeing so many come from Facebook was a red flag (the sketch after this list shows a quick way to check this breakdown). Segmenting audiences can also surface strange patterns in the data. For example, if you have previously surveyed your audiences, you may already have demographic data to check new data against. If you know your organization primarily serves older adults and your survey consists only of young participants, the data may be compromised. Consider whether the anomaly could be explained by the survey topic or the recruitment approach, or whether it points to bots.
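Neither the trap question nor the referral segmentation requires anything fancy at the analysis stage. Here is a minimal sketch (not our exact workflow) of what a first filtering pass might look like in Python with pandas; the file name and column names (host_check, referral_source) are hypothetical stand-ins for whatever your survey tool exports.

```python
import pandas as pd

# Hypothetical column names -- adjust to match your own survey export.
TRAP_COL = "host_check"          # the trap question column
CORRECT_ANSWER = "Ira Flatow"
REFERRAL_COL = "referral_source" # derived from the UTM/referral link each respondent used

responses = pd.read_csv("survey_export.csv")

# Keep only respondents who passed the trap question.
passed = responses[responses[TRAP_COL] == CORRECT_ANSWER]
print(f"Dropped {len(responses) - len(passed)} respondents who failed the trap question")

# Break traffic down by recruitment pathway; an unusual spike from one
# platform is worth a closer look before any deeper analysis.
print(passed[REFERRAL_COL].value_counts(normalize=True).round(2))
```

A breakdown like this makes a disproportionate platform (such as our Facebook spike) easy to spot, and comparing it against past surveys is a quick sanity check before any deeper cleaning.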

Cleaning Up The Data

After a few laughs and tears, I had the task of figuring out how exactly to clean up the jumbled mess of data we had. With a filtered dataset (thanks, trap question!), I started cleaning the data using the following criteria (a rough code sketch follows the list):

  • Impossible timestamps: Responses submitted within the same second of each other were removed. Many of the most suspicious responses were submitted with nearly the same timestamp late at night (12-3 a.m.) or early in the morning (4-7 a.m.), which are unlikely times for our US-based audiences to complete surveys.
  • Obvious AI language: The survey included a number of open-ended questions. Any responses with obviously AI-generated language (“As an AI language model, I do not have personal preferences…”) were removed.
  • Non-human-sounding responses: Some of our open-ended questions asked why participants preferred certain broadcast formats. We eliminated any responses that didn’t sound authentic to an audience voice. For example, “Live call can increase the audience’s sense of participation and loyalty…” It is doubtful that an audience member would be discussing loyalty.
  • Human-sounding, but identical, open responses: Some responses repeated over and over, including phrases like “It can create memorable moments for both the host and the audience” and “Maintained the authenticity of the program.” It was highly unlikely that multiple individual respondents would use the exact same phrasing.
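Here is a minimal sketch of how heuristics like these could be applied with pandas. The column names (submitted_at, format_preference_why, general_feedback) and the list of AI tells are hypothetical placeholders, and the output is a set of flags to review rather than a perfect bot detector.

```python
import pandas as pd

# Hypothetical column names and phrases -- adjust to your own export.
TIMESTAMP_COL = "submitted_at"
OPEN_COLS = ["format_preference_why", "general_feedback"]
AI_TELLS = ["as an ai language model", "i do not have personal preferences"]

df = pd.read_csv("survey_trap_filtered.csv", parse_dates=[TIMESTAMP_COL])

# 1. Impossible timestamps: multiple submissions within the same second.
same_second = df[TIMESTAMP_COL].dt.floor("s").duplicated(keep=False)

# 2. Obvious AI language anywhere in the open-ended answers.
open_text = df[OPEN_COLS].fillna("").astype(str).agg(" ".join, axis=1).str.lower()
ai_language = open_text.apply(lambda text: any(tell in text for tell in AI_TELLS))

# 3. Identical open responses repeated across supposedly different respondents.
repeated = df[OPEN_COLS[0]].notna() & df[OPEN_COLS[0]].duplicated(keep=False)

suspicious = same_second | ai_language | repeated
print(f"Flagged {suspicious.sum()} of {len(df)} responses for review")
clean = df[~suspicious]
```

Automated flags only go so far: the non-human-sounding responses above still took a human read to catch, so a script like this works best as a first pass that narrows down what needs manual review.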

Designing audience-centered content is an inherently inclusive process, and audience surveys are an opportunity to listen to the needs and concerns of our audiences. Surveys are just one tool we use to gather audience feedback at Science Friday. When all the cleaning was said and done, we were still left with 1,200+ survey participants in our sample, significantly more than the 200-300 we initially anticipated! As online research continues to grow, so does the potential for AI bots. I am appreciative of having discovered new ways to improve my practice, even if it cost me hours of work and some new gray hairs.

Your voices have shaped our show. From left to right, a young audience member asks a question at SciFri Live in San Francisco, Ira stands on stage in Salt Lake City, another young listener asks a question at SciFri Live in San Antonio. Credit: Alexander Lim/Benjamin Altenes/Cindy Kelleher/Science Friday

Meet the Writer

About Nahima Ahmed

Nahima Ahmed was Science Friday’s Manager of Impact Strategy. She is a researcher who loves to cook curry and discuss identity, and who helped the team understand how stories can shape audiences’ access to and interest in science.
