One year ago, OpenAI released ChatGPT, a generative AI chatbot that can generate shockingly convincing text. Since then, it has become a center of gravity in the tech industry, as software companies race to integrate the new tech into their products. It’s also sparked concern in the education world, with teachers and parents fearing how students may use it to cheat, and whether it will keep young people from learning writing skills.
So what might adjusting to this new technology look like, one year in? Ira sits down with Dr. Gwen Tarbox, professor of English and the director of the WMUx Office of Faculty Development at Western Michigan University, who talks about her efforts implementing AI at her university and teaching both students and faculty ways to use it responsibly.
- Learn more about Western Michigan’s AI resources for faculty and students.
Dr. Gwen Tarbox is a professor of English at Western Michigan University in Kalamazoo, Michigan.
IRA FLATOW: This is Science Friday. I’m Ira Flatow. Later in the hour, a conversation about how infrastructure works and how modern life is dependent on a series of interconnected global infrastructure systems. We want to hear from all the infrastructure nerds out there. Do you have a favorite piece of infrastructure, maybe an infrastructure system, your most–
ChatGPT, the generative AI chatbot that generates shockingly convincing text, it was released, well, almost a year ago. And since then, it’s become the new center of gravity in the tech world. And software companies have raced to integrate this new tech with their products. It’s also sparked concern in the educational world, with teachers and parents fearing how it could be used for cheating and writing essays for students.
So how are schools adjusting to a different world one year later? And who better to talk about this transition than someone who teaches English. I’m joined by Dr. Gwen Tarbox, professor of English at Western Michigan University in Kalamazoo. She’s helping to lead the university’s implementation of generative AI in classrooms and teach not just students, but faculty on responsible ways to use it. Dr. Tarbox, welcome to Science Friday.
GWEN TARBOX: Thank you so much. It’s a pleasure to be here.
IRA FLATOW: Let’s talk about how a lot of schools have banned this technology, right? But many have reversed course on those bans. Why is that?
GWEN TARBOX: Well, I think that whenever a new technology is released, especially one that has so much of an impact, it can be very frightening at first for anyone, especially for humanities faculty, for whom this may have been absolutely new information. Our whole identities as academics are often wrapped up in our ability to write and to teach critical thinking. And to think that a bot that didn’t have 30 years of training could come up with responses that sound a lot like our own can be very, very intimidating.
IRA FLATOW: So what has the reaction been like at your own school among the faculty and students?
GWEN TARBOX: Well, one of the things I’ll say is I also, in addition to being a professor of English, I also direct the Office of Faculty Development within WMUx, which is our university’s innovation hub. And when OpenAI released ChatGPT, we sat down, and we really started to think about that, Ira, because our job is to make sure that we’re supporting our faculty.
And so one of the things that we realized right away was that faculty were going to have very different responses. Some folks in our College of Business, for instance, welcomed it. They know that industry is going to want it from their graduates. But for folks in other colleges, where that wasn’t the case, we wanted to support them, too.
But what we knew was we wanted everyone to be informed so they knew what they were talking about. We wanted an AI-competent campus. And so, of course, there were folks on campus who were immediately concerned, wanted to ban it in their classes. And that’s their right. But what we started to do was to offer trainings, which allowed folks to learn about how generative AI works, to get an understanding of how it might actually be helpful to them with teaching and learning, and also to help them meet with our instructional designers to come up with other alternatives.
IRA FLATOW: Did you find that there were a lot of misconceptions about how it worked?
GWEN TARBOX: There were misconceptions about how it worked, and people were also putting in prompts that weren’t particularly well crafted, and then using the fact that those chatbots perhaps misattributed their work to conclude, well, this can’t be that effective a technology.
And so part of our job was to show them, well, let’s take a look at when it’s prompted well and actually what that means. You can see, wait a minute, it is actually really good at tutoring students or other things like that. But what we did was we worked with our English Department and the director of first year writing, Brian Gogan, to come up with a course called AI Writing, Prompt and Response, that we’re currently offering. And that’s been really exciting because a lot of our students have had the opportunity to work with AI.
Another thing you mentioned was academic honesty. And we take that seriously at Western. We did a series of workshops in August encouraging faculty to talk to students about their AI policies on the first day of class because, as you can imagine, Ira, students were going from classroom to classroom and receiving very different information, depending upon the faculty member. And we wanted to make sure that students were well informed about how their particular faculty members felt. So really, as we work to create an ethical framework for AI on our campus, everyone can participate, whether they are on the AI bandwagon or not.
IRA FLATOW: And how have the students responded to your Prompts and Responses AI class?
GWEN TARBOX: It’s been excellent. It’s a HyFlex class. So actually, not only have we had students here at Western, but we’ve had students from around the world taking the course. And they’ve learned the basics of prompting. And now they are creating a project related to their particular discipline where a chatbot could be useful, either in processing data or teaching information.
And that showcase is actually available for anyone who wants to attend. All of our trainings are open access. I’ll tell you, Ira, there’s been a lot of interest nationally in this. We’ve had over 2,000 people attend our trainings. And 44% of them are actually from outside our own university.
IRA FLATOW: I’m ready to sign up.
GWEN TARBOX: Well, we would love to have you come. It’s December 7. Join us. It’s from 4:00 to 5:00 PM.
But, I mean, on a really serious note, we have been very proactive because we want our university to be one that students look at and say, hey, this is a place where they’re taking this seriously. They know I need to be career ready or that I need to at least encounter this before I go out into the world. And so we’re really trying to do that for our students and our faculty.
IRA FLATOW: One of the underlying tensions regarding the use of AI in education is around equity because on one hand, you can use this tool as an extremely flexible learning partner and have it adapt to your learning style, regardless of whether you have the means to hire a tutor. But on the other hand, you can read this as cementing our reliance on big tech and increasing societal disparities, right, in the long term. What are your thoughts on that?
GWEN TARBOX: Well, I think that the people who are most likely to make changes that will be positive are our current class of students. They will be the people for whom their entire careers will be impacted by this. We have to look at that as a reality.
And my thinking on this is if our students have had a chance to practice this kind of technology and have been exposed to information on bias and ethics, that at least when our students go out into the world, they are prepared to raise those concerns and questions in the jobs that they hold. That’s pretty much what a university can do. We can’t put our head in the sand. And we’re certainly not going to affect policy if we are underinformed. So I believe that those concerns are absolutely serious and top shelf. But also, we want to make sure that our students get to have a voice in what happens next.
IRA FLATOW: Right, right. And how about data privacy, especially with youngsters, because they’re inputting potentially sensitive data into these models, right? And there’s no real transparency about how it’s used or connected to the user, is there?
GWEN TARBOX: Yeah, absolutely. I mean, one of the things that’s a little bit frustrating is we are lagging behind the EU in terms of those kinds of protections nationally. You can see that some universities are trying to handle this by setting up these closed systems. And just recently, the ability for even individuals to set up a closed AI system has become more accessible.
So I think that’s going to be the route that people will want to be taking, is looking at ways to protect individual users within a classroom, but also just helping our students to learn how to read contracts because one of the first things that Dr. Gogan did in his class was have the students review the terms and conditions for these various corporations.
IRA FLATOW: The EULA at the end of these things can be pretty long.
GWEN TARBOX: Yeah.
IRA FLATOW: Science Friday’s Education Team has recently hosted a Zoom event about the future of AI in education. And I want to play you this comment from a high school junior, Sebastian Rao, about regulating this in schools.
SEBASTIAN RAO: There needs to be these type of AI regulation policies in place that don’t outright ban AI but instead provide guardrails for educators and students. Right now, because that’s lacking, a lot of students don’t really know what to think about a lot of these technologies. Some are confused about what does AI mean? Is it cheating in certain circumstances?
IRA FLATOW: Yeah, what do you think of that?
GWEN TARBOX: I think he’s absolutely right. That’s one of the reasons why we have done 20 trainings for our faculty on the various aspects of this, including an entire training on privacy, because we want to make sure that our educators are aware of what they’re sharing with their students. But we also need our high school educators, our college educators to understand the really uncanny and strange experience of doing a lot of work with a chatbot. I’m very interested in what this is going to mean for authorship, what it’s going to mean for– like so many English professors, their identity is based on having spent 30 years becoming these experts.
And I programmed Claude, which is Anthropic’s chatbot that we’ve been using a lot, to write just like me. And it took me about an hour. And it’s shocking. The result of that was I had a little existential crisis. I stepped away from the computer and just called my friend Dave. And I was like, Dave, I’m really, really, really freaking out here.
And even talking to my dissertation students, one of them said to me this week, you are so lucky no one ever questioned that you wrote your dissertation. Now people will wonder, did I really write it? Was it me? And I said to her, well, actually, look at it this way. What it means to write is going to change in your lifetime, and you’ll get to be one of the people who decides what that means.
But the thing is, Ira, having that experience allowed me to understand the chatbot far better than I would have if I hadn’t spent all those hours sort of prompting it and learning it. And now I’m much better prepared to write a syllabus when I next teach because I know what it can do. And this is a conversation that I want to have, an open conversation with my students, so that we can talk about how we’ll use it in my class and come up with a joint agreement about what is acceptable and what isn’t. Faculty and high school teachers need to feel comfortable opening up with students and having these conversations so that everyone’s on the same page.
IRA FLATOW: Do you think that other teachers need to have this existential feeling, this existential experience?
GWEN TARBOX: I think we all do because in order to become active participants in creating the future of what happens with AI and our culture, we need to understand what it can do and what it can’t do as well. And also, the only way to have a voice really is to demand it.
IRA FLATOW: Well, Dr. Tarbox, you sound like a very dedicated and driven teacher. And I thank you for the work that you do.
GWEN TARBOX: Well, thank you so much. It’s been such a pleasure to talk to you today. And I just hope that everybody takes some time to learn more about this technology because it’s going to impact our young people for the rest of their lives.
IRA FLATOW: Dr. Gwen Tarbox, professor of English at Western Michigan University.
Meet the Producers and Host
About Sandy Roberts (@KaleidoscopeSci)
Sandy Roberts is Science Friday’s Education Program Manager, where she creates learning resources and experiences to advance STEM equity in all learning environments. Lately, she’s been playing with origami circuits and trying to perfect a gluten-free sourdough recipe.