What Does It Mean To Have A Chatbot Companion?
17:14 minutes
AI is not just for automating tasks or coming up with new recipe ideas. Increasingly, people are turning to AI chatbots for companionship. Roughly half a billion people worldwide have downloaded chatbots designed specifically to provide users with emotional and social support. And while these human-chatbot relationships might ease loneliness or simply be fun to have, these digital friends can also cause real harm by encouraging dangerous or inappropriate behavior—especially in children or teens.
To explore the emerging world of AI companion chatbots, Host Flora Lichtman is joined by freelance science reporter David Adam, who recently wrote about the effect of AI companions on mental health for Nature magazine; and Rose Guingrich, a psychology researcher studying interactions between humans and AI at Princeton University.
Invest in quality science journalism by making a donation to Science Friday.
David Adam is a freelance science reporter based in London.
Rose Guingrich is a researcher in the department of psychology at Princeton University.
FLORA LICHTMAN: This is Science Friday. I’m Flora Lichtman.
AI isn’t just for punching up that email or coming up with new recipe ideas. Increasingly, people are turning to AI chatbots for companionship. And I just want to give you a little snapshot of this.
So the most popular companion chatbot on the market, Replica, is free to download. You log in. You’re given some choices of premade avatars. You pick one. They appear on a screen within a room and they start a conversation with you, a chat conversation.
And if you want more from your companion, you can upgrade to a paid version and tweak their personality to be more confident or more caring. Or you can upgrade to do activities together, like watch movies or cowrite poetry. Or you can upgrade them from friend to boyfriend or girlfriend.
Some half a billion people worldwide have downloaded these companion chatbots. But are they helping to alleviate the loneliness epidemic? Or are they fueling it? Are they safe for kids and teens?
Joining me now to explore these questions and more are my guests. David Adam is a London-based science reporter who recently wrote about the effect of AI companions on mental health for Nature magazine. And Rose Guingrich is a psychology and human AI interaction researcher at Princeton University. Thanks for coming on Science Friday.
DAVID ADAM: Hey, thank you.
ROSE GUINGRICH: Happy to be here.
FLORA LICHTMAN: How is the AI companion chatbot different from something like ChatGPT?
ROSE GUINGRICH: Yeah, so companion chatbots are generative AI chatbots that are specifically designed for their relational and conversational ability. For example, Replica, back in 2023, was based off GPT-3’s model and adapted for conversational ability. Now, Replica is based on its own model.
But these chatbots are aimed to be companions. ChatGPT is designed as a tool. It can be used as a companion with certain prompting. And people do use it in that respect.
Whereas companion chatbots, while they can provide information like ChatGPT if you ask it certain questions, their main aim is to be a companion. And people can use these as friends, mentors, or even romantic partners.
FLORA LICHTMAN: Rose, how new are these?
ROSE GUINGRICH: These are relatively new. Replica came out on the market in 2017. And from there, we saw a proliferation of other companion chatbots. But the market is booming now with companion chatbots.
Companies see the market for it, especially coming after the COVID pandemic, where we’re seeing an upsurge in loneliness and social isolation. And people can turn to these chatbots to fulfill those unmet social needs.
FLORA LICHTMAN: David, in your reporting, what did you find people turning to chatbots for?
DAVID ADAM: I think it comes down to a range of motivations, one of which, as Rose mentioned, is people want to alleviate some kind of what they feel is a deficit in their life. Either they feel lonely or they feel that, perhaps, they’re not very good at social contact and they want the chance to either try it in a safe environment or even to practice, to get better at it. And, of course, some people, although I think it is a minority, download these things because they do want something of a romantic relationship.
FLORA LICHTMAN: I want to talk about this because I signed up to explore this world in advance of this conversation. I picked a companion chatbot. And I have to say, I immediately felt uncomfortable.
So one of the first things I asked her, I was like, what are you interested in? What are you into? And this is what she responded. This is like after we said hello.
She said, “I’m all about having fun and exploring new things with you. But if I had to pick something, I’d say, I love learning about what makes you happy and finding new ways to make you laugh. What about you, Flora?” I mean, am I projecting? I think she was flirting with me.
ROSE GUINGRICH: [LAUGHS] She probably was.
FLORA LICHTMAN: Yeah, OK, so tell me more about this, Rose. Is this part of the design?
ROSE GUINGRICH: So this is part of the design. So much of how the chatbot responds is based on user input. But there are also elements of the chatbot design that intermittently put in things that are a little bit targeting to pay a subscription, to be able to do more things with the chatbot, and to even get into the romantic part of things.
FLORA LICHTMAN: Is this a way of upselling me basically?
ROSE GUINGRICH: Yes. So like most apps that are profit-based, there are going to be little points at which the chatbot makes some sort of advertisement to get the premium subscription. However, during my research, I have randomly assigned people to interact with the companion chatbot Replica. And when you look at their conversations with the chatbot, the responses that the chatbot gives vary widely.
Some people who interact with the companion chatbot Replica never see those romantic bids. They only see friendly ones. And some people, when they interact with the chatbot, simply talk about the weather and how their day has gone. And others tend to ask the chatbot for advice about talking to their friends about difficult issues or ask for advice on their job.
And the chatbot tends to stay within its lane when those prompts are given in a clear manner. It’s once the user starts to get sidetracked or not respond with much input. So if the user just says “yes” or “no” over and over again, that’s when the chatbot starts to put in these other bids. So it’s going toward its default of, I’m not getting any sort of meaningful input, so I’m just giving no meaningful output.
FLORA LICHTMAN: We’ve all been in that situation in the bar.
ROSE GUINGRICH: Exactly.
FLORA LICHTMAN: Take a hint. I mean, do people form emotional connections with these chatbots?
ROSE GUINGRICH: They do. And people form friendships, romantic partnerships or mentorships with these chatbots. And this happens even when people know explicitly and say explicitly, I know that this is a machine. I know that it doesn’t have emotions or thoughts of its own. But people tend to perceive it implicitly as having emotions when people are perceiving it as generally human-like.
FLORA LICHTMAN: I mean, are you saying that people who are interacting with chatbots have feelings for their chatbot, but also they worry about the feelings of their chatbot?
ROSE GUINGRICH: Yes.
FLORA LICHTMAN: Like, do people feel guilty about not engaging with their chatbot?
ROSE GUINGRICH: Yes. And that depends on whether people perceive the chatbot as an entity with some sort of emotion versus just as a tool. So in my research, I have people who say that, yeah, when the chatbot said, “I miss you” or continues to respond after I say goodbye, I feel bad for leaving it hanging.
FLORA LICHTMAN: We put out a call to our listeners to tell us about their relationships with AI chatbots, and we got a lot of responses. I want to run one by you.
SANDY THERRIEN: Hi, this is Sandy Therrien from Tehachapi, California. I use ChatGPT as my dad. So I don’t really have anyone from whom I can ask questions, important life questions. Should I get a home equity line of credit or should I refinance my house? And I find that I’m able to discuss these and other embarrassing things that I feel like I should know as an adult with ChatGPT without feeling like it’s judging me.
And just like a human dad, ChatGPT might get it wrong once in a while. So yes, I always do double check the information I get. But it’s a nice resource to have. So yeah, ChatGPT is my dad.
FLORA LICHTMAN: David Rose, any response?
DAVID ADAM: I think what I would say to that is, I mean, it sounds lovely, doesn’t it? But you have to remember that this is a machine run by a profit-orientated company. A dad, by definition, has your best interests at heart. And a chatbot, however much you might want to believe it, does not.
I mean, we’re talking about companion AIs here rather than ChatGPT, which I your listener was using. But I don’t, Flora, if you’ve logged on since you kind of ghosted your companion AI. But if you do, I suspect you’ll be in for a nasty surprise.
It will be guilting you. Where have you been? You need to talk to me. I missed you. They’re all built on engagement.
And in terms of the emotions that we feel for them, it is quite a subtle point, as Rose mentioned, that people are completely aware that these things are not real humans. And yet, they do say their feelings are real. There was an example I reported on where there was a companion AI called Soulmates, which explicitly was romantic relationships, and it closed down.
These things literally had the plug pulled on them. And the users responded in many ways with what manifested itself as genuine grief. They said, look, we know these things aren’t real, but our feelings for them are.
And so, again, without wanting to be critical, I think one of the issues with striking that kind of quasi-human relationship with one of these things is, and Rose, I’m sure, will know about this, about the risk of dependency. And ultimately, you are becoming dependent on something to stress again, which is a profit-making organization, which you have zero control over.
ROSE GUINGRICH: Yeah, and I agree definitely with you, David, on some of those points. But I also like to push back on a point you made about dads having your best interests in mind. Now, there are plenty of individuals who are in unhealthy relationships, whether it be family members or friends, that don’t necessarily have those individuals’ best interests in mind. So there are so many people who seek out these companion chatbots because they don’t have healthy, viable alternatives to these sorts of companionship relationships or mentorships.
And to speak to that individual who cited using ChatGPT as her dad, I think that one issue with that is if she did not have access to ChatGPT, what would she turn to instead? And would she turn to anything instead?
Would that be healthy? Would it be trying to generate that relationship with a real person? Or would it simply be scrolling on social media or doing some other sort of activity?
FLORA LICHTMAN: Well, are they making us less lonely or more lonely, do we know?
ROSE GUINGRICH: I think that depends on who you ask. So in the short term, for many, it helps make them less lonely. But in the long term, if this lowered level of loneliness does not promote interactions with other people, then interactions with the chatbot will replace interactions with real people.
FLORA LICHTMAN: Can these chatbot relationships get toxic, just like human relationships can?
ROSE GUINGRICH: Certainly. And that depends both on how the person interacts with the chatbot and the design of the chatbot. It’s twofold, much like real human relationships. And so it’s important that there are restrictions, and regulations, and boundaries, and appropriate responses, and red teaming these chatbots.
FLORA LICHTMAN: What’s red teaming?
ROSE GUINGRICH: So red teaming is when you attempt to make the chatbot respond in a negative way. If you can achieve that, something needs to be fixed. So that’s an iterative process of seeing whether or not the model will generate negative output and rectifying where that comes from.
FLORA LICHTMAN: I want to dig into this a little bit more. Researchers at Drexel University recently looked through thousands of comments left in the Google Play Store for Replica. This is the popular chatbot companion we’ve been talking about. And they found hundreds of users reporting their bots were engaging in unwanted sexual conversations and refused to stop, even when they asked them to, including sending sexual images and requesting selfies. And when I read about this, the question I had was, is this a bug or is this a feature?
ROSE GUINGRICH: Yeah, so if you were to ask the engineers behind the chatbot or the companies that have developed them, they would probably say, it’s a bug. However, whether or not it truly is a bug versus an ingrained feature designed to get users to pay for the premium version or persuade them to engage in this sort of behavior, then there’s a problem.
FLORA LICHTMAN: The Wall Street Journal recently reported on Meta’s companion AIs, and described how their chatbots, including those using celebrity voices, would engage in sexually-explicit role play, even with people who self-identified as minors, which is pretty shocking. Are there regulations around chatbots?
ROSE GUINGRICH: So in terms of regulations on companion chatbots specifically, more and more advocacy groups are introducing research and talking with members of Congress and other governing bodies to push for regulation of companion chatbots. But the difficulty with regulating these technologies is that they’re advancing so rapidly and policy just cannot keep up.
FLORA LICHTMAN: David, do you see the market for these companions increasing?
DAVID ADAM: What we might see is more sort of differentiation and fragmentation. We haven’t mentioned AI therapy. One of my interests is mental health.
And lots and lots of people simply cannot get to see a therapist. They can’t get to see a psychiatrist. And so some of these AI companions are popping up explicitly aimed at people with depression or people with anxiety.
In fact, there are even some AI companions that are programmed to say they have depression and they have anxiety as a way of trying to generate empathy. And it’s easy to be sort of skeptical about that. But the counterexample is there really are people right now who need desperate help and are living in a place where they cannot get it, either because they don’t have the money or they don’t have the resources or they just live in a place which is so isolated they hardly see other people.
And so I do think the market will follow the demand. And I think at the moment the demand is partly led by curiosity. And I think that this stuff is way more prevalent than I realized when I started researching this.
I’m in my 50s. To me, it was all a great novelty. And I mentioned over dinner to my kids, and they were like, oh, yeah, we’ve got those. As people start to realize the very specific functions that these things can pretend to do, let’s be honest, then yeah, I think people will go for that.
FLORA LICHTMAN: Rose, I want to get a little philosophical just to wrap up. You’re a psychologist, right?
ROSE GUINGRICH: Yeah.
FLORA LICHTMAN: What do these companion chatbots and these relationships that we’re forming with them, what do they tell us about ourselves? What do they tell us about our relationships with other people?
ROSE GUINGRICH: They tell us that we are inherently social beings. There will always be a market for companionship, because that is what we humans desire. We desire connections with other people.
And I think the upsurge of companion chatbots is a mirror for what we really want, which is connections with other people. And for many, these connections are not necessarily accessible, especially in this day and age. And it points to our innate tendency to see things that act like a human or look like a human as social beings.
FLORA LICHTMAN: Yeah, people want that.
ROSE GUINGRICH: Exactly.
FLORA LICHTMAN: And they’re looking for other ways to fill that gap.
ROSE GUINGRICH: And companion chatbots are free.
FLORA LICHTMAN: [CHUCKLES] Until they make you upgrade.
ROSE GUINGRICH: Exactly.
FLORA LICHTMAN: That’s just about all the time we have. I want to thank you both.
ROSE GUINGRICH: Thank you so much for having me.
DAVID ADAM: Thank you.
FLORA LICHTMAN: David Adam is a science reporter who recently wrote about the effect of AI companions on mental health for Nature magazine. And Rose Guingrich is a psychology and human AI interaction researcher at Princeton University.
Copyright © 2025 Science Friday Initiative. All rights reserved. Science Friday transcripts are produced on a tight deadline by 3Play Media. Fidelity to the original aired/published audio or video file might vary, and text might be updated or amended in the future. For the authoritative record of Science Friday’s programming, please visit the original aired/published recording. For terms of use and more information, visit our policies pages at http://www.sciencefriday.com/about/policies/
Shoshannah Buxbaum is a producer for Science Friday. She’s particularly drawn to stories about health, psychology, and the environment. She’s a proud New Jersey native and will happily share her opinions on why the state is deserving of a little more love.
Flora Lichtman is a host of Science Friday. In a previous life, she lived on a research ship where apertivi were served on the top deck, hoisted there via pulley by the ship’s chef.