01/10/2025

‘Artificial General Intelligence’ Is Apparently Coming. What Is It?

16:50 minutes

A collage of a brain with gears around it floating above a hand
Credit: Shutterstock

For years, artificial intelligence companies have heralded the coming of artificial general intelligence, or AGI. OpenAI, which makes the chatbot ChatGPT, has said that its founding goal was to build AGI that “benefits all of humanity” and “gives everyone incredible new capabilities.”

Google DeepMind cofounder Dr. Demis Hassabis has described AGI as a system that “should be able to do pretty much any cognitive task that humans can do.” Last year, OpenAI CEO Sam Altman said AGI will arrive sooner than expected, but that it would matter much less than people think. And earlier this week, Altman said in a blog post that the company knows how to build AGI as we’ve “traditionally understood it.”

But what is artificial general intelligence supposed to be, anyway?

Ira Flatow is joined by Dr. Melanie Mitchell, a professor at the Santa Fe Institute who studies cognition in artificial intelligence and machine systems. They talk about the history of AGI, how biologists study animal intelligence, and what could come next in the field.


Segment Guests

Melanie Mitchell

Dr. Melanie Mitchell is a professor at the Santa Fe Institute in Santa Fe, New Mexico.

Segment Transcript

IRA FLATOW: This is Science Friday. I’m Ira Flatow. The tech world is in a race to create Artificial General Intelligence, called AGI. And it’s been promising AGI’s arrival for years. Now, if you’re not familiar with AGI, that’s what we’re going to do. We’re going to try to get you in the loop. Let’s start.

OpenAI is an organization founded in 2015. It makes the chatbot ChatGPT. It has also said that its founding goal was to build AGI. That is a system that, according to the founders, quote, “benefits all of humanity and gives everyone incredible new capability.”

Others in the field, like DeepMind co-founder Demis Hassabis, have called AGI, quote, “a system that should be able to do pretty much any cognitive task that humans can do.” On the other hand, OpenAI CEO Sam Altman said AGI would, quote, “matter much less than people think.”

So which is it? Will AGI elevate all of humanity, or will it end up not mattering that much? And just what is artificial general intelligence supposed to mean anyhow? A lot to unpack here.

Who better to answer these questions than someone who researches the intersection of machine and human intelligence? That would be Dr. Melanie Mitchell, Professor of Complexity at the Santa Fe Institute, where she researches cognition in artificial intelligence and machine systems. Welcome back to Science Friday. Been a while.

MELANIE MITCHELL: Yeah, thanks for having me.

IRA FLATOW: Nice to have you back. All right. Now that we’ve been rattling off all these definitions of AGI, what is your definition of AGI, and do you think it is really definable?

MELANIE MITCHELL: Well, honestly, I’m not the one to give a definition of AGI because I’m not fond of the term, I have to say, because it is so ill-defined. And it makes the assumption that humans have something like general intelligence, which I think is not really true. Humans have very specific kinds of intelligence that are good for the kinds of environments that we find ourselves in, but not for everything. But AGI has been defined in so many different ways it’s almost lost any rigorous meaning.

IRA FLATOW: Really? So how is AGI then different than, let’s say, ChatGPT or Perplexity or any of those services you use and ask it a prompt or a question?

MELANIE MITCHELL: Well, it depends on how you define it, of course. Originally, AGI was defined as some system that goes beyond doing a narrow kind of task or capability. So if you think back to, say, Deep Blue, which played chess, or AlphaGo, which played Go, they were superhuman in their abilities to play those games, but they couldn’t do anything else. So they were what people called narrow AI.

General AI was supposed to be machines that are more like humans, that have the range of human intelligence, and that are more, you might say, like the AI that we see in movies that can do all kinds of things. So ChatGPT is more general than, say, AlphaGo or Deep Blue. But it certainly can’t do the range of things that humans can do.

So I think that when OpenAI says its goal is to produce AGI, what they mean, in some sense, is to get machines that can do really all of the kinds of things humans can do. But then they put a caveat on it. And you mentioned it at the beginning when you said cognitive tasks.

So they want to separate the idea of being able to do all the so-called cognitive things that we do from the physical things that we do. ChatGPT is not going to go fix your plumbing or reroof your house. So they don’t count those interactions with the physical world in their definition of AGI.

IRA FLATOW: And lately, some people, like Sam Altman, have started throwing around the word superintelligence more often in these conversations. Not to open another can of worms here, but how is that different from artificial general intelligence?

MELANIE MITCHELL: Good question. So I think we have this notion of human-level AI, which is the equivalent to AGI, aside from all those physical things that I mentioned. But superintelligence is AI that’s better than humans across the board.

We already have AI systems that are much better than humans at many different tasks. We’ve had that for a long time, including playing chess, or navigating a city very quickly with maps, and so on. But superintelligence in this context, the kind that Sam Altman is looking for, is AI systems that are better than humans at everything.

IRA FLATOW: Is this like Ray Kurzweil’s singularity moment, when AI surpasses human intelligence and we become the robots?

MELANIE MITCHELL: [LAUGHS] Yeah, in some sense that’s right. So Kurzweil’s singularity is the moment that AI systems become smarter than humans. And in his view, they’re going to be able then to improve themselves.

And so you get this kind of feedback loop, positive feedback loop where they’re getting smarter and smarter and smarter. And then we have this singularity where machine intelligence sort of becomes, in some sense, incomprehensible to humans. That’s Kurzweil’s vision.

IRA FLATOW: Yeah. So that’s not the superintelligence Sam Altman is talking about, or is it?

MELANIE MITCHELL: I think it is. And Sam Altman has this idea that once we get superintelligence, we’ll get super-superintelligence and then super-super-superintelligence. And then we’ll have machines that have cured cancer and figured out how to colonize Mars and all of these things.

IRA FLATOW: Right. Let’s start talking about this word “intelligence.” And this is a big question. How do you study intelligence? What do people who do this work think about the concept of intelligence itself?

MELANIE MITCHELL: Right. So intelligence is kind of an umbrella term for a lot of different kinds of capabilities. And we think of our abilities to reason, to figure out what’s causing what in the world, to figure out how to interact with other people and understand them better.

I don’t think intelligence is any one thing. It’s a whole host of capabilities. And some people have strengths in some areas of intelligence and others have other kinds of strengths. Some animals are more intelligent in some areas than even humans. But humans have this capability for reasoning and for reasoning about our own reasoning and kind of being able to understand the world in a deeper way than perhaps any other species.

IRA FLATOW: Does that include self-awareness?

MELANIE MITCHELL: I think it does. I think self-awareness is a key part of intelligence because it helps us understand and reason about our own thinking. And I think self-awareness is something that current machines lack.

ChatGPT doesn’t have self-awareness. It doesn’t have the concept of itself as an entity, I believe. And that’s part of what’s keeping it from being more intelligent. It doesn’t have any sense of whether what it’s saying is true or false, or whether it has more confidence versus less confidence in some of its statements. And because of that, it can produce untrue things with just as much confidence as the true things that it generates.

IRA FLATOW: Well, you know, people who claim to be intelligent also can produce untrue things–

MELANIE MITCHELL: Absolutely.

IRA FLATOW: –that they know to be untrue.

MELANIE MITCHELL: Right. And when you say they know it to be untrue, that’s a kind of self-awareness. That’s an intention. Whereas these systems, like ChatGPT, don’t have these kinds of intentions. They don’t have the intention to be deceptive or to be truthful. They’re really just generating text according to some probabilities that they’ve calculated about the most likely kinds of things they should be saying.

IRA FLATOW: Mm-hmm. Let’s talk about the history a bit behind this term, AGI, but also, AI itself. I know you’ve been studying this for a while. How far does it go back?

MELANIE MITCHELL: So the term AI goes back to the 1950s, when a group of people had a meeting at Dartmouth College about this new field, and they had arguments about what to call it. And one of the founders of the field, John McCarthy, suggested artificial intelligence as a way to distinguish the field from other kinds of fields that were studying intelligence at the time.

He later regretted calling it artificial intelligence because why are we calling it artificial? We should be seeking actual, real intelligence.

IRA FLATOW: Yeah. Makes sense. Yeah.

MELANIE MITCHELL: But other people proposed other terms for this field. One of them was Herbert Simon, who suggested complex information processing, which avoided the anthropomorphism of the notion of intelligence. So you can imagine that maybe the way we think about these systems might have been a little different if we didn’t call them artificial intelligence.

IRA FLATOW: Right. And you’ve written about how Star Trek has had a kind of outsize influence on the direction of the field as a whole.

MELANIE MITCHELL: Yeah, exactly. As I said in the book I wrote on AI, there’s probably a pretty good correlation between the people who are drawn to study AI and the people who like Star Trek.

And one of the elements in the early Star Trek episodes was a computer, just called “computer,” that would answer any question. You could ask it anything, and it would give you a very cogent, concise, correct answer. It knew everything. And a lot of people in AI have said that this computer was sort of their North Star for building AI systems. They wanted an AI system that was like the computer in Star Trek.

IRA FLATOW: But we’re close to that, I mean, on a superficial level, aren’t we? You can ask ChatGPT and speak to it and get an answer.

MELANIE MITCHELL: Yeah. So we are closer than we’ve ever been, for sure. But ChatGPT and these other generative AI systems lack something that that computer had, which is trustworthiness. You could trust anything the Star Trek computer told you.

But with ChatGPT, while most of the things that it tells you are correct, it does have a tendency to do what people have called hallucinate, which is to generate very confident-sounding answers that actually are untrue. So trustworthiness is, I think, the next frontier with these kinds of machines. The final frontier, if you will.

IRA FLATOW: Do we need another breakthrough somewhere down the line to get these systems to be more self-aware? I guess it’s the intelligence that we can’t define. You can’t really define AGI right now. So it’s the old phrase: I don’t know exactly what it is, but I’ll know it when I see it.

MELANIE MITCHELL: Yeah. I think we need a couple of breakthroughs. One would be in how to make these systems more trustworthy, more self-aware, with a better notion of what they’re talking about and whether it’s true or false.

We also need to better understand what we mean by intelligence. As you’ve said, we know it when we see it. But one of the problems is that we’ve thought that for maybe millennia, and it turns out we’ve often been wrong.

So just as an example, people used to think before the age of Deep Blue, the chess-playing computer, that in order to play chess at a Grandmaster level or a superhuman level, you’d have to have superhuman general intelligence. But it turned out that you could get a computer to play chess at this superhuman level without anything like we’d call general intelligence.

The same thing has been said of things like speech recognition and conversation, like the kind that we have now with ChatGPT, that to get those kinds of abilities, you’d need something like general human intelligence, human-level intelligence.

But it’s turned out that we can accomplish these kinds of capabilities without having this idea of general intelligence, the kind that the pioneers of AI really were looking for. So it’s really taught us a lot about how hard it is to define what we mean by intelligence, and to know when we have a system that’s close to having that.

IRA FLATOW: Well, with that caveat in mind, especially about how wrong we always are in predicting the future, I want you to predict the future for me. Here it is, beginning of 2025. What should we be paying attention to this year? What kind of developments are you expecting to hear about in AGI or any of– or from any of these companies?

MELANIE MITCHELL: Well, I think one thing is that this word, AGI, has gotten so much cachet that people will be trying to redefine it into existence, to say, well, what we have at the end of 2025 is clearly AGI because– and then have some definition that captures what we have. So I do think that that is likely to happen.

But I also think that people are going to realize that these systems are actually lacking a lot of the very important aspects of what makes human intelligence trustworthy, when it is, and more general. And those are the things people are going to start focusing much more effort on in their development of AI systems.

IRA FLATOW: Yeah. Because people in general are afraid. They’re fearful of computers from what they see in science fiction and what they’re watching in their real life now. The machines have to become more trustworthy.

MELANIE MITCHELL: Absolutely. There’s the fear that the machines will get too intelligent and will take over. But there’s also the opposite fear that the machines won’t be intelligent enough to do the things we give them to do. But we’ll trust them too much, and they will fail in ways that we didn’t expect. So I think both of those fears might be worth considering.

IRA FLATOW: Yeah, I can see that cutting both ways, good and bad, like in medicine and even in computer warfare, if you trust your machines, right?

MELANIE MITCHELL: Yeah. We don’t want to trust them too much when they haven’t, in some sense, earned our trust.

IRA FLATOW: Well, you have earned our trust, Dr. Mitchell. I want to thank you for taking time to be with us today.

MELANIE MITCHELL: Well, thanks very much. This is a very important topic, and I’m thrilled to be able to talk about it with your audience.

IRA FLATOW: Well, you’re welcome. We’re happy to have you. Dr. Melanie Mitchell, Professor of Complexity at the Santa Fe Institute in Santa Fe, New Mexico.

Copyright © 2025 Science Friday Initiative. All rights reserved. Science Friday transcripts are produced on a tight deadline by 3Play Media. Fidelity to the original aired/published audio or video file might vary, and text might be updated or amended in the future. For the authoritative record of Science Friday’s programming, please visit the original aired/published recording. For terms of use and more information, visit our policies pages at http://www.sciencefriday.com/about/policies/

Meet the Producers and Host

About D Peterschmidt

D Peterschmidt is a producer, host of the podcast Universe of Art, and composes music for Science Friday’s podcasts. Their D&D character is a clumsy bard named Chip Chap Chopman.

About Ira Flatow

Ira Flatow is the founder and host of Science Friday. His green thumb has revived many an office plant at death’s door.
