An AI Leader’s Human-Centered Approach To Artificial Intelligence
Just about every day there’s a new headline about artificial intelligence. OpenAI co-founder and CEO Sam Altman was forced out, and then dramatically returned to his post—all in the span of a week. Then there’s the recent speculation about a revolutionary new model from the company, called Q*, which can solve basic math problems.
Beyond the inner workings of AI’s most high profile startup are stories about AI upending just about every part of society—healthcare, entertainment, the military, and the arts. AI is even being touted as a way to help solve the climate crisis.
How did we get to this moment? And how worried or excited should we be about the future of AI? No matter how it all shakes out, AI leader and early innovator Dr. Fei-Fei Li argues that humans should be at the center of the conversation and the technology itself.
Ira talks with Dr. Fei-Fei Li, founding director of the Institute for Human-Centered AI at Stanford University and author of the book The Worlds I See: Curiosity, Exploration and Discovery at the Dawn of AI, about her path from physics to computer science and the promise and potential of human-centered artificial intelligence.
Dr. Fei-Fei Li is the author of The Worlds I See: Curiosity, Exploration and Discovery at the Dawn of AI, and the founding director of the Human-Centered AI Institute at Stanford University in Stanford, California.
IRA FLATOW: This is Science Friday. I’m Ira Flatow.
Just about every day, there’s a new headline about AI, or artificial intelligence– the recent chaos involving OpenAI CEO Sam Altman being forced out and then dramatically returning to his post, speculation about a revolutionary new Q-star model from the company. But beyond the inner workings of AI’s most high-profile startup are stories about AI upending just about every part of society– health care, entertainment, the military, the arts. Who knows– AI might even help solve the climate crisis.
But how exactly did we get to this moment? How worried or excited should we be about the future of AI and us? No matter how it all shakes out, my next guest argues that humans should be at the center of the conversation and the technology itself. Dr. Fei-Fei Li is the author of the book, The Worlds I See: Curiosity, Exploration and Discovery at the Dawn of AI. She’s also the founding director of the Human-Centered AI Institute. That’s at Stanford University out there in California.
Dr. Li, welcome to Science Friday.
FEI-FEI LI: Thank you, Ira, for inviting me.
IRA FLATOW: Nice to have you. Now, I mentioned this– when you say that AI needs to be human centered, explain that for us, please.
FEI-FEI LI: Well, OK. Well, let’s begin with what AI is. AI is a piece of tool. I know it’s a very intricate and intriguing piece of tool that humans have made. And I do believe this is a piece of tool that is very powerful and will transform human society, will transform business, and all that. But at the end of the day, it’s a piece of tool.
Tools are made by humans, are being deployed by humans, and should be used properly by humans. And no matter how you think about it, I put humans in the very center of this technology because of our responsibility in the creation and application of it.
IRA FLATOW: But people view these more than just tools. They view them as intelligent tools, and are fearful that they may become more intelligent than the toolmakers.
FEI-FEI LI: I hear you. I do hear you. And a lot of this is because it’s new. It’s unknown. And when we face something new and unknown, it’s scary. And this is not the first time humanity has faced that.
FEI-FEI LI: Think about history– when we first discovered fire as a species, when we created electricity, when we created the PC– every major technological advancement along the way has created anxiety and disruption. This is the same. And of course, it is an intelligent piece of tool, in the sense that it takes data, understands patterns, helps to make decisions, and all that. But as far as a piece of software goes, this is still very much a piece of tool.
IRA FLATOW: So you do not believe that AI poses an existential threat, correct?
FEI-FEI LI: Let me be more nuanced with my answer. First of all, as a scholar, I do respect discussions about this. Look, I live on the Stanford campus, where my colleagues discuss everything from the archaeology of the Roman Empire all the way to the smallest bacteria we can find in human bodies. So there is a lot of curiosity here, and where intelligent machines are going as a potential piece of software is a worthy topic.
But as of now, I see AI’s more urgent and pressing risks in the social domain, such as disinformation’s threat to democracy, job changes, bias and privacy infringement, and many more.
IRA FLATOW: It’s something you can actually feel.
FEI-FEI LI: It’s something that impacts everybody.
IRA FLATOW: Right.
FEI-FEI LI: Impacts everyday people.
IRA FLATOW: Let’s worry more about those than about what AI is up to.
FEI-FEI LI: I think we need to take responsibility for those issues immediately. And that’s important. And because of the imbalance in the discussion of these kinds of risks and issues versus the existential crisis, I feel it’s my responsibility– especially being the co-director of the Stanford Human-Centered AI Institute– we should be communicating this.
IRA FLATOW: Your book is part memoir, part history of AI. But I found it interesting that you had to be convinced to include your personal story in the book, right? Tell me about that.
FEI-FEI LI: Yes. So I was invited to write an AI science book for a popular audience about three and a half years ago. I remember it was the beginning of COVID. And of course, as a scientist, I spent a year writing a first draft of a science book. And I showed it to my very good friend, Professor John Etchemendy, a philosopher and co-director of Stanford HAI.
And he literally said, you have to rewrite. And it was pretty hilarious. But to me it wasn’t that funny when someone told me to rewrite after a whole year. But he was very convincing. He said, look, Fei-Fei, there are many AI technologists who can focus on a pure science book. But if you are talking to the greater audience, there are so many immigrants, young women, people of all walks of life, people of all disciplines, they are lacking a voice they can identify with.
And he believed that I could embody that voice. And I think he’s right. So I had to rewrite the book in the double-helix structure, where I use my personal journey of coming of age as a scientist to carry the very serendipitously intertwined story of AI coming of age.
IRA FLATOW: Wow. That’s a great metaphor. I’ve never heard of the double-helix metaphor used in telling a story, but it certainly fits.
FEI-FEI LI: Yeah, I’m a nerd.
IRA FLATOW: You talk about moving from China to suburban New Jersey as a teenager. And how did this experience shape your curiosity and eventual career in AI?
FEI-FEI LI: Yeah, Ira, that’s a great question. It did dawn on me while I was writing the book that there’s so much similarity– far more than I thought– between being an immigrant, especially learning a new language and getting to know a new country, and being a scientist. Both really propel you, or put you, into a situation of the unknown. And then you have to explore. You have to find your inner North Star and just have that grit and determination and develop resourcefulness to go after something that you’re curious about.
So in a way, maybe the immigrant experience did shape me as a scientist in the sense of being very curious and not afraid of an unknown situation.
IRA FLATOW: You studied physics–
FEI-FEI LI: Yes.
IRA FLATOW: –as an undergrad. You liked physics.
FEI-FEI LI: I loved physics. Actually, that was my first North Star. Between Einstein and everything, this is why I went to Princeton and majored in physics.
IRA FLATOW: I share this love of yours. I never had the kind of intelligence to do what you do. But how did physics lead you to computer science and then on to AI? What’s the connection there?
FEI-FEI LI: Yeah. Well, when I was a kid– a teenager kid– a rather lonely one since I didn’t speak much of–
IRA FLATOW: That I can relate to.
FEI-FEI LI: Yeah, I didn’t speak much of the language. I was busy trying to make a living. I read a lot of Einstein and physics. I loved the physics classes. So I wanted to major in physics. In my book, I also talk about Neil deGrasse Tyson’s class. He taught me astrophysics.
IRA FLATOW: He did?
FEI-FEI LI: He did.
IRA FLATOW: A great teacher.
FEI-FEI LI: Yeah. Well, amazing teacher. I did not realize who he was when he was my professor. But what I really loved about physics is the audacity to ask the most fundamental questions about the universe. The physicists are not afraid. You see the stars moving, and then you start to imagine a gravitational force that can be captured in one equation that explains the movement of all the heavenly bodies. You go after, what is the beginning of space and time? You go after questions like, what’s the smallest matter? Can you break down an atom?
I mean, these are just, in a way, crazy questions to ask. Yet physics as a discipline gives you both the rigor as well as the fearless curiosity to chase these questions. And that was what I loved about it. And then, in the middle of my physics study during my Princeton years, I started reading the great physicists of the 20th century. And toward the second half of their careers, they started to ponder questions beyond the physical world.
Schrödinger wrote What Is Life? Roger Penrose wrote about the mind. And Einstein always had such a fluid mind, pondering so many things. It took me on an unexpected turn toward becoming more curious about life.
And once I became more curious of life, I was naturally drawn to the most mysterious, audacious questions I could ask as a student at that time, which is, what is intelligence? What makes humans intelligent? And can we make machines intelligent? And that led me to artificial intelligence as well as human neuroscience.
So I do have, in a way, a relatively untraditional path into computer science. It was not video games and it was not just hacking software. It was physics.
IRA FLATOW: And being fearless about asking the questions?
FEI-FEI LI: Yes.
IRA FLATOW: That’s important in science, isn’t it?
FEI-FEI LI: Oh, it’s essential in science.
IRA FLATOW: And believing that you can understand what intelligence is. If you’re going to make an artificial intelligence, you have to have some sort of belief that you can decipher what intelligence is, do you not?
FEI-FEI LI: Yes and no. I believe in that journey. I believe that we need to go on that quest. But what is really curious is that the process of making an intelligent machine and the process of understanding the human brain are simultaneously parallel and intertwined. The understanding of the brain inspires AI, but it doesn’t limit us from making a different kind of machine– a thinking machine.
IRA FLATOW: Let’s talk about neural networks. We’ve heard about those things. You work with neural networks. What is a neural network? And how does it compare to what’s going on in my own brain?
FEI-FEI LI: Yeah. Well, let’s start with the most organic and amazing neural network nature has made, which is the brain. What does our brain look like?
There’s a piece of work by neurophysiologists Hubel and Wiesel, in the late ’50s, that eventually won a Nobel Prize in Medicine. They were wondering how mammals see. Other than knowing the functions of the retina and eyes– which are really sensors that collect light and send electrical signals back into the brain– we didn’t really know how you go from photons stimulating your retina to, oh, I see a fish. That is a computational question. And they were probing the mammalian brain using electrodes. At that time, that was very, very advanced experimental technology.
But what they found are two remarkable things about the mammalian visual brain, which eventually inspired the computer neural network. The first thing– well, we know the brain is made of small cells called neurons– is that every neuron in the cat visual brain, especially close to the retina, in what we call the early-stage visual brain, responds to something simple. It responds to, say, a moving bar that’s–
IRA FLATOW: Shapes, if I remember correctly.
FEI-FEI LI: Right. But it’s really a simple shape. It’s really an edge.
IRA FLATOW: Right.
FEI-FEI LI: The edge of a particular orientation– say, 45 degrees, moving from right to left. And they found that there are millions and millions of these neurons that all respond to something slightly different– at the beginning, just edges of slightly different orientations. And then you go to the next layer, where these neurons send their signals, and this next layer responds to something slightly more complex– maybe just a corner. And then you keep going. There’s a hierarchy of information propagation.
And eventually, when you go high enough in the brain, there’s something that corresponds to, I see an object that’s a fish. So what they found is that the fundamental computing unit of the brain– the neuron– responds to simple signals, and that many of them stacked together in a network can give you a more complex computation, like seeing a fish. These two concepts– individual neuron units, put together in a hierarchical network that propagates information and learns about the input signal– are the foundation of a neural network.
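The hierarchy Dr. Li describes can be sketched in a few lines of Python. This is a toy illustration, not Hubel and Wiesel's actual data: the image, the edge filters, and the "corner" rule are all invented. First-layer "neurons" respond to simple oriented edges, and a second-layer "neuron" fires only when simpler responses combine.

```python
import numpy as np

# Toy 5x5 "image": a bright vertical bar on a dark background
image = np.zeros((5, 5))
image[:, 2] = 1.0

# Layer 1: two simple edge detectors of different orientation
vertical_edge = np.array([[-1, 2, -1]] * 3) / 3.0   # responds to vertical bars
horizontal_edge = vertical_edge.T                    # responds to horizontal bars

def respond(patch, filt):
    """A 'neuron' fires in proportion to how well the patch matches its filter."""
    return max(0.0, float(np.sum(patch * filt)))     # negative responses are silenced

def layer1(image, filt):
    """Slide a 3x3 filter over the image; each position is one 'neuron'."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = respond(image[i:i+3, j:j+3], filt)
    return out

v_map = layer1(image, vertical_edge)
h_map = layer1(image, horizontal_edge)

# Layer 2: a "corner-like" neuron that fires only when BOTH a vertical and a
# horizontal edge are present somewhere below it in the hierarchy
corner_response = min(v_map.max(), h_map.max())

print(v_map.max())      # strong: the image contains a vertical bar
print(h_map.max())      # zero: no horizontal edge anywhere
print(corner_response)  # zero: no corner in this image
```

Stacking more such layers, each combining the previous layer's outputs, is the same simple-to-complex progression she describes, and it is the skeleton of a convolutional neural network.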
IRA FLATOW: This is Science Friday, from WNYC Studios. If you’re just joining us, I’m talking with computer scientist Fei-Fei Li about her new book, The Worlds I See: Curiosity, Exploration and Discovery at the Dawn of AI.
So you have to train the computer.
FEI-FEI LI: Absolutely, you have to train.
IRA FLATOW: And what does that look like?
FEI-FEI LI: You give the computer– for example, say you want to train the computer to see a cup that’s in front of me. You give the computer many, many, many cups at different angles, in different lighting. And the neural network has all these neurons– they’re small, tiny mathematical functions, connected to one another. But a mathematical function has parameters, right? You have to tune them. And then you use a training algorithm to tune these parameters.
And there is a goal for this algorithm. There are different types of goals. We call them mathematically “objectives.” Let’s just make an example. The objective here is to see this as a cup versus something else that’s not a cup. So it’s a simple goal of cup and non-cup.
Well, every time you give it a training picture of a cup, you tweak your parameter so that it tries to answer this picture as a cup. And if it’s wrong, the system sends a signal, saying you’re wrong, and then you tweak again. And then you do this many, many, many times. You train it with a cup picture or a non-cup picture. And then you eventually learn. That’s just one type of learning. I’m simplifying.
IRA FLATOW: Well, that’s good because I understood that. That was very good.
FEI-FEI LI: I’m glad.
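The cup-versus-non-cup loop Dr. Li just walked through can be sketched in Python. This is deliberately a toy version of what she describes– a single "neuron" instead of a deep network, with invented two-number "pictures" standing in for real images– but the cycle of predict, get the "you're wrong" signal, and tweak the parameters is the same idea.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    """Squash a weighted sum into a 0-to-1 'how cup-like is this?' score."""
    return 1.0 / (1.0 + np.exp(-z))

# Invented training set: 2 features per "picture" (imagine roundness and
# handle-ness). Label 1 = cup, 0 = not a cup.
cups     = rng.normal(loc=[2.0, 2.0],   scale=0.5, size=(50, 2))
non_cups = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(50, 2))
X = np.vstack([cups, non_cups])
y = np.array([1.0] * 50 + [0.0] * 50)

# The tunable parameters of our one-neuron "network"
weights = np.zeros(2)
bias = 0.0
learning_rate = 0.1

# "You tweak again, many, many times": repeated passes over every picture
for epoch in range(20):
    for features, label in zip(X, y):
        prediction = sigmoid(weights @ features + bias)
        error = prediction - label              # the "you're wrong" signal
        weights -= learning_rate * error * features
        bias    -= learning_rate * error

# After training, the neuron should score cups near 1 and non-cups near 0
print(sigmoid(weights @ np.array([2.0, 2.0]) + bias))    # close to 1
print(sigmoid(weights @ np.array([-2.0, -2.0]) + bias))  # close to 0
```

The "objective" here is the simple cup/non-cup goal she mentions; real networks stack millions of such units and use the same error-driven tuning, just propagated through every layer.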
IRA FLATOW: If you’re just joining us, we’re continuing our conversation with AI pioneer Fei-Fei Li, author of the new book, The Worlds I See: Curiosity, Exploration and Discovery at the Dawn of AI. She’s also the founding director of the Human-Centered AI Institute at Stanford University in California.
I know you’re the creator of ImageNet, which uses this algorithm we’ve just been talking about. The project wasn’t exactly smooth sailing–
FEI-FEI LI: No.
IRA FLATOW: –all the time. It was not met with immediate adoration. When did you realize that it would shape the field of AI so profoundly; that you were right about what you were doing?
FEI-FEI LI: Well, there are different ways of realizing you’re right. When I hypothesized this project, I was driven by the scientific mission and quest– I knew that we needed to use data to drive AI algorithms. So from that point of view, I was delusionally confident that I was right. I did not care that there were so many people who told me that I was wrong. So that was one way of feeling I was right. But it doesn’t mean it was easy. I was facing a lot of pushback.
And then, of course, the project proceeded. We finished the project. And then, fast-forward five or six years after the onset of the project, we got to the moment that the world knows as the beginning of the deep learning revolution, when ImageNet, a convolutional neural network, and literally two GPUs showed progress on a visual intelligence task that was really unexpectedly big. That was the moment of external validation.
IRA FLATOW: How do you keep going when so many people are telling you, well, maybe this is not right? I mean, what is there about your personality? Did you have this growing up?
FEI-FEI LI: Well, Ira, this goes back to what we talked about. Whether you call it personality or whatever, somehow I started with that North Star. As a scientist, I’m driven by a North Star, that audacious quest. And once I identify that audacious quest, it is relatively easy for me to tune out the other voices.
IRA FLATOW: I know one of the big challenges is the bias baked into some of these algorithms that we’re talking about. The algorithms are only as good as the data they’re based on, which replicates things like racism and sexism in the real world. How do we get better at that?
FEI-FEI LI: Yeah, it’s an important issue. I mean, algorithm bias is one of the many risks that AI technology brings. And there are multiple ways to mitigate this. There’s the technological way I’ll get into. But there is also the social norm and regulatory framework, which is also important.
On the technology side, we know a lot more today about where bias comes in. It starts with the way we design and curate data. It has to do with the algorithm itself. And it also has to do with how we use the output of the algorithm. And because we now know so much more, there are technological solutions– being careful with your training data, knowing how to balance the data.
But there is also the social piece. Whether you’re a researcher or you’re developing a product, there’s more and more awareness in a social context of the harm of data bias and algorithm bias. And we try to mitigate that. Eventually, we will need some guardrails. Depending on the vertical space– whether it’s health care or finance– some of the guardrails need to assess and evaluate issues like bias.
IRA FLATOW: There are so many fits and starts in the history of AI. It seems like a game changer, then it sort of falls to the wayside. Is the current AI any different than that? Are we on the right path? Is there a right path? Is that the wrong question to ask?
FEI-FEI LI: That’s a great question to ask. And it’s not a wrong question. It requires a nuanced answer. Let me first share with you– I do believe we’re at an inflection point. I know there have been bubbles and bubble bursts, hypes and deflations. But from a technological point of view, the latest wave of large language models– set forth almost exactly one year ago by OpenAI, but also by other technology companies– is, in my opinion, an inflection point in the capability of this technology. But it’s also an inflection point in public awakening, including the policy circle’s awakening.
IRA FLATOW: I want to ask you one last question. If you could take out your crystal ball– I mean, there are people alive now who are over 100 years old. Some of them are 110. And their lifetime has spanned just about all of modern physics, going back to Einstein and relativity and quantum mechanics and black holes. Can I have you take out this crystal ball– maybe not look so far ahead, 100 years from now– but maybe when I have you back in that seat 10 or 15 years from now.
FEI-FEI LI: I hope I get back earlier than that.
IRA FLATOW: OK. Well, tell me why you would be back earlier and tell me what would be happening to bring you back earlier and where would you see things going.
FEI-FEI LI: Well, I do think AI is a transformative force in our society’s upcoming change. And the continued dialogue and exchange of ideas with the public is very important. I do believe this technology will continue to progress. We have seen the language-based models getting more and more incredible. But we also are going to see multi-modal. We’re going to see vision and videos. We’re going to get into more robotic advancements. All this is part of AI’s future.
IRA FLATOW: Well, we have run out of time. I’m so happy to have you as a guest and to talk with you about all of this.
FEI-FEI LI: Thank you, Ira.
IRA FLATOW: You’re welcome. Dr. Fei-Fei Li is the author of the new book, The Worlds I See: Curiosity, Exploration and Discovery at the Dawn of AI. She’s also the founding director of the Human-Centered AI Institute. That’s at Stanford University, based in Stanford, California.