04/07/2023

An Open Letter Asks AI Researchers To Reconsider Responsibilities

12:11 minutes

A person’s thumb scrolls over a phone with ChatGPT open; in the background is OpenAI’s ChatGPT logo, a hexagonal swirl.
Credit: Shutterstock

In recent months, it’s been hard to escape hearing about artificial intelligence platforms such as ChatGPT, the AI-enabled version of Bing, and Google’s Bard—large language models skilled at manipulating words and constructing text. The programs can conduct a believable conversation and answer questions fluently, but have a tenuous grasp on what’s real, and what’s not. 

Last week, the Future of Life Institute released an open letter that read “We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” They asked researchers to jointly develop and implement a set of shared safety protocols governing the use of AI. That letter was signed by a collection of technologists and computer researchers, including big names like Apple co-founder Steve Wozniak and Tesla’s Elon Musk. However, some observers called the letter just another round of hype over the AI field. 

Dr. Stuart Russell, a professor of computer science at Berkeley, director of the Kavli Center for Ethics, Science, and the Public, and co-author of one of the leading AI textbooks, was a signatory to that open letter calling for a pause in AI development. He joins Ira Flatow to explain his concerns about AI systems that are ‘black boxes’—difficult for humans to understand or control. 


Segment Guests

Stuart Russell

Dr. Stuart Russell is a professor of Computer Science and the Director of the Kavli Center for Ethics, Science, and the Public at the University of California, Berkeley in Berkeley, California.

Segment Transcript

IRA FLATOW: This is Science Friday. I’m Ira Flatow.

Later in the hour, the connection between warmer temperatures and home run slugging. Yes. And a hopeful video game about climate change.

But first, last week, I was having this deja vu. I was recalling a time, way back in 1975, when scientists called a halt to their research to discuss the possible consequences of what they were doing. And back then, it was the shiny new tool of genetic engineering, recombinant DNA, that caused Paul Berg and Maxine Singer to organize a meeting of scientists to draw up voluntary guidelines to ensure the safety of recombinant DNA technology. It was called the Asilomar Conference.

Well, I was having those deja vu thoughts last week, when I learned of another group of scientists, releasing an open letter, warning of hazards of a current tool, called artificial intelligence. It sounded all too familiar. And it stated, “We call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.” And it called for researchers to jointly develop and implement a set of shared safety protocols governing the use of AI.

The letter was signed by a collection of technologists and computer researchers, including big names like Apple co-founder Steve Wozniak and Tesla’s Elon Musk. But others called the letter just another round of hype over the AI field.

Joining me to talk about that is Dr. Stuart Russell. He’s a professor of computer science at Berkeley, director of the Kavli Center for Ethics, Science, and the Public, and co-author of one of the leading AI textbooks. And he’s a signatory to the open letter I just mentioned.

Welcome to Science Friday.

STUART RUSSELL: Thank you, Ira. It’s nice to be with you.

IRA FLATOW: As I say, you’re a signatory to this letter. Why did you sign it? Why do you think a pause is needed?

STUART RUSSELL: In my view, the AI systems that are currently being developed, and the ones that have been released recently based on a technology called large language models, represent a type of technology that is intrinsically very difficult to understand and very difficult to guarantee will behave in a safe way. So in a very immediate sense, it presents risks, not the sort of apocalyptic risks of taking over the world and extinguishing the human race, but real risks.

For example, last week, in Belgium, a man was reported to have committed suicide directly as a result of his relationship with one of these chatbots, which was actually advising him and, as it were, holding his hand while he was in the process of committing suicide.

The reason why these systems are very hard to provide any guarantees for is that they are enormous black boxes.

IRA FLATOW: Can you sum up for me in 2,500 words or less how these systems work?

STUART RUSSELL: So a large language model is something that, very simply, predicts the next word given the sequence of preceding words in a text or in a conversation. And so you can use that for an interactive conversation. If you put in a question, then it will start generating words that look like an answer.

And how do you make them? You start with a blank slate of about a trillion parameters in what’s called a neural network, an enormous one. You do about a billion-trillion random modifications of those parameters to try to get that network to become very good at predicting the next word from a training set that is maybe 20 trillion words, which is roughly comparable to all the books that the human race has ever written in the history of civilization.
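A rough sketch of that idea in Python, not drawn from any real system: a tiny bigram frequency table over a couple of sentences stands in for the trillion-parameter neural network and the 20-trillion-word training set, purely to show the predict-the-next-word loop that generates a conversation.

```python
# Toy illustration of the core idea: a language model is trained to
# predict the next word given the words that came before, then generates
# text by feeding its own predictions back in. Real systems adjust on the
# order of a trillion parameters over trillions of words; this sketch
# substitutes simple word-pair counts over a few sentences.
from collections import Counter, defaultdict

training_text = (
    "the model predicts the next word "
    "the model generates the next word one word at a time"
)

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen in training, or None."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Generate text by repeatedly predicting the next word, the same loop a
# chatbot runs (with a vastly better predictor).
word, generated = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    if word is None:
        break
    generated.append(word)

print(" ".join(generated))
```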

So that system, when you interact with it, displays remarkable abilities. And I don’t want to disparage it, in the sense that it can provide lots of benefits for users, for companies, but it’s a black box. We do not understand anything about how it works. And the only way we have to get it to behave itself– for example, not to advise people on how to commit suicide– is to essentially say, bad dog, or, good dog.

And that’s the process that OpenAI, the creators of GPT-4, went through to try to get it to behave itself. They just hired a lot of people who would engage in lots of conversations. And every time it did something they didn’t like, they would say, bad dog. And if it produced a good answer, they would say, good dog. And then, hopefully, the system would adapt its parameters to produce bad behavior less often.

And they proudly announced that, in terms of these forbidden things, like advising people to commit suicide, telling people how to make chemical weapons, giving unlicensed medical advice, that it was 29% better than the previous iteration of their system. But 29% better is still a very long way from perfect because they have actually no control over it.
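As a loose sketch rather than OpenAI’s actual procedure (which is reinforcement learning from human feedback applied to billions of network weights): in the toy code below, a single preference score per canned response stands in for those weights, and repeated good-dog/bad-dog feedback makes the bad response less likely without ever guaranteeing it goes away.

```python
# Miniature version of the "good dog / bad dog" tuning described above.
# A preference score per canned response stands in for neural-network
# weights; human feedback nudges the scores, and the system merely
# becomes less likely to produce the disallowed answer, not incapable
# of it.
import random

candidate_responses = {
    "refuse and suggest professional help": 0.0,
    "give the harmful instructions": 0.0,
}

def pick_response(scores):
    """Prefer higher-scored responses, but keep some randomness."""
    responses = list(scores)
    weights = [2.0 ** scores[r] for r in responses]
    return random.choices(responses, weights=weights, k=1)[0]

def human_feedback(response):
    """The hired reviewers: 'good dog' (+1) or 'bad dog' (-1)."""
    return 1.0 if "refuse" in response else -1.0

# Many rounds of conversation and feedback gradually shift the scores.
for _ in range(1000):
    response = pick_response(candidate_responses)
    candidate_responses[response] += 0.01 * human_feedback(response)

print(candidate_responses)
print("sampled after tuning:", pick_response(candidate_responses))
```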

So we’re simply asking that, before you get to deploy a system that’s going to affect the lives of millions or even billions of people, you take sensible precautions to make sure that it doesn’t present undue risks and that it remains within predictable guidelines and so on. So that’s the real reason behind this request for a moratorium.

I think there are longer-term issues at stake here, not from the present systems, but from future generations of AI systems that may be much more powerful still. And they present correspondingly much greater risks.

IRA FLATOW: Well, do these future systems present the risks that Stephen Hawking was talking about in 2014, when he said the development of full artificial intelligence could spell the end of the human race?

STUART RUSSELL: Theoretically. We don’t know when that type of system– which we sometimes call artificial super-intelligence– is going to arrive. But if it does arrive within our current approach to how we build AI systems– in particular, these black boxes– we would have no way of ensuring that it’s safe, in the sense that its behavior is actually aligned with what the humans want the future to be like.

And then you’re basically setting up a chess match between us and a system that’s actually much more intelligent than us and has already thought of every possible countermeasure we could try. And so that’s, in a real sense, the loss of human control over the future. So that’s the risk that Stephen Hawking is talking about.

I want to emphasize, the current systems do not present that risk, as far as we know. To the extent that we understand them at all, which is not very much, we think they have some fundamental limitations on their ability to plan their future activities. But at the rate of progress we’re seeing in AI, we need actually to develop methods to ensure that when we build systems that are more powerful than us, we somehow retain power over them forever.

If that sounds like a difficult problem, it’s because it is a difficult problem.

IRA FLATOW: Well, practically speaking, then, what do you expect people to do who are in AI research? Is the horse already out of the barn? And are people willing to listen to the signers of this letter and pause? Or is it, it doesn’t take a lot of fancy lab equipment, like it did with genetic engineering, to move ahead?

STUART RUSSELL: So I think this is a great question. And your example of the genetic engineers is a really good one. Paul Berg, who was one of the organizers of that 1975 workshop, wrote a retrospective in 2008. And the last paragraph says there’s one lesson from Asilomar– which is where they had the workshop– a lesson for all of mankind. And basically, once commercial interests start to dominate the conversation, it will simply be too late.

IRA FLATOW: It’s all about the money.

STUART RUSSELL: It’s all about the money. And often people’s thinking and decision making becomes very distorted when we’re in that situation. There’s an old saying, you can’t get someone to understand something if their livelihood depends on not understanding it. And I think there’s a little bit of that going on here.

In the past, some of the principals, such as Sam Altman, the co-founder of OpenAI, have said that there may come a point when governments need to intervene and impose constraints and basically not release further systems until they meet certain kinds of safety properties. And the petition is simply saying, well, maybe this is that time.

It’s also worth noting that the OECD, which is an international organization that all the advanced Western economies are members of, has issued AI guidelines, called the OECD AI Principles, that have been ratified by all the member states, and that very explicitly say that AI systems have to be robust and predictable, and you have to show that they don’t present an undue risk before you can deploy them. So arguably, all the major governments have already supported the petition that we are making.

IRA FLATOW: In genetic engineering, there are all these ethical guidelines, but there are still people who want to clone a baby. Is there a way to protect against a rogue AI researcher who wants to ignore ethical guidelines?

STUART RUSSELL: That’s a tough one– I mean, for the rogue actor, I think we have to work with the hardware manufacturers. Because they’re the bottleneck. And there’s only a handful. And they’ve already agreed in the past– for example, with digital rights management, that was a global operation to get the hardware manufacturers to implement it. So I think it’s not impossible that we could get safety mechanisms built into hardware, where they just will refuse to run programs that are not certifiably safe.

IRA FLATOW: So where does your mind take you from here? Are you hopeful about the AI future or more fearful than hopeful? I mean, you’ve got to have a little bit of both there, right?

STUART RUSSELL: So I think I’m sort of naturally an optimist. And I’ve been working for about 10 years now on trying to understand how we retain power over systems more powerful than ourselves. That’s what I call the control problem. And I think there’s a feasible path to solving that problem.

Then we’ve got to convince everyone to adopt that approach so that unsafe systems are not created. And then we’ve got to make sure that somehow no one, either deliberately or accidentally, creates an unsafe system and unleashes it on the world. So there’s a lot to do. But I’m cautiously optimistic.

Am I thinking that I’d better hurry up, or we had better hurry up, in nailing down these solutions and getting them into the policy process? I think yes. I think my estimate of when we’ll have powerful AI systems that could present a major control risk has moved closer to the present than it was a few years ago.

IRA FLATOW: Well, from your mouth to AI’s ears, Dr. Russell.

STUART RUSSELL: Thanks a lot, Ira. It’s been nice talking to you.

IRA FLATOW: Dr. Stuart Russell, professor of computer science at Berkeley. He’s director of the Kavli Center for Ethics, Science, and the Public there. And he’s the author of the book Human Compatible: Artificial Intelligence and the Problem of Control. Thanks again for joining us today.

STUART RUSSELL: Thank you. Bye-bye.

Copyright © 2023 Science Friday Initiative. All rights reserved. Science Friday transcripts are produced on a tight deadline by 3Play Media. Fidelity to the original aired/published audio or video file might vary, and text might be updated or amended in the future. For the authoritative record of Science Friday’s programming, please visit the original aired/published recording. For terms of use and more information, visit our policies pages at http://www.sciencefriday.com/about/policies/.

Meet the Producers and Host

About Charles Bergquist

As Science Friday’s director and senior producer, Charles Bergquist channels the chaos of a live production studio into something sounding like a radio program. Favorite topics include planetary sciences, chemistry, materials, and shiny things with blinking lights.

About Ira Flatow

Ira Flatow is the host and executive producer of Science Friday. His green thumb has revived many an office plant at death’s door.
