Advances In Brain-Computer Interfaces For People With Paralysis
17:24 minutes
An evolving technology is changing the lives of people with paralysis: brain-computer interfaces (BCIs). These are devices implanted in the brain that record neural activity and translate those signals into commands for a computer. This allows people to type, play computer games, and talk with others just by thinking, giving them more freedom to communicate.
For decades, this technology has looked like a person controlling a cursor on a screen. But the work has advanced, and in a recent breakthrough, a person with paralysis in all four limbs was able to fly a virtual quadcopter with remarkable precision by thinking about moving their fingers.
Another area of BCI research involves speech. Recent work has shown promise in allowing people with vocal paralysis to “speak” through a computer, using old recordings to recreate the person’s voice from before their paralysis.
Joining Host Flora Lichtman to discuss the state of this technology, and where it may be headed, are Dr. Matthew Willsey, assistant professor of neurosurgery and biomedical engineering at the University of Michigan, and Dr. Sergey Stavisky, assistant professor of neurosurgery and co-director of the Neuroprosthetics Lab at the University of California, Davis.
Dr. Matthew Willsey is an assistant professor of neurosurgery and biomedical engineering at the University of Michigan in Ann Arbor, Michigan.
Dr. Sergey Stavisky is co-director of the Neuroprosthetics Lab at the University of California, Davis in Davis, California.
FLORA LICHTMAN: This is Science Friday. I’m Flora Lichtman.
An evolving technology has the potential to change the lives of people with paralysis. The tech is called brain-computer interfaces. They’re devices that are implanted in the brain and record neural activity and translate those signals into commands for a computer. This allows people to type, play computer games, and talk with others just by thinking.
Today we’re checking in on this technology and where it’s headed with two researchers at the front lines of this work. Dr. Matthew Willsey is an assistant professor of neurosurgery and biomedical engineering at the University of Michigan in Ann Arbor, and Dr. Sergey Stavisky is an assistant professor of neurosurgery and co-director of the Neuroprosthetics Lab at the University of California, Davis. Welcome, both of you, to Science Friday.
SERGEY STAVISKY: Thank you.
MATTHEW WILLSEY: Yeah, thank you.
FLORA LICHTMAN: Matt, I want to start with you. You work on technology that lets people control objects on a screen by thinking. Does that sound right? Give me the 10,000-foot view of what you do.
MATTHEW WILLSEY: Yes, that’s accurate. So the research that I work on is aimed at people with paralysis, people who typically can’t move their arms or their legs and have no way to control these devices with the movements that you and I would use. And so what we can do is actually place electrodes into people’s brains through brain surgery, interpret what they’re trying to do, and use that signal to control devices on the computer screen.
FLORA LICHTMAN: Tell me about this recent paper where you had a participant with paralysis in all four limbs who was able to control what looks kind of like a drone in a video game just by thinking.
MATTHEW WILLSEY: Yeah, so this is a person who had a spinal-cord injury. He was implanted by Jaimie Henderson at Stanford University in 2016. What we did is we used an electrode system that could be implanted into the brain itself. We then recorded the signals coming out of the brain and created a method to interpret what he was trying to do with his fingers, and then used that finger control to control a virtual quadcopter, much the way someone with normal hand movement would use a video-game controller.
FLORA LICHTMAN: So you’re reading the hand-movement signals?
MATTHEW WILLSEY: That’s right. So the person would think, OK, I want to move my thumb in this direction, and when he did that, we would record the signals from the brain and say, oh, he was trying to move his thumb in this direction. Our computer software would learn that pattern and then move a virtual hand’s thumb in the direction he was attempting to move his actual thumb, which was paralyzed.
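To make that decoding step concrete, here is a minimal sketch in Python of how binned neural firing rates might be mapped to an intended thumb velocity with a simple linear (ridge-regression) decoder. This is an illustration of the general approach, not the study’s actual pipeline, and the channel count, bin size, and variable names are all assumptions.

```python
# Illustrative sketch only: map binned firing rates to intended 2D thumb velocity.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_bins, n_channels = 5000, 96          # assumed 20 ms bins, ~100-electrode array
firing_rates = rng.poisson(5, size=(n_bins, n_channels)).astype(float)
intended_velocity = rng.standard_normal((n_bins, 2))   # stand-in for cued thumb velocity (x, y)

# Calibration: the participant attempts cued movements while neural activity is
# recorded, and a linear map from neural features to intended velocity is fit.
decoder = Ridge(alpha=1.0).fit(firing_rates[:4000], intended_velocity[:4000])

# Online use: each new bin of firing rates yields a velocity command that could
# drive a virtual thumb (and, downstream, something like a virtual quadcopter).
new_bin = firing_rates[4000:4001]
vx, vy = decoder.predict(new_bin)[0]
print(f"decoded thumb velocity: ({vx:.2f}, {vy:.2f})")
```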
FLORA LICHTMAN: And is it instantaneous, the translation?
MATTHEW WILLSEY: The translation is not exactly instantaneous but very close. We call it real time, so within tens of milliseconds.
FLORA LICHTMAN: And what’s the application?
MATTHEW WILLSEY: So that’s a good question. For many of these people, they can’t feed themselves or even, necessarily, make a phone call. In the past, we’ve focused on trying to restore activities that we, as the medical community, think are important to them. But when you really ask people what their unmet needs are, a lot of what they’re missing are things like leisure activities, or ways they can interact with the able-bodied community on an equal footing, without a deficit or without paralysis.
And so this participant was extremely passionate about flying. And so the idea to control a virtual quadcopter was actually the participant’s idea and one of the reasons why he wanted to enroll in the study. He would describe it in a way that was like, well, since my injury, this will be the first time that I can figuratively rise up out of my bed and interact with the world. And so it was very moving to him.
And so it was kind of his idea. And so we created a game for him to fly a virtual quadcopter through an obstacle course, and he would try and fly it through and get personal record times. And when he did, we’d all celebrate, and he would send clips of this video to his friend. It was a very humanistic moment.
FLORA LICHTMAN: That’s cool. Sergey, how is your work different from Matt’s?
SERGEY STAVISKY: Yeah, so we’re using very similar technologies and techniques, but for different applications. So now, instead of decoding attempted finger movements, which you can use for handwriting or playing games or flying a quadcopter, as Matt just described, we are putting the electrodes in a slightly different part of the brain, the speech motor cortex, which is what normally sends commands to the muscles that we use for speaking: the jaw, the lips, the tongue, the diaphragm, the larynx, or voice box. And we’re decoding the neural correlates of when someone’s trying to speak. So we recently had a participant, a man in his 40s with ALS, a neurodegenerative disease that has left him unable to speak intelligibly. So he has a form of vocal-tract paralysis.
These same types of electrodes were implanted by Dr. David Brandman, the neurosurgeon I collaborate with here at UC Davis, in his speech motor cortex. And as he tries to speak, we pick up the activity from those several hundred neurons we can detect. We run them through a bunch of algorithms that decode the phonemes. So these are like the sound units that he’s trying to say. And those get strung together into words and sentences that appear on the screen in front of him and then are said out loud by the computer in what actually sounds like his voice because we have some old recordings, some podcasts that he’s done in the past that we were able to train a text-to-speech algorithm to sound like him. So you can think of it as part of the same family of technology, but now instead of decoding hand movements, we’re decoding speech movements and using that to communicate.
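As a rough illustration of the pipeline described here, the toy Python sketch below turns per-frame phoneme probabilities (hard-coded fake numbers) into a word and hands it to a text-to-speech step, represented by a print statement. This is not the UC Davis system; the phoneme set, lexicon, and frame length are all assumptions for the sake of the example.

```python
# Toy sketch: collapse per-frame phoneme probabilities into a word, then "speak" it.
import numpy as np

PHONEMES = ["_", "HH", "EH", "L", "OW"]       # "_" = blank/silence (assumed toy set)
lexicon = {("HH", "EH", "L", "OW"): "hello"}  # assumed toy lexicon

# Pretend these are decoder outputs: one row of phoneme probabilities
# per short window of neural activity.
frame_probs = np.array([
    [0.1, 0.8, 0.05, 0.03, 0.02],
    [0.1, 0.1, 0.7, 0.05, 0.05],
    [0.1, 0.05, 0.05, 0.7, 0.1],
    [0.1, 0.05, 0.05, 0.1, 0.7],
    [0.9, 0.02, 0.03, 0.02, 0.03],
])

# Greedy decode: take the most likely phoneme per frame, drop repeats and blanks.
best = [PHONEMES[i] for i in frame_probs.argmax(axis=1)]
collapsed = []
for p in best:
    if p != "_" and (not collapsed or collapsed[-1] != p):
        collapsed.append(p)

word = lexicon.get(tuple(collapsed), "".join(collapsed).lower())
print(word)   # in the real system, a voice model trained on the participant's
              # old recordings would say this out loud
```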
FLORA LICHTMAN: How does the device distinguish between inner thoughts, like your inner monologue, and speech?
SERGEY STAVISKY: Oh, that’s a really good question. So when we started this, we thought it would basically not get any inner thoughts because we’re recording from the part of the brain that sends commands to the muscles. This is not the language network. This is the speech motor cortex. So think of it like the last stop on the way from thought to muscle movements. And so as the person is trying to speak, this area is very active, and that’s all clearly been borne out by the data.
That said, there is a new study that will come out soon from our colleagues at Stanford, actually the same lab that both Matt and I trained in, where they found that there are little murmurs of imagined speech. But suffice it to say that our system can distinguish between an inner voice, that inner monologue, and the attempt to speak. So it has not turned out to be a problem. You can think of it as only activating when the person is actually trying to talk.
FLORA LICHTMAN: I asked because it feels like it raises questions about privacy: what if you end up saying things you didn’t mean to say?
SERGEY STAVISKY: Right. That was really important. So one of the things that we tested extensively, before we enabled the system to be used 24/7 at home by a participant without the research team there, was whether, for example, it would be activated when he’s just imagining or planning to speak. And the answer was no. Would it be activated when he’s hearing speech? I mean, you can imagine how annoying it would be if the radio is on and your brain-computer interface is basically transcribing what you’re listening to because it’s activating the same part of the brain. That also turned out not to be the case.
So, really, this part of the brain is most active when the user is trying to speak. And so we get a lot of that privacy and reliability kind of for free. But it does take a little bit of careful design of the algorithms.
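One way to picture that gating: decoded text is only released when a speech-attempt detector fires. The sketch below uses a crude threshold on average population firing rate as that detector; the real system’s logic is more sophisticated, and every number here is made up for illustration.

```python
# Illustrative gate: emit decoded text only when a speech-attempt detector fires.
import numpy as np

rng = np.random.default_rng(1)
BASELINE_RATE = 5.0      # assumed resting firing rate (spikes/s per channel)
GATE_THRESHOLD = 1.5     # emit only if activity is 1.5x the resting baseline

def attempted_speech(pop_rates: np.ndarray) -> bool:
    """Crude gate: is average activity well above the resting baseline?"""
    return pop_rates.mean() > GATE_THRESHOLD * BASELINE_RATE

listening_to_radio = rng.poisson(5, 96)   # near-baseline activity
trying_to_speak = rng.poisson(12, 96)     # strong speech-motor activity

for label, rates in [("radio on", listening_to_radio), ("attempting speech", trying_to_speak)]:
    if attempted_speech(rates):
        print(f"{label}: gate open, decoded text would be spoken")
    else:
        print(f"{label}: gate closed, nothing is output")
```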
FLORA LICHTMAN: What was it like for you to see your patients unlock abilities that they had lost?
SERGEY STAVISKY: It was amazing to see the years of work that we had put into making the speech neuroprosthesis actually work. So that first day when our first participant was plugged in, we saw the brain signals, saw that there were good brain signals, clear measurements. And then, as he tried to speak those words that were appearing on the screen in front of him, we could see the joy in his eyes. And his wife and child were there in the room watching it happen, and there were tears of joy and high fives and hugs all around. It was wonderful.
FLORA LICHTMAN: I’m sure it’s why you do what you do.
MATTHEW WILLSEY: Absolutely.
FLORA LICHTMAN: Matt, what do these BCIs look like? How big are they? What should I picture?
MATTHEW WILLSEY: Yeah, so the actual chip that goes into the brain for the ones that we use is about the size of your thumbnail, and it looks kind of like– there’s a flat portion like a thumbtack, but instead of just a single thumbtack, there’s about a hundred thumbtacks that are very, very small. And this device is then implanted into the surface of the brain, and there’s a gold wire that exits through the bone, goes underneath the scalp, and then the gold wire connects to a pedestal, which goes through the skin. And then a connector can attach to the pedestal, which connects the whole system to a computer.
FLORA LICHTMAN: What about for you, Sergey?
SERGEY STAVISKY: Yeah, so we’re using the same types of electrodes, so everything Matt described holds true. Once those signals come out of the brain, they travel over what is literally an HDMI cable to a little box that sends them to a bunch of computers. And, really, with pretty simple engineering, that could be made much smaller, so the external component could just be a single laptop or a computer.
And like Matt said, there are now multiple startups developing various forms of these electrodes that are going to be fully implanted and fully wireless. And so I think in the near future, instead of thinking of a wire coming out of someone’s head to a bunch of computers on a cart, think of it as you don’t even see anything, kind of like a pacemaker. It’s transmitting data, maybe to something in their pocket, and that’s sending it to a bigger computer somewhere else or to the cloud. That’s not here yet, but I think, very soon, that’s going to be the reality.
FLORA LICHTMAN: Can you give me a sense of the scope of use? How many people have brain-computer interfaces?
SERGEY STAVISKY: So I would say roughly 50 people worldwide that we’re aware of have had systems like this.
FLORA LICHTMAN: Can you consult your doctor and ask for one?
MATTHEW WILLSEY: That’s a very good question. Yeah, the devices that Sergey is describing, which require brain surgery to insert them into the brain, are investigational devices that are part of research studies. Implantable brain-computer interfaces are still not available for widespread clinical use, but the technology is currently being developed.
We’re describing mostly systems that are implanted directly into the brain, but there’s a whole variety of systems you could consider. Some go directly into the brain, some lie on top of the brain, and some place leads on the surface of the skin. Now, the capabilities of these devices differ, depending on how close you can get to the brain signals themselves. But you could see a world where brain-computer interfaces come in many different forms, and the device you’d want to use would depend on what you need it for.
FLORA LICHTMAN: Well, I want to talk about this a little bit. I mean, you two are both at academic institutions. Are private companies developing this technology?
SERGEY STAVISKY: Yeah, there are several private companies. Some of the more well-known ones include Neuralink, Paradromics–
FLORA LICHTMAN: Neuralink is Elon Musk’s company.
SERGEY STAVISKY: That’s right. And Precision Neuroscience, and Echo Neuroscience, which was founded by Eddie Chang, who’s a pioneer in this field. And there are several others, like Synchron, which takes an interesting endovascular approach: it goes in through the veins, so it’s arguably less invasive.
FLORA LICHTMAN: Well, what is the market for this?
MATTHEW WILLSEY: That’s a great question. It depends on what problem you’re trying to fix. For example, when I’m looking at people with paralysis, studies show that somewhere on the order of 5 million people in this country have some form of motor paralysis. It can come from a variety of causes: spinal-cord injury, but also stroke.
For Sergey’s– and I’ll let him comment on this. But for Sergey’s use case, which is for people that have difficulty producing speech, it’s a different market.
SERGEY STAVISKY: Yeah, so for vocal-tract paralysis, I believe in the US it’s roughly 20,000 people a year. Most of that would be people with ALS, and then also some forms of subcortical stroke. But there are efforts now toward building not speech neuroprostheses but language neuroprostheses, which could help people who have lost the ability to speak due to more common types of stroke. That is very early days; it hasn’t been done yet. But our clinical trial, our collaborators, and other groups are starting to think about whether we can go even more upstream, toward language brain areas as opposed to speech motor brain areas. And that could potentially help hundreds of thousands, if not millions, of people.
FLORA LICHTMAN: How so?
SERGEY STAVISKY: So the idea is this: in the pathway from a thought, to the specific words you’re trying to say, to the actual sounds, what we’ve done so far works on that last step. Someone knows exactly what they’re trying to say. Those words are in the speech motor cortex, but they’re not reaching the muscles, and that’s the step a speech neuroprosthesis helps with.
If we go one step back, from the idea or concept to the exact words, there are various types of language disorders where that connection is broken. That often happens after a stroke, and it affects millions of people. With some types of strokes, we think the language information, the thought, the semantic information, is still there upstream. And so an active area of new research is whether we can identify the neural signals corresponding to the idea or the meaning of what someone’s trying to communicate and start to decode that from their brain. But that’s going to take a while.
FLORA LICHTMAN: Do you have a timeline in mind for when these devices might be more broadly available for people who have paralysis or who have lost speech?
MATTHEW WILLSEY: Yeah, that’s a great question. I’d be very interested to hear what you think, Sergey, but my estimate would be somewhere on a 10-to-15-year timeline for when we could hope for FDA clearance to use one of these devices clinically, so that you could go to your doctor and say, hey, I’m having this problem, and they might say, this is a potential therapy for you.
SERGEY STAVISKY: I think I might be a little bit more optimistic. I would hope that in five years, we might see market approval. But even before that, there will be larger clinical trials, so it might be much easier for someone who wants one of these devices to enroll in the trial.
FLORA LICHTMAN: I think for people who are not in the field, you can imagine the dystopic sci-fi concerns about these devices getting hacked or reading your mind. For you two who are experts, are there things that concern you, and what are they?
SERGEY STAVISKY: I think we do need to be careful to build in cybersecurity and privacy features. That said, I think these are very solvable engineering problems. For now, we’re still at this research phase where just getting it to work at all is really, really hard, and we’re excited that we’re doing that. So I think it’s a little bit of a step away to kind of worry about the dystopian applications of it, but you are right that we should be thinking about it.
FLORA LICHTMAN: I love to worry about dystopian applications. I think we should always be worrying about dystopian applications.
SERGEY STAVISKY: I don’t disagree.
MATTHEW WILLSEY: I just want to echo what Sergey said. We’re super excited when these devices work and do what we’re intending for them to do. And some of these more nefarious applications seem like they require a lot more capability than we’re able to provide at the moment, but it’s important, I think, to be open and transparent about what these devices are capable of. And we just have to be diligent and take it one step at a time and be open and transparent with the community so that we can work together as a team.
FLORA LICHTMAN: I mean, give us a sense of where this technology is. I mean, do you feel like you’re really on the cutting edge? Have things changed a lot in the last decade?
SERGEY STAVISKY: Absolutely. A decade ago, the state of the art was someone moving a computer cursor by trying to move their hand, as decoded from their brain activity. Getting it so they could reliably click on buttons or type letters on a virtual keyboard was amazing. There were high-impact papers that everyone was super excited about, because suddenly participants could hit every button correctly instead of missing half of them.
We went from that 10 years ago to speaking with 98% accuracy, or flying a drone in 3D space plus rotations, or, in other applications, people walking again after spinal-cord injury or feeding themselves with a brain-controlled robot arm. So the pace over the last 10 years has been absolutely incredible. We’re in this field, so maybe we’re biased, but it feels like one of the more exciting areas of medical science.
FLORA LICHTMAN: That’s about all the time we have for now. I want to thank you both for joining me today.
SERGEY STAVISKY: You’re very welcome.
MATTHEW WILLSEY: Oh, it’s my pleasure. Yeah, thank you.
FLORA LICHTMAN: Dr. Matthew Willsey is assistant professor of neurosurgery and biomedical engineering at the University of Michigan in Ann Arbor, and Dr. Sergey Stavisky is an assistant professor of neurosurgery and co-director of the Neuroprosthetics Lab at the University of California, Davis.
Copyright © 2025 Science Friday Initiative. All rights reserved. Science Friday transcripts are produced on a tight deadline by 3Play Media. Fidelity to the original aired/published audio or video file might vary, and text might be updated or amended in the future. For the authoritative record of Science Friday’s programming, please visit the original aired/published recording. For terms of use and more information, visit our policies pages at http://www.sciencefriday.com/about/policies/