Difficult Brain Science Brings Difficult Ethical Questions
In recent weeks, we’ve told you about efforts to explore and map the human brain through tissue donations, and the troubling tale of a bionic eye implant startup that left users without tech support. The two stories point to different aspects of the rapidly advancing field of neuroscience—and each comes with its own set of ethical questions.
As humans advance in their ability to understand, interpret, and even modify the human brain, what ethical controls are in place to protect patients, guide research, and ensure equitable access to neural technologies?
John Dankosky talks with neurotech ethicist and strategist Karen Rommelfanger, the founder of the Institute of Neuroethics Think and Do Tank, about some of the big ethical questions in neuroscience—and how the field might try to address the challenges of this emerging technology.
Karen Rommelfanger is a neurotech ethicist and strategist and founder of the Institute of Neuroethics Think and Do Tank in Atlanta, Georgia.
JOHN DANKOSKY: This is Science Friday. I’m John Dankosky. In recent weeks on our show, we’ve told you about efforts to explore and map the human brain through tissue donations, and how researchers want tissue samples from all kinds of people to help them better understand the brain’s operation.
We’ve also talked about the troubling tale of a bionic-eye implant company that left users without tech support, and is now pivoting to brain implants. These two stories point to different aspects of the rapidly advancing field of neuroscience and technology. And each comes with its own set of ethical questions.
As humans advance in their ability to understand, interpret, and even modify the human brain, what ethical controls are in place to protect patients, to guide research, and to ensure equitable access to neural technologies?
Joining me now is neurotech ethicist and strategist Karen Rommelfanger. She’s the founder of the Institute of Neuroethics Think and Do Tank, based in Atlanta, Georgia. Welcome to Science Friday. Thanks so much for being here.
KAREN ROMMELFANGER: Thanks for having me.
JOHN DANKOSKY: Neuroethics is a word that many people may not have heard before. So how would you define it?
KAREN ROMMELFANGER: Most simply, neuroethics explains and explores the ethical, legal, and social implications of new advances in brain science and brain technology.
JOHN DANKOSKY: Is there something special about neural research that would set it apart from other kinds of medical science or genetics when it comes to this ethical framework that you work in?
KAREN ROMMELFANGER: Sometimes there are very unique concerns, and sometimes there are concerns that are different in degree versus kind. If you think about having an appendectomy, you don’t necessarily feel like that changes who you are. But if you have part of your brain lost, or part of your brain function lost, you might, fundamentally, feel like you’re changed or that your loved one who has had that experience has changed.
So the ethical implications around any kind of brain intervention really center on a few questions. How might we have altered someone’s ability to keep the inner sanctum of their mind private? How could any of these interventions challenge how we think about ourselves, or our ability to be authors of our own lives?
And also, how might certain kinds of information, and the ability to intervene in our brains, challenge how we make decisions and how people see us as humans?
JOHN DANKOSKY: That’s so interesting. Just to explore this a little bit more: any time you change, augment, or do surgery on the human body, you are, in some ways, essentially changing who you are. But you’re really saying the brain is different.
We know how powerful it is. We know what role it plays, but it is different in that way in terms of how it is linked to our personality, the very us-ness of us.
KAREN ROMMELFANGER: Yeah, when we think about neuroscience and we think about discoveries about the brain, we’re looking into an organ that affords us the very thing that makes us feel like we’re us, maybe the most prized thing we have, our cognitive experience. So there’s something totally different about that.
But in addition to that, there’s something– it’s not just that the brain is special biologically, it’s also special culturally. So you can’t really divorce these biological findings from the cultural meaning that we imbue as a society.
So we have seen really fundamental, old brain technologies, like the EEG, or electroencephalogram, that are used in part to make determinations of death, and that have actually found their way into our court systems.
I study cross-cultural neuroethics, and part of that involves looking at cultural variations in how the brain’s role in identity is defined, in whether the brain is my identity. In typical Western cultures, at the risk of painting with too broad a brush, we really have a fairly separate identification of the brain and the mind.
The brain does one thing. The mind does the other thing. And we see that, even in our divisions in clinical practice with having neurology deal with certain brain diseases, but then we have psychiatry dealing with these mental disorders, which we know are actually tied to the brain. So it’s interesting how we divide this.
But if you look at cultures in East Asia, for example, the term for mind doesn’t really even exist in the way it does for us. In Japanese, there’s this term [SPEAKING JAPANESE], which is the mind, brain, spirit as inseparable from each other. And in Chinese, for example, there is [SPEAKING CHINESE] and in Korean it’s [SPEAKING KOREAN].
And when we look at those societies, we do studies asking publics: if we identified a brain injury or a brain abnormality in an individual, would this alleviate or reduce their culpability and accountability for a crime?
And in Western countries we have more of an inclination to apply those findings in the courtroom, whereas in preliminary surveys in Taiwan, for instance, there is no difference in how one might think about someone’s guilt or culpability.
JOHN DANKOSKY: That’s so interesting. I guess I’m wondering, because of this vast diversity in the way that we consider the brain and the mind culturally, and the vast diversity in the way that we interact with the world, how do you think we should even consider the idea of what normal is, about what standard is, about what neurodivergent is?
KAREN ROMMELFANGER: That’s another great question. The term normal is actually such a dangerous one, and it has eugenicist origins. It was an attempt to separate people: if you look at the bell curve of what we think is normal, we want to cut off the tails. That’s how we’ve tried, repeatedly, to ostracize and separate people.
And what’s really dangerous in doing studies of the brain is that, as scientists, we’re trained to look for certain differences, or even, mathematically, what we consider significant differences. But some of these differences, the lines we use to divide people, are really socially constructed.
So think about gender or race, which are often used in these studies, and which probably hearken back to some of what you talked about with your brain bank studies. These are very dangerous ways to make assumptions about people, and they often reapply biases we already have in society, making them even more dangerous.
And with an organ, and a science of the brain, that is so identified with being human and carries so much cultural meaning, you have an opportunity, a bad opportunity, to reinforce these artificial, invisible lines that we draw around certain groups of people to further disenfranchise them.
So how do you get around that? The way we’ve been advocating is to have inclusive conversations with patient advocates, with lived-experience advocates. Can you get input about how people identify their own range of so-called normal? What is their experience? Can we integrate that, and think carefully about the biases we all inevitably have, and how those might make their way into science?
JOHN DANKOSKY: I’m wondering how these concerns inform research that might help to predict some kind of neural outcome later in life, or a biomarker that’s linked to brain function somehow. I’m thinking about everything from the autism spectrum to development of Alzheimer’s.
KAREN ROMMELFANGER: There is an amazing movement in science in general toward predictive health, and certainly in neuroscience toward predicting risk for developing certain brain diseases or conditions. And this offers us a rich opportunity to develop tools to intervene early, in positive ways, maybe even slowing the progression of certain diseases or shaping more positive outcomes.
So there’s a lot of benefit in doing that work. But the challenge in doing it well is, again, communication. How do you communicate to someone giving their information, or data in this case, what you’re doing? How do you communicate how you’re stratifying people in your analysis?
And how do you communicate the meaning of a risk number, or a risk assessment, for their likelihood of developing autism or Alzheimer’s? Alzheimer’s, especially, is one to think about.
Alzheimer’s is devastating, but what does a prediction that I’m likely to develop Alzheimer’s– what does that give me? There is no cure at this point. There really isn’t a treatment. So have you just given me a sentence for who I’m going to become? Or have you really given me some opportunity to change my lifestyle and do something better?
It’s hard for participants in those studies to evaluate that information. It’s hard for researchers to understand how to return that kind of information, particularly if you participate in those kinds of studies, you want to know. And that’s if you know the study is happening.
But there are lots of studies happening now that try to predict certain patterns, trends, behaviors, and disease development and progression. Right now, there are studies going on trying to understand how to handle mental illness in a post-COVID world, or not even really post-COVID.
And lots of this information can be apprehended, not from a mind reading device, but through the information we give away on a regular basis. Through my social media updates, how am I feeling? What am I thinking? Through our pocket sensors, which are our phones. Our phones have an accelerometer. We have apps where we give information about how well we are or how we’re meditating.
Apple and Amazon are in the health care business now. Everybody is tapping into this wealth of information about people that isn’t necessarily even health information that is being turned into health information.
So we really need to understand how this ecosystem is evolving, and it is so hard for everyone to keep up. This is why we need ethics designed into the very inception of a project, before you even get it off the ground. Have we developed the technology so that it’s accessible to everyone? Have we designed our questions so that they’re fair and don’t simply incorporate our biases?
Have we included our patient population, or user population, so that they are empowered and respected throughout the process and can actually make properly informed decisions?
And in the end, are we sharing with our users what the results are and the limitations of our understanding of those results? And what safeguards do we have in place, so that the inferences being made don’t flow into streams or contexts we don’t want them in, like legal systems, health insurance, life insurance, or our employers?
JOHN DANKOSKY: We’re talking about some of the big ethical questions in neuroscience research and technology with neurotech ethicist and strategist Karen Rommelfanger. This is Science Friday from WNYC Studios.
You talked about some of the tech that we wear in our pocket or on our wrists. As I mentioned in the introduction, people are making technology that interfaces directly with the brain. I’m wondering how this ethical framework ties into the idea that we’re able to put devices into our brains, essentially to change our function. That’s going to create a whole new set of data questions, and a whole new set of ethical questions for us.
KAREN ROMMELFANGER: Yeah, those brain-computer interfaces, or BCIs, have remarkable promise to alleviate suffering and promote wellness in ways that other technologies interfacing with the body might not. Brain-computer interfaces have now restored movement, communication abilities, and independence in people who have suffered strokes or spinal injuries.
And I myself, in my work with deep brain stimulation, have watched in the OR the implantation of electrodes into deep structures of a patient’s brain, a person with Parkinson’s disease, while they were awake, where their uncontrollable tremors were transformed into smooth movements. This is a person who couldn’t hold a spoon to feed themselves before.
Then there are patients with intractable depression, where nothing has worked, who claim they’ve regained their independence and have said that these devices restored their humanity. These are remarkable, tear-jerking feats of science, but they’re still in their infancy.
And so with those kinds of technologies, we have some uncharted territory to think about with data security, identity, and blurred lines between the technology and ourselves, and stigma, as we talked about earlier as it relates to predictive technologies that are forecasting brain disease.
And what’s kind of interesting, and kind of urgent, to think about is these blurred lines of where the technology starts and where we end. Studies have shown that patients start to wonder, am I still the author of my own life? Am I the narrator, or is the technology narrating?
JOHN DANKOSKY: Well, I think all of this gets to my last question for you, and it has to do with the role of ethics and ethicists in all of this. It seems as though, as we develop these technologies, or as we approve them for use at the governmental or regulatory level, that ethical framework needs to be baked in at every step. I guess I’m wondering how we make sure of that, so that we don’t have to be asking, much later, what do we do?
KAREN ROMMELFANGER: John, this is my life’s work, actually. I think the important thing is to figure out what the system is, what the incentive structures of different stakeholders are, what tools are appropriate, and on what timelines you can implement certain neuroethics guidance.
And as a professor, I was actively involved in engaging the next generation of scientists and trying to train them to think about science and ethics as an integrated question. So can we be trained as scientists to think about solving a technical problem, simultaneously with a social problem?
And we do this. We can do this. As an example, when we design wireless technology, we also have considerations of cybersecurity. What similar things can we do for neurotechnology?
And one of the examples we’ve put forth involves EEGs. This is a tool that still is not optimized for scalps that grow coarse, curly, natural hair, as in people of African descent. This is a ubiquitously used technology that doesn’t work for the global majority. How did this happen?
That happens from not having a sociotechnical framing in mind in scientific discovery and practice. Then there are other tools we’re working on with transnational policy organizations. The Organization for Economic Co-operation and Development, for instance, has put forth a lot of guidance on emerging technologies, but neurotechnology is the first one for which they felt compelled to put forward ethical principles.
So now I’m actually working with them to see how we can implement each of these principles, asking exactly the questions you’re raising. What are the tools we have? Some of them are soft-law tools, like guidance or codes of ethics. How might some of these be translated into hard law, like legislation?
And also understanding that laws don’t always fix everything. So we need to use nimble instruments. We need to be creating cultures where people are allowed to think about ethical inquiry on a regular basis.
And finally, I also run a consulting entity where, when I’m not doing the policy and academic work, I roll up my sleeves and get into the weeds with companies, figuring out: what’s your regular protocol like? How can we make this easy and seamless?
JOHN DANKOSKY: Well, we’ve run out of time. Karen Rommelfanger is a neurotech ethicist and strategist. She’s the founder of the Institute of Neuroethics Think and Do Tank in Atlanta, Georgia. Thank you so much for spending some time with us today and grappling with these big issues. I really appreciate it.
KAREN ROMMELFANGER: It’s my pleasure. Thank you.
John Dankosky works with the radio team to create our weekly show, and is helping to build our State of Science Reporting Network. He’s also been a long-time guest host on Science Friday. He and his wife have four cats, thousands of bees, and a yoga studio in the sleepy Northwest hills of Connecticut.