04/26/2024

Visualizing A Black Hole’s Flares In 3D

16:59 minutes

Based on radio telescope data and models of black hole physics, a team led by Caltech has used neural networks to reconstruct a 3D image that shows how explosive flare-ups in the disk of gas around our supermassive black hole, Sagittarius A* (Sgr A*), might look. Credit: A. Levis/A. Chael/K. Bouman/M. Wielgus/P. Srinivasan

The words “black hole” might bring to mind an infinite darkness. But the area right around a black hole, called the accretion disk, is actually pretty bright, with matter compressing hotter and hotter into a glowing plasma as it is sucked in. And amid that maelstrom, there are even brighter areas—bursts of energy that astronomers call flares.

Scientists are trying to better understand what those flares are, and what they can tell us about the nature of black holes. This week in the journal Nature Astronomy, a group of researchers published a video that they say is a 3D reconstruction of the movement of flares around the supermassive black hole at the heart of the Milky Way.

Dr. Katie Bouman, an assistant professor of computing and mathematical sciences, electrical engineering and astronomy at Caltech in Pasadena, California, joins guest host Arielle Duhaime-Ross to talk about the research, and how computational imaging techniques can help paint a picture of things that would be difficult or impossible to see naturally.



Segment Guests

Katherine L. (Katie) Bouman

Dr. Katherine L. (Katie) Bouman is an assistant professor of Computing and Mathematical Sciences, Electrical Engineering and Astronomy at Caltech in Pasadena, California.

Segment Transcript

ARIELLE DUHAIME-ROSS: This is Science Friday. I’m Arielle Duhaime-Ross in for Ira Flatow. When I hear black hole, I imagine an infinite darkness. Maybe it’s something to do with the phrase gravity so strong, even light cannot escape. But the area around a black hole, an area called the accretion disk, is actually pretty bright, with matter compressing hotter and hotter as it’s sucked in.

And amid that maelstrom, there are even brighter areas, bursts of energy that astronomers call flares. Researchers are trying to better understand what those flares are and what they can tell us about the nature of black holes. This week in the journal Nature Astronomy, they published a three-dimensional video that they say is a reconstruction of the movement of flares around the supermassive black hole at the heart of the Milky Way.

Joining me now to talk about that work is Dr. Katie Bouman. She’s an assistant professor of computing and mathematical sciences, electrical engineering, and astronomy at Caltech in Pasadena, California. Welcome to Science Friday.

KATIE BOUMAN: Hi. Thank you so much for inviting me. I’m really excited to be here and tell you a little bit about what we’ve been working on.

ARIELLE DUHAIME-ROSS: It’s great to have you. So first of all, do we know what these flares are, what makes a flare around a black hole?

KATIE BOUMAN: So for years, people have seen that around black holes there are these flares, this extreme brightening of the light. But people were unsure what it could be. There were different kinds of theories. One of those theories was a hotspot, which would be that these compact regions form that become really bright, and then they slowly dissipate as they’re rotating around the black hole. This was debated for a while, but recently there has been more evidence that hotspots could be causing this flare structure.

ARIELLE DUHAIME-ROSS: So basically, this reconstruction was a way to try and figure out what is causing those flares?

KATIE BOUMAN: Yeah. So what I find really exciting is that what we’ve done is tried to take artificial intelligence and physics and combine them in a way to recover the potential 3D structure of what gas looks like during a flaring event around a black hole. And so we made it our goal to try to combine real observational data with our current understanding of black hole physics and modern computational tools from artificial intelligence in order to actually see the 3D structure of what a flare looks like around a black hole, and to see if it looked like what we expect a hotspot to look like.

ARIELLE DUHAIME-ROSS: OK. So that’s fascinating, right? Because it sounds like what you’re telling me is that these images, this video that you guys created, it’s not a, quote unquote, “real” video, right? And it’s not a simulation either. So it’s like a third category. It’s something different.

KATIE BOUMAN: Yeah, exactly. So getting the 3D structure of what the flare looked like is a really, really hard, super challenging problem. Maybe let’s just first go back to how hard it is even just to take a 2D picture of a black hole. You might have seen that, about five years ago, the Event Horizon Telescope Collaboration, of which I’m a part, along with a number of the other authors on this paper, produced the very first picture of a black hole.

ARIELLE DUHAIME-ROSS: I remember. That was a big moment.

KATIE BOUMAN: (LAUGHING) Yeah! It was really exciting! And doing that was really difficult. Black holes are really far away from us, really compact. And so they appear very small in the sky. And so it required that we put together this Earth-sized telescope to see the structure on the scale of a black hole’s event horizon, that point of no return around the black hole.

And so the Event Horizon Telescope had all these telescopes around the world. And they worked together and acted like an imperfect telescope the size of the Earth. And then we computationally combined the information to make a picture. But even that was just a two-dimensional picture of a black hole.

And here our goal is to recover not the 2D, but the 3D structure around the black hole. And so that’s so much harder. And even further, the Event Horizon Telescope uses telescopes located around the world. But here we only had a telescope at one location, the ALMA telescope in Chile. And so from this one stream of data, from one telescope, we had to reconstruct not a 2D picture, but a 3D picture.

And so how is this even possible? Well, again, going back to the Event Horizon Telescope image, we tried to say in that work, let’s make no assumptions about the actual physics of black holes. Let’s not assume it’s a black hole at all. We didn’t want to make any assumptions about what the structure of the image looks like. We wanted to just purely see what the picture was in the sky.

It could have been something that didn’t look like a black hole at all, right? And so because of that, we needed all these telescopes working together. But in this new work here, we said, what if we actually allowed ourselves to bring that physics back in and say not only do we trust that it’s a black hole, but we also trust a lot of the physics that is happening around the black hole, for instance, the way the black hole’s immense gravity bends light and how gas moves around the black hole. If we trust the physics that we’ve built up over decades, then can we see more, even with less data?

And so that’s what we did. We tried to build that physics into our method in order to recover a 3D picture. Let’s first think about things here on Earth that we do 3D reconstruction of. So for instance, let’s say your doctor says you need to go get a CT scan done to see inside your body. So CT stands for Computed Tomography.

So, OK, what happens when you get a CT scan? Well, what happens is you lay inside a machine, and what the machine does is it sends X-rays through your body and takes a picture of what comes out on the other end. But it doesn’t just do this from one direction. It spins around you and takes pictures of your body from all 360 degrees, all possible viewpoints.

And then there are methods that allow you to take those projected images and from them recover the 3D structure. So the idea is we wanted to use a similar approach for the black hole 3D reconstruction. The only problem is we only have one viewpoint, right? We’re never going to be like the CT scanner, which sees multiple views, different angles of the human body. Here we only see the black hole from one direction, here on Earth.

ARIELLE DUHAIME-ROSS: Yeah, we’re extremely limited.

KATIE BOUMAN: Yeah. So we need multiple views to disambiguate the 3D structure. So let’s say instead you get into the CT scanner and the doctor says, oh, no, it’s broken. It’s not able to rotate around you anymore. But they really want to take the scan. So instead the doctor asks you to rotate your body inside of the scanner. And every time you rotate a little bit, the doctor takes a picture from the same direction, right?

ARIELLE DUHAIME-ROSS: Mm-hmm.

KATIE BOUMAN: Well, if the doctor knows exactly how much you rotated each time, then it’s the exact same information we have to do the 3D reconstruction perfectly.

ARIELLE DUHAIME-ROSS: Got it.

KATIE BOUMAN: And so we used a similar idea for this, for the black hole reconstruction. We said, we don’t have other views of the black hole from different orientations. But we have some understanding of how the gas is moving around the black hole. That’s where our black hole physics comes in, right?

And so it’s kind of like asking the patient to rotate in the CT scanner. The black hole is rotating for us. We know how the material’s rotating. And so we can use that information to constrain the 3D reconstruction.

And if that didn’t seem hard enough, there’s another challenge. We don’t actually get the full 2D picture of the black hole from one viewpoint over time. We only see the integrated light coming from it. So it’s like a single flickering pixel, like if the patient in the CT scanner were on the moon or something. It’s all just a blur. So to get around this, we also had to leverage additional properties of the physics, like the polarization of the light, to interpret that single pixel of flickering light as a 3D structure.

ARIELLE DUHAIME-ROSS: Does this mean that this technique only really works because the black hole happens to be rotating?

KATIE BOUMAN: That’s exactly correct, yeah. Because the gas around the black hole is rotating in a predictable way, we can use that information, knowing how it moves from time 1 to time 2 to time 3, to simulate having multiple views of the black hole.
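The rotating-patient analogy can be sketched numerically. Here is a minimal NumPy illustration (not the study’s actual pipeline): if an object rotates by known amounts in front of a fixed detector, the projections it produces are interchangeable with projections of a static object taken from many directions, which is exactly the data standard tomography needs.

```python
import numpy as np

def fixed_detector_view(volume):
    """Project a 2D 'object' onto a detector from one fixed direction
    (sum the emission along the line of sight, axis 0)."""
    return volume.sum(axis=0)

rng = np.random.default_rng(42)
obj = rng.random((8, 8))  # stand-in for the glowing gas

# One fixed viewpoint, but the object rotates by a KNOWN amount each time
# (90-degree steps here, so no interpolation is needed).
views = [fixed_detector_view(np.rot90(obj, k)) for k in range(4)]

# Because the rotation is known, these are equivalent to projections of the
# *static* object from four different directions -- exactly what a CT
# scanner with a working, rotating gantry would have collected.
assert np.allclose(views[1], obj.sum(axis=1))        # 90-degree view
assert np.allclose(views[2], obj.sum(axis=0)[::-1])  # 180-degree view
```

The real problem is far harder, since the rotation of the gas is known only through a physical model rather than exactly, but the information-counting argument is the same.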

ARIELLE DUHAIME-ROSS: This video is a reconstruction based on what we know about the physics of black holes right now, which means it could change as we learn more. Is it sort of odd to know that it might not represent the full picture?

KATIE BOUMAN: Well, I think that– so I work in a field called computational imaging. And it’s all about how do we form pictures when we don’t just rely on optics, but we also allow ourselves to put in computation and models and underlying assumptions? And I think it’s really important that we don’t just restrict ourselves to results that are only achieved by building new optics and new telescopes that don’t involve any computation because then we’re limiting ourselves.

But here we’re saying, OK, what if we allow ourselves to move on that spectrum from very little assumptions to much stronger assumptions, as long as we are very honest with ourselves about what those assumptions are and how they can bias our solution? We’re able to do so much more if we just give ourselves that freedom to add back in those assumptions, of course, with the understanding that the result is only true up to our belief in those assumptions.

ARIELLE DUHAIME-ROSS: Tell me where AI comes in with this work.

KATIE BOUMAN: Yeah. So in this work, we made use of this really cool new computational tool that has kind of taken the computer vision and graphics area by storm. It’s called NeRFs, or Neural Radiance Fields. And the basic idea of a NeRF is that instead of representing a 3D volume as a bunch of different 3D pixels called voxels, we can represent it as a neural network.

So imagine that the space around you was split up into lots of cubes, kind of like a room-sized Rubik’s cube, with lots and lots of little cubes. And this is the original way of representing 3D space. We take each little cube and assign a value to it with the color of the object inside of it.

But there are two disadvantages to this. First is that representation is discrete. So you might have a cube that’s on the boundary of an object. For instance, the cube overlaps with like a black mug and a white table it’s sitting on. So what should the value of the cube be? Should it be white or black or gray?

ARIELLE DUHAIME-ROSS: Right.

KATIE BOUMAN: None of these answers are great. So first is we would like to represent colors in a space in a continuous way, where we don’t have these difficulties at the boundary. And the second disadvantage of the cube or voxel representation is that it’s really inefficient. So in the universe, objects are usually continuous. Yes, there was that transition from the black mug to the white table, but most of the time the value of the cube will be similar to its adjacent cube.

So adjacent cubes will land on the same object and the same color. They’ll have similar values. So we’re wasting a lot of resources representing every cube in space independently, even though we know that most of the time we can get away with large regions of space being represented by just one number.

So these neural networks called NeRFs help get around both of these issues. Rather than solving for all the cubes in space around the black hole, we solve for the parameters of a neural network that leads to a continuous 3D space. And we’re parameterizing that space with a neural network. And why is that so important to our problem? Well, we have very, very little information that we’re working off of.

And so we want to try to encourage our solution to be smooth. And the NeRF helps in this, making it possible for us to not just find a solution, but find a solution that is reasonable because it’s smooth. So actually the video that we reconstruct, the 3D reconstruction itself, is a neural network.
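The voxel-versus-neural-field contrast can be made concrete with a toy sketch. This is illustrative only: it uses a tiny, randomly initialized MLP standing in for the trained network the team actually fit to data, and the grid size and layer widths are arbitrary choices.

```python
import numpy as np

# A voxel grid stores one value per discrete cell:
voxels = np.zeros((64, 64, 64))  # 262,144 numbers for a 64^3 cube

def voxel_query(p):
    """Look up the nearest cell for a point p in [0, 1)^3 -- discrete,
    so points near a boundary get an arbitrary cell's value."""
    i, j, k = np.clip(np.floor(p * 64).astype(int), 0, 63)
    return voxels[i, j, k]

# A neural field instead stores *network weights* and maps any continuous
# coordinate (x, y, z) to a value.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)), np.zeros(1)

def field_query(p):
    h = np.tanh(p @ W1 + b1)   # hidden layer; tanh makes the field smooth
    return (h @ W2 + b2)[0]    # emitted-light value at point p

# Any real-valued coordinate is valid -- no cells, no boundary ambiguity:
v = field_query(np.array([0.1234, 0.5678, 0.9]))

# And the parameter count is tiny compared with the voxel grid:
n_params = W1.size + b1.size + W2.size + b2.size  # 3*32 + 32 + 32*1 + 1 = 161
```

In the actual method the weights are not random: they are optimized so the field, pushed through the known black hole physics, reproduces the observed data, and the network’s built-in smoothness regularizes the severely underdetermined reconstruction.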

ARIELLE DUHAIME-ROSS: So what did you figure out based on this reconstruction? What did you learn?

KATIE BOUMAN: Yeah. So we’ve assumed a bunch of physics in getting this 3D reconstruction. But one thing that we have not assumed that we really left completely open is what that 3D structure of the gas looks like around the black hole. So we could have reconstructed anything. It could have been like light scattered everywhere, no structure at all, just a mess.

So even though we constrained some of the physics, we allowed it to have arbitrary structure of what the gas looked like around the black hole. And so now, if you actually look at what we recovered, we see two bright spots about 75 million kilometers from the black hole. That’s about half the distance between us and the sun.

And so around that distance, two bright spots appeared right after a bright flare. And as time progressed, those two bright spots spread out as they rotated around the black hole. And so this compact structure that we got actually aligns with current theory about what could cause flares, these hotspots. They look amazingly similar to a lot of simulations of black holes, but it’s one thing to have it in theory and another to actually see it from observation. So to me, that was very exciting.

ARIELLE DUHAIME-ROSS: For somebody who’s not very connected to black holes, who maybe doesn’t immediately feel a sense of wonder, what would you say so that they could understand why we really do need to understand this stuff?

KATIE BOUMAN: Well, I would say I come more from the computer science and electrical engineering areas. So I originally was not a physicist or an astronomer. And what really grabbed my attention with black holes, and why I’m just so inspired by them, is the mystery that surrounds them and the idea that, like, black holes should be invisible, right?

How can we see a black hole, or how can we understand what is going on around a black hole that’s 26,000 light-years away from us? How do we image the invisible? How do we use combinations of amazing instruments and computation, and bring these things together? How does that allow us to see things that seem like they should be impossible to recover?

ARIELLE DUHAIME-ROSS: Right.

KATIE BOUMAN: And so even if you’re not excited by black holes themselves, I think the idea is that by bringing these approaches together, we’re able to do things that seem impossible. And to me, that’s the most exciting thing.

ARIELLE DUHAIME-ROSS: I’d agree. And it’s a chance to advance the technology as well, right? So I do have to ask if you’re coming at this from a computer science angle, how is it that you were able to publish work on black hole physics, if you don’t have an astronomy background?

KATIE BOUMAN: So it’s not one of those projects where you come up with a method and you throw it across the fence to your scientists, and you tell them, hey, use this method that I developed. This result really required people working together to build a method that incorporated both AI and physics seamlessly. They were really working together.

So Aviad Levis, who is currently a member of my team, but soon will be a faculty member at the University of Toronto, led this paper, and he brought together an awesome team. Andrew Chael and Maciek Wielgus brought the black hole expertise that was necessary to incorporate the physics that we needed to leverage. And Pratul Srinivasan brought the amazing insight he had gained from developing the NeRF method originally.

And so Aviad had worked closely with both groups of people in incorporating both these state-of-the-art methods and our predicted physics to achieve this result. And so to me, that’s the biggest achievement of all. There’s obviously some really cool science that came from this. But to me, that’s what is most exciting, that this was a true interdisciplinary collaboration you don’t see every day and that allowed us to get this really exciting result.

ARIELLE DUHAIME-ROSS: Yeah. It really does take an entire team. As computer analysis and AI models get better, is there less of a need for images that humans can actually see and interpret visually?

KATIE BOUMAN: It’s such a great question. I think that a picture is worth a thousand words, as the old saying goes. And I think that it’s so true, right? You can have points on a plot. But it’s another thing to see it, to see a picture.

And I think that the black hole image of M87 that came out five years ago is kind of evidence of that. People had predicted that there was a black hole for many years. But it’s one thing to have points on a plot and another to see a picture of this dark body with the gas surrounding it. And so I think that it just helps us so much in understanding.

And so similarly here, in my work I’m interested in how do we take the limited data that we have and from it get visual representations and construct imagery of what it is that we see? And I think that you can argue that points on a plot might give you the same amount of information. But I think it is just a totally different experience to see a picture.

ARIELLE DUHAIME-ROSS: Dr. Katie Bouman is an assistant professor of computing and mathematical sciences, electrical engineering, and astronomy at Caltech in Pasadena, California. Thank you so much for talking with me today.

KATIE BOUMAN: Yeah. Thank you so much.

Copyright © 2024 Science Friday Initiative. All rights reserved. Science Friday transcripts are produced on a tight deadline by 3Play Media. Fidelity to the original aired/published audio or video file might vary, and text might be updated or amended in the future. For the authoritative record of Science Friday’s programming, please visit the original aired/published recording. For terms of use and more information, visit our policies pages at http://www.sciencefriday.com/about/policies/

Meet the Producers and Host

About Charles Bergquist

As Science Friday’s director and senior producer, Charles Bergquist channels the chaos of a live production studio into something sounding like a radio program. Favorite topics include planetary sciences, chemistry, materials, and shiny things with blinking lights.

About Arielle Duhaime-Ross

Arielle Duhaime-Ross is a science reporter for The Verge in New York, New York.
