Could Ordinary Household Objects Be Used To Spy On You?

22:36 minutes

[Image: An abstract 3D computer rendering of a series of ripples. Credit: Shutterstock]

In the movies, if a room is bugged, the microphone might be hidden in a potted plant. But in recent years, researchers have found ways to use the trembling leaves of a potted plant, light glancing off a potato chip bag, and even the tiny jiggles that a nearby conversation causes in the head of a spinning hard drive to listen in on what’s happening in a room, or to gather information about what’s going on nearby.

On a larger scale, other researchers have been able to use the vibrations of an entire building to paint a picture of movements within it—and even the health status of the people inside. 

The approach is known as a side-channel attack: Rather than observing something directly, you’re extracting information from something else that has a relationship with the target. Many of the approaches are not straightforward—they require an understanding of the physics involved, and sometimes heavy data-processing or machine learning to interpret the hazy information yielded by these techniques. 

Jon Callas of the Electronic Frontier Foundation, Hae Young Noh of Stanford, and Kevin Fu of the University of Michigan join host Sophie Bushwick to talk about the risks and opportunities afforded by these sneaky methods of surveillance, and how concerned you should be.



Segment Guests

Jon Callas

Jon Callas is director of technology projects at the Electronic Frontier Foundation in San Francisco, California.

Kevin Fu

Kevin Fu is an associate professor of Electrical Engineering and Computer Science at the University of Michigan in Ann Arbor, Michigan.

Hae Young Noh

Hae Young Noh is an associate professor of Civil and Environmental Engineering at Stanford University in Stanford, California.

Segment Transcript

SOPHIE BUSHWICK: This is Science Friday. I’m Sophie Bushwick.

You know how it works in the spy movies. The room is bugged with a microphone hidden in a potted plant. But what if the plant itself is the microphone? A few years ago, researchers found that video of a plant’s leaves was enough to show the tiny movements that happen when the leaves vibrate in response to sound, and that they could reconstruct the sounds in the room using only silent video of the leaves. And recently, researchers reported that with a lot of processing, they could use video of a seemingly blank wall to reconstruct the shadows of the people inside a room and get an idea of their movements.

In each case, you’re not pointing a microphone or a camera directly at your targets, but you’re using some other object or process to get information about what’s going on inside the room. Call it side channel surveillance. That’s what we’ll be talking about with my guests. Jon Callas is director of technology projects for the Electronic Frontier Foundation. Welcome.

JON CALLAS: Thank you very much.

SOPHIE BUSHWICK: Kevin Fu, associate professor of Electrical Engineering and Computer Science at the University of Michigan. Welcome.

KEVIN FU: Thank you. Glad to be here.

SOPHIE BUSHWICK: And Haeyoung Noh, she’s an associate professor of Civil and Environmental Engineering at Stanford. Thanks for joining me today.

HAEYOUNG NOH: Sure, thank you. Nice to meet you all.

SOPHIE BUSHWICK: So Kevin, I talked about using this plant as a microphone or a wall as a camera. You’ve done work with using a hard drive as a listening device. What do all these things have in common? Is there a way to look at these different projects collectively?

KEVIN FU: Well I think, at least in the research in my laboratory, my students look at how sensors can be synthesized out of everyday objects. And so the group you’re referring to at MIT looked at how to use a potted plant. In our case, we looked at how components inside spinning magnetic hard drives could inadvertently become a synthesized microphone fully capable of reconstructing speech in the room.

SOPHIE BUSHWICK: How does that work? How do you use a hard drive as a listening device?

KEVIN FU: Well, there’s a lot of interesting angles on that. But the short answer is you can think of a hard drive, a magnetic hard drive, as almost like a record player. And there’s effectively a needle. It’s called a head and it moves around.

Now it turns out vibrations in the room cause this head to jiggle just a tiny amount. And so there’s a sensor effectively on the inside to make that head stay in the center of the track, because you don’t want it getting off the center of the track or you will not be able to read the data properly.

And there’s something called the position error signal– how many nanometers off the center of the track that head is. And so by looking at that error, that is, how far off the center of the track, you can effectively synthesize what behaves like the membrane of a microphone. How much that head is vibrating is directly proportional to the sound in the room.
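The mechanism Fu describes can be sketched in a few lines of Python. This is an illustrative simulation, not the actual attack code: it assumes, per his description, that the position error signal is a slow mechanical drift plus a jitter term proportional to the sound in the room, and it recovers the audio by subtracting a moving-average baseline. The sample rate, tone, and noise model are all invented for the sketch.

```python
import math

# Hypothetical illustration: the hard drive's position error signal (PES)
# jitters in proportion to sound pressure in the room, so removing the slow
# mechanical drift from the PES recovers a scaled copy of the audio waveform.

SAMPLE_RATE = 8000          # assumed PES sampling rate (Hz)
TONE_HZ = 440               # a test tone "spoken" near the drive

def simulate_pes(n_samples):
    """Simulate PES readings: sound-induced jitter plus a slow drift."""
    pes = []
    for i in range(n_samples):
        t = i / SAMPLE_RATE
        sound = math.sin(2 * math.pi * TONE_HZ * t)   # room audio
        drift = 0.5 * t                               # slow mechanical drift
        pes.append(3.0 + drift + 0.01 * sound)        # nanometers off-track
    return pes

def recover_audio(pes):
    """Subtract a moving-average baseline to isolate the audio term."""
    win = 64
    audio = []
    for i in range(len(pes)):
        lo, hi = max(0, i - win), min(len(pes), i + win + 1)
        baseline = sum(pes[lo:hi]) / (hi - lo)
        audio.append(pes[i] - baseline)
    return audio

pes = simulate_pes(SAMPLE_RATE)          # one second of readings
audio = recover_audio(pes)               # closely tracks the original tone
```

The key physical fact, per Fu, is that the jitter is directly proportional to the sound in the room; everything else here is ordinary signal processing.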

SOPHIE BUSHWICK: Haeyoung, your work is on a bit larger of a scale. You’re using an entire structure, an entire building even, as a sensor. Tell me about that.

HAEYOUNG NOH: Yes. Yeah, that’s correct. So large structures like the buildings, bridges, and cars around us, we usually think of them as something static and passive, a big chunk of concrete or metal just sitting there. But they are actually continuously interacting with the humans inside and the surrounding environment.

For example, when you’re in the building walking around, your individual footsteps will create small vibrations in the floor, which will propagate through the entire structure. So by capturing these structural vibrations, we can find out a lot of information about you, like who you are, where you are, what kind of activities you are doing, or even your health status or cognitive status.

SOPHIE BUSHWICK: You can get all of that just from somebody’s footsteps?

HAEYOUNG NOH: Yes, that’s what my group has been working on. And when you capture these vibrations and analyze them, there are a lot of unique gait patterns associated with your identity, your activity types, or your health status.

SOPHIE BUSHWICK: Is that something you need vast amounts of training data to know what a normal person walking sounds like?

HAEYOUNG NOH: It depends on what your final goal is. If your goal is to look at what your health status is compared to all the other people, then yes, we’ll need to have a training data set from a large number of other people. But another way to look at this problem is just monitoring your gait pattern and how it has been changing from a week ago or a month ago.

And especially if you are in certain medical treatments, then we can also monitor your gait pattern, what it was like before the treatment started and afterwards, and see how effective the treatment is. So in that case, you don’t need a large training data set. You would just need the data from yourself.
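Noh’s self-baseline idea can be illustrated with a toy sketch. The footstep times and threshold below are invented for illustration, not from her group’s actual pipeline: represent footsteps as detected event times and flag when the mean step interval drifts away from the person’s own earlier baseline.

```python
# Illustrative sketch: monitoring a person's gait against their own
# baseline, so no large training set from other people is needed.
# Footsteps are represented as detected event times (seconds).

def step_intervals(footstep_times):
    """Intervals between consecutive footsteps."""
    return [b - a for a, b in zip(footstep_times, footstep_times[1:])]

def mean(xs):
    return sum(xs) / len(xs)

def gait_change(baseline_times, current_times, threshold=0.15):
    """Flag a gait change if the mean step interval shifts by > threshold (s)."""
    shift = abs(mean(step_intervals(current_times)) -
                mean(step_intervals(baseline_times)))
    return shift > threshold

# Baseline: steady 0.5 s cadence; current: slowed to 0.8 s (e.g. after an
# injury or during a treatment being monitored).
baseline = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
current = [0.0, 0.8, 1.6, 2.4, 3.2, 4.0]
```

Because the comparison is against the same person’s earlier data, the only “training” required is a short baseline recording.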

SOPHIE BUSHWICK: Kevin, one tool that can be used to eavesdrop is a chip bag. Just like the way that a plant vibrates, a chip bag can also vibrate. And it seems like this is an analog object, but it’s being analyzed in a digital way. And a lot of these techniques do seem to involve these intersections between the analog world and the digital world. Can you talk a bit about that?

KEVIN FU: Yeah, the analog world is quite fascinating. It’s having a resurgence because we’ve tried to make everything digital, converting everything to bits and bytes. And sometimes in the engineering field, we forget that beneath all this is fundamental physics.

And so with the chip bag, for instance, when vibration hits a reflective foil bag, it scatters photons differently. And if you have an appropriate light detector, you can begin to discern what the vibration in the room was based upon the changes in the light.
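A hypothetical sketch of what that detection might look like in code (the sample rate and signal here are invented for illustration): the photodetector’s brightness readings carry the bag’s vibration, and even a crude mean-crossing count recovers the dominant vibration frequency.

```python
import math

# Hypothetical sketch of the chip-bag idea: a photodetector samples light
# reflected off the foil; vibration modulates the brightness, so the
# dominant vibration frequency can be read off from mean-crossings.

FS = 2000  # assumed photodetector sample rate (Hz)

def detect_frequency(brightness):
    """Estimate vibration frequency from upward mean-crossings."""
    mean = sum(brightness) / len(brightness)
    centered = [b - mean for b in brightness]
    crossings = sum(
        1 for a, b in zip(centered, centered[1:]) if a < 0 <= b
    )  # one upward crossing per vibration cycle
    return crossings * FS / len(brightness)  # crossings per second = Hz

# Simulate one second of readings: steady ambient light plus a faint
# 121 Hz flutter of the bag driven by sound in the room.
samples = [5.0 + 0.02 * math.sin(2 * math.pi * 121 * i / FS)
           for i in range(FS)]
freq = detect_frequency(samples)
```

Real reconstructions of speech need far more sophisticated processing, but the physical chain is the same: sound moves the foil, the foil modulates the light, and the light carries the signal out of the room.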

And there are all sorts of examples of these bizarre ways that the physics can play out, in that sensors can be synthesized from these kinds of materials even if they weren’t built or designed to be a microphone, for instance.

SOPHIE BUSHWICK: It’s almost like these seemingly innocuous objects are being transformed into digital ones.

KEVIN FU: That’s right. And in fact, a colleague of mine and I, we coined the term transduction attack, where you’re tricking devices into doing sort of unintended transduction of physical phenomena into electrical signals. Sometimes, tricking the sensors into seeing a false reality, but on the other hand, sometimes causing the pickup of signals that you might think ought to be private. And that’s why I personally think students really need to spend time, even if they’re doing for instance programming or computer science– they need to appreciate some of the underlying physics to understand the limits of how some of these things work and how they can fail in bizarre ways.

SOPHIE BUSHWICK: Jon, you’ve been involved in security for a long time. Are these techniques things that people are actually using? Should I be worried about someone hacking into my hard drive and eavesdropping on me? Or is this more of a fun academic exercise?

JON CALLAS: I’m glad you brought that up, because that is, in fact, one of the things that I’m concerned about as well, which is how much of this is practical and how much of it is a demonstration that tells us something about our connected world. And in a lot of these cases, they are, in fact, not particularly practical. Or if somebody wanted to surveil you for real, there would be better techniques for doing that.

We’re at a point now where usable microphones and cameras literally fit inside a button. Devices that have uses like “help me find where I left my keys” can in fact also be repurposed as tracking devices. So there’s some practicality here and some not. And it’s useful for us to study this so that we have an idea about how much we really should be concerned about these things.

SOPHIE BUSHWICK: And Haeyoung, do you worry that some of the research where you’ve got sensors scattered through a building or in a vehicle could create a privacy problem?

HAEYOUNG NOH: Yeah, so these vibration signals certainly contain a lot of information about humans. However, the sensor data we have is indirect sensing data, meaning when you look at the data, it looks like just a bunch of noise. The signal we’re collecting has a very small magnitude compared to the other noise included in the data. So it needs a lot of processing in order to extract the information we want.

So in that sense, it has a lot less privacy concern compared to other existing sensing modalities, for example, vision or sound data. But it is true that with proper processing of the data, there is potentially a lot of information that can be leaked.

SOPHIE BUSHWICK: Is there anything I can do, anything I can do with my environment to really halt this kind of attack? One of the things that I think seems scary about this area of research is it turns all these everyday objects around me into potentially creepy surveilling objects. And short of throwing everything away and living in an empty padded cell, is there anything I can really do to change that?

JON CALLAS: There are lots of obvious things that you can do. For example, a lot of the work that is done is based upon reflections that come off of windows. Double-paned glass, curtains, these sorts of things would reduce some of these issues. It is something that we understand intuitively. If you’ve lived in an apartment building or somewhere similar, you can hear other people around. And I think it is easy to recognize that when we are in any environment, the actions that we take radiate outward. And then being able to modify that by having better soundproofing in buildings, by taking our own measures, can, in fact, reduce these things a whole lot.

SOPHIE BUSHWICK: I’m talking with Jon Callas, Kevin Fy, and Haeyoung Noh about unexpected methods of side channel surveillance. I’m Sophie Bushwick and this is Science Friday from WNYC Studios.

One of the side channel methods I mentioned at the beginning of our conversation was using footage of a blank wall to determine what’s going on in the rest of the room. One of the ways that researchers managed that in this study was training a machine learning algorithm by essentially acting out various activities in a room while filming the wall, so that their system could learn to recognize which patterns of shadows corresponded to which motions, or to the number of people causing the shadows. I was wondering whether, opening this up, any of you could comment on whether side channel techniques in general require a lot of training in that way, or whether there are some that work more immediately?

JON CALLAS: I think it depends upon what physical modality that we’re looking at. Sound has the advantage that it travels very well through most substances and not very well through a vacuum. So it will move through solids better than it moves through air. And that is part of what gets us the effects that we’ve seen on everything from disk drives to potato chip bags.

The light on a wall is something that, yes indeed, you’d want to have a certain amount of training on. It’s also understandable. I am trying to think of what suspense movie, what mystery I saw where the flickering shadows on the wall were a significant plot point. Acting things out is going to be trainable, but it is also going to be hard to figure out exactly what is going on. And defense mechanisms will be there too.

I recently saw a small LED panel that could be programmed to mimic somebody watching television, and the idea would be that you would leave this in a room when you were away from home for a few days. And anybody who walked by your house would see the flickering of television going by, and they would thus think that the house was occupied. And this is a countermeasure for that very sort of thing.

HAEYOUNG NOH: I can add to that question, Sophie. So certainly, collecting a lot of training data for these human activities takes a lot of time and effort. So we’ve been working on developing methods that can reduce these data collection efforts.

For example, there is a lot of domain knowledge, and there are physics-based models that we have developed. For example, we know how waves propagate within buildings. And we have a good idea of what kind of pressure is applied to the floor when a person walks. There are medical models and mechanical models that people have developed over the past couple hundred years.

And recently, with the emerging technology from the data science side, we can certainly analyze these sensor data. But by combining physics-based models from disciplines like mechanics, civil engineering, or medical science with data science, we can actually analyze very noisy data without requiring a lot of training data. So we’ve been developing methods like physics-informed machine learning approaches that do exactly that.
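As a miniature, hypothetical example of the physics-informed approach Noh describes (not her group’s actual models): if physics says vibration amplitude decays exponentially with distance, A(d) = A0·exp(−α·d), then the model has only two free parameters, and a handful of noisy sensor readings suffices to fit it where a generic learner would need far more data.

```python
import math

# Illustrative only: a physics-informed fit in miniature. The physical law
# A(d) = A0 * exp(-alpha * d) fixes the model's form, so two parameters can
# be recovered from a few noisy footstep-amplitude readings.

def fit_attenuation(distances, amplitudes):
    """Least-squares fit of log A = log A0 - alpha * d."""
    n = len(distances)
    xs, ys = distances, [math.log(a) for a in amplitudes]
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    alpha = -sxy / sxx                     # decay rate
    a0 = math.exp(my + alpha * mx)         # source amplitude
    return a0, alpha

# Five noisy amplitude readings from sensors at increasing distances,
# generated from A0 = 2.0, alpha = 0.3 with small multiplicative noise.
d = [1.0, 2.0, 4.0, 6.0, 8.0]
a = [2.0 * math.exp(-0.3 * x) * f
     for x, f in zip(d, [1.02, 0.97, 1.01, 0.99, 1.03])]
a0, alpha = fit_attenuation(d, a)          # recovers roughly (2.0, 0.3)
```

The design choice is the point: the physics supplies the model structure, and the data only has to pin down a couple of coefficients.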

KEVIN FU: So your question raises two interesting points in my mind. One is public policy, one is technical. On the public policy side, the question arises, what is a reasonable expectation of privacy? And when you have machine learning and, effectively, a supercomputer in everybody’s pocket, if you’ve trained this–


KEVIN FU: –you might be able to learn a lot more than a human in the room could learn just through observation. But then second, on the technical side, I’m continuously impressed with how much machine learning can discover through inference. It can also make mistakes. For instance, one of my students built a power outlet that used machine learning on power consumption patterns to learn whether you were infected with malware and also what website you were browsing. So there’s quite a bit of information that can leak. And machine learning training can give you effectively superhuman powers.
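The outlet example gestures at how simple such an inference can be in principle. The sketch below is entirely hypothetical (invented traces and labels, not the actual study): classify an observed power trace by its nearest labeled template, the simplest possible form of this kind of side-channel inference.

```python
# Toy illustration with made-up data: classify a power trace by nearest
# Euclidean distance to labeled templates of past power consumption.

TEMPLATES = {
    "video_site": [45, 60, 58, 62, 59, 61],   # sustained decode load (W)
    "text_site":  [45, 50, 42, 41, 43, 42],   # brief burst, then idle (W)
}

def distance(a, b):
    """Euclidean distance between two equal-length traces."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(trace):
    """Label a power trace with its nearest template."""
    return min(TEMPLATES, key=lambda label: distance(trace, TEMPLATES[label]))

observed = [46, 59, 60, 61, 58, 60]   # noisy measurement of a video session
label = classify(observed)            # matches the "video_site" template
```

A real system would use far richer features and models, but the leak itself requires nothing more exotic than a labeled library of past measurements.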

SOPHIE BUSHWICK: This is Science Friday. I’m Sophie Bushwick. I’m talking with Jon Callas, Kevin Fu, and Haeyoung Noh about unexpected methods of side channel surveillance.

And Kevin, I wanted to revisit something you mentioned. We’ve been talking about using these techniques to observe or to listen in. But you can use similar approaches to change data or to make a digital device do something unexpected. Can you talk about that a bit?

KEVIN FU: Sure. It’s sort of the opposite side of the coin of a side channel. A side channel is about reading, violating confidentiality and privacy. And the opposite side is modifying and injecting false information. You can almost think of it as sort of inception. And so one of the things my laboratory studies is how to defend against malicious injection of signals into sensors.

A couple of years ago, we showed how to use lasers to inject false conversations into voice assistants through glass windows, even from a bell tower, simply by causing minute vibrations on some semiconductors. There’s quite a bit of research on this opposite side of the coin. And I think one of the hardest challenges is how to defend against it, and that’s where we spend quite a bit of time.

There’s many different ways for these systems to fail. Being the defender is a much more challenging job. And the research definitely takes quite a bit of effort to come up with solutions that you can measure and demonstrate to be effective. And there are certainly quite a few solutions that have fallen away that didn’t work as well as we had hoped.

SOPHIE BUSHWICK: And for my final question, I’d like to open this up to all of you. Where do we go from here? Is this going to be a cat and mouse game of surveillance from here on out? Or do you think that we’re going to be able to apply these in some sort of helpful ways like better health monitoring?

JON CALLAS: I’ll say all of the above. We are getting a good deal of health monitoring through the devices that we carry. Very simple things: your phone, carried with you, can measure your step count just from its own internal measurements. We have devices designed specifically for health monitoring, which can potentially identify conditions before they really become apparent.

The question is going to be who has that data and what expectations we have around the use of it. And that is a huge society-wide conversation that we’re only starting to have right now, because everything is collecting data. And the ability to use it is growing exponentially.


HAEYOUNG NOH: Yeah, I agree with what Jon said. We are going to continue developing our technology so we can better understand what human needs are and provide better services. But that comes with the concern of these privacy issues. And there’ll be new ways of trying to leak the information. And then we’ll come up with better ways to defend against those attacks.

For example, we are looking at how we can inject signals, vibration signals, into these structures, so that we can actually reject a lot of attempts to listen in on these vibrations for malicious purposes. So there are new ways being developed in order to protect our privacy. This is always going to be a work in progress. But our final goal is always to better understand human needs and human activities so that we can serve them better.

SOPHIE BUSHWICK: Kevin, do you also think this is a work in progress, that we’re going to have the sort of back and forth be ongoing?

KEVIN FU: I think for these types of technologies, there will always be a dual use. But I think the promise is great, especially in the health space. Now that instead of having to go to a doctor once a year for a physical, you can imagine future medical devices that are more longitudinal, using larger quantities of data.

Of course, at the same time, we need to be mindful of the risks and build in appropriate controls for those risks. That’s why in the laboratory, we’re very concerned with defensive approaches– to enable future technologies to give people the confidence to reduce that risk of privacy invasion and security threats.

SOPHIE BUSHWICK: We’ve run out of time. I’d like to thank my guests. Haeyoung Noh, associate professor of Civil and Environmental Engineering at Stanford. Kevin Fu, associate professor of Electrical Engineering and Computer Science at the University of Michigan. And Jon Callas, director of technology projects for the Electronic Frontier Foundation. Thanks to all of you for talking with me today.

KEVIN FU: Well, thank you for the discussion.

HAEYOUNG NOH: Thank you.

JON CALLAS: That was fun.

Copyright © 2021 Science Friday Initiative. All rights reserved. Science Friday transcripts are produced on a tight deadline by 3Play Media. Fidelity to the original aired/published audio or video file might vary, and text might be updated or amended in the future. For the authoritative record of Science Friday’s programming, please visit the original aired/published recording. For terms of use and more information, visit our policies pages at http://www.sciencefriday.com/about/policies/

Meet the Producers and Host

About Charles Bergquist

As Science Friday’s director and senior producer, Charles Bergquist channels the chaos of a live production studio into something sounding like a radio program. Favorite topics include planetary sciences, chemistry, materials, and shiny things with blinking lights.

About Sophie Bushwick

Sophie Bushwick is senior news editor at New Scientist in New York, New York. Previously, she was a senior editor at Popular Science and technology editor at Scientific American.
