A Mood Ring for Your Wrist
People who have Asperger’s syndrome or who experience extreme anxiety during social interactions can benefit from having more explicit clues about others’ emotions. To that end, a team at MIT’s Computer Science and Artificial Intelligence Laboratory has developed a wearable device that can “read” the emotions in a conversation between the wearer and someone else. Science writer Daniel Oberhaus, who covered the story for Motherboard, shares the good and the bad of this new development in emotionally intelligent A.I.
Daniel Oberhaus is a science writer for Motherboard. He’s based in Phoenix, Arizona.
IRA FLATOW: And now it’s time to play Good Thing, Bad Thing–
[DRUM AND SHAKER SOUNDS]
IRA FLATOW: –because every story has a flip side. Ever been at a party, talking to a new friend, and wishing you had a better sense of whether they were happy, sad, or bored by what you were saying? Well, in an episode of the CBS show The Big Bang Theory, one of its main characters, Sheldon Cooper, was given a device that would help him read the emotions of people in the room. And for those with severe social anxiety, or even Asperger’s syndrome, reading those subtle social cues can be a genuine struggle.
And in a classic case of life imitating art, now a team at MIT’s Computer Science and Artificial Intelligence Lab has a possible solution, a wearable wrist device that could clue users in to how their conversation partners are feeling. Here to explain, and share some of the good and bad of that, is Daniel Oberhaus, science writer based in Phoenix. He covered the story for Motherboard. Welcome to Science Friday.
DANIEL OBERHAUS: Hey, thanks for having me.
IRA FLATOW: How did these researchers teach their AI to read emotions?
DANIEL OBERHAUS: Yeah, so what they did was they had 31 undergraduates come into their lab at MIT. And they would tell a story that would be a couple of minutes long. And the researchers, while the students were telling the story, would have them wear an armband that would monitor their vitals, such as their skin temperature, blood pressure, heart rate, things like that, while also recording the story on an iPhone.
The students would then self-report whether they found the story to be happy or sad. And then, after the fact, the researchers would use this to train a neural network they had developed to recognize whether other stories were happy or sad, based on the students’ responses. And it turns out the machine was actually pretty good at doing it. It had about an 83% success rate. It wasn’t so great at picking out smaller chunks in the story out of context to determine whether they were happy or sad, but it’s getting there.
IRA FLATOW: Could the machine in real time tell you what the state of the emotions are?
DANIEL OBERHAUS: Not quite yet. And so I think that’s kind of where they want to get to, to the point where if you were talking to someone in conversation, it might signal your cell phone to kind of give you a vibration to let you know maybe you’re boring the person.
IRA FLATOW: [LAUGHING]
DANIEL OBERHAUS: And the machine can tell that based on their heart rate or something. Yeah.
IRA FLATOW: Yeah. And so the good news is that they’re developing this. The bad news is that we’re not sure how well it works yet?
DANIEL OBERHAUS: I mean, they know it works pretty well. It’s just hard to implement in a real life scenario. And, like, when you imagine these things, they’re going to be this AI wearable that both people might have to wear to have it really be effective. Which, you know, wearing these things might actually make a situation more socially awkward than it was in the first place. So, I mean, getting some test cases for this is going to be, I think, interesting.
IRA FLATOW: Yeah. How does it tell the difference between, you know, happy crying and sad crying? Can it distinguish things like that?
DANIEL OBERHAUS: You know, it’s at a pretty low level of emotional granularity. So it’s kind of happy or sad. And I think what’s really interesting– you know, you’d have to talk to the researchers about how the machine actually knows.
Because a lot of times when people tell a story, the story might be entirely sad for 90% of it, and then right at the end something happens that makes the story turn out to actually be a happy story. And so I think that, to me, is really the interesting aspect of this, is how a machine is able to learn this. And the researchers are hoping that eventually, once they have a bigger data set, it will be able to be applied with much finer emotional granularity, to the point where you can tell if the story was exciting or funny.
IRA FLATOW: Of course, we want to make sure the data is safe, right, and not going to be shared with everybody?
DANIEL OBERHAUS: And the researchers were thinking a lot about that as well. All the information that’s being collected is stored locally on the device. It’s not connected to the internet.
IRA FLATOW: That’s good to hear. Daniel, thank you for taking time to be with us today. It’s an interesting story.
DANIEL OBERHAUS: Yeah, of course.
IRA FLATOW: Daniel Oberhaus, a science writer based in Phoenix, Arizona.