How to Listen to Data
Scientists, composers, and programmers are experimenting with methods of conveying data audibly.
Why look at data when you can listen to it?
That’s the question some scientists, composers, and programmers are exploring in an effort to present data in creative ways. When people “hear” data, the thinking goes, they quickly and intuitively identify trends and anomalies, and react more viscerally to the information.
In general, so-called “data sonifiers” don’t expect their work to replace data visualizations, but rather complement the facts and figures they depict. Indeed, some take artistic license with their compositions, while others adhere more tightly to what the data reveals. Regardless, the notes tell a story about the numbers.
Science Friday spoke to a few data sonifiers about their approaches.
Brian Foo, Data-Driven DJ
By day, computer programmer Brian Foo digitizes materials for the New York Public Library, “thinking about how we can make the materials more accessible online,” he says. His regular gig—which entails “opening up some datasets and doing cool stuff with [them]”—complements a personal project that he launched in 2015, called Data-Driven DJ. It involves taking various datasets, such as income inequality in New York City, or coastal Louisiana’s land loss over many decades, and converting them into songs.
“It was mostly me trying to think about other ways to communicate data beyond data visualization or charts,” Foo says. As an artist, working with visual data “limited me in terms of curating a specific experience around a dataset,” he says. Foo liked the idea of using music because it could evoke emotion. And if a song becomes an earworm, “that underlying subject matter is also stuck in your head, hopefully.”
Foo starts by searching for datasets that are in the public domain, usually provided by governments or universities. He keeps a spreadsheet of the ones that stand out aesthetically or topically, and weighs a mix of factors when considering a new composition. “One is just the general shape of the data,” he says. “If you visualize the data and it’s just very flat, you’re probably going to get a flat song.”
“Flat” data helped emphasize a point in his song about representations of gender, race, and ethnicity in blockbuster movies. “Obviously [those representations are] not diverse, so I was forced to make a song that was very monotonous,” he says. A single “A” piano note—representing the roles of white males—clearly dominates the somber composition. It may not be complex, but “that conveys the data,” he says.
Foo is careful about how he represents information. “I just have to think about what is the right kind of experience” for the listener, he says. The challenge is to decide whether to objectively communicate the data or try to elicit a specific response. “It gets tricky with more sensitive data. So like for income inequality, I don’t want to make the poorer areas sound sadder or of less quality,” he says. In potentially sensitive cases, he generally opts for a straightforward approach, such as representing aspects of the data through changes in volume.
For each song, Foo uses a combination of Python (for analyzing the data) and ChucK (a programming language) to create an algorithm that stitches together all the sounds. He generally samples various songs and artists that are related to the topic at hand—his sonification of New York City income levels features local musicians, for instance. (All of his coding is open-source, and he invites others to use it to create their own sonifications.)
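Foo’s actual pipeline is his own open-source code; as a rough illustration of the kind of step his Python analysis performs, here is a minimal sketch that linearly scales a numeric series onto a range of MIDI pitch numbers. The income figures and the pitch range are hypothetical, chosen only for the example:

```python
# Illustrative sketch (not Foo's actual code): map a data series onto
# MIDI pitches by linear scaling, so higher values play higher notes.

def scale_to_pitches(values, low=60, high=84):
    """Linearly map each value to a MIDI pitch between low and high."""
    vmin, vmax = min(values), max(values)
    span = (vmax - vmin) or 1  # flat data would otherwise divide by zero
    return [round(low + (v - vmin) / span * (high - low)) for v in values]

# Hypothetical median incomes along a transit line (not real data):
incomes = [22000, 35000, 91000, 205000, 48000]
pitches = scale_to_pitches(incomes)
```

Note that perfectly flat input yields one repeated pitch, which mirrors Foo’s point that flat data tends to produce a flat, monotonous song.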
So far, Foo has completed 10 songs and is now exploring other ways of representing data. “In general, I’m looking in the data for a story, or some kind of narrative or some kind of experience to bring the listeners through from beginning to end,” he says.
Lauren Oakes and Nik Sawe
After three summers’ worth of field research, Lauren Oakes, an ecologist and natural systems scientist, had a robust dataset on the yellow cedar tree and neighboring conifers in the northern reaches of the Alexander Archipelago in southeastern Alaska. As part of her Ph.D. research at Stanford University, she had investigated how the forest community responded to a decades-long decline in yellow cedar, and how those changes affected the people in the region who had long relied on the tree.
Oakes had already published papers on her research when a fellow Ph.D. student, Nik Sawe, sent out a mass email last spring to colleagues requesting datasets that he might turn into musical compositions. Oakes was intrigued.
“I just immediately thought that the idea is fascinating, to be able to convey a scientific discovery through essentially what’s a universal language”—meaning music, says Oakes, who’s now a lecturer and researcher at Stanford. “When I present [data] as a scientist, I think that the facts and the data and the graphs are important, but also so is the story and how we convey those messages.”
Oakes and Sawe started sharing ideas right away. Sawe, who’s now a neuroeconomist at Stanford, built a computer program where he could take data points and translate them into different keys, pitches, and instruments. The duo’s final composition is a light, classical music piece that tracks changes in the tree population in the Alexander Archipelago from north to south, singling out individual trees, species, and even tree deaths, which are marked by silence.
“In the piece, each note represents a tree,” Oakes explains. “Each species also ‘plays’ a different instrument, and then there are aspects like tree height and diameter that affect the tone and duration of the note.”
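Sawe’s program is his own; the sketch below only illustrates the mapping Oakes describes, with field names and scaling factors that are assumptions for the example: each tree becomes one note, species selects the instrument, height and diameter shape the note, and a dead tree becomes a rest.

```python
# Illustrative sketch of the tree-to-note mapping Oakes describes.
# The field names, instruments, and scale factors are assumptions.

INSTRUMENT = {"yellow_cedar": "piano", "hemlock": "flute"}

def tree_to_note(tree):
    if tree["dead"]:
        # A tree death is marked by silence: a rest instead of a note.
        return {"instrument": None, "duration": 1.0}
    return {
        "instrument": INSTRUMENT.get(tree["species"], "strings"),
        "duration": 0.25 + tree["height_m"] / 40,        # taller tree, longer note
        "pitch_offset": round(tree["diameter_cm"] / 10),  # girth shifts the tone
    }

# A hypothetical survey plot, north to south:
plot = [
    {"species": "yellow_cedar", "height_m": 24, "diameter_cm": 60, "dead": False},
    {"species": "yellow_cedar", "height_m": 18, "diameter_cm": 40, "dead": True},
    {"species": "hemlock", "height_m": 30, "diameter_cm": 35, "dead": False},
]
score = [tree_to_note(t) for t in plot]
```

Played in survey order, a sequence like this would shift from piano toward flute as cedars die and hemlocks take over, which is the arc the finished piece traces.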
The middle of the song becomes busy with an ensemble of instruments, representing different tree species competing to regenerate. Toward the end of the piece, what was once a melody dominated by piano—representing the yellow cedar—becomes centered on a flute, symbolizing hemlocks.
The song’s end surprised Sawe. While he carefully chooses how to represent different data in his sonifications, “I have no idea how it’s going to sound until I hear the whole thing,” he says.
Sawe sees music as a succinct way to present a large amount of data. When scientists are analyzing their data, he says, they have to find ways to simplify multiple graphs and dimensions, which can take a long time. But a data sonification can combine all of these elements into a track that the ear can easily follow. “That’s a huge boon to getting a raw sense of your data,” he says.
Oakes says that while they did make some stylistic choices for the composition, the facts and figures from her research remain the backbone of the project. “There is obviously some interpretation and decisions made, but it is true to the dataset,” she says. “I feel like it’s complementary to the science in some way.”
Oakes and Sawe are hoping to assemble a band of musicians for a live performance of the composition in the spring.
Domenico Vicinanza and Genevieve Williams
Genevieve Williams is a movement scientist studying biomechanics, and Domenico Vicinanza is a physicist and composer. Both work at Anglia Ruskin University in the United Kingdom, and bonded over a common interest: cyclical phenomena.
“Almost everything in our body is in [one] way or another related to cycles—waking up and sleeping, or even at a small microscopic level, if you look at cells, biological systems,” Vicinanza says. “The concept of cycles, or regularity, is so important, so crucial, in music as well.”
The duo realized that one way to convey data about the human body could be through music. For their first collaboration, in 2015, they focused on the rhythms of finger wagging. You can try this experiment yourself: Hold your hands flat in front of you, index fingers pointing toward each other. Start wagging those fingers up and down in opposite directions, and increase your speed. At some point, your fingers will end up moving in unison, up and down.
“We wanted to try to listen to this transition and try to use music to describe its development,” Vicinanza says.
The researchers placed motion sensors on the tips of a participant’s index fingers and then tracked and measured the differences in the acceleration of each finger. They then mapped the movement to music notes using a custom-made algorithm they developed with the programming language Java. The result is a piano composition that starts out lively, then becomes nearly monotone by the end—a finale that represents how wagging fingers ultimately move in phase.
“The low and high piano notes are the small and large differences in acceleration of the two fingers, respectively,” says Williams.
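The pair’s actual algorithm is written in Java; the sketch below is only a minimal Python illustration of the mapping Williams describes, with the pitch range and maximum expected difference chosen as assumptions: the absolute difference between the two fingers’ accelerations is scaled to a piano pitch, so out-of-phase motion plays high and in-phase motion settles onto low notes.

```python
# Illustrative sketch (the researchers' Java algorithm is their own):
# the acceleration difference between the fingers drives the pitch.

def diff_to_pitch(accel_left, accel_right, low=36, high=96, max_diff=20.0):
    """Map |accel_left - accel_right| to a MIDI pitch; small diff -> low note."""
    diff = abs(accel_left - accel_right)
    frac = min(diff / max_diff, 1.0)  # clip to the expected range
    return round(low + frac * (high - low))

# Out of phase: opposite accelerations, a large difference, a high note.
high_note = diff_to_pitch(9.5, -9.8)
# In phase: nearly identical accelerations, a small difference, a low note.
low_note = diff_to_pitch(4.1, 4.0)
```

Under this mapping, the moment the fingers lock into phase the difference collapses toward zero and the melody drops to its lowest notes and stays there, matching the nearly monotone finale of the piece.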
At one point in the composition, a very low note plays, signifying the moment when one finger (in this case, the left) falls into phase with the other.
“One of the two fingers had to sharply change direction to catch up with the other,” Vicinanza says. And “that sudden change in acceleration was responsible for that very low note that started the new melody.”
The two researchers see real-world applications for a data sonification like this. Say you injure your arm and need to go to physical therapy. Your physical therapist will show you exercises to help you recover. “But often when you get home, you struggle to remember what exercises you’ve been doing; you question whether you are doing them correctly,” says Williams.
Now, imagine if you could track your arm’s movement and range of motion through sound—you could get immediate feedback on whether or not you were doing a therapy exercise correctly.
“Your therapist can just prescribe a melody instead of an exercise,” says Vicinanza. “The right pace and the right sequence of notes would be the indication that the exercise is the right one.”
The pair is continuing to work on all kinds of data sonifications, including one focusing on different environmental and cultural aspects of a village in France.
Chau Tu is an associate editor at Slate Plus. She was formerly Science Friday’s story producer/reporter.