Getting musical to spot patterns in whole-brain imaging data: Q&A with Elizabeth Hillman

The new technique takes advantage of humans’ “extraordinary” sensory processing abilities, Hillman says.

Musical insight: Representing data through music makes it possible to spot patterns that link behavior, neural activity and hemodynamic activity in an awake mouse. Neural activity is represented as piano notes, whereas hemodynamic activity is encoded as violin chords.
Courtesy Thibodeaux et al., 2024, PLOS ONE, CC-BY 4.0

Elizabeth Hillman knows how to generate a mountain of data.

She has spent her career developing imaging techniques, such as wide-field optical mapping and swept confocally aligned planar excitation (SCAPE) microscopy, that enable her to measure neural activity across the whole brain in behaving animals, in real time. But it’s difficult to sort through the rich data such techniques produce and find interesting patterns, says Hillman, professor of biomedical engineering and radiology at Columbia University.

So, one day, inspired by a science-fiction book she read as a child, Hillman turned her data into music.

She wrote a program that translates aspects of imaging data into colors and musical notes of varying pitch. For example, activity in the caudal part of the brain is represented by dark blue and a low musical note. The resulting “audiovisualization” takes advantage of humans’ sensory processing abilities and makes it easier to spot patterns in activity across datasets, Hillman says.

“This is creative and novel work,” says Timothy Murphy, professor of psychiatry at the University of British Columbia. “There are potential uses of this in selectively visualizing many simultaneously active brain networks.”

Hillman and her team describe the method in a paper published today in PLOS ONE. They borrowed data from some of their prior studies to demonstrate three ways to use the program: One highlights spatial information, another focuses on the timing of neuronal firing, and the third demonstrates the overlap between behavioral, neuronal and hemodynamic data.

The Transmitter spoke with Hillman about how this technique came to be and how she envisions other scientists using it.

This interview has been edited for length and clarity.

The Transmitter: What motivated you to develop this technique?

Elizabeth Hillman: I trained in physics; my work focuses on developing brain imaging and microscopy methods. I have a long-standing interest in neurovascular coupling, so I started to develop technologies that could capture images of both neural activity and hemodynamics at the same time. I was particularly interested in increasing the speed and the signal-to-noise ratio of those imaging systems so that we can capture stuff in real time.

What led to this paper was an experiment where we were imaging activity across the surface of the brain in awake, behaving mice. We noticed these amazing, dynamic patterns in the brain when the animal was just sitting there, and then the oscillations changed when it started to run. The normal protocol for imaging studies is repeat, repeat, repeat; average, average, average. You need to average 50 times to get a signal.

But if we were to average all of those trials together, we’d get nothing. We had hours of these recordings and were trying to grasp how the brain activity we saw related to what the animal was doing. How could we start to piece this together and make sense of it? That’s when I decided to try playing it as music.

TT: How did you get that idea?

EH: I’d had this idea in the back of my head for years, ever since I read “Dirk Gently’s Holistic Detective Agency,” a novel by Douglas Adams, when I was about 15 years old. In one part of the book, a software programmer describes a program that converts a company’s financial data into music.

I quickly wrote a little script and started playing the sounds along with the videos. And then one of my students, David (Nic) Thibodeaux, got really inspired. He’s also a musician. He figured out all these other things that you could do with it and implemented it as a tool. We started applying it to data we were collecting for other studies. And then we got really hooked on it.

Rainbow rhythms: The colors and musical notes represent the spatial information of brain activity in awake (above) and anesthetized (below) mice. Activity in the caudal area is blue and a low note; activity in the rostral area is red and a high note.


TT: Describe some of the ways you audiovisualized data in your paper.  

EH: The simplest way is to take the signals and represent them with the volume and pitch of a note. We used data from a study comparing the brain activity of awake and anesthetized mice. We encoded things back to front, so you can really hear the directionality: Activity in the caudal area is blue and a low note; activity in the rostral area is red and a high note.
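For readers who want to try this kind of mapping, here is a minimal sketch in Python (not the authors' code): it assumes a `signals` array of shape (regions, frames) ordered from caudal to rostral, turns each region's trace into a volume envelope on a sine tone, and spaces the tones from low to high pitch along that axis. The function name, arguments and output file are all illustrative.

```python
import numpy as np
from scipy.io import wavfile

def sonify_regions(signals, frame_rate, sr=44100, f_lo=220.0, f_hi=880.0,
                   path="regions.wav"):
    """Render a (n_regions, n_frames) array as a mixture of tones."""
    n_regions, n_frames = signals.shape
    frame_times = np.arange(n_frames) / frame_rate
    t = np.arange(int(n_frames / frame_rate * sr)) / sr

    audio = np.zeros_like(t)
    peak = np.abs(signals).max() + 1e-9
    for i in range(n_regions):
        # Each region's signal, upsampled to audio rate, acts as a volume envelope.
        env = np.clip(np.interp(t, frame_times, signals[i]) / peak, 0.0, 1.0)
        # Caudal rows (i = 0) get low frequencies; rostral rows get high ones.
        freq = f_lo * (f_hi / f_lo) ** (i / max(n_regions - 1, 1))
        audio += env * np.sin(2 * np.pi * freq * t)

    audio /= np.abs(audio).max() + 1e-9
    wavfile.write(path, sr, (audio * 32767).astype(np.int16))
```

Spacing the frequencies geometrically rather than linearly keeps the pitch steps between neighboring regions perceptually even, since pitch perception is roughly logarithmic in frequency.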

The second application was for SCAPE microscopy images of dendrites in the mouse somatosensory cortex. Spatial data wasn’t as meaningful, because it’s a relatively small field of view, so I thought it would be really neat to demonstrate how many of these neurons were firing multiple times. We assigned the neurons that fired first a lower pitch, and neurons that fired later a higher pitch.
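A rough sketch of that second mapping, again in Python and purely illustrative: neurons are ranked by the time of their first firing event, early neurons get low pitches and later ones higher pitches, and every firing event is rendered as a short windowed tone. The `events` dictionary and all names are assumptions, not the data format from the paper.

```python
import numpy as np
from scipy.io import wavfile

def sonify_firing_order(events, sr=44100, f_lo=262.0, f_hi=1047.0,
                        tone_len=0.12, path="firing_order.wav"):
    """`events` maps a neuron id to its list of firing times in seconds."""
    order = sorted(events, key=lambda n: min(events[n]))  # earliest firer first
    n_neurons = len(order)
    t_end = max(t for ts in events.values() for t in ts) + tone_len
    audio = np.zeros(int(np.ceil(t_end * sr)))

    tone_t = np.arange(int(tone_len * sr)) / sr
    window = np.hanning(tone_t.size)  # short, percussive envelope
    for rank, neuron in enumerate(order):
        # Neurons that fired first get low pitches; later ones get higher pitches.
        freq = f_lo * (f_hi / f_lo) ** (rank / max(n_neurons - 1, 1))
        tone = window * np.sin(2 * np.pi * freq * tone_t)
        for onset in events[neuron]:
            start = int(onset * sr)
            stop = min(start + tone.size, audio.size)
            audio[start:stop] += tone[:stop - start]

    audio /= np.abs(audio).max() + 1e-9
    wavfile.write(path, sr, (audio * 32767).astype(np.int16))
```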

The last approach was Thibodeaux’s idea. He suggested we put the files in GarageBand and use multiple musical instruments. He encoded neural firing as percussive piano strikes and hemodynamic activity as violin chords. The music shows you how spatially and temporally those things are coupled, without having to constantly switch your attention between three videos.
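One way to prototype that multi-instrument idea is to write the events out as a multi-track MIDI file, which GarageBand or any sequencer can then render with different instruments. The sketch below uses the pretty_midi package; it is not the authors' pipeline, and the `neural_events` and `hemo_chords` structures are invented for illustration.

```python
import pretty_midi

def to_midi(neural_events, hemo_chords, path="audiovis.mid"):
    """neural_events: list of (time_s, midi_pitch) spikes.
    hemo_chords: list of (start_s, end_s, [midi_pitches]) slow chords."""
    pm = pretty_midi.PrettyMIDI()
    piano = pretty_midi.Instrument(program=0)    # General MIDI 0: acoustic grand piano
    violin = pretty_midi.Instrument(program=40)  # General MIDI 40: violin

    # Neural firing events become short, percussive piano strikes.
    for t, pitch in neural_events:
        piano.notes.append(
            pretty_midi.Note(velocity=100, pitch=pitch, start=t, end=t + 0.1))

    # Hemodynamic activity becomes sustained violin chords.
    for start, end, pitches in hemo_chords:
        for pitch in pitches:
            violin.notes.append(
                pretty_midi.Note(velocity=70, pitch=pitch, start=start, end=end))

    pm.instruments.extend([piano, violin])
    pm.write(path)
```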

But what we’re really trying to show is that you can do anything you want. You could do heart rate as a drumbeat; you could encode specific aspects of behavior with different instruments; you could split left and right brain activity into left and right ear audio tracks to perceive asymmetries. We hope people go for it and use their imagination.
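The stereo suggestion is just as simple to prototype: render the two hemispheres as separate mono tracks (for example with a sketch like the one above) and write them to opposite channels of a WAV file. Again, the helper and its arguments are hypothetical.

```python
import numpy as np
from scipy.io import wavfile

def write_stereo(left_audio, right_audio, sr=44100, path="hemispheres.wav"):
    """Pan two mono renders (e.g., left- and right-hemisphere sonifications)
    to opposite ears so asymmetries stand out."""
    n = min(left_audio.size, right_audio.size)
    stereo = np.stack([left_audio[:n], right_audio[:n]], axis=1)
    stereo = stereo / (np.abs(stereo).max() + 1e-9)
    wavfile.write(path, sr, (stereo * 32767).astype(np.int16))
```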

Firing frequency: Here, the music represents the timing of dendritic calcium activity in the mouse somatosensory cortex. The neurons that fired first have a lower pitch, and neurons that fired later have a higher pitch. The colors were randomly assigned to improve contrast.

TT: What does this technique offer that’s different from using artificial-intelligence-based methods to parse through large datasets?

EH: You can train a computer to do something you know how to do. And machine learning is good at predicting things, but pulling a prediction back to the why, to what really happened and what the mechanism was, is much less developed.

We’re leveraging our brains’ ability to integrate visual and auditory information to recognize patterns and subtleties in these relationships, beyond what you could program a computer to spot automatically. Once you find an interesting motif to explore, you can easily write scripts or train models to spot that motif in other datasets, because now you have a hypothesis that the motif relates to when an animal is doing a certain behavior.

Figuring that first part out, or expecting a machine to figure that part out, is really difficult. There’s no replacement for looking at your data. That’s why the first step of any data-analytics project is actually data visualization: hunting around, plotting scatter plots, looking at statistics, trying to figure out what the heck is going on. This is just a neat way of doing data visualization.

TT: How do you hope neuroscientists use this in their own work?

EH: There’s so much now in science that could benefit from this — it doesn’t have to be neuroimaging data. I hope that the people who are really thinking about behavior quantification could get some benefit out of it. And, again, anyone who is trying to connect neural activity to behavior.

New technologies have gotten us to the point where we can collect real-time and dynamic data, so we’ve got to have new ways of being able to understand and interpret it. I hope this technique brings a little joy, too.