Borne out of a “very Stanford-ish” collaboration between Chris Chafe, composer + professor of music research, and Josef Parvizi, epilepsy expert + professor of neurology, Brain Stethoscope epitomizes the potential for innovation at the intersection of art + science. Part artistic experiment, part clinical // diagnostic tool, the Stanford duo’s creation is an aural platform for data interpretation, translating traditionally visualized brain activity patterns into music for the ears.

Inspired by a performance of the Kronos Quartet’s Sun Rings—which weaves NASA recordings from space into a series of sonic spacescapes for strings—Parvizi approached Chafe with an idea to sonify seizures. Chafe, an expert in creating music from natural phenomena—a field known as sonification, or, as he prefers, musification—has developed music-synthesizing platforms to explore everything from Internet traffic properties to the CO2 emissions of ripening tomatoes.
Because neurons communicate by firing out electrical messages, brain activity can be measured + recorded by placing a number of electrodes on a patient’s scalp. The resulting electroencephalogram [EEG] can be used to decipher + distinguish inner states of the brain, including the seizure episodes—or ictal states—that would be transformed into sound. Accordingly, in a pilot // proof-of-concept experiment, Parvizi + Chafe manually mined through gigabytes of EEG data—captured using over 100 scalp electrodes over the course of one week—to select the salient neurological moments corresponding to a seizure episode.
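Picking seizure episodes out of a week’s worth of multi-electrode recordings amounts to searching for windows where the signal departs sharply from baseline. The sketch below illustrates one naive version of that search on a synthetic recording—a simple windowed-power z-score threshold. The function name, window length, and threshold are illustrative assumptions, not the team’s actual method (which, as described above, was manual).

```python
import numpy as np

def flag_candidate_windows(eeg, fs, win_s=2.0, z_thresh=3.0):
    """Return start times (seconds) of windows whose mean signal power
    is anomalously high relative to the rest of the recording."""
    win = int(win_s * fs)
    n = eeg.shape[1] // win
    # Mean power per window, averaged across all channels
    power = np.array([np.mean(eeg[:, i*win:(i+1)*win] ** 2) for i in range(n)])
    z = (power - power.mean()) / power.std()
    return [i * win_s for i in range(n) if z[i] > z_thresh]

# Toy recording: 4 channels of noise with a high-amplitude "ictal" burst at t = 30 s
rng = np.random.default_rng(0)
fs = 256
eeg = rng.normal(0.0, 1.0, (4, fs * 60))
eeg[:, fs*30:fs*34] += 10 * np.sin(2 * np.pi * 4 * np.arange(fs * 4) / fs)
print(flag_candidate_windows(eeg, fs))  # → [30.0, 32.0]
```

Real ictal activity is of course far subtler than a loud sine burst, which is exactly why the pilot relied on an epileptologist’s eye rather than a threshold.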
To use brain data to drive the composition of the seizure song, Chafe integrated the data coming from each electrode to control pitch, tone quality + loudness using a technique known as frequency modulation [FM] synthesis, discovered by John Chowning at Stanford in the late 1960s. The tones themselves are distinctly human: “The first inclination I had was this is a human brain, a human subject. So I went after a synthesis of human voice as the carrier of the information.” Chafe laughs, “You could call it voices from inside your head, but it’s just that kind of humanness this music is trying to relate to.”
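FM synthesis builds complex timbres by letting one oscillator (the modulator) wobble the frequency of another (the carrier); the modulation index controls brightness. As a rough illustration of how a per-electrode data stream could drive those parameters—not Chafe’s actual voice-like mapping, whose details aren’t public here—a minimal sketch:

```python
import numpy as np

def fm_voice(data, fs=44100, grain_s=0.05, base_freq=220.0):
    """Render a 1-D data stream as a sequence of FM-synthesized grains.
    Each data value bends the carrier pitch and sets the modulation index
    (timbral brightness). The mapping choices are purely illustrative."""
    grains = []
    t = np.arange(int(fs * grain_s)) / fs
    for x in np.asarray(data, dtype=float):
        fc = base_freq * (1.0 + 0.5 * x)   # data value shifts the carrier pitch
        index = 1.0 + 4.0 * abs(x)         # bigger swings → brighter tone
        ratio = 2.0                        # modulator:carrier frequency ratio
        grains.append(np.sin(2*np.pi*fc*t + index * np.sin(2*np.pi*ratio*fc*t)))
    return np.concatenate(grains)

audio = fm_voice(np.sin(np.linspace(0, 6, 40)))  # toy "electrode" trace
print(audio.shape)  # → (88200,)
```

The appeal of FM here is economy: two oscillators and one index yield a wide timbral palette, so each electrode can carry pitch, loudness + tone quality at once.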
The result is remarkable. According to Chafe: “When Josef first heard the results he knew we were on the right track. It was his encouragement and his recognition of the potential that really started this project off.” A seizure is essentially an electrical storm in the brain—the result of neurons suddenly + uncontrollably firing signals to one another. And indeed, the chorus of electrically driven voices composed + conducted by these seizing neurons beautifully mirrors the nervous maelstrom in the brain. But beyond simply echoing this chaotic neurological state, the seizure sounds aurally relay the seizure’s progression, conveying relevant neurophysiological information. Parvizi explains:
Around 0:20, the patient’s seizure starts in the right hemisphere, and the patient is talking and acting normally. Around 1:50, the left hemisphere starts seizing while the right is in a post-ictal state. The patient is mute and confused. At 2:20 both hemispheres are in the post-ictal state.
Though on first listen you may not discern their precise meaning, you need not be a trained professional to hear these transitions in brain activity—the calm before the chaos, the high-pitched chatter of the right-side seizure, the low yep-yepping of the left-side takeover, the final fatigue of the post-ictal state. Parvizi adds: “It’s very intuitive. You can easily distinguish the very slow, steady sound at the very end of the audio [the post-ictal state] versus the very asynchronous, chaotic, oscillatory sound of the seizure.”

While certainly not destined for the Top 40 charts, the seizure sounds have their own unique musicality. Chafe admits: “Every time I try to go in and recompose the data, it sounds worse. The brain dynamics are in and of themselves just so compelling that I don’t want to touch them—it just has this intrinsic kind of musicality to it. And really my job is to bring that out so it’s appreciated.” That delicate balance between Chafe’s computer composition and the inherent musicality embedded in these neuronal messages is undoubtedly where this brain music derives its potent power—readily relaying relevant neurological information.
Science has a long-standing bias towards visual observation + data, placing particular emphasis on charts, graphical representations + images as the predominant form of scientific “knowing.” And, of course, with good reason. Figures are an easy + effective form of distributing information, with data visualization even gaining particular popularity in the pop-sci world. In fact, as a biologist, I was taught to read scientific articles by first going through the figures to decide if the data was even compelling enough to bother reading the actual text. But listening to data adds an additional, and rather powerful, layer to understanding. Chafe notes:
The fundamental similarity is definitely there. So the graphs we convert into sound are immediately going to have the same landmarks you recognize visually and aurally. The intrinsic difference is really related to real-time, and our sense of hearing is extremely real-time—it’s how we react and how we perceive things that are fast. Our visual system is slower in real-time but very good at understanding time records and capturing patterns that span minutes, centuries, or millennia.
There’s something immediate about data listening. By quite literally giving data a voice, we may gain an intuitive sense of what sounds “normal” and where we ought to listen more deeply. Chafe adds:
Music exists somewhere in the continuum where on one end, we have random nonmusical character—extremely unpredictable with all frequencies going all the time—and on the other sounds that are extraordinarily predictable, like a clock ticking. So if you figure music is somewhere in between—sometimes it has more pattern or less pattern or more invention or more predictability—that’s a lot like the natural systems that we’re often trying to understand.
Given the real-time + intuitive nature of listening to this seizure music, Parvizi + Chafe saw a unique opportunity to develop their experiment into a powerful diagnostic medical tool—what they refer to as a brain stethoscope. To that end, the team is currently engineering a real-time platform that translates the brain’s electrical activity into sound. Much as with a traditional stethoscope, users can move scalp electrodes around a patient’s head and tune into the brain’s dynamics. Importantly, the brain stethoscope can provide a rapid + user-friendly alternative to EEGs, which require considerable time + training to interpret.

Moreover, patients experiencing seizures don’t always present symptoms. As a result, the brain stethoscope may play an invaluable diagnostic role in raising the volume of these silent seizures—both in the hospital and at home. In an interview with Stanford News, Parvizi notes: “Someone – perhaps a mother caring for a child – who hasn’t received training in interpreting visual EEGs can hear the seizure rhythms and easily appreciate that there is a pathological brain phenomenon taking place.” But even more broadly, this technology could be used to better understand everyday brain dynamics outside of a clinical setting—perhaps even listening to what music the brain makes while listening to music!
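Unlike the pilot, which composed from a finished recording, a real-time platform has to sonify the signal chunk by chunk as it arrives. A toy generator-based stand-in for that streaming loop—the RMS-to-pitch mapping and every name here are assumptions, not the team’s design:

```python
import numpy as np

def sonify_stream(chunks, fs_audio=44100, grain_s=0.1, base_freq=220.0):
    """Yield one audio grain per incoming EEG chunk: a rough stand-in for a
    streaming brain-stethoscope pipeline. Each chunk's RMS amplitude raises
    the pitch of a short sine grain, so stronger activity sounds higher."""
    t = np.arange(int(fs_audio * grain_s)) / fs_audio
    for chunk in chunks:                    # e.g. ~100 ms of electrode samples
        rms = np.sqrt(np.mean(np.square(chunk)))
        yield np.sin(2 * np.pi * base_freq * (1.0 + rms) * t)

# Simulated feed: two quiet chunks, then one loud "seizure-like" chunk
rng = np.random.default_rng(0)
feed = [rng.normal(0, s, 256) for s in (1, 1, 8)]
grains = list(sonify_stream(feed))
print(len(grains), grains[0].shape)  # → 3 (4410,)
```

Because a generator emits each grain as soon as its chunk arrives, latency stays bounded by the chunk length—the property that makes a stethoscope-like, listen-as-you-go experience possible.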
Because this is ArtLab + because this article happens to mark ArtLab’s one-year anniversary, I’ll end by sharing this: after a year of exploring the intersection between art + science, my excitement was truly re-ignited when Parvizi shared: “I’m optimistic that with more scientists and artists collaborating together, we can discover fields that have never been charted before.”