
Campus researchers use brain recordings to reconstruct Pink Floyd song


ROBERT KNIGHT | COURTESY

Brain recordings of patients who can’t speak used to decode Pink Floyd song.


AUGUST 25, 2023

Campus researchers used brain recordings to decode and reconstruct Pink Floyd’s song “Another Brick in the Wall, Part 1,” work that has sparked interest in the phenomenon of people who can’t speak still being able to sing.

Using intracranial electroencephalography, which entails placing electrodes inside the skull to record the brain’s electrical activity, Robert Knight, campus professor of neuroscience and psychology, recorded 29 patients as they listened to Pink Floyd’s song.

In 2017, he asked postdoctoral researcher Ludovic Bellier to “take a shot” at decoding the music.

Knight explained that each of the electrodes being recorded represents a musical element, including frequency, melody and rhythm, which enabled the researchers to reconstruct the song.

“Let’s say you’re a good piano player and you see somebody playing the piano, but the piano (has) no sound, but you see where their fingers are hitting the keys,” Knight said. “If you play piano you can reconstruct what they’re playing, it’s very easy because each key represents something. That’s what the electrodes in the brain are representing.”

Knight first focused his research on patients with seizure disorders, working to decode words for implantable speech devices. He now wants to enhance communication for people with disabling neurological diseases through music.

He explained the benefits of studying music, since it could allow speech decoders to capture human characteristics such as inflection and emotion. Knight also noted the universality of music across different cultures.

“You could go to any country in the world where you don’t know their language but you appreciate their music,” Knight said. “So music really evolved before language in many ways.”

Knight said the sound file of the reconstructed Pink Floyd song sounds like it is “coming from underwater,” a limitation he attributed not to the machine learning techniques that turn the electrical activity into sound, but to the distance between the electrodes.
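As a rough illustration of the kind of decoding described here (not the researchers’ actual code), the sketch below fits a regression model that maps electrode activity to an audio spectrogram and then inverts that spectrogram into a waveform; the data, variable names and parameter values are stand-in assumptions.

import numpy as np
from sklearn.linear_model import Ridge
import librosa

# Hypothetical stand-in data: rows are time points.
n_time, n_electrodes, n_freq_bins = 5000, 64, 128
brain_activity = np.random.randn(n_time, n_electrodes)            # placeholder for iEEG features
song_spectrogram = np.abs(np.random.randn(n_time, n_freq_bins))   # placeholder for the song's spectrogram

# Fit one regularized linear model per spectrogram frequency bin.
decoder = Ridge(alpha=1.0)
decoder.fit(brain_activity, song_spectrogram)

# Predict the spectrogram from brain activity alone, then invert it to audio.
predicted = np.clip(decoder.predict(brain_activity), 0, None)
waveform = librosa.griffinlim(predicted.T)  # rough waveform reconstruction from the predicted spectrogram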

The observed electrodes were spaced an average of five millimeters apart, Knight added, which he explained made the underlying activity difficult to study. He believes a higher-density grid would “markedly enhance” the quality of the sound.

While he does not plan to continue reconstructing music, Knight said he hopes to focus his research on imagined, internal speech, which occurs when someone thinks of speaking a word.

“Really, in terms of music, the key next thing is not so much decoding the music like we did. It’s being able to decode imagining the music, because decoding a word doesn’t do anything for the patient; we have to decode when they think of the word,” Knight said. “They have to imagine it or else they can’t really use it for a device. So I think if anything, we’ll be looking at imagined music.”

Knight described the phenomenon known as aphasia, damage to the language system that results in the inability to understand or express speech.

With music, however, Knight has found that some patients who can’t speak are able to sing.

Knight noted a potential cause of this phenomenon, explaining that while language is “strongly lateralized” to the left hemisphere of the brain, music is present in both hemispheres.

“We are pursuing that research now with patients who have electrodes implanted for epilepsy, and they’re singing and speaking and we’re going to try to figure out what’s the neural network that allowed the brain to do that unbelievable thing,” Knight said.

Contact Matthew Yoshimoto at 
