Scientists have trained a computer to analyze the brain activity of someone listening to music and, based only on those neuronal patterns, recreate the song. The research, published on Tuesday, produced a recognizable, if muffled, version of Pink Floyd's 1979 song, "Another Brick in the Wall (Part 1)." [...] To collect the data for the study, the researchers recorded from the brains of 29 epilepsy patients at Albany Medical Center in New York State from 2009 to 2015. As part of their epilepsy treatment, the patients had a net of nail-like electrodes implanted in their brains. This created a rare opportunity for the neuroscientists to record the patients' brain activity while they listened to music. The team chose the Pink Floyd song partly because older patients liked it. "If they said, 'I can't listen to this garbage,' then the data would have been terrible," Dr. Schalk said. Plus, the song features 41 seconds of lyrics and two and a half minutes of moody instrumentals, a combination that was useful for teasing out how the brain processes words versus melody.
Robert Knight, a neuroscientist at the University of California, Berkeley, and the leader of the team, asked one of his postdoctoral fellows, Ludovic Bellier, to try to use the data set to reconstruct the music "because he was in a band," Dr. Knight said. The lab had already done similar work reconstructing words. By analyzing data from every patient, Dr. Bellier identified which parts of the brain lit up during the song and which frequencies those areas were reacting to. Much as the resolution of an image depends on its number of pixels, the quality of an audio recording depends on the number of frequencies it can represent. To legibly reconstruct "Another Brick in the Wall," the researchers used 128 frequency bands. That meant training 128 computer models, which collectively brought the song into focus. The researchers then ran the recordings from four individual patients' brains through the models. The resulting recreations were all recognizably the Pink Floyd song but had noticeable differences. Electrode placement probably explains most of the variance, the researchers said, but personal characteristics, like whether a person was a musician, also mattered.
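For readers curious what "training 128 computer models" looks like in practice, here is a minimal sketch of the general recipe: fit one regression model per frequency band, each predicting that band's energy over time from the neural recordings, then stack the predictions into a spectrogram. This is an illustration under stated assumptions, not the study's actual code; the choice of scikit-learn's Ridge regression, the array shapes, and every name below are hypothetical.

```python
# Illustrative sketch of per-band spectrogram decoding, NOT the study's
# actual pipeline. All names, shapes, and the Ridge model are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

N_BANDS = 128  # frequency bands in the target spectrogram, per the article

def fit_band_models(neural, spectrogram):
    """Fit one linear decoder per frequency band.

    neural:      (n_timepoints, n_electrode_features) brain activity
    spectrogram: (n_timepoints, N_BANDS) song spectrogram, time-aligned
    """
    models = []
    for band in range(N_BANDS):
        m = Ridge(alpha=1.0)  # regularized linear regression per band
        m.fit(neural, spectrogram[:, band])
        models.append(m)
    return models

def reconstruct(models, neural):
    """Predict each band independently, then stack into a spectrogram."""
    bands = [m.predict(neural) for m in models]
    return np.stack(bands, axis=1)  # (n_timepoints, N_BANDS)

# Hypothetical usage with random stand-in data:
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))        # fake neural features
Y = rng.normal(size=(1000, N_BANDS))   # fake target spectrogram
models = fit_band_models(X, Y)
Y_hat = reconstruct(models, X)         # decoded spectrogram
# An inverse-spectrogram (vocoder) step, not shown here, would turn
# Y_hat back into audible sound.
```

Each model sees the same neural input but is responsible for a single slice of the audio spectrum, which is why more bands, like more pixels, bring the reconstruction into sharper focus.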
The data captured fine-grained patterns from individual clusters of brain cells. But the approach was also limited: Scientists could see brain activity only where doctors had placed electrodes to search for seizures. That's part of why the recreated songs sound as if they are being played underwater. [...] The researchers also found a spot in the brain's temporal lobe that reacted when patients heard the 16th notes of the song's guitar groove. They proposed that this particular area might be involved in our perception of rhythm. The findings offer a first step toward creating more expressive devices to assist people who can't speak. Over the past few years, scientists have made major breakthroughs in extracting words from the electrical signals produced by the brains of people with muscle paralysis when they attempt to speak.