http://www.bbc.co.uk/news/science-environment-16844432
ScienceShot: A Brain Wave Worth a Thousand Words
31 January 2012, 5:00 PM
http://news.sciencemag.org/sciencenow/2012/01/scienceshot-a-brain-wave-worth-a.html
Credit: Adeen Flinker/UC Berkeley
As if it weren't enough that scientists could read your memories, they can now listen in on them, too. In a new study, neuroscientists connected a network of electrodes to the hearing centers of 15 patients' brains (image above) and recorded the brain activity while they listened to words like "jazz" or "Waldo." They saw that each word generated its own unique pattern in the brain. So they developed two different computer programs that could reconstruct the words a patient heard just by analyzing his or her brain activity. Reconstructions from the better of the two programs (the third sound in the audio; the first sound is the word the subjects heard, and the second is the other computer program's reconstruction) were good enough that the researchers could accurately decipher the mystery word 80% to 90% of the time. Because there's evidence that the words we hear and the words we recall or imagine trigger similar brain processes, the study, published online today in PLoS Biology, suggests scientists may one day be able to tune in to the words you're thinking, a potential boon for patients who are unable to speak due to Lou Gehrig's disease or other conditions.
http://www.plosbiology.org/article/info:doi/10.1371/journal.pbio.1001251
--
http://www.telegraph.co.uk/science/science-news/9051909/Mind-reading-device-could-become-reality.html
10:00PM GMT 31 Jan 2012
In the video embedded above, each word spoken to a group of patients by an
electronic voice is replicated twice by a computer which analysed the
patients' brain waves to 'guess' what they had heard.
Researchers demonstrated that the brain breaks down words into complex
patterns of electrical activity, which can be decoded and translated back
into an approximate version of the original sound.
Because the brain is believed to process thought in a similar way to sound,
scientists hope the breakthrough could lead to an implant which can
interpret imagined speech in patients who cannot talk.
Any such device is a long way off because researchers would have to make the
technology much more accurate and find a way to apply it to sounds which the
patient merely thinks of, rather than hears.
It would also require electrodes to be placed beneath the skull onto the brain
itself, because no sensors exist which could detect the tiny patterns of
electrical activity non-invasively.
But the proof-of-concept study, published in the journal PLoS Biology, could offer hope to thousands of brain-damaged patients who face the daily agony of being unable to communicate with their loved ones.
Prof Robert Knight, one of the researchers from the University of California at Berkeley, said: "This is huge for patients who have damage to their speech mechanisms because of a stroke or Lou Gehrig's disease and can't speak.
"If you could eventually reconstruct imagined conversations from brain activity, thousands of people could benefit."
The team studied 15 epilepsy patients who were undergoing exploratory surgery to find the cause of their seizures, a process in which a series of electrodes are connected to the brain through a hole in the skull.
While the electrodes were attached, the researchers monitored activity in the temporal lobe – a speech-processing area of the brain – as the patients listened to five to ten minutes of conversation.
By breaking down the conversation into its component sounds, they were able to build two computer models which matched distinct signals in the brain to individual sounds.
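The idea of matching brain signals to component sounds can be sketched as a linear decoding problem: learn a map from multi-electrode activity back to the sound's spectrogram. The sketch below is purely illustrative, not the study's actual pipeline; the shapes, the random stand-in "recordings", and the ridge penalty are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_elec, n_freq = 1000, 64, 16   # time bins, electrodes, frequency bands

# Hypothetical data: the sound's spectrogram drives the neural response
# through an unknown linear mixing plus noise (stand-in for real recordings).
spectrogram = rng.standard_normal((n_t, n_freq))
mixing = rng.standard_normal((n_freq, n_elec))
neural = spectrogram @ mixing + 0.1 * rng.standard_normal((n_t, n_elec))

# Ridge regression: learn a linear map from neural activity back to the
# spectrogram, one weight vector per frequency band.
lam = 1.0
W = np.linalg.solve(neural.T @ neural + lam * np.eye(n_elec),
                    neural.T @ spectrogram)   # shape (n_elec, n_freq)

# Reconstruct the spectrogram from neural activity alone and measure how
# well each frequency band is recovered.
reconstruction = neural @ W
corr = np.mean([np.corrcoef(reconstruction[:, f], spectrogram[:, f])[0, 1]
                for f in range(n_freq)])
print(f"mean per-band reconstruction correlation: {corr:.2f}")
```

With clean synthetic data the correlation is near 1; real cortical recordings are far noisier, which is why the article's reconstructions are only approximations of the original sound.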
They then tested the models by playing a recording of a single word to the patients, and predicting from the brain activity which word they had heard.
The better of the two programmes was able to produce a close enough approximation of the word that scientists could guess what it was, from a list of two options, 90 per cent of the time.
Researchers said it could be made more accurate by studying patients' brain signals during a longer conversation, or examining other parts of the brain involved in speech-processing.
Dr Brian Pasley, who led the study, compared the method to a pianist who could watch a piano being played in a soundproof room and "hear" the music just by watching the movement of the keys.
Any concerns about sinister "mind-reading" devices which could spy on a person's secret thoughts would be misguided, he added, because the technique would rely on a patient consciously "hearing" a word in their mind.
He said: "This is just to understand how the brain converts sound into meaning, and that is a very complicated process. The clinical application would be down the road if we could find out more about those imaginary processes.
"This research is based on sounds a person actually hears, but to use this for a prosthetic device these principles would have to apply to someone who is imagining speech."
Jan Schnupp, Professor of Neuroscience at Oxford University, described the study as "remarkable".
He said: "Neuroscientists have long believed that the brain essentially works by translating aspects of the external world, such as spoken words, into patterns of electrical activity.
"But proving that this is true by showing that it is possible to translate these activity patterns back into the original sound (or at least a fair approximation of it) is nevertheless a great step forward, and it paves the way to rapid progress toward biomedical applications."