Highlights
"So instead of saying 'Siri, what is the weather like today' or 'Ok Google, where can I go for lunch?' I just imagine saying these things," explains Christian Herff, author of a review recently published in the journal Frontiers in Human Neuroscience.
In their review, Herff and co-author, Dr. Tanja Schultz, compare the pros and cons of using various brain imaging techniques to capture neural signals from the brain and then decode them to text.
The technologies like functional MRI and near infrared imaging that can detect neural signals based on metabolic activity of neurons, are less suited for Automatic Speech Recognition from neural signals due to low temporal resolution but are very useful for the investigation of the underlying neural mechanisms involved in speech processes.
In contrast, the technology called electrocorticography or ECoG, showed promise in Herff's study. Electrophysiologic activity is fast enough to capture speech processes and is therefor better suited for ASR.
The experimental results indicate the potential of these signals for speech recognition from neural data with a focus on invasively measured brain activity using electrocorticography.
This study presents the Brain-to-text system in which epilepsy patients who already had electrode grids implanted for treatment of their condition participated.
They read out texts presented on a screen in front of them while their brain activity was recorded. This formed the basis of a database of patterns of neural signals that could now be matched to speech elements or "phones".
When the researchers also included language and dictionary models in their algorithms, they were able to decode neural signals to text with a high degree of accuracy.
"For the first time, we could show that brain activity can be decoded specifically enough to use ASR technology on brain signals," says Herff. "However, the current need for implanted electrodes renders it far from usable in day-to-day life."
"A first milestone would be to actually decode imagined phrases from brain activity, but a lot of technical issues need to be solved for that," concedes Herff.
The study results, while exciting, are still only a preliminary step towards this type of brain-computer interface.
- Verbal speech can now be decoded into text from recorded brain activity.
- Electrocorticography (ECoG), a promising new technology, measures brain activity invasively.
- ECoG signals are fast enough to capture speech processes and are therefore better suited for Automatic Speech Recognition (ASR).
‘ECoG could be useful for those with speech impairment or those who lack speech or motor function.’

The review analyzes the potential of different brain imaging techniques to recognize speech from neural signals by applying Automatic Speech Recognition (ASR) technology.
"So instead of saying 'Siri, what is the weather like today' or 'Ok Google, where can I go for lunch?' I just imagine saying these things," explains Christian Herff, author of a review recently published in the journal Frontiers in Human Neuroscience.
In their review, Herff and co-author, Dr. Tanja Schultz, compare the pros and cons of using various brain imaging techniques to capture neural signals from the brain and then decode them to text.
Technologies such as functional MRI and near-infrared imaging, which detect neural signals based on the metabolic activity of neurons, are less suited to automatic speech recognition from neural signals because of their low temporal resolution, but they are very useful for investigating the underlying neural mechanisms involved in speech processing.
In contrast, electrocorticography (ECoG) showed promise in Herff's study: its electrophysiological signals are fast enough to capture speech processes and are therefore better suited for ASR.
The experimental results indicate the potential of invasively measured brain activity, recorded with electrocorticography, for speech recognition from neural data.
The study presents the Brain-to-Text system, in which epilepsy patients who already had electrode grids implanted for the treatment of their condition participated.
They read out texts presented on a screen in front of them while their brain activity was recorded. This formed the basis of a database of patterns of neural signals that could then be matched to speech elements, or "phones".
When the researchers also included language and dictionary models in their algorithms, they were able to decode neural signals to text with a high degree of accuracy.
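To make the decoding idea more concrete, the Python sketch below shows one simplified way that per-phone models of neural features could be combined with a pronunciation dictionary and a word-level language model. It is only an illustrative toy under stated assumptions (random vectors standing in for ECoG features, a two-word dictionary, simple Gaussian phone models), not the authors' actual Brain-to-Text implementation.

```python
# Illustrative toy only: random vectors stand in for ECoG feature frames,
# and the dictionary, word priors, and Gaussian phone models are assumptions
# made for this sketch, not the authors' Brain-to-Text implementation.
import numpy as np

rng = np.random.default_rng(0)

PHONES = ["h", "eh", "l", "ow", "w", "er", "d"]
DICTIONARY = {"hello": ["h", "eh", "l", "ow"], "world": ["w", "er", "l", "d"]}
WORD_PRIOR = {"hello": 0.6, "world": 0.4}  # toy word-level "language model"


def fit_phone_models(frames, labels):
    """Fit one diagonal Gaussian per phone from labelled training frames."""
    models = {}
    for phone in PHONES:
        seg = frames[labels == phone]
        models[phone] = (seg.mean(axis=0), seg.std(axis=0) + 1e-6)
    return models


def phone_log_likelihood(frame, model):
    """Log likelihood of one feature frame under a phone's Gaussian model."""
    mean, std = model
    return float(np.sum(-0.5 * ((frame - mean) / std) ** 2 - np.log(std)))


def decode_word(frames, models):
    """Score every dictionary word: neural evidence per phone + word prior."""
    best_word, best_score = None, -np.inf
    for word, phones in DICTIONARY.items():
        chunks = np.array_split(frames, len(phones))  # crude even alignment
        neural_score = sum(
            phone_log_likelihood(chunk.mean(axis=0), models[p])
            for chunk, p in zip(chunks, phones)
        )
        score = neural_score + np.log(WORD_PRIOR[word])
        if score > best_score:
            best_word, best_score = word, score
    return best_word


if __name__ == "__main__":
    dim = 16  # pretend each frame is a 16-dimensional neural feature vector
    train_frames, train_labels = [], []
    for i, phone in enumerate(PHONES):
        train_frames.append(rng.normal(loc=i, scale=0.3, size=(50, dim)))
        train_labels += [phone] * 50
    models = fit_phone_models(np.vstack(train_frames), np.array(train_labels))

    # Simulate frames for the word "hello" and decode them back to text.
    test_frames = np.vstack(
        [rng.normal(loc=PHONES.index(p), scale=0.3, size=(5, dim))
         for p in DICTIONARY["hello"]]
    )
    print(decode_word(test_frames, models))  # expected output: hello
```

The real system uses far richer models and full ASR decoding; the toy only mirrors the general idea of scoring neural evidence against phone models and then adding dictionary and language-model constraints.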
"For the first time, we could show that brain activity can be decoded specifically enough to use ASR technology on brain signals," says Herff. "However, the current need for implanted electrodes renders it far from usable in day-to-day life."
"A first milestone would be to actually decode imagined phrases from brain activity, but a lot of technical issues need to be solved for that," concedes Herff.
The study results, while exciting, are still only a preliminary step towards this type of brain-computer interface.