Decoding Brain Signals To Synthesize Speech

There have been efforts to help people who are paralyzed, or who have lost the ability to speak, regain that ability, albeit artificially.


This time, a team from UCSF is conducting a study on decoding brain signals that a computer then processes to synthesize speech.

It’s somewhat similar to how the late Dr. Stephen Hawking communicated, but this time the goal is to eliminate the need for typing so that brain signals drive the vocal output directly, speeding up the communication process.

But spelling out letters “is not the most efficient way to communicate,” says Dr. Edward Chang, a neurosurgeon at UCSF and an author of the study. That approach allows a person to type fewer than 10 words a minute, compared with speaking about 150 words per minute with natural speech.

So Chang and a team of scientists have been looking for a way to let paralyzed patients produce entire words and sentences as if they were talking. The team studied five volunteers with severe epilepsy. As part of their treatment, these patients had electrodes temporarily placed on the surface of their brains.

The volunteers then read sentences out loud while the electrodes recorded their brain activity, and a computer processed those signals to generate synthesized speech.

Chang was “shocked” at how intelligible and natural the simulated speech was. And a test on volunteers found that they could understand what the computer was saying most of the time. The technology doesn’t try to decode a person’s thoughts. Instead it decodes the brain signals produced when a person actually tries to speak.
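To make the idea concrete, here is a minimal sketch of the decode-then-synthesize concept, using entirely synthetic data and a simple ridge regression as a hypothetical stand-in for the study's far more sophisticated neural-network decoder. The variable names (neural, acoustic) and the linear model are illustrative assumptions, not the team's actual method, which mapped cortical activity through an intermediate articulatory representation before producing sound.

```python
# Minimal sketch: map synthetic "neural" feature frames to synthetic
# acoustic frames with a ridge regression. This stands in for the study's
# recurrent-network decoder purely for illustration.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_frames, n_electrodes, n_acoustic = 2000, 64, 32

# Synthetic stand-ins for electrode activity and target acoustic features,
# related by a hidden linear mapping plus noise.
neural = rng.standard_normal((n_frames, n_electrodes))
true_map = rng.standard_normal((n_electrodes, n_acoustic))
acoustic = neural @ true_map + 0.1 * rng.standard_normal((n_frames, n_acoustic))

# Fit the decoder on the first 80% of frames, evaluate on the rest.
split = int(0.8 * n_frames)
decoder = Ridge(alpha=1.0)
decoder.fit(neural[:split], acoustic[:split])

predicted = decoder.predict(neural[split:])
corr = np.corrcoef(predicted.ravel(), acoustic[split:].ravel())[0, 1]
print(f"Correlation between predicted and actual acoustic frames: {corr:.2f}")

# In a full system, the predicted acoustic frames would be fed to a vocoder
# to produce an audible waveform; that step is omitted here.
```

In the real study, the decoding happens in stages and the final acoustic features are converted to audio by a synthesizer, which is what listeners then evaluated for intelligibility.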

(Image credit: geralt/Pixabay)

Source: neatorama
