Advances in neuroprosthetics to restore ‘speech’ to people with severe paralysis

by time news

2023-08-23 17:00:04

Our brain remembers how to formulate words, even if the muscles responsible for pronouncing them aloud are paralyzed by disease. Thanks to brain-computer connections, the dream of restoring communication to these patients is coming closer to reality.

People with neurological disorders, such as stroke or amyotrophic lateral sclerosis (ALS), often face this loss and the problems associated with it. Until now, several studies have shown that it is possible to decode speech from the brain activity of a person with these diseases, but only in text form and with limited speed, accuracy and vocabulary.

Two articles published this week in the journal Nature present the results of two more advanced brain-computer interfaces (BCIs) capable of decoding brain activity. With the help of a device, they make oral communication possible for patients with paralysis.

“The first results showed that the device is stable when we tested it over a long period of time, while decoding 26 keywords. Being stable means that we can train a model and have it work for a long time without having to re-train it. This is important so that users do not have to constantly dedicate time to the device before using it,” Sean Metzger, a researcher at the University of California, San Francisco (USA) and co-author of one of the papers, whose device was evaluated in a stroke patient, tells SINC.

The first results showed that the device is stable when we evaluated it over a long period of time.

Sean Metzger, a researcher at the University of California, San Francisco

For the study they used a method with electrodes that are placed on the surface of the brain and detect the activity of many cells across the entire speech cortex. This BCI decodes brain signals to generate three simultaneous outputs: text, audible speech, and a speaking avatar. The researchers trained the deep learning model to decipher neural data collected from the participant, who was severely paralyzed by a stroke, while attempting to silently speak complete sentences.

“This is possible thanks to our electrodes located on the surface of the brain, which record from tens of thousands of neurons, making the system more resistant to small changes in the device or in the neural signals. We can’t wait to see how decoding stability evolves over longer periods of time and with more complex decoding tasks, but this is a promising first sign,” says Metzger.

Using this neuroprosthesis, brain-to-text translation reached an average rate of 78 words per minute, which is 4.3 times faster than the previous record and closer to the speed of natural conversation. The BCI achieved a 4.9% word error rate when decoding sentences from a 50-phrase set, 5 times fewer errors than the previous state-of-the-art speech BCI.
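
For context, the word error rate (WER) cited in both studies is the standard speech-recognition metric: the minimum number of word substitutions, insertions and deletions needed to turn the decoded sentence into the reference sentence, divided by the length of the reference. A minimal sketch in Python (the example sentences below are hypothetical, not from the study):

```python
# Word error rate: edit distance between decoded and reference
# word sequences, divided by the number of reference words.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table of edit distances.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Hypothetical example: one substitution in a four-word sentence.
print(wer("i want some water", "i want some coffee"))  # 0.25
```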

A customizable synthesized voice

The brain signals were also directly translated into synthesized speech, with words intelligible enough for untrained listeners to understand, achieving a 28% word error rate on a set of 529 sentences. The voice was customized to sound like the participant’s speech before the injury.

The BCI also decoded neural activity into the facial movements of an avatar during speech, as well as into non-verbal expressions. “We have seen that our device can also decode motor movements of the hands, which could be useful for people with paralysis of the extremities, but for now that decoding is limited,” explains the scientist.

In addition to people with stroke like our participant, we believe it may be useful for people with ALS, locked-in syndrome, and people with spinal cord injuries.

Sean Metzger, a researcher at the University of California, San Francisco

Taken together, this multimodal BCI offers people with paralysis more possibilities to communicate in a more natural and expressive way. “In addition to people with stroke like our participant, we believe it can be useful for people with ALS, locked-in syndrome, and people with spinal cord injuries that inhibit control of breathing or the larynx, necessary for speech. As the technology develops, we hope it will be useful for other patient populations and allow for new applications,” concludes Metzger.

The authors of this paper hope to have a clinically viable device ready in the next 5 to 10 years, but first they will need to validate this approach and the technology in more participants, especially those with different conditions.

Participant in the speech neuroprosthesis study. / Noah Berger

Brain implants

The second Nature study, also focused on translating neural activity onto a computer screen, uses a much more invasive technique: a set of small silicon electrodes was inserted into the brain of the patient, Pat Bennett, while an artificial neural network was trained to decode her attempted vocalizations.

Pat Bennett, now 68, is a former human resources director who once rode horses daily. In 2012 she was diagnosed with amyotrophic lateral sclerosis (ALS), a progressive neurodegenerative disease that attacks the neurons controlling movement, causing physical weakness and eventually paralysis.

On March 29, 2022, a neurosurgeon at Stanford Medicine (USA) placed two tiny sensors apiece in two separate regions, both involved in speech production, on the surface of her brain. The sensors are part of an intracortical brain-computer interface (iBCI). Combined with software, they translate the brain activity that accompanies speech attempts into words on a screen.

In ALS patients, the difficulties begin with communication. I am unable to speak.

Pat Bennett, ALS patient

“When you think of ALS, you think of the impact on the arms and legs,” Bennett explained in an interview conducted by email. “But in ALS patients, the difficulties start with communication. I am unable to speak,” she adds.

About one month after the operation, Stanford scientists began conducting research sessions with her twice a week. Within four months, Bennett’s attempts were being translated into words on a computer screen at a rate of 62 words per minute, more than triple the previous record for BCI-assisted communication.

In addition, the system achieved a 9.1% word error rate on a 50-word vocabulary, 2.7 times fewer errors than the state-of-the-art BCI demonstrated in 2021. With a 125,000-word vocabulary, the error rate rose to 23.8%.

“This method uses the Utah microelectrode array, which has the best resolution for recording neural signals, down to the level of individual neurons. This has enabled a large increase in performance, in terms of precision and speed, for the largest vocabulary size yet in a speech neuroprosthesis, compared to previous work in this field,” Erin Michelle Kunz, a Stanford University researcher participating in the study, tells SINC.

For the patient, “these initial results have proven the concept, and over time the technology will catch up to make it easily accessible to people who cannot speak. For those who have this problem, this means they can stay connected to the world, perhaps continue working, and maintain friendships and family relationships.”

How the device works

The device works through implanted electrode arrays attached to fine gold wires that exit through connectors screwed to the skull, which are linked by cable to a computer. An artificial intelligence algorithm receives and decodes electronic information from Bennett’s brain, eventually learning to distinguish the brain activity associated with her attempts to formulate each of the phonemes that make up spoken English.

“Some training or customization will be required, though we anticipate it will be minimal for each new user,” says Kunz.

It is trained to know which words should come before others and which phonemes make up which words.

Frank Willett of Stanford University

The system passes its best estimate of the attempted phoneme sequence to a sophisticated self-correction system, which converts the phoneme stream into a sequence of words. “It is trained to know which words should come before others and which phonemes form which words,” explains Frank Willett of Stanford University.
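
To give a rough idea of what such a self-correction step does, here is a deliberately simplified sketch: a pronunciation lexicon maps runs of phonemes to candidate words, and a bigram language model scores which word sequences are plausible. This illustrates the general technique only, not the Stanford team’s actual decoder; the lexicon entries and probabilities are invented for the example.

```python
import math

# Toy phoneme-to-word decoding with a language-model prior.
# NOT the authors' decoder: all data below are hypothetical.

# Hypothetical pronunciation lexicon: phoneme sequence -> word.
LEXICON = {
    ("AY",): "I",
    ("AE", "M"): "am",
    ("HH", "AH", "NG", "G", "R", "IY"): "hungry",
}

# Hypothetical bigram probabilities P(word | previous word):
# "which words should come before others".
BIGRAMS = {
    ("<s>", "I"): 0.5,
    ("I", "am"): 0.6,
    ("am", "hungry"): 0.3,
}

def decode(phonemes):
    """Segment a phoneme stream into words, scoring every valid
    segmentation with the bigram model (exhaustive search is
    fine for toy inputs)."""
    best_logp, best_words = float("-inf"), []

    def search(pos, prev, words, logp):
        nonlocal best_logp, best_words
        if pos == len(phonemes):
            if logp > best_logp:
                best_logp, best_words = logp, words
            return
        for end in range(pos + 1, len(phonemes) + 1):
            word = LEXICON.get(tuple(phonemes[pos:end]))
            if word is not None:
                p = BIGRAMS.get((prev, word), 1e-6)  # smoothing floor
                search(end, word, words + [word], logp + math.log(p))

    search(0, "<s>", [], 0.0)
    return best_words

print(decode(["AY", "AE", "M", "HH", "AH", "NG", "G", "R", "IY"]))
# -> ['I', 'am', 'hungry']
```

In the real system, a neural network produces phoneme estimates at every time step and a far larger vocabulary is searched, but the principle is the same: candidate word sequences are ranked by how likely their words are to follow one another.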

To teach the algorithm to recognize the patterns of brain activity associated with each phoneme, Bennett underwent about 25 training sessions, each lasting about four hours, during which she tried to repeat randomly chosen phrases from a large data set consisting of samples of conversations between people talking on the phone. The whole system improved as it became familiar with the patient’s brain activity during her attempts to speak.

“Imagine how different it would be to do everyday activities like shopping, going to appointments, ordering food, walking into a bank, talking on the phone, expressing love or appreciation—even arguing—when you can communicate your thoughts in real time,” Bennett says.

“This is a scientific proof of concept, not an actual device that people can use in everyday life,” Willett says. The described device is currently licensed for research use only and is not commercially available.

Reference:

Francis R. Willett et al. “A high-performance speech neuroprosthesis”. Nature.

Sean L. Metzger et al. “A high-performance neuroprosthesis for speech decoding and avatar control”. Nature.
