Revolutionary AI Technology Allows Woman to Speak Again After Stroke

by time news

A woman who didn’t utter a word for years after a paralyzing stroke has regained the ability to speak through artificial intelligence.

The groundbreaking procedure uses an array of 253 electrodes, which were implanted in the brain of Ann Johnson, 48, and then linked to a bank of computers through a small port connection affixed to her head.

The electrodes, which cover the area of the brain where speech is processed, intercept her brain signals and send them to the computers, which in turn generate a brown-haired avatar representing Johnson.

The on-screen avatar — which Johnson chose herself — is then able to “speak” what she is thinking, using a copy of her voice recorded years ago during a 15-minute toast she gave at her wedding.

The avatar also blinks its eyes and uses facial expressions such as smiles, pursed lips, and raised eyebrows, making it seem more lifelike.

“We’re just trying to restore who people are,” Dr. Edward Chang, chairman of neurological surgery at the University of California, San Francisco, told the New York Times.

Johnson, a high school math teacher who also coached volleyball and basketball in Saskatchewan, had been married for two years and had two children when a stroke left her paralyzed.

“Not being able to hug and kiss my children hurt so bad, but it was my reality,” Johnson said. “The real nail in the coffin was being told I couldn’t have more children.”

After years of rehabilitation, Johnson gradually regained some movement and facial expression, but she remained unable to speak and was fed through a tube until swallowing therapy allowed her to eat finely chopped or soft foods.

“My daughter and I love cupcakes,” Johnson said.

The team from UCSF, together with colleagues from the University of California, Berkeley, said this is the first time either speech or facial expressions have been synthesized from brain signals.

To train the AI system, Johnson had to silently “repeat” different phrases from a 1,024-word vocabulary over and over until the computer recognized the brain activity pattern associated with each sound.
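
That pattern-matching step can be pictured with a toy sketch. The snippet below is purely illustrative and is not the team's actual model: it simulates electrode readings, averages repeated silent "attempts" into one template per phoneme, and labels new activity by its nearest template. Every name, number of repetitions, and data point in it is invented for the example.

```python
import numpy as np

# Toy illustration (not the UCSF team's model): learn one "template" of
# neural activity per phoneme from repeated attempts, then classify new
# activity by nearest template. All data here is simulated.

rng = np.random.default_rng(0)

N_ELECTRODES = 253                   # matches the implanted array size in the article
PHONEMES = ["HH", "AH", "L", "OW"]   # tiny subset of the 39 phonemes

# Hidden "true" activity pattern per phoneme (hypothetical).
true_patterns = {p: rng.normal(size=N_ELECTRODES) for p in PHONEMES}

def simulate_repetition(phoneme):
    """One noisy observation of the activity pattern for a phoneme."""
    return true_patterns[phoneme] + rng.normal(scale=0.5, size=N_ELECTRODES)

# "Training": average many silent repetitions into one template per phoneme.
templates = {
    p: np.mean([simulate_repetition(p) for _ in range(50)], axis=0)
    for p in PHONEMES
}

def classify(activity):
    """Return the phoneme whose template is closest to the new activity."""
    return min(templates, key=lambda p: np.linalg.norm(activity - templates[p]))

print(classify(simulate_repetition("L")))  # -> "L" (with high probability)
```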

Instead of whole words, the AI program was taught to recognize phonemes, the units of speech that form spoken words. “Hello,” for example, contains four phonemes: “HH,” “AH,” “L,” and “OW.”
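
The labels above follow ARPAbet-style phoneme notation. As a rough illustration of how words break down into phonemes, here is a hypothetical mini-lexicon; the LEXICON table and the to_phonemes function are invented for the example and are not part of the team's software.

```python
# Hypothetical mini-lexicon mapping words to ARPAbet-style phonemes.
LEXICON = {
    "hello": ["HH", "AH", "L", "OW"],
    "how":   ["HH", "AW"],
    "are":   ["AA", "R"],
    "you":   ["Y", "UW"],
}

def to_phonemes(sentence):
    """Spell out a sentence as its phoneme sequence (unknown words skipped)."""
    return [ph for word in sentence.lower().split()
            for ph in LEXICON.get(word, [])]

print(to_phonemes("hello how are you"))
# ['HH', 'AH', 'L', 'OW', 'HH', 'AW', 'AA', 'R', 'Y', 'UW']
```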

By recognizing 39 phonemes, the AI program can decode Johnson’s brain signals into complete words at a rate of about 80 words a minute — roughly half the rate of normal person-to-person dialogue.
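
Reassembling words from a stream of recognized phonemes can be sketched with a toy greedy lookup, shown below. The team's actual decoder is a trained neural network, so this is only a schematic of the idea, reusing the invented mini-lexicon from above.

```python
# Toy greedy decoder (not the team's neural decoder): turn a stream of
# recognized phonemes back into words by longest lexicon match.
LEXICON = {
    ("HH", "AH", "L", "OW"): "hello",
    ("HH", "AW"): "how",
    ("AA", "R"): "are",
    ("Y", "UW"): "you",
}
MAX_LEN = max(len(k) for k in LEXICON)

def decode(phonemes):
    words, i = [], 0
    while i < len(phonemes):
        for n in range(MAX_LEN, 0, -1):        # try the longest match first
            chunk = tuple(phonemes[i:i + n])
            if chunk in LEXICON:
                words.append(LEXICON[chunk])
                i += n
                break
        else:
            i += 1                             # skip an unrecognized phoneme
    return " ".join(words)

print(decode(["HH", "AH", "L", "OW", "HH", "AW", "AA", "R", "Y", "UW"]))
# -> "hello how are you"
```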

Sean Metzger, who developed the decoder in the joint Bioengineering Program at UC Berkeley and UCSF, told South West News Service that the program’s “accuracy, speed, and vocabulary are crucial.

“It’s what gives a user the potential, in time, to communicate almost as fast as we do, and to have much more naturalistic and normal conversations.”

The team is now working on a wireless version of the interface, which would free users from the cables that currently tether them to the bank of computers.

Chang, who has worked on brain-computer interfaces for more than a decade, hopes the team's latest advance will lead in the near future to an approved system that generates speech from brain signals.

“Our goal is to restore a full, embodied way of communicating, which is really the most natural way for us to talk with others,” Chang told SWNS.

“These advancements bring us much closer to making this a real solution for patients,” Chang added.
