Saturday, August 16, 2025

After 18 Years Without A Voice, AI-Powered Brain Implant Helps Stroke Survivor Speak Again

At age 30, Ann Johnson’s life in Saskatchewan was full. She taught math and physical education at a high school, coached volleyball and basketball, and had recently married and welcomed her first child. At her wedding, she delivered a 15-minute speech filled with joy.

Everything changed in 2005, when she suffered a brainstem stroke while playing volleyball with friends. The stroke left her with locked-in syndrome – near-total paralysis and an inability to speak. “She would try to speak, but her mouth wouldn’t move and no sound would come out,” researchers said. For nearly two decades, she communicated slowly using an eye-tracking system, spelling out words one letter at a time.

In 2022, Johnson became the third participant in a clinical trial run by researchers at the University of California, San Francisco, and the University of California, Berkeley. The project aimed to restore speech using a brain-computer interface, or neuroprosthesis, that bypasses the body’s damaged connections.

Ann Johnson became paralyzed after a brainstem stroke in 2005, at age 30. As the third participant in a clinical trial led by researchers at UC Berkeley and UC San Francisco, she heard her voice again in 2022, the first time in 18 years. Noah Berger, 2023

“We were able to get a good sense of the part of the brain that is actually responsible for speech production,” said Gopala Anumanchipalli, an assistant professor at UC Berkeley who began the work in 2015 as a postdoctoral researcher with Edward Chang, a UCSF neurosurgeon. “From there, they figured out how to computationally model the process so that they could synthesize from brain activity what someone is trying to say.”

The device records signals from the brain’s speech centers, sending them to an AI model trained to translate the activity into text, sound, or even facial animation. “Just like how Siri translates your voice to text, this AI model translates the brain activity into the text or the audio or the facial animation,” said Kaylo Littlejohn, a Ph.D. student and co-lead on the study.
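The pipeline described above can be sketched in miniature: windows of neural activity stream in, a trained model maps each window to an output, and the results are emitted as text. This is purely an illustrative toy under assumed names (`DummyDecoder`, `decode_stream`); the study's actual models and interfaces are not public API.

```python
from typing import List


class DummyDecoder:
    """Stand-in for the trained AI model; maps feature vectors to words.

    A real decoder would run a neural network over recorded brain
    activity; here we just pick the word whose index matches the
    strongest feature channel.
    """

    def __init__(self, vocab: List[str]):
        self.vocab = vocab

    def predict(self, features: List[float]) -> str:
        strongest = features.index(max(features))
        return self.vocab[strongest % len(self.vocab)]


def decode_stream(decoder: DummyDecoder, windows: List[List[float]]) -> str:
    # Translate each window of brain activity into a word, the way the
    # implant streams signals to the computer for translation.
    return " ".join(decoder.predict(w) for w in windows)


decoder = DummyDecoder(["hello", "world", "today"])
windows = [[0.1, 0.9, 0.2], [0.8, 0.1, 0.1]]
print(decode_stream(decoder, windows))  # prints "world hello"
```

The same decoded representation could feed a text display, a speech synthesizer, or an avatar animation, which is why the researchers describe text, audio, and facial animation as interchangeable outputs of one model.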

To give Johnson an embodied experience, researchers had her choose from a selection of avatars and used a recording of her wedding speech to recreate her voice. An implant resting on the region of her brain that processes speech, plugged into a nearby computer, acted as a kind of thought decoder. The team then showed her sentences and asked her to try to say them.

“She can’t, because she has paralysis, but those signals are still being invoked from her brain, and the neural recording device is sensing those signals,” said Littlejohn. The device then sends them to the computer where the AI model resides, and the model translates them into output. –Berkeley.edu

For Johnson, the trial was emotional. “What do you think of my artificial voice? Tell me about yourself. I am doing well today,” she asked her husband during one session, speaking in her recreated voice through the avatar she had chosen.

“We didn’t want to read her mind,” Anumanchipalli emphasized. “We really wanted to give her the agency to do this. In some sessions where she’s doing nothing, we have the decoder running, and it does nothing because she’s not trying to say anything. Only when she’s attempting to say something do we hear a sound or action command.”

The early version of the system had an eight-second delay between prompting Johnson and producing speech. But a March study in Nature Neuroscience described a streaming architecture that reduced that to about one second, enabling near-real-time translation. While the avatar in earlier tests bore only a passing resemblance to her, researchers say more lifelike 3D photorealistic versions are possible. “We can imagine that we could create a digital clone that is very much plugged in … with all the preferences, like how Zoom lets us have all these effects,” Anumanchipalli said.

Johnson’s implant was removed in February 2024 for reasons unrelated to the trial, but she continues to advise the research team. She has urged them to develop wireless implants and told them the streaming synthesis “made her feel in control.”

Looking ahead, Anumanchipalli said the goal is for neuroprostheses to be “plug-and-play” and part of standard medical care. “If that means they have a digital version of themselves communicating for them, that’s what they need to be able to do,” he said.

Johnson hopes to work as a counselor in a physical rehabilitation facility, ideally using such a device. “I want patients there to see me and to know their lives are not over now,” she wrote to a UCSF reporter. “I want to show them that disabilities don’t need to stop us or slow us down.”
