AI Restores a Woman’s Voice 18 Years After Stroke Stole Her Speech
After 18 years unable to speak due to a stroke, Ann Johnson regained her voice through an AI-powered brain-computer interface. This technology translates brain signals into speech, restoring communication.

A Stroke Took Her Voice—18 Years Later, AI Helped Her Speak Again
Ann Johnson was 30 when a brainstem stroke left her paralyzed and unable to speak. For 18 years, she lived with locked-in syndrome, a rare condition marked by near-total paralysis and loss of natural communication. Then, in 2022, a clinical trial by researchers from UC Berkeley and UC San Francisco gave her back her voice through a brain-computer interface powered by AI.
From Brain Signals to Speech
In 2015, a team led by a neurosurgeon and an electrical engineering researcher set out to decode how the brain produces speech. They identified the brain region responsible for speech production and developed computational models that translate neural activity into synthesized speech.
By bypassing damaged nerves and muscles, their neuroprosthesis reads brain signals directly from the speech area. This technology formed the basis of the clinical trial that Ann joined as the third participant in 2022.
Communication After Paralysis
Since her stroke, Ann has regained limited muscle control, including neck movement and facial expressions. Before the trial, she communicated primarily with an eye-tracking system, spelling out words at about 14 words per minute, far slower than conversational speech, which averages around 160 words per minute.
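To make that gap concrete, here is a quick back-of-the-envelope comparison; the rates come from the article, while the message length is an assumed example:

```python
# Rough comparison of communication rates (rates from the article;
# message length is an assumed example).
SPELLING_WPM = 14    # eye-tracking spelling system
SPEECH_WPM = 160     # typical conversational speech

message_words = 25   # a couple of sentences

spelling_seconds = message_words / SPELLING_WPM * 60
speech_seconds = message_words / SPEECH_WPM * 60

print(f"Eye-tracking spelling: {spelling_seconds:.0f} s")  # ~107 s
print(f"Conversational speech: {speech_seconds:.0f} s")    # ~9 s
```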
Hearing her thoughts spoken aloud again after nearly two decades was deeply emotional for her. The trial demonstrated how AI can restore communication for people severely affected by paralysis.
Translating Thought Into Voice, Not Reading Minds
The system centers on an implant placed over the brain’s speech-production region. When Ann attempts to speak, the implant captures her brain activity and sends it to a nearby computer, where an AI model decodes the signals into text or audio output.
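The article does not describe the model itself, but the general shape of such a decoder (neural features in, sound-unit predictions out) can be sketched as follows. Everything here, from the channel count to the architecture, is a hypothetical illustration rather than the trial’s actual system:

```python
import torch
import torch.nn as nn

# Hypothetical sketch of a neural-signal decoder; channel count,
# architecture, and phoneme inventory are all invented for illustration.

N_CHANNELS = 253   # assumed number of electrode channels on the implant
N_PHONEMES = 41    # assumed phoneme inventory, including silence

class SpeechDecoder(nn.Module):
    """Maps windows of neural features to per-timestep phoneme logits."""

    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(input_size=N_CHANNELS, hidden_size=256,
                          num_layers=2, batch_first=True)
        self.head = nn.Linear(256, N_PHONEMES)

    def forward(self, features):          # features: (batch, time, channels)
        hidden, _ = self.rnn(features)
        return self.head(hidden)          # (batch, time, phoneme logits)

# One second of neural features at 100 Hz (simulated here with noise).
features = torch.randn(1, 100, N_CHANNELS)
logits = SpeechDecoder()(features)
phoneme_ids = logits.argmax(dim=-1)      # greedy decode; real systems pair this
                                         # with a language model to produce text
```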
Importantly, the technology activates only when Ann deliberately tries to speak; it does not decode stray thoughts or unspoken intentions. This design preserves user control and privacy.
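One plausible way to implement that kind of gating, sketched below with invented names and thresholds rather than the trial’s actual method, is a lightweight detector that forwards data to the full decoder only when it is confident the user is attempting to speak:

```python
import numpy as np

ATTEMPT_THRESHOLD = 0.9   # assumed confidence cutoff, tuned per user in practice

def detect_speech_attempt(window: np.ndarray) -> float:
    """Stand-in for a trained attempt detector; returns P(attempted speech)."""
    # Fake a probability from signal variance; a real detector would be learned.
    return float(1.0 / (1.0 + np.exp(-(window.std() - 1.0))))

def gated_decode(window: np.ndarray, decoder):
    """Run the full decoder only when the user deliberately tries to speak."""
    if detect_speech_attempt(window) < ATTEMPT_THRESHOLD:
        return None               # idle brain activity is never decoded
    return decoder(window)

# Low-variance "resting" signal stays private; the decoder is never invoked.
resting = np.random.randn(100, 253) * 0.1
print(gated_decode(resting, lambda w: "decoded text"))   # -> None
```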
During the trial, Ann chose an avatar to represent her, and researchers recreated her voice from a recording of her wedding speech, making the experience feel embodied and personal.
Current Capabilities and Future Improvements
The early version of the system had an eight-second delay between thought and spoken output, and the synthesized voice sounded somewhat robotic. However, recent advances published in Nature Neuroscience drastically reduced this delay to about one second, enabling near real-time speech synthesis using streaming AI models.
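The streaming architecture is not detailed here, but the latency difference between the two approaches comes down to when audio is emitted. The sketch below, with hypothetical function names and chunk sizes, contrasts the old batch pipeline with an incremental one:

```python
def stream_decode(neural_chunks, decoder, synthesizer):
    """Streaming: emit audio per ~80 ms chunk, so playback starts within ~1 s."""
    for chunk in neural_chunks:
        phonemes = decoder(chunk)      # incremental decode of one short window
        yield synthesizer(phonemes)    # audio is played back almost immediately

def batch_decode(neural_chunks, decoder, synthesizer):
    """The earlier approach: decode the full utterance, then synthesize once."""
    decoded = [p for chunk in neural_chunks for p in decoder(chunk)]
    return synthesizer(decoded)        # nothing is heard until the very end
```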
The avatar now moves its mouth and mimics facial expressions in sync with the speech, though it is not yet photorealistic. Researchers anticipate that 3D photorealistic avatars could arrive within a few years, given further progress in the underlying science, engineering, and clinical translation.
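Lip-syncing an avatar to synthesized speech typically means mapping decoded sound units to mouth shapes (visemes) on a shared timeline. The sketch below is a simplified, hypothetical illustration of that idea, not the project’s actual animation system:

```python
# Hypothetical phoneme-to-viseme table; real animation systems are far richer.
PHONEME_TO_VISEME = {
    "AA": "open", "IY": "wide", "UW": "round",
    "M": "closed", "B": "closed", "F": "lip_bite",
}

def lip_sync(phoneme_timeline):
    """Turn (phoneme, start_sec, end_sec) tuples into avatar mouth keyframes."""
    keyframes = []
    for phoneme, start, end in phoneme_timeline:
        shape = PHONEME_TO_VISEME.get(phoneme, "neutral")
        keyframes.append({"time": start, "mouth": shape})
        keyframes.append({"time": end, "mouth": "neutral"})
    return keyframes

# "ma": lips close for the M, then open for the vowel.
print(lip_sync([("M", 0.00, 0.12), ("AA", 0.12, 0.30)]))
```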
Looking Ahead
Ann had her implant removed in early 2024 for reasons unrelated to the trial, but she continues to communicate with the research team using her existing assistive technology. She valued hearing her own voice again and hopes future implants will be wireless, a capability the team is actively developing.
Her hope is to become a counselor in physical rehabilitation, using neuroprostheses to communicate with clients and demonstrate that disabilities don’t have to limit life’s possibilities.
The researchers envision a future where neuroprostheses become plug-and-play devices, standard in care rather than experimental. These tools would restore essential communication, improving quality of life for many.
References
- Nature Neuroscience