Researchers have developed a brain-computer interface capable of translating brain signals into speech in near real-time. The system offers hope to patients with severe paralysis who have lost the ability to speak, allowing them to communicate more fluently. It works by intercepting the electrical brain signals that would normally drive the speech muscles, before any muscle movement occurs, and can produce synthesized speech up to eight times faster than previous technologies. By leveraging deep learning algorithms to decode these signals as they arrive, the work marks a significant step forward in neuroprosthetics for people with speech impairments.
"Our streaming approach brings the same rapid speech decoding capacity of devices like Alexa and Siri to neuroprostheses," said Gopala Anumanchipalli.
"Using a similar type of algorithm, we found that we could decode neural data and, for the first time, enable near-synchronous voice streaming."
"We are essentially intercepting signals where the thought is translated into articulation and in the middle of that motor control," explained Cheol Jun Cho.
The project builds on work the team published in 2023, sharply reducing the latency between the intent to speak and the resulting synthesized speech.
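The latency gain described above comes from streaming decoding: emitting speech as neural data arrives rather than waiting for a whole utterance. The sketch below is a minimal, illustrative Python comparison of the two strategies; the function names and the toy "decoder" are hypothetical stand-ins, not the researchers' actual model.

```python
# Illustrative sketch (not the published system): contrast batch decoding,
# which waits for the full signal, with streaming decoding, which emits
# partial output after every chunk of neural data.

def batch_decode(chunks, decode):
    """Wait for all chunks, decode once. First output only after len(chunks)."""
    return [(len(chunks), decode(chunks))]

def streaming_decode(chunks, decode):
    """Decode incrementally as each chunk arrives.

    Returns (chunks_seen, partial_text) pairs; first output after one chunk,
    which is why perceived latency drops.
    """
    seen, outputs = [], []
    for i, chunk in enumerate(chunks, start=1):
        seen.append(chunk)
        outputs.append((i, decode(seen)))  # emit without waiting for the rest
    return outputs

# Toy stand-in for a neural decoder: joins chunk labels into text.
toy_decode = lambda cs: " ".join(cs)
```

With chunks `["hello", "world"]`, `batch_decode` produces its first (and only) output after both chunks, while `streaming_decode` already emits a partial result after the first chunk.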