A brain-computer interface (BCI) decoded instructed inner speech on command with up to 74% accuracy, and a proof-of-concept real-time system decoded self-paced imagined sentences from a 125,000-word vocabulary with a word error rate as low as 26%. The system translates imagined speech into output using neural recordings. BCIs already enable control of external devices that improve daily living, including wheelchairs, robotic limbs, computers, and smartphones; decoding inner speech offers the potential to restore communication for people unable to speak or move due to neurological conditions.
"We discovered that inner speech is robustly represented and demonstrated a proof-of-concept real-time inner-speech BCI that can decode self-paced imagined sentences from a large vocabulary (125,000 words)," wrote the study's senior author Frank Willett, PhD, Co-Director of the Neural Prosthetics Translational Laboratory and Assistant Professor of Neurosurgery at Stanford University, in collaboration with a team of over 20 scientists at Stanford Medicine, Massachusetts General Hospital, Harvard Medical School, Emory University, Georgia Institute of Technology, University of California, Davis, and Brown University.
In addition to Willett, the study co-authors include Erin Kunz, Benyamin Abramovich Krasa, Foram Kamdar, Donald Avansino, Nick Hahn, Seonghyun Yoon, Akansha Singh, Samuel Nason-Tomaszewski, Nicholas Card, Justin Jude, Brandon Jacques, Payton Bechefsky, Carrina Iacobacci, Leigh Hochberg, Daniel Rubin, Ziv Williams, David Brandman, Sergey Stavisky, Nicholas AuYong, Chethan Pandarinath, Shaul Druckmann, and Jaimie Henderson. The researchers report that, when decoding inner speech in real time from the 125,000-word vocabulary, the BCI achieved a word error rate as low as 26%.
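For context, word error rate is the standard metric for speech decoding: the word-level edit distance (substitutions, insertions, and deletions) between the decoded sentence and the intended sentence, divided by the number of intended words. A minimal sketch of the computation follows; this illustrates the metric only and is not the study's evaluation code.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance between hypothesis and reference,
    divided by the number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming (Levenshtein) edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + cost,  # substitution or match
            )
    return d[len(ref)][len(hyp)] / len(ref)


# One substitution out of six intended words -> WER of 1/6 (~17%).
print(word_error_rate("i want a drink of water", "i want a glass of water"))
```

By this definition, a 26% word error rate on open-ended sentences means roughly three of every four decoded words matched the intended sentence in order.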