A mental privacy safeguard was implemented before testing. Patients imagined saying short sentences displayed on a screen while their neural signals were recorded. Accuracy reached 86 percent for the best patient with a 50-word vocabulary and fell to 74 percent with a 125,000-word vocabulary. Decoding unstructured inner speech in a memorized arrow-sequence task produced performance only slightly above chance, and attempts to decode complex uncued phrases produced unintelligible outputs. Likely limitations include hardware constraints, such as the limited electrode count and recording precision, and the possible involvement of other brain regions. Two follow-up projects are underway.
Once the mental privacy safeguard was in place, the team began testing its inner speech system, starting with cued words. The patients sat in front of a screen that displayed a short sentence and had to imagine saying it. Performance varied, reaching 86 percent accuracy for the best-performing patient on a limited vocabulary of 50 words, but dropping to 74 percent when the vocabulary was expanded to 125,000 words.
The first unstructured inner speech test involved watching a sequence of arrows pointing up, right, or left on a screen. The task was to repeat that sequence after a short delay using a joystick. The expectation was that patients would rehearse sequences like "up, right, up" in their heads to memorize them; the goal was to see whether the prosthesis would catch that inner rehearsal. It did, but performance was only just above chance level.
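To put "just above chance level" in context, here is a minimal sketch of the chance baselines for a task like this. It assumes three equally likely arrow directions; the article does not state how accuracy was scored (per arrow versus per whole sequence), so both baselines are shown as assumptions.

```python
# Hypothetical chance baselines for an arrow-sequence task.
# Assumptions (not stated in the article): three equally likely
# directions (up, right, left) and a uniform random guesser.

def chance_per_symbol(n_classes: int = 3) -> float:
    """Chance accuracy when guessing a single arrow direction."""
    return 1 / n_classes

def chance_per_sequence(n_classes: int = 3, length: int = 3) -> float:
    """Chance accuracy when the entire sequence must be guessed correctly."""
    return (1 / n_classes) ** length

print(f"per-symbol chance:   {chance_per_symbol():.1%}")    # 33.3%
print(f"per-sequence chance: {chance_per_sequence():.1%}")  # 3.7%
```

Under these assumptions, a decoder scoring much above roughly 33 percent per arrow (or about 4 percent per three-arrow sequence) is picking up a genuine signal, which matches the article's description of performance that is real but modest.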
In its current state, Krasa thinks, the inner speech neural prosthesis is a proof of concept. "We didn't think this would be possible, but we did it and that's exciting! The error rates were too high, though, for someone to use it regularly," Krasa says. He suggested the key limitation might be in hardware: the number of electrodes implanted in the brain and the precision with which signals can be recorded from the neurons.