Researchers at UC San Francisco have developed a brain-computer interface (BCI) that allowed a paralysed man to control a robotic arm by imagining movements, achieving consistent functionality for seven months. This advancement relies on an AI model that adapts to variations in brain signals, enhancing control accuracy over time. The study, published in the journal Cell, demonstrates a significant leap in BCI technology, building on insights from animal studies about evolving brain patterns during learning. Funding was provided by the US National Institutes of Health to explore these capabilities further.
"This blending of learning between humans and AI is the next phase for these brain-computer interfaces," said Ganguly. "It's what we need to achieve sophisticated, lifelike function."
The key breakthrough involved understanding how brain activity shifts from day to day as the participant repeatedly imagined the same movements.
Once the AI system was trained to account for these changes, it maintained performance for months at a time.
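The idea of a decoder that stays useful despite day-to-day signal drift can be illustrated with a toy model. This is a hypothetical sketch, not the study's actual AI system: it simulates neural features whose baseline drifts each "day" while the underlying movement intent stays the same, and compares a decoder calibrated once against one briefly recalibrated each day.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (all names and numbers are assumptions, not from the study):
# 16 recorded neural features, a 2-D intended movement.
n_features, n_targets = 16, 2
readout = rng.normal(size=(n_features, n_targets))  # fixed intent -> movement map

def record_day(drift):
    """One day's data: identical movement intent, but a drifted neural baseline."""
    intent = rng.normal(size=(200, n_features))
    X = intent + drift            # recorded features drift from day to day
    y = intent @ readout          # intended movement depends only on intent
    return X, y

def fit_decoder(X, y):
    """Least-squares linear decoder with an intercept term."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    W, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return W

def decode(W, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return Xb @ W

# Day 0: calibrate once, then freeze one copy of the decoder.
X0, y0 = record_day(drift=0.0)
W0 = fit_decoder(X0, y0)

# Later days: signals drift further each day. The frozen decoder degrades,
# while a decoder recalibrated on each day's data keeps tracking intent.
for day in range(1, 8):
    X, y = record_day(drift=0.5 * day)
    stale_err = np.mean((decode(W0, X) - y) ** 2)                  # frozen
    recal_err = np.mean((decode(fit_decoder(X, y), X) - y) ** 2)   # updated

print(f"day 7: frozen decoder MSE={stale_err:.3f}, recalibrated MSE={recal_err:.2e}")
```

In this toy version the recalibrated decoder's error stays near zero while the frozen decoder's error grows with the drift; the study's model is far more sophisticated, learning the structure of the drift itself rather than simply refitting.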
Ganguly suspected that the same gradual reshaping of brain activity observed in animals during learning was occurring in humans, which would explain why earlier BCIs quickly lost their ability to interpret brain signals.