AI Decodes Visual Brain Activity and Writes Captions for It
Briefly

"Reading a person's mind using a recording of their brain activity sounds futuristic, but it's now one step closer to reality. A new technique called mind captioning' generates descriptive sentences of what a person is seeing or picturing in their mind using a read-out of their brain activity, with impressive accuracy. The technique, described in a paper published today in Science Advances, also offers clues for how the brain represents the world before thoughts are put into words."
"Previous attempts have identified only key words that describe what a person saw rather than the complete context, which might include the subject of a video and actions that occur in it, says Tomoyasu Horikawa, a computational neuroscientist at NTT Communication Science Laboratories in Kanagawa, Japan. Other attempts have used artificial intelligence (AI) models that can create sentence structure themselves, making it difficult to know whether the description was actually represented in the brain, he adds."
A technique called mind captioning generates descriptive sentences of visual experiences from brain-activity recordings with impressive accuracy. The model predicts what a person is looking at with considerable detail. Previous methods often produced only keywords or used generative AI that obscured whether sentence structure reflected brain content. Decoding complex content such as short videos or abstract shapes has proven difficult. The new approach links deep-language AI with brain read-outs to improve contextual decoding beyond single words. The technique offers clues about prelinguistic brain representations and could help people with language difficulties, such as those caused by strokes, communicate better.
Read at www.nature.com