Google Uses DolphinGemma AI to Decode Dolphin Communication | Entrepreneur
Briefly

Google has developed an AI model named DolphinGemma, in collaboration with the Georgia Institute of Technology and the Wild Dolphin Project, to interpret dolphin sounds. Utilizing 40 years of audio-visual data of Atlantic spotted dolphins, DolphinGemma analyzes patterns and structures in dolphin vocalizations, aiming to reveal common meanings in their communication. Equipped to identify nuances humans might miss, this AI model could vastly accelerate the study of dolphin language. Field trials with the model are set to begin, with researchers hopeful about new insights into dolphin interactions.
"Feeding dolphin sounds into an AI model like DolphinGemma will give us a really good look at if there are pattern subtleties that humans can't pick out."
"The goal would be to one day speak dolphin."
"Just like how an AI model predicts the next word in a typed sentence, Google's DolphinGemma AI model aims to use its training data to predict the next sound a dolphin makes based on observed patterns."
"AI has the advantage of picking up on patterns that human beings might not recognize in dolphin audio and analyzing the data far more quickly than humans can."