Google DeepMind's new AI models help robots perform physical tasks, even without training
Briefly

Google DeepMind's Gemini Robotics introduces AI models designed to improve robots' dexterity, interactivity, and general intelligence in real-world tasks. Built on the Gemini 2.0 architecture, the new models allow robots to understand and adapt to unfamiliar situations while performing precise actions, such as folding paper or removing bottle caps. The advancements focus on enabling robots to respond more effectively to changes in their environment. Gemini Robotics also includes an embodied reasoning version for enhanced understanding and interaction, promising more robust and capable robotic solutions.
Gemini Robotics enables robots to generalize to new scenarios and perform more interactive and precise tasks, significantly enhancing their capabilities and responsiveness in real-world applications.
Google DeepMind's Carolina Parada emphasized that the new models' advancements in generality, interactivity, and dexterity allow for the development of more capable and responsive robots.
Read at The Verge