Gemini Live will learn to peer through your camera lens in a few weeks
Briefly

At Mobile World Congress, Google announced that Gemini Live, a feature allowing users to interact with the AI through live video and screen sharing, will launch in the coming weeks. The enhancement builds on Gemini's existing multimodal capabilities, which already include processing text and images, and extends them to real-time video input. Users will be able to point their cameras at objects and ask Gemini Live questions about what it sees. Google previously demoed this functionality under Project Astra, showcasing active, contextual interactions that allow for more natural communication.
Once Gemini Live launches, users will be able to show the AI something in real time rather than only describing it verbally.
The upcoming Gemini update adds live video streaming and screen sharing, letting users interact with the AI about what is on camera or on screen.
Google's Project Astra demo showed Gemini Live answering live queries, offering real-time insights as users panned their camera across various objects.
Gemini has gone through a range of projects and iterations, but Gemini Live aims to deliver a more natural, conversational interaction built on these video capabilities.
Read at Ars Technica