Google's Visual Search Can Now Answer Even More Complex Questions
Briefly

Google Lens, introduced in 2017, changed search by letting users point their camera at an object to identify it. This capability, reminiscent of science fiction, removed the need for cumbersome text descriptions. Lens has since evolved and now handles around 20 billion searches each month, underscoring the growing reliance on visual search technology.
With the latest updates to Google Lens, users can expect an even richer shopping experience. The revamped Lens provides more context for purchases, including direct links to buy items, customer reviews, and comparative shopping tools. This matters for Google because shopping remains a primary use of Lens, which competes with platforms like Amazon and Pinterest that also offer visual search.
The new multimodal features of Google Lens let users combine video, images, and voice searches seamlessly. Rather than merely identifying objects, users can ask specific questions in real time, such as what kind of clouds they are looking at or which sneakers someone is wearing and where to buy them. This makes searching considerably more interactive and hands-free, enhancing the overall user experience of the tool.
Google Lens is also gaining real-time video capture, which sets it apart from traditional search methods. Users can identify objects continuously through a live camera view, extending the tool's capabilities beyond recognizing items in still images. This shift not only improves everyday usability but also signals the future direction of search technology.
Read at WIRED