"Whenever interacting with generative AI tools, a healthy sense of skepticism-and caution-is critical. Google even includes a disclaimer baked into its Gemini chatbot reminding users that it makes mistakes. The Auto Browse tool goes a step further. "Use Gemini carefully and take control if needed," reads persistent text that shows in the chatbot sidebar every time Auto Browse is running. "You are responsible for Gemini's actions during tasks.""
"Before you try it out, you also need to think about the security risks associated with this kind of automation. Generative AI tools are vulnerable to being compromised through prompt injection attacks on malicious websites. These attacks attempt to divert the bot from its task. The potential vulnerabilities in Google's Auto Browse have not been fully examined by outside researchers, but the risks may be similar to other AI tools that take control of your computer."
Auto Browse automates online tasks using Google's Gemini model and displays a persistent warning instructing users to take control if needed and to accept responsibility for its actions. The tool can make mistakes and requires an active, skeptical approach to its outputs and actions. Generative AI agents are vulnerable to prompt injection and similar attacks that can divert or compromise their behavior. External security analysis of Auto Browse remains limited, and its potential risks may mirror those seen in other automated AI tools with control over a computer. Google flags sensitive actions like purchases or social posts for user approval, but automated financial interactions still pose privacy and fraud concerns.
Read at WIRED