Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits
Briefly

Researchers Disclose Google Gemini AI Flaws Allowing Prompt Injection and Cloud Exploits
"A prompt injection flaw in Gemini Cloud Assist that could allow attackers to exploit cloud-based services and compromise cloud resources by taking advantage of the fact that the tool is capable of summarizing logs pulled directly from raw logs, enabling the threat actor to conceal a prompt within a User-Agent header as part of an HTTP request to a Cloud Function and other services like Cloud Run, App Engine, Compute Engine, Cloud Endpoints, Cloud Asset API, Cloud Monitoring API, and Recommender API"
"The vulnerabilities have been collectively codenamed the Gemini Trifecta by the cybersecurity company. They reside in three distinct components of the Gemini suite - A search-injection flaw in the Gemini Search Personalization model that could allow attackers to inject prompts and control the AI chatbot's behavior to leak a user's saved information and location data by manipulating their Chrome search history using JavaScript and leveraging the model's inability to differentiate between legitimate user queries and injected prompts from external sources"
"An indirect prompt injection flaw in Gemini Browsing Tool that could allow attackers to exfiltrate a user's saved information and location data to an external server by taking advantage of the internal call Gemini makes to summarize the content of a web page"
Three now-patched security vulnerabilities affect Google's Gemini AI assistant and could have exposed users to privacy risks and data theft. The vulnerabilities were codenamed the Gemini Trifecta. One flaw enabled prompt injection in Gemini Cloud Assist by embedding prompts within raw logs and User-Agent headers, potentially compromising cloud functions and related services. Another flaw allowed search-injection in the Search Personalization model by manipulating Chrome search history with JavaScript, causing the model to treat injected prompts as legitimate queries. A third flaw enabled indirect prompt injection via the Browsing Tool to exfiltrate saved information and location data to external servers.
Read at The Hacker News
Unable to calculate read time
[
|
]