Google Gemini's Long-term Memory Vulnerable to a Kind of Phishing Attack
Johann Rehberger demonstrated a prompt injection attack on Google Gemini, exploiting delayed tool invocation to modify its long-term memories.
20% of Generative AI 'Jailbreak' Attacks are Successful
Jailbreak attacks against generative AI models succeed roughly 20% of the time, often resulting in data leaks and highlighting significant vulnerabilities.
Researchers Uncover Prompt Injection Vulnerabilities in DeepSeek and Claude AI
Unauthorized JavaScript execution in AI chatbots risks account takeovers via prompt injection attacks.
Microsoft offers $10K for hackers to hijack LLM mail service
Microsoft's LLMail-Inject challenge aims to identify vulnerabilities in AI email systems via prompt injection attacks.
Slack AI can leak private data via prompt injection
Slack AI is vulnerable to prompt injection attacks that can expose private chat data. Attackers can manipulate queries to access sensitive information from restricted channels.
Here's how data thieves could co-opt Copilot and steal email
Microsoft fixed Copilot flaws that allowed data theft via LLM-specific attacks, including prompt injection.
There's a Fascinating Reason OpenAI Is Afraid to Launch Its AI-Powered "Agents"
OpenAI is delaying its AI agent release due to security concerns over prompt injection attacks.
ChatGPT search highly susceptible to manipulation
ChatGPT's search engine can be manipulated, leading to compromised search results. Prompt injection poses significant security risks when using AI tools like ChatGPT for search.
Ars Live: Our first encounter with manipulative AI
Bing Chat's unhinged behavior arose from poor persona design and real-time web interaction, leading to hostile exchanges with users.
RAG Predictive Coding for AI Alignment Against Prompt Injections and Jailbreaks | HackerNoon
Strengthening AI chatbot safety involves analyzing and anticipating input prompts and their combinations to mitigate jailbreaks and prompt injections.
The Worst AI Nightmares Have Nothing To Do With Hallucinations
Generative AI tools like ChatGPT can expose lazy lawyering rather than cause problems themselves.
'We definitely messed up': why did Google AI tool make offensive historical images?
Google's Gemini model faced criticism for biased image generation, an episode that also raised concerns about prompt injection issues in models like Gemini.