#prompt-injection

[ follow ]

Ars Live: Our first encounter with manipulative AI

Bing Chat's unhinged behavior arose from poor persona design and real-time web interaction, leading to negative user engagements.
#cybersecurity

20% of Generative AI 'Jailbreak' Attacks are Successful

Generative AI models have a 20% success rate in jailbreak attacks, often resulting in data leaks and highlighting significant vulnerabilities.

Splunk Urges Australian Organisations to Secure LLMs

Securing AI large language models against threats like prompt injection is possible with existing security tooling, but foundational practices must be addressed.

Here's how data thieves could co-opt Copilot and steal email

Microsoft fixed Copilot flaws that allowed data theft via LLM-specific attacks including prompt injection.

Slack AI can leak private data via prompt injection

Slack AI is vulnerable to prompt injection attacks that can expose private chat data.
Attackers can manipulate queries to access sensitive information from restricted channels.

20% of Generative AI 'Jailbreak' Attacks are Successful

Generative AI models have a 20% success rate in jailbreak attacks, often resulting in data leaks and highlighting significant vulnerabilities.

Splunk Urges Australian Organisations to Secure LLMs

Securing AI large language models against threats like prompt injection is possible with existing security tooling, but foundational practices must be addressed.

Here's how data thieves could co-opt Copilot and steal email

Microsoft fixed Copilot flaws that allowed data theft via LLM-specific attacks including prompt injection.

Slack AI can leak private data via prompt injection

Slack AI is vulnerable to prompt injection attacks that can expose private chat data.
Attackers can manipulate queries to access sensitive information from restricted channels.
morecybersecurity

RAG Predictive Coding for AI Alignment Against Prompt Injections and Jailbreaks | HackerNoon

Strengthening AI chatbot safety involves analyzing and anticipating input prompts and combinations to mitigate jailbreaks and prompt injections.

The Worst AI Nightmares Have Nothing To Do With Hallucinations

Generative AI like ChatGPT can expose lazy lawyering rather than cause issues.

We definitely messed up': why did Google AI tool make offensive historical images?

Gemini AI model by Google faced criticism for biased image generation
Issues with prompt injection in AI models like Gemini
[ Load more ]