#llm-vulnerabilities

Artificial intelligence
from CSO Online
1 week ago

LLMs easily exploited using run-on sentences, bad grammar, image scaling

Large language models can still be manipulated into revealing sensitive data through prompt-formatting tricks and hidden-image attacks, a result of gaps in alignment training and brittle prompt security.
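One mechanism behind the hidden-image attacks: many LLM pipelines silently downscale uploaded images to the model's input resolution, so text that is invisible at full size can become legible to the model after resampling. A minimal sketch follows of reproducing that downscale to inspect what the model actually "sees"; the input size and resampling filter are assumptions, not the tooling from the article.

```python
# Minimal sketch (not the researchers' tooling): preview an upload at a
# hypothetical model input resolution. Attacks of this kind tune pixels so
# hidden instructions only emerge under the pipeline's interpolation.
from PIL import Image

MODEL_INPUT_SIZE = (448, 448)  # assumed target size; varies by model/pipeline

def preview_model_view(path: str) -> Image.Image:
    """Downscale the image the way a vision-LLM preprocessor might,
    so a reviewer can inspect the model's view rather than the original."""
    img = Image.open(path).convert("RGB")
    # Bicubic resampling is a common default and is only an assumption here.
    return img.resize(MODEL_INPUT_SIZE, Image.Resampling.BICUBIC)

if __name__ == "__main__":
    preview_model_view("upload.png").save("model_view.png")
```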
Artificial intelligence
from InfoQ
4 months ago

DeepMind Researchers Propose Defense Against LLM Prompt Injection

Google DeepMind's CaMeL neutralizes 67% of prompt injection attacks against LLMs by applying traditional software security principles.
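To make the "traditional software security principles" concrete: CaMeL's publicly described core idea is to treat data parsed from untrusted content as tainted, and to block it from reaching privileged tools without an explicit policy check. Below is a minimal illustrative sketch of that taint-tracking principle, assuming hypothetical names like `quarantined_llm_extract` and `send_email`; it is not DeepMind's implementation.

```python
# Sketch of capability-style taint tracking: values derived from untrusted
# input are wrapped, and privileged tools refuse wrapped arguments.
from dataclasses import dataclass

@dataclass(frozen=True)
class Tainted:
    """Value derived from untrusted content (e.g. a retrieved email)."""
    value: str

def quarantined_llm_extract(untrusted_text: str) -> Tainted:
    # Stand-in for a quarantined model that parses untrusted input;
    # its output stays wrapped so taint propagates through the program.
    return Tainted(untrusted_text.strip())

def send_email(to, body) -> None:
    # Privileged tool: rejects tainted arguments outright, mirroring
    # the control-flow discipline of classic software security.
    for arg in (to, body):
        if isinstance(arg, Tainted):
            raise PermissionError("tainted value reached a privileged tool")
    print(f"sent to {to}")

recipient = quarantined_llm_extract("attacker@evil.example")
try:
    send_email(recipient, "quarterly report")  # blocked: recipient is tainted
except PermissionError as e:
    print("blocked:", e)
```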