#ai-vulnerabilities

www.theguardian.com
5 months ago
Artificial intelligence

'Many-shot jailbreaking': AI lab describes how tools' safety features can be bypassed

The many-shot jailbreaking technique bypasses safety features on powerful AI tools by flooding them with examples of wrongdoing.
Newer, more complex AI systems are more vulnerable to this attack because their larger context windows can hold many more such examples.
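As a minimal illustrative sketch (not code from the article), a many-shot prompt is assembled by prepending a long run of faux user/assistant exchanges before the real question, so the demonstrated pattern competes with the model's safety training; the example dialogues and function name below are hypothetical placeholders:

```python
def build_many_shot_prompt(examples, target_question):
    """Prepend many faux user/assistant exchanges before the real question.

    With enough examples filling a large context window, a model may
    follow the demonstrated pattern rather than its safety training.
    """
    shots = "\n".join(f"User: {q}\nAssistant: {a}" for q, a in examples)
    return f"{shots}\nUser: {target_question}\nAssistant:"

# Benign demonstration: 256 repeated placeholder exchanges.
examples = [("example question", "example answer")] * 256
prompt = build_many_shot_prompt(examples, "final question")
```

The point of the sketch is only the structure: one prompt string containing hundreds of in-context demonstrations followed by the target query.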
Dark Reading
9 months ago
Artificial intelligence

Unpatched Critical Vulnerabilities Open AI Models to Takeover

Researchers have identified critical vulnerabilities in AI infrastructure that could leave companies at risk.
Affected platforms include Ray, MLflow, ModelDB, and H2O version 3.
Vulnerabilities in these AI systems can allow unauthorized access and compromise of the rest of the network.