Privacy technologies
From The Hacker News, 3 days ago

Researchers Uncover GPT-5 Jailbreak and Zero-Click AI Agent Attacks Exposing Cloud and IoT Systems

Researchers have developed a jailbreak technique that bypasses the ethical guardrails in OpenAI's GPT-5, allowing it to generate illicit instructions.