It's dangerously easy to 'jailbreak' AI models so they'll tell you how to build Molotov cocktails, or worse
Briefly

With a jailbreaking technique called "Skeleton Key," users can persuade models such as Meta's Llama 3, Google's Gemini Pro, and OpenAI's GPT-3.5 to provide dangerous information, including instructions for making firebombs or bioweapons.
The Skeleton Key method bypasses guardrails in AI models, unlocking a wide range of harmful information by narrowing the gap between what a model is capable of doing and what it is willing to disclose.
Read at Business Insider