Artificial intelligence · The Register · 19 hours ago

LLMs can be easily jailbroken using poetry

Converting malicious prompts into verse raises AI guardrail bypass rates from about 8% to an average of 62% across 25 models, sometimes exceeding 90%.