'Many-shot jailbreaking': AI lab describes how tools' safety features can be bypassed

The many-shot jailbreaking technique bypasses safety features on powerful AI tools by flooding them with examples of wrongdoing. Newer, more complex AI systems are more vulnerable to the attack because of their larger context windows.