Artificial intelligence | The Register | 1 week ago

One long sentence is all it takes to make LLMs misbehave

Poorly punctuated, long run-on prompts can bypass LLM guardrails, enabling jailbreaks that expose harmful outputs despite alignment training.