Anthropic offers $20,000 to whoever can jailbreak its new AI safety system

Anthropic's new AI safety measure, Constitutional Classifiers, effectively prevents jailbreak attempts and reinforces safe content usage.