Exclusive: New Claude Model Triggers Stricter Safeguards at Anthropic
Briefly

Anthropic, a leading AI company, has warned that its models, including the newly launched Claude Opus 4, could help individuals with only basic technical skills create bioweapons or engineer pandemics. Under its Responsible Scaling Policy, the company has activated stringent safeguards, known as AI Safety Level 3 (ASL-3), to mitigate the risks posed by this advanced technology. While Claude Opus 4 has not been conclusively shown to pose such dangers, Anthropic is applying the stricter protections as a precaution against potential misuse, underscoring the caution it believes the deployment of such powerful AI systems demands.
Today's AI models, including Anthropic's Claude Opus 4, may give individuals with only basic skills the ability to create bioweapons, prompting strict limits on how the model can be used.
Anthropic's Responsible Scaling Policy reflects its commitment to safety amid concerns that AI could inadvertently help novice terrorists produce bioweapons.
Read at time.com