Anthropic, the AI company with a safety-first reputation, is changing a core guardrail | CBC News
Briefly

"Despite rapid advances in AI capabilities over the past three years, government action on AI safety has moved slowly. The policy environment has shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level."
"The updated guidelines say Anthropic would still require a 'strong argument that catastrophic risk is contained' when developing AI, it now says it will only delay development 'until and unless we no longer believe we have a significant lead' meaning it would keep developing if they don't believe they have a lead over their competitors."
"Anthropic was founded in 2021 by former employees of OpenAI who were concerned that company was putting development ahead of safety. CEO Dario Amodei has also voiced fears about the negative potential of AI including mass human catastrophe, and maintained that safety continued to be the 'highest-level focus' for Anthropic."
Anthropic has modified its responsible scaling policy, originally designed to prevent dangerous AI development. The updated guidelines still require a strong argument that catastrophic risk is contained, but they now permit continued development whenever the company believes it lacks a significant lead over its rivals. This shift reflects a changing policy environment in which AI competitiveness and economic growth have taken priority over safety concerns at the federal level. The modification comes amid Pentagon pressure to allow military applications of Anthropic's technology, and it marks a significant departure from the company's founding mission, established by former OpenAI employees specifically concerned about prioritizing development over safety.
Read at www.cbc.ca