
"We felt that it wouldn't actually help anyone for us to stop training AI models. We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments... if competitors are blazing ahead." - Jared Kaplan, Anthropic's chief science officer
"Anthropic has presented itself as the adult in the room in an industry dominated by outrageous boosterism and a flippant attitude towards ethics. Its carefully crafted safety-centric image is exemplified by CEO Dario Amodei's decision in summer 2022 to abstain from releasing a powerful AI model due to risk concerns."
Anthropic, founded in 2021 by former OpenAI employees dissatisfied with their employer's shift toward commercialization, built its reputation on prioritizing AI safety and transparency. In 2023 the company established a Responsible Scaling Policy committing it to stop training AI models that lacked proper safety guardrails. However, Anthropic recently revised the policy, removing that core commitment. Leadership now argues that unilateral safety restrictions disadvantage the company against competitors who are advancing rapidly. The reversal contradicts Anthropic's foundational identity as the industry's ethical leader, particularly given CEO Dario Amodei's earlier decision, in summer 2022, to withhold a powerful AI model over safety concerns. The company cites an anti-regulatory political climate as justification for the change.
Read at Futurism