Your favorite AI tool barely scraped by this safety review - why that's a problem
Briefly

"The world's top AI labs aren't exactly scoring top marks in their efforts to prevent the worst possible outcomes of the technology, a new study has found. Conducted by nonprofit organization Future of Life Institute (FLI), the study gathered a group of eight prominent AI experts to assess the safety policies of the same number of tech developers: Google DeepMind, Anthropic, OpenAI, Meta, xAI, DeepSeek, Z.ai, and Alibaba Cloud."
"Even the strongest performers lack the concrete safeguards, independent oversight, and credible long-term risk-management strategies that such powerful systems demand, while the rest of the industry remains far behind on basic transparency and governance obligations."
"This widening gap between capability and safety leaves the sector structurally unprepared for the risks it is actively creating."
The nonprofit Future of Life Institute (FLI) convened eight AI experts to evaluate the safety policies of eight leading AI developers: Google DeepMind, Anthropic, OpenAI, Meta, xAI, DeepSeek, Z.ai, and Alibaba Cloud. Drawing on public materials and company surveys, the panel assigned each company a letter grade across six criteria, including current harms and governance & accountability. Anthropic, Google DeepMind, and OpenAI earned the highest grades, though only marginally passing ones (C+, C+, and C, respectively). The remaining companies scored in the D range, with Alibaba Cloud lowest at D-. The assessments found especially weak performance on existential-safety measures and emphasized a widening gap between capability and safety.
Read at ZDNET