Nearly half the OpenAI staff focused on AGI safety have left, raising concerns about the future of safety measures for powerful AI systems.
As researchers like Jan Leike have indicated, AGI safety concerns at OpenAI have increasingly been overshadowed by the push for product development.
The resignations of many key figures from OpenAI's AGI safety team could seriously undermine efforts to ensure future AI does not pose catastrophic risks.
Reflecting on the mass departures, Daniel Kokotajlo notes that the focus on long-term AI safety appears to be diminishing amid corporate priorities.