AI safety shake-up: Top researchers quit OpenAI and Anthropic, warning of risks
Briefly

"Hitzig warned that OpenAI's reported exploration of advertising inside ChatGPT risks repeating what she views as social media's central error: optimizing for engagement at scale. ChatGPT, she wrote, now contains an unprecedented "archive of human candor," with users sharing everything from medical fears to relationship struggles and career anxieties. Building an advertising business on top of that data, she argued, could create incentives to subtly shape user behavior in ways "we don't have the tools to understand, let alone prevent.""
"Meanwhile, at Anthropic, the company's head of Safeguards Research, Mrinank Sharma, also resigned, publishing a letter on X that read in part: "I continuously find myself reckoning with our situation. The world is in peril." While Sharma's note referenced broader existential risks tied to advanced AI systems, he also suggested tension between corporate values and real-world decision-making, writing that it had become difficult to ensure that organizational prin"
Several researchers have resigned from leading AI companies amid concerns that commercial incentives are undermining long-term safety commitments. At OpenAI, Hitzig left, warning that exploring advertising within ChatGPT risks optimizing for engagement and exploiting a vast archive of sensitive user disclosures, which could create incentives to subtly shape user behavior in ways that cannot yet be understood or prevented. At Anthropic, Mrinank Sharma, head of Safeguards Research, resigned, citing existential risk and growing tension between stated safety values and real-world company decisions, and describing difficulty in ensuring that organizational safety principles are upheld as competition and revenue pressures rise.
Read at Scripps News