
"Normally, when big-name talent leaves Silicon Valley giants, the PR language is vanilla: they're headed for a "new chapter" or "grateful for the journey" - or maybe there's some vague hints about a stealth startup. In the world of AI, though, recent exits read more like a whistleblower warnings. Over the past couple of weeks, a stream of senior researchers and safety leads from OpenAI, Anthropic, xAI, and others have resigned in public, and there's nothing quiet or vanilla about it."
"What ticked her off was OpenAI's decision to start testing ads inside ChatGPT. Ironically, in 2024 Sam Altman, OpenAI's CEO, had said, "I hate ads," arguing that "ads plus AI" ... are "uniquely unsettling" because people are forced to figure out who is paying to influence them with the answers. But, hey, when even OpenAI's internal bean counters expect the company to lose $14 billion in 2026 alone, Altman managed to get over his qualms."
Senior researchers and safety leads are leaving major AI firms amid concerns that the companies prioritize profit and advertising over safety. The departures have been public and pointed, coming from organizations including OpenAI, Anthropic, and xAI. One cited catalyst was the decision to test ads inside conversational AI, despite prior statements that advertising paired with AI is uniquely unsettling. Those resigning warn that chatbots collect sensitive personal information—medical fears, relationship problems, religious beliefs—and that advertising built on those archives could enable manipulation beyond what current tools allow. Their comparisons to social-media monetization stress deliberate exploitation of personal sharing rather than accidental mistakes.
Read at Computerworld