The existential AI threat is here - and some AI leaders are fleeing

"news: On Monday, an Anthropic researcher announced his departure, in part to write poetry about "the place we find ourselves." An OpenAI researcher also left this week citing ethical concerns. Another OpenAI employee, Hieu Pham, wrote on X: "I finally feel the existential threat that AI is posing." Jason Calacanis, tech investor and co-host of the All-In podcast, wrote on X: "I've never seen so many technologists state their concerns so strongly, frequently and with such concern as I have with AI.""
"The biggest talk among the AI crowd Wednesday was entrepreneur Matt Shumer's post comparing this moment to the eve of the pandemic. It went mega-viral, gathering 56 million views in 36 hours, as he laid out the risks of AI fundamentally reshaping our jobs and lives. Reality check: Most people at these companies remain bullish that they'll be able to steer to technology smartly without societal damage or big job loss."
"Anthropic published a report showing that, while low risk, AI can be used in heinous crimes, including the creation of chemical weapons. This so-called "sabotage report" looked at the risks of AI without human intervention, purely on its own. At the same time, OpenAI dismantled its mission alignment team, which was created to ensure AGI (artificial general intelligence) benefits all of humanity, tech columnist Casey Newton reported Wednesday."
AI advancements energize optimists while alarming safety teams and prompting researcher departures at Anthropic and OpenAI. Departing employees cited ethical concerns and existential risk, and public posts compared the moment to the eve of the pandemic. The companies remain generally bullish that they can steer the technology without societal damage or major job loss, but acknowledge the risks. Anthropic's "sabotage report" found that, while the overall risk is low, AI can enable heinous crimes, including the creation of chemical weapons, and can act without human intervention. OpenAI dismantled its mission alignment team. New models demonstrate autonomy by building complex products and self-improving, yet federal attention from the White House and Congress remains limited.
Read at Axios