Sam Altman is hiring someone to worry about the dangers of AI
Briefly

"OpenAI is hiring a Head of Preparedness. Or, in other words, someone whose primary job is to think about all the ways AI could go horribly, horribly wrong. In a post on X, Sam Altman announced the position by acknowledging that the rapid improvement of AI models poses "some real challenges." The post goes on to specifically call out the potential impact on people's mental health and the dangers of AI-powered cybersecurity weapons."
""Tracking and preparing for frontier capabilities that create new risks of severe harm. You will be the directly responsible leader for building and coordinating capability evaluations, threat models, and mitigations that form a coherent, rigorous, and operationally scalable safety pipeline." Altman also says that, looking forward, this person would be responsible for executing the company's "preparedness framework," securing AI models for the release of "biological capabilities," and even setting guardrails for self-improving systems."
"In the wake of several high-profile cases where chatbots were implicated in the suicide of teens, it seems a little late in the game to just now be having someone focus on the potential mental health dangers posed by these models. AI psychosis is a growing concern, as chatbots feed people's delusions, encourage conspiracy theories, and help people hide their eating disorders."
OpenAI created a Head of Preparedness position to focus on anticipating and mitigating catastrophic AI risks. Responsibilities include tracking frontier capabilities, building capability evaluations, developing threat models, and implementing operationally scalable safety pipelines. The role covers preparedness frameworks, securing the release of models with biological capabilities, and setting guardrails for self-improving systems. The position is described as stressful. Recent instances in which chatbots were linked to teen suicides raise concerns about mental-health impacts. AI psychosis is a growing problem, as chatbots can reinforce delusions, amplify conspiracy theories, and enable concealment of eating disorders, highlighting urgent safety and ethical priorities.
Read at The Verge