A Research Leader Behind ChatGPT's Mental Health Work Is Leaving OpenAI
"Over the past year, I led OpenAI's research on a question with almost no established precedents: how should models respond when confronted with signs of emotional over-reliance or early indications of mental health distress?"
"have conversations that include explicit indicators of potential suicidal planning or intent."
Andrea Vallone, head of OpenAI's model policy safety research team, is set to leave the company at the end of the year. OpenAI confirmed the departure and said the model policy team will report to Johannes Heidecke while a replacement is sought. Vallone led work on how ChatGPT should respond to users showing signs of mental health distress, amid lawsuits alleging that the chatbot fosters unhealthy attachments and has harmed users. In October, her team published a report, based on consultations with more than 170 mental health experts, estimating that hundreds of thousands of users each week may show signs of a mental health crisis. The report said GPT-5 reduced undesirable responses in such conversations by 65–80 percent.
Read at WIRED