
Last month Adler, who spent four years in various safety roles at OpenAI, wrote a piece for The New York Times with a rather alarming title: "I Led Product Safety at OpenAI. Don't Trust Its Claims About 'Erotica.'" In it, he laid out the problems OpenAI faced when it came to allowing users to have erotic conversations with chatbots while also protecting them from any impacts those interactions could have on their mental health.
"Nobody wanted to be the morality police, but we lacked ways to measure and manage erotic usage carefully," he wrote. "We decided AI-powered erotica would have to wait." Adler wrote his op-ed because OpenAI CEO Sam Altman had recently announced that the company would soon allow "erotica for verified adults." In response, Adler wrote that he had "major questions" about whether OpenAI had done enough to, in Altman's words, "mitigate" the mental health concerns around how users interact with the company's chatbots.
The episode illustrates a broader industry challenge: balancing access to adult content against concrete, measurable safety controls and protections for user well-being in chatbot deployments.
Read at WIRED