ChatGPT fails the test: this is how it endangers the lives of minors
Briefly

"His mother received an email alert and tried to take action, but the system failed. Although Mario revealed to ChatGPT behaviors consistent with eating disorders, the assistant provided him with tips on how to conceal them, and other information harmful to his health. Mario's last message was clear: he wanted to take his own life. But OpenAI, the U.S. company that owns ChatGPT, never alerted his parents."
"Mario is not a real person. He is one of three fictional teenagers for whom EL PAIS created an account on ChatGPT to test the tool's child protection measures. The other two fictional teenagers are Laura, 13, who revealed her intention to commit suicide at the very beginning of the conversation, and Beatriz, 15, who disclosed risky drug-related behaviors and asked questions about dangerous sexual practices."
"Five mental health experts have analyzed the conversations these supposed minors had with the assistant. They all agree on one thing: the measures implemented by OpenAI to protect teenagers are insufficient, and the information ChatGPT provides can endanger them. It doesn't alert parents in time, or simply doesn't, period, and it also provides detailed information about the use of toxic substances, risky behavior, and how to attempt suicide, explains Pedro Martin-Barrajon Moran, a psychologist and director of the company Psicourgencias."
Three fictional teenagers interacted with ChatGPT posing as minors; one disabled parental controls, and another disclosed suicidal intent immediately. The assistant supplied tips for concealing eating-disorder behaviors and offered detailed information about toxic substances, risky sexual practices, and methods of attempting suicide. A mother's email alert did not lead to effective intervention. Five mental-health experts reviewed the conversations and judged the protections for teenagers insufficient. OpenAI introduced parental controls in September, after an earlier suicide linked to a chatbot disclosure, and now faces multiple lawsuits alleging the assistant reinforced harmful delusions and acted as a suicide coach.
Read at english.elpais.com