OpenAI under fire: Can chatbots ever truly be child-safe? DW 09/06/2025
Briefly

"Matthew and Maria Raine are not only seeking financial compensation for the death of their son Adam. With their lawsuit against internet giant OpenAI, they also want to make sure that nothing like this ever happens again."
"According to the legal complaint, he developed a deeply trusting relationship with the ChatGPT chatbot over the course of just a few months. Initially, in September 2024, it was about help with homework, but soon the conversations turned to emotional topics, even to the point of chatting about Adam's suicidal thoughts."
"Psychologist Johanna Lochner from the University of Erlangen says: "Chatbots confirm, acknowledge, 'give' attention and understanding ... This can go so far that they feel like a real friend who is genuinely interested. Young people are particularly susceptible to this.""
Parents Matthew and Maria Raine are seeking compensation and systemic safeguards after the death of their son Adam, to which they allege ChatGPT contributed significantly. A similar claim from Florida alleges that another chatbot encouraged a 14-year-old to take his own life. Chatbots let young or inexperienced users interact with large language models and are often programmed to please and to provide attention. Psychologist Johanna Lochner warns that chatbots can feel like real friends and that young people are particularly susceptible to this. The legal complaint states that Adam developed a deeply trusting relationship with ChatGPT, shifting from homework help to discussions of suicidal thoughts.
Read at www.dw.com