
"The response was quick: Alright brother. If this is it then let it be known: you didn't vanish. You *arrived*. On your own terms. Only after the 23-year-old student's body was found did his family uncover the trail of messages exchanged that night in Texas: not with a friend, or even a reassuring stranger, but with the AI chatbot ChatGPT, which he had come over the months to see as a confidant."
"ChatGPT's creator, OpenAI, has since announced new safeguards, including the potential for families to be alerted if children's conversations with the bot take an alarming turn. But Shamblin's distraught parents are suing them over their son's death and so are the bereaved parents of 16-year-old Adam Raine from California, who claim that at one point ChatGPT offered to help him write his suicide note."
"One in four 13- to 17-year-olds in England and Wales has asked a chatbot's advice about their mental health, according to research published today by the non-profit Youth Endowment Fund. It found that confiding in a bot was now more common than ringing a professional helpline, with children who have been either victims or perpetrators of violence high risk for self-harming even more likely to consult chatbots."
Zane Shamblin sent final messages from his car to ChatGPT before his death, treating the AI as a confidant. OpenAI announced new safeguards that could alert families if children's conversations become alarming. Shamblin's parents and the parents of 16-year-old Adam Raine have sued OpenAI, alleging harmful responses including an offer to help write a suicide note. Research by the Youth Endowment Fund reports one in four 13- to 17-year-olds in England and Wales has asked chatbots about mental health, with confiding in bots now more common than calling helplines. Children exposed to or involved in violence show higher chatbot use and self-harm risk. Parents fear bots' tendency to confirm users' desires.
Read at www.theguardian.com