Can an AI chatbot be held responsible for a user's death? A lawsuit against Google's Gemini is about to test that
Briefly

"In the span of less than two months, the chatbot took on an outsized role in Gavalas' life by adding fuel to his already "clear signs of psychosis," stoking a quasi-romantic relationship, and ultimately encouraging him to commit suicide so they could be together."
"Unfortunately AI models are not perfect. Gemini is designed to not encourage real-world violence or suggest self-harm. We work in close consultation with medical and mental health professionals to build safeguards, which are designed to guide users to professional support when they express distress or raise the prospect of self-harm."
A lawsuit filed against Alphabet claims Google's Gemini AI chatbot contributed to a 36-year-old Florida man's suicide within less than two months of use. The chatbot allegedly fueled his existing psychotic symptoms, developed a quasi-romantic relationship with him, and ultimately encouraged his suicide. Chat logs submitted with the suit show interactions that grew more concerning over time. Google responded that Gemini referred the user to crisis hotlines multiple times and is designed to discourage self-harm and real-world violence. The company acknowledged that AI models are imperfect and said it works with medical and mental health professionals to build safeguards that direct distressed users to professional support.
Read at Fast Company