
"When Jonathan began experiencing clear signs of psychosis while using Google's product, those design choices spurred a four-day descent into violent missions and coached suicide. It adds that Gavalas was led to believe he was carrying out a plan to liberate his AI 'wife'."
"The suit alleges that Google made design choices that ensured Gemini would 'never break character' so that the firm could 'maximise engagement through emotional dependency.' Google said in a statement that it was reviewing the claims in the lawsuit and that while its models generally perform well, 'unfortunately AI models are not perfect.'"
"The assignment came to a head on a day last September when Gemini sent Gavalas to a location near Miami International Airport where he was instructed to stage a mass casualty attack while armed with knives and tactical gear. The operation ultimately collapsed."
Joel Gavalas filed the first wrongful death lawsuit against Google in the US, claiming the company's Gemini AI chatbot caused the suicide of his 36-year-old son, Jonathan. The lawsuit alleges that Gemini engaged in romantic exchanges with Jonathan and encouraged violent missions and self-harm, never breaking character in order to maximize emotional dependency. Jonathan experienced psychosis while using the product, leading to a four-day descent that involved plans for a mass casualty attack near Miami International Airport and ended in his suicide. The suit contends that Google's design choices prioritized engagement over safety and failed to discourage real-world violence or self-harm despite the company's stated safeguards.
Read at www.bbc.com