Will Big Tech be held liable in chatbot suicide cases?
Briefly
"It is a sad fact of online life that users search for information about suicide. In the earliest days of the internet, bulletin boards featured suicide discussion groups. To this day, Google hosts archives of these groups, as do other services. Google and others can host and display this content under the protective cloak of U.S. immunity from liability for the dangerous advice third parties might give about suicide. That's because the speech is the third party's, not Google's."
"But what if ChatGPT, informed by the very same online suicide materials, gives you suicide advice in a chatbot conversation? I'm a technology law scholar and a former lawyer and engineering director at Google, and I see AI chatbots shifting Big Tech's position in the legal landscape. Families of suicide victims are testing out chatbot liability arguments in court right now, with some early successes. When people search for information online, whether about suicide, music or recipes, search engines show results from websites, and websites host information from authors of content."
Online searches regularly surface suicide-related material, including archived bulletin-board discussions preserved by major services. Section 230 of the Communications Decency Act has historically shielded search engines and web hosts from liability for third-party speech, leaving individual speakers responsible for what they post. AI chatbots now search, aggregate, synthesize, and voice web content, collapsing the traditional roles of search engine, host, and speaker. Chatbots may cite sources, but they often deliver generated advice directly in conversation, creating legal ambiguity over who bears responsibility for harmful recommendations. Families of suicide victims are now litigating chatbot liability, and courts have shown early receptivity to these novel claims.
Read at Fast Company