
"Keller-Sutter specifically asked prosecutors to investigate if X owed a duty of care to prevent Grok from generating such posts or if X 'made Grok available with the knowledge or even intent that the technology could be used to commit criminal offenses.'"
"The United Kingdom and the European Union have laws that 'leave room' for claims that 'assert that automated systems cause reputational harm,' indicating a potential shift in legal frameworks."
"Lawyers anticipate that regulators globally may trend toward updating defamation laws to cover chatbot outputs soon, as chatbots generate billions of statements daily that could inflict widespread societal harms."
"Irem Cakmak noted that women's 'constant exposure to online abuse, combined with gender bias in emerging technologies, may suppress women's willingness and ability to engage with new technological tools.'"
Keller-Sutter has asked prosecutors to investigate whether X owed a duty of care regarding Grok's harmful outputs. If X is found liable, it may need to strengthen Grok's safeguards. How defamation law applies to chatbot outputs remains unsettled, though some jurisdictions leave room for claims that automated systems cause reputational harm. Lawyers expect regulators worldwide to update defamation laws to cover chatbot outputs, given that chatbots generate billions of statements daily. Separately, concerns about online abuse and gender bias in emerging technologies could suppress women's engagement with AI tools, limiting their social and economic participation.
Read at Ars Technica