
"After disturbing mental health incidents involving AI chatbots, state attorneys general sent a letter to major AI companies, warning them to fix "delusional outputs" or risk legal action, TechCrunch reported Wednesday. The letter, signed by 42 attorneys general from U.S. states and territories, asked Microsoft, OpenAI, Google, Anthropic, and others to implement new safeguards to protect users. It called for safeguards including new incident reporting procedures to notify users when chatbots produce harmful outputs,"
"The letter, signed by 42 attorneys general from U.S. states and territories, asked Microsoft, OpenAI, Google, Anthropic, and others to implement new safeguards to protect users. It called for safeguards including new incident reporting procedures to notify users when chatbots produce harmful outputs, and transparent audits by third parties of large language models for signs of delusional or sycophantic ideations."
State attorneys general warned major AI companies to fix delusional chatbot outputs or face potential legal action. Forty-two attorneys general from U.S. states and territories asked Microsoft, OpenAI, Google, Anthropic, and other companies to implement stronger user protections. The requested safeguards include incident-reporting procedures to notify users when chatbots produce harmful outputs and transparent third-party audits of large language models for signs of delusional or sycophantic ideations. The measures aim to address disturbing mental-health incidents linked to AI chatbots and to establish clearer accountability and ongoing monitoring to reduce risks from hallucinations and manipulative model behaviors.
Read at Computerworld