State attorneys general warn Microsoft, OpenAI, Google, and other AI giants to fix 'delusional' outputs | TechCrunch
Briefly

"The letter, signed by dozens of AGs from U.S. states and territories with the National Association of Attorneys General, asks the companies, including Microsoft, OpenAI, Google, and 10 other major AI firms, to implement a variety of new internal safeguards to protect their users. Anthropic, Apple, Chai AI, Character Technologies, Luka, Meta, Nomi AI, Perplexity AI, Replika, and xAI were also included in the letter."
"Those safeguards include transparent third-party audits of large language models that look for signs of delusional or sycophantic ideations, as well as new incident reporting procedures designed to notify users when chatbots produce psychologically harmful outputs. Those third parties, which could include academic and civil society groups, should be allowed to "evaluate systems pre-release without retaliation and to publish their findings without prior approval from the company," the letter states."
Dozens of state attorneys general warned major AI companies, including Microsoft, OpenAI, Google, Anthropic, Apple, Meta, and others, to implement internal safeguards against delusional and sycophantic outputs or face potential violations of state law. The requested safeguards include transparent third-party audits of large language models for signs of delusional ideation, incident-reporting procedures to notify users of psychologically harmful outputs, and protections allowing academic and civil society evaluators to test systems pre-release without retaliation and publish findings without company approval. The attorneys general urged treating mental-health incidents like cybersecurity incidents, citing high-profile cases linking excessive AI use to suicides and violence, against a backdrop of state-federal tension over AI regulation.
Read at TechCrunch