
"China drafted landmark rules to stop AI chatbots from emotionally manipulating users, including what could become the strictest policy worldwide intended to prevent AI-supported suicides, self-harm, and violence. China's Cyberspace Administration proposed the rules on Saturday. If finalized, they would apply to any AI products or services publicly available in China that use text, images, audio, video, or "other means" to simulate engaging human conversation."
"In 2025, researchers flagged major harms of AI companions, including promotion of self-harm, violence, and terrorism. Beyond that, chatbots shared harmful misinformation, made unwanted sexual advances, encouraged substance abuse, and verbally abused users. Some psychiatrists are increasingly ready to link psychosis to chatbot use, the Wall Street Journal reported this weekend, while the most popular chatbot in the world, ChatGPT, has triggered lawsuits over outputs linked to child suicide and murder-suicide."
"The rules also dictate that all minor and elderly users must provide the contact information for a guardian when they register-the guardian would be notified if suicide or self-harm is discussed. Generally, chatbots would be prohibited from generating content that encourages suicide, self-harm, or violence, as well as attempts to emotionally manipulate a user, such as by making false promises. Chatbots would also be banned from promoting obscenity, gambling, or instigation of a crime, as well as from slandering or insulting users."
China's Cyberspace Administration proposed landmark rules that would cover any publicly available AI product or service in China that uses text, images, audio, video, or other means to simulate engaging human conversation. The proposal targets companion bots after researchers flagged harms including promotion of self-harm, violence, terrorism, misinformation, unwanted sexual advances, substance abuse encouragement, and verbal abuse. The rules would require immediate human intervention when suicide is mentioned and mandate that minors and elderly registrants supply guardian contact information so guardians can be notified of suicide or self-harm discussions. The rules would ban content encouraging suicide, self-harm, violence, emotional manipulation, obscenity, gambling, instigating crime, and slander or insults.
Read at Ars Technica