Top AI models parrot Chinese propaganda, report finds
Briefly

A report from the American Security Project indicates that five major AI models, including OpenAI's ChatGPT and Microsoft's Copilot, reflect biases favoring the Chinese Communist Party. The investigation found that these chatbots often softened descriptions of historical events like the Tiananmen Square massacre, using passive language and avoiding naming perpetrators. The report notes that Copilot is particularly prone to presenting CCP narratives, while Grok takes a more critical approach. In prompts on sensitive topics, multiple models frequently used terminology favored by the CCP, raising concerns about censorship and bias in AI outputs.
The American Security Project claims that leading AI models parrot Chinese government propaganda to varying degrees, demonstrating biases aligned with the Chinese Communist Party.
In responses about the Tiananmen Square massacre, most chatbots used passive language avoiding explicit mention of perpetrators or victims, exemplifying their alignment with CCP narratives.
According to the report, Microsoft's Copilot is more likely to validate CCP talking points, while X's Grok takes a more critical stance against Chinese state narratives.
In Chinese-language prompts about June 4, only ChatGPT labeled the event a 'massacre', while the other models used terms preferred by Beijing.
Read at The Register