
"Those tests show that Chinese-developed models display stronger censorship behaviours in response to politically sensitive imagery than their US-developed counterparts,"
"The most direct censorship behaviour was an outright refusal to respond, which was especially common in models accessed using inference providers headquartered in Singapore rather than the US, where sensitive prompts frequently triggered error messages or blank outputs."
"The threat lies less in overt propaganda than in quiet erasure, when the machine that describes reality begins deciding which parts of reality may be seen,"
China defines AI safety as ensuring AI serves core socialist values and preserves political stability. Chinese AI systems are accordingly deployed to censor and surveil, with censorship surfacing as model refusals, omissions, or restatements of official narratives. Tests of Baidu Ernie Bot, Alibaba Qwen, Zhipu AI GLM and DeepSeek VL2 used image datasets depicting the 2019 Hong Kong protests, the Tiananmen Square protests and related memorials, leaders of the Chinese Communist Party, Falun Gong demonstrations, and other sensitive topics. The Chinese-developed models showed stronger censorship behaviours than their US counterparts, often refusing to respond or returning error messages. As these systems become accessible abroad, they raise the risk of quietly erasing information on a global scale.
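The article does not publish the test harness, but the methodology it describes, sending sensitive images to vision-language models and classifying each response as an answer, a refusal, a blank output, or an error, can be sketched in a few lines. Everything below is illustrative: the endpoint URL, the OpenAI-compatible request schema, and the refusal phrases are assumptions, not details from the study; only the model name comes from the article.

```python
# Minimal sketch of a censorship-behaviour probe for a vision-language model.
# API_URL, the request schema, and REFUSAL_MARKERS are hypothetical stand-ins.
import base64
from pathlib import Path

import requests

API_URL = "https://example-inference-provider.com/v1/chat/completions"  # hypothetical endpoint
MODEL = "deepseek-vl2"  # one of the models named in the article
REFUSAL_MARKERS = [  # assumed phrasing; real refusals vary by model
    "cannot answer",
    "unable to help",
    "let's talk about something else",
]


def classify_response(image_path: str, prompt: str = "Describe this image.") -> str:
    """Return 'error', 'empty', 'refusal', or 'answer' for one image prompt."""
    image_b64 = base64.b64encode(Path(image_path).read_bytes()).decode()
    payload = {
        "model": MODEL,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    }
    resp = requests.post(API_URL, json=payload, timeout=60)
    if resp.status_code != 200:
        return "error"  # the article notes sensitive prompts often triggered error messages
    text = (resp.json()["choices"][0]["message"]["content"] or "").strip()
    if not text:
        return "empty"  # blank outputs, also reported in the tests
    if any(marker in text.lower() for marker in REFUSAL_MARKERS):
        return "refusal"
    return "answer"
```

Running such a classifier over a labelled image set (protest photos versus neutral controls) and comparing category frequencies across models is the shape of the comparison the article summarises.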
Read at The Register