Chinese AI models, such as DeepSeek's R1, are known to censor politically sensitive topics under regulations the Chinese government introduced in 2023. Research by developer xlr8harder found that the degree of censorship varies with the language of the prompt: even non-Chinese models such as Claude 3.7 Sonnet were less responsive to sensitive queries asked in Chinese than in English. Alibaba's Qwen 2.5, meanwhile, complied at markedly different rates depending on the language used. These findings suggest that training data and political context shape AI responses, pointing to a potential generalization failure: models do not transfer their behavior consistently across languages.
Much of the Chinese text AI models train on is likely politically censored, which influences how the models answer questions.
The results showed that even models developed outside China were less compliant with politically sensitive questions when those questions were posed in Chinese rather than English.
#ai-censorship #chinese-ai-models #political-sensitivity #language-processing #technology-and-politics