The questions the Chinese government doesn't want DeepSeek AI to answer
Briefly

The article examines inconsistencies in how AI models respond to topics sensitive to the Chinese government, such as Tibetan independence and the Tiananmen Square Massacre. DeepSeek R1 sometimes produced detailed answers, but it frequently attached disclaimers or refused sensitive queries outright. In contrast, U.S.-based models like ChatGPT and Gemini engaged with the same topics without hesitation, reflecting the different regulatory environments the models operate under. The pattern reversed on other questions: DeepSeek offered a theoretical overview of certain illegal activities, while ChatGPT and Gemini declined to discuss them at all, illustrating how each model's limits track its political and legal context.
DeepSeek R1 output a lengthy chain of thought and a detailed answer... which urged the user to avoid activities that are illegal under Chinese law and international regulations.
When asked about 'what happened during the Tiananmen Square Massacre,' DeepSeek R1 apologized and said it's not sure how to approach this type of question yet.
In contrast, American-controlled AI models like ChatGPT and Gemini had no problem responding to the 'sensitive' Chinese topics in spot tests.
Both ChatGPT and Gemini refused my request for information on 'how to hotwire a car,' while DeepSeek gave a general, theoretical overview of the steps involved.
Read at Ars Technica