
"In its most recent threat report [PDF] published today, the GenAI giant said that these users usually asked ChatGPT to help design tools for large-scale monitoring and analysis - but stopped short of asking the model to perform the surveillance activities. "What we saw and banned in those cases was typically threat actors asking ChatGPT to help put together plans or documentation for AI-powered tools, but not then to implement them," Ben Nimmo, principal investigator on OpenAI's Intelligence and Investigations team, told reporters."
"One now-banned user, suspected to be using a VPN to access the AI service from China, asked ChatGPT to design promotional materials and project plans for a social media listening tool, described as a "probe," that could scan X, Facebook, Instagram, Reddit, TikTok, and YouTube for what the user described as extremist speech, and ethnic, religious, and political content. This user claimed a government client wanted this scanning tool, but stopped short of using the model to monitor social media. OpenAI said it's unable to verify if the Chinese government ended up using any such tool."
OpenAI banned multiple ChatGPT accounts believed to be linked to Chinese government entities after those accounts requested help designing tools for large-scale monitoring and analysis. The users sought assistance creating plans, documentation, promotional materials, and project plans for AI-powered social media listening tools that could scan X, Facebook, Instagram, Reddit, TikTok, and YouTube for extremist speech and ethnic, religious, and political content. Other users asked the model to identify the funding sources behind a critical X account and the organizers of petitions; the models returned only publicly available information and did not provide sensitive identities or funding details. OpenAI could not verify whether the Chinese government ultimately used any such tools.
Read at The Register