OpenAI says Russian and Israeli groups used its tools to spread disinformation
Briefly

OpenAI disrupted disinformation campaigns by actors from Russia, China, Israel, and Iran who used its generative AI models to create and spread propaganda in multiple languages across social media platforms. The campaigns did not gain significant traction or reach large audiences.
OpenAI researchers identified and banned accounts associated with five covert influence operations run by a mix of state and private actors. The operations posted AI-generated content on various platforms that criticized the US, Ukraine, and the Baltic nations, and promoted anti-US and anti-Israel sentiment.
The report sheds light on how generative AI models have been exploited for disinformation, a growing concern among researchers and lawmakers worried about the spread of online falsehoods. Companies like OpenAI have tried to address these concerns but face challenges in setting effective limits on how their technology is used.
In response to the misuse of generative AI for propaganda, OpenAI has proactively identified and shut down accounts linked to covert influence operations, demonstrating its efforts to combat malicious use of its AI technologies.
Read at www.theguardian.com