
"OpenAI has intensified its efforts to combat the misuse of artificial intelligence. In a new report, the company reveals that it has dismantled several international networks in recent months that were using its models for cyberattacks, scams, and political influence. The analysis shows how malicious actors are becoming increasingly sophisticated in their use of AI, while OpenAI is simultaneously expanding its defense mechanisms."
"A notable section of the report covers the growth of organized scam networks in Cambodia, Myanmar, and Nigeria. These groups don't just use ChatGPT to translate messages; they also utilize it to generate content. They also use it to create convincing profiles and set up fraudulent investment campaigns. Some operators even asked the AI to remove text characteristics that could betray their use of generative models."
OpenAI dismantled multiple international networks that used its models for cyberattacks, scams, and political influence, while expanding its defensive measures. Malicious actors increasingly apply AI to accelerate traditional tactics such as phishing, malware development, and propaganda rather than to invent new attack types. Russian-linked groups refined malware components, including remote-access Trojans and credential stealers. Korean-speaking operators developed command-and-control systems. Suspected Chinese-affiliated actors used AI to enhance phishing against Taiwan, US universities, and political organizations. Built-in safeguards blocked many direct malicious requests, and the models are used three times more often to detect fraud than to facilitate it.
Read at Techzine Global