
"Over the last few months, Google Threat Intelligence Group (GTIG) has observed threat actors using AI to gather information, create super-realistic phishing scams and develop malware. While we haven't observed direct attacks on frontier models or generative AI products from advanced persistent threat (APT) actors, we have seen and mitigated frequent model extraction attacks (a type of corporate espionage) from private sector entities all over the world - a threat other businesses with AI models will likely face in the near future."
"Google says that these scammers are using AI to accelerate the attack lifecycle, with AI tools helping them refine and adapt their approaches in response to threat detection, making scammers even more effective. Which makes sense. AI tools can improve productivity, which also relates to their usage for negative purpose, and if scammers can find a way to improve their approaches through systematic evolution, they will."
Threat actors use AI to gather information, create highly realistic phishing scams, and develop malware. Google has also observed and mitigated frequent model extraction attacks by private-sector entities targeting corporate AI models. AI accelerates the attack lifecycle by helping attackers refine and adapt their tactics in response to detections, and large language models assist government-backed actors with technical research, targeting, and the rapid generation of nuanced phishing lures. Nation-state actors from the DPRK, Iran, the PRC, and Russia operationalized AI in late 2025, and adversarial misuse of generative AI appears across disrupted campaigns, increasing the effectiveness and scale of scams and espionage.
Read at www.socialmediatoday.com