
"Although we do not believe Gemini was used, based on the structure and content of these exploits, we have high confidence that the actor likely leveraged an AI model to support the discovery and weaponization of this vulnerability. For example, the script contains an abundance of educational docstrings, including a hallucinated CVSS score, and uses a structured, textbook Pythonic format highly characteristic of LLMs training data (e.g., detailed help menus and the clean _C ANSI color class)."
"One of the most notable findings is that a prominent cybercrime group leveraged AI to develop a zero-day exploit designed to bypass two-factor authentication (2FA) on an open source web-based system administration tool. The exploit was implemented in a Python script. The hacker group and the targeted tool have not been named, but Google said it worked with the impacted vendor to prevent mass exploitation, which appeared to be the threat actor's plan."
"Google highlighted that Chinese and North Korean state-sponsored threat actors have been particularly interested in leveraging AI for vulnerability discovery. A China-linked actor was observed deploying agentic tools such as Strix and Hexstrike in attacks targeting a Japanese tech firm and a major East Asian cybersecurity company. UNC2814, a Chinese group known for targeting telecoms and government organizations, used a persona-driven jailbreak - in which the AI is instructed to act as a senior security auditor - to enhance vulnerability research on embedded devices, including TP-Link firmware with"
Google identified a zero-day exploit believed to have been developed with AI assistance. The findings draw on Gemini usage observations, Google Threat Intelligence Group research, and Mandiant data. A prominent cybercrime group used AI-supported methods to create a Python script implementing a zero-day exploit designed to bypass two-factor authentication on an open source web-based system administration tool. Google worked with the affected vendor to prevent mass exploitation, which appeared to be the actor's intent. Google stated it did not believe Gemini was used, but assessed with high confidence that an AI model supported discovery and weaponization. Indicators included educational docstrings, a hallucinated CVSS score, and Python formatting characteristic of LLM training data. State-sponsored actors from China and North Korea showed strong interest in AI-driven vulnerability discovery, including agentic tools and persona-driven jailbreaks for embedded device research.
#zero-day-exploits #ai-in-cybersecurity #two-factor-authentication-bypass #threat-intelligence #state-sponsored-hacking
Read at SecurityWeek