
"Google’s Threat Intelligence Group said that it has identified what it believes is the first real-world case of cyber-baddies using AI to discover and weaponize a zero-day vulnerability in a planned mass-exploitation campaign. The bug, a two-factor authentication bypass in a popular open source web-based administration platform, was reportedly developed by criminals working together on a large-scale intrusion operation."
"GTIG said that the attackers appear to have used an AI model to both identify the flaw and help turn it into a usable exploit. Google worked with the unnamed vendor to quietly patch the issue before the campaign could properly kick off, which it believes may have disrupted the operation before it gained traction."
"The company insists that neither Gemini nor Anthropic's Mythos was involved, but said that the exploit itself looked suspiciously machine-made. According to the report, the Python script included what Google described as "educational docstrings," a hallucinated CVSS score, and a polished textbook coding structure that looked heavily influenced by LLM training data."
"Google said that the issue stemmed from developers hard-coding a trust exception into the authentication flow, creating a hole that attackers could exploit to sidestep 2FA checks. According to the firm, those higher-level logic mistakes are exactly the kind of thing modern AI models are starting to get surprisingly good at finding. "While fuzzers and static analysis tools are optimized to detect sinks and crashes, frontier LLMs excel at identifying these types of high-level flaws and hardcoded static anomalies," the report said."
Google's Threat Intelligence Group identified what it believes is the first real-world case of criminals using AI to discover and weaponize a zero-day vulnerability for a planned mass-exploitation campaign: a two-factor-authentication bypass in a popular open-source web-based administration platform. The attackers reportedly used an AI model both to identify the flaw and to help convert it into a usable exploit. Google worked with the unnamed vendor to quietly patch the issue before the campaign could gain traction. Google said the exploit appeared machine-made, with educational docstrings, a hallucinated CVSS score, and a polished coding structure apparently influenced by LLM training data. The vulnerability stemmed from a hard-coded trust exception in the authentication flow that let attackers bypass 2FA checks.
#ai-enabled-cybercrime #zero-day-vulnerabilities #two-factor-authentication-bypass #threat-intelligence #exploit-development
Read at theregister