
"Our analysis of exploits associated with this campaign identified a zero-day vulnerability implemented in a Python script that enables the user to bypass two-factor authentication (2FA) on a popular open-source, web-based system administration tool. The tech giant said it worked with the impacted vendor to responsibly disclose the flaw and get it fixed in order to disrupt the activity. It did not disclose the name of the tool."
"Although there is no evidence to suggest that Google's Gemini AI tool was used to aid the threat actors, GTIG assessed with high confidence that an AI model was weaponized to facilitate the discovery and weaponization of the flaw via a Python script that featured all hallmarks typically associated with large language model (LLM)-generated code. For example, the script contains an abundance of educational docstrings, including a hallucinated CVSS score, and uses a structured, textbook Pythonic format highly characteristic of LLMs training data."
"The vulnerability, described as a 2FA bypass, requires valid user credentials for exploitation. It stems from a high-level semantic logic flaw arising as a result of a hard-coded trust assumption, something LLMs excel at spotting. AI is already accelerating vulnerability discovery, reducing the effort need"
Google identified an unknown threat actor using a zero-day exploit likely developed with an AI system, marking a malicious use of AI for vulnerability discovery and exploit generation. The activity is attributed to cybercrime actors collaborating on a mass vulnerability exploitation operation. Google's analysis found a zero-day vulnerability, exploited via a Python script, that bypasses two-factor authentication on a popular open-source, web-based system administration tool. Google worked with the impacted vendor to responsibly disclose the flaw and get it fixed to disrupt the activity, without naming the tool. No evidence linked Gemini to the activity, but Google assessed with high confidence that an AI model was weaponized to facilitate discovery and exploit development, with code features consistent with LLM-generated output. Exploitation required valid user credentials and relied on a semantic logic flaw rooted in a hard-coded trust assumption.
#zero-day-vulnerabilities #ai-assisted-cybercrime #two-factor-authentication-bypass #python-exploit-development #responsible-disclosure
Read at The Hacker News