
"It was only a month or so ago when OpenAI published a report on how AI is being used by threat actors, outlining key trends including malicious workflow efficiency, phishing, and surveillance. OpenAI -- the developer behind ChatGPT -- said at the time that there was no evidence that existing AI models were being used in novel attacks, but according to an update from Google's Threat Intelligence Group (GTIG), AI is being weaponized to develop adaptive malware."
"FRUITSHELL: A publicly available reverse shell containing hard-coded prompts meant to bypass detection or analysis by LLM-powered security systems. PROMPTFLUX: Experimental malware, a VBScript dropper with obfuscation, that abuses the Google Gemini API to dynamically rewrite its own source code. PROMPTLOCK: Another experimental strain of malware, a Go-based ransomware variant, that leverages an LLM to dynamically generate and execute malicious scripts. PROMPTSTEAL: An active Python data miner utilizing AI to generate data-stealing prompts."
Google's Threat Intelligence Group (GTIG) reported on November 5 that AI and large language models are being used both to refine existing malware and to create entirely new families. Multiple novel strains were identified, including FRUITSHELL, a reverse shell with hard-coded prompts to evade LLM-based detection; PROMPTFLUX, a VBScript dropper that abuses the Google Gemini API to rewrite its own source; PROMPTLOCK, a Go-based ransomware that uses an LLM to generate and run malicious scripts; PROMPTSTEAL, a Python data miner that generates data-stealing prompts; and QUIETVAULT, a JavaScript credential stealer targeting GitHub and NPM tokens. According to the report, LLMs give this malware dynamic code generation, obfuscation, and secret-discovery capabilities on infected hosts.
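A common thread across these families is that the scripts carry LLM API endpoints or prompt strings inside their own bodies, which also makes them amenable to simple static heuristics on the defender's side. The sketch below is a hypothetical illustration, not something taken from the GTIG report: it flags script files that embed LLM API hostnames or prompt-like instruction strings. The indicator patterns, file extensions, and scan logic are all assumptions chosen for demonstration, not published IOCs or detection rules.

# Hypothetical heuristic sketch (not from the GTIG report): flag script
# files that embed LLM API endpoints or prompt-like instruction strings,
# traits the report attributes to families like FRUITSHELL and PROMPTFLUX.
import re
import sys
from pathlib import Path

# Indicator patterns below are illustrative assumptions, not real IOCs.
LLM_ENDPOINTS = re.compile(
    r"generativelanguage\.googleapis\.com|api\.openai\.com",
    re.IGNORECASE,
)
PROMPT_LIKE = re.compile(
    r"(ignore (all )?previous instructions"
    r"|you are a (helpful )?assistant"
    r"|rewrite (your|this) (own )?source)",
    re.IGNORECASE,
)

def scan_file(path: Path) -> list[str]:
    """Return human-readable findings for one script file."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return []
    findings = []
    if LLM_ENDPOINTS.search(text):
        findings.append(f"{path}: embeds an LLM API endpoint")
    if PROMPT_LIKE.search(text):
        findings.append(f"{path}: contains prompt-like instruction strings")
    return findings

if __name__ == "__main__":
    # Scan a directory tree (default: current directory) for script files.
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for p in root.rglob("*"):
        if p.suffix.lower() in {".vbs", ".js", ".py", ".ps1"}:
            for finding in scan_file(p):
                print(finding)

In practice, defenders would pair string heuristics like these with the behavioral and API-abuse signals GTIG describes, since static matching alone is easy to obfuscate around.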
Read at ZDNET