Hackers are already using AI-enabled malware, Google says
"Zoom in: Google's team found PromptFlux while scanning uploads to VirusTotal, a popular malware-scanning tool, for any code that called back to Gemini. The malware appears to be in active development: Researchers observed the author uploading updated versions to VirusTotal, likely to test how good it is at evading detection. It uses Gemini to rewrite its own source code, disguise activity and attempt to move laterally to other connected systems."
"Meanwhile, Russian military hackers have used PromptSteal, another new AI-powered malware, in cyberattacks on Ukrainian entities, according to Google. The Ukrainian government first discovered the malware in July. Unlike conventional malware, PromptSteal lets hackers interact with it using prompts, much like querying an LLM. It's built around an open-source model hosted on Hugging Face and designed to move around a system and exfiltrate data as it goes."
"Between the lines: PromptSteal's reliance on an open-source model is something Google's team is watching closely, Billy Leonard, tech lead at Google Threat Intelligence Group, told Axios. "What we're concerned about there is that with Gemini, we're able to add guardrails and safety features and security features to those to mitigate this activity," Leonard said. "But as (hackers) download these open-source models, are they able to turn down the guardrails?""
Google's Threat Intelligence Group identified two emerging malware strains, PromptFlux and PromptSteal, that leverage large language models to change their behavior mid-attack. PromptFlux, found in VirusTotal uploads, calls back to Gemini to rewrite its own source code, obfuscate its activity and attempt lateral movement, with iterative uploads suggesting active development. PromptSteal, observed in attacks on Ukrainian entities, lets operators issue instructions through prompts, relies on an open-source model hosted on Hugging Face, and is designed to move through systems and exfiltrate data as it goes. Both strains remain nascent, but they mark a concerning advance in AI-enabled offense, particularly the risk that attackers can strip the guardrails from self-hosted open-source models.