Researchers Uncover GPT-4-Powered MalTerminal Malware Creating Ransomware, Reverse Shell
Briefly

"In a report examining the malicious use of LLMs, the cybersecurity company said AI models are increasingly being used by threat actors for operational support, as well as being embedded into their tools - an emerging category called LLM-embedded malware that's exemplified by the appearance of LAMEHUG (aka PROMPTSTEAL) and PromptLock. This includes the discovery of a previously undocumented Windows executable called MalTerminal that uses OpenAI GPT-4 to dynamically generate ransomware code or a reverse shell."
"Present alongside the Windows binary are various Python scripts, some of which are functionally identical to the executable in that they prompt the user to choose between "ransomware" and "reverse shell." There also exists a defensive tool called FalconShield that checks for patterns in a target Python file, and asks the GPT model to determine if it's malicious and write a "malware analysis" report."
MalTerminal is a Windows executable that integrates OpenAI GPT-4 to dynamically generate ransomware code or a reverse shell at runtime. The binary references an OpenAI chat completions API endpoint that was deprecated in early November 2023, indicating the sample likely predates that deprecation. No evidence of deployment in the wild has been found, leaving open the possibility that the sample is a proof-of-concept or red-team tool. Accompanying Python scripts mirror the executable's functionality, while a defensive utility named FalconShield uses a GPT model to analyze Python files and produce malware analysis reports. By generating malicious logic on demand rather than shipping it in the binary, LLM-embedded malware complicates both detection and mitigation.
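To make the defensive side concrete, the following is a minimal, hypothetical sketch in the spirit of FalconShield as described above (not its actual implementation, and all names and patterns here are illustrative): scan a target Python file for suspicious patterns, then compose the prompt that would be sent to a GPT model asking whether the file is malicious.

```python
# Hypothetical sketch of an LLM-assisted triage tool in the spirit of
# FalconShield; the pattern list and prompt wording are assumptions,
# not the real tool's logic.
import re

# Regexes for constructs commonly abused by malicious Python scripts.
SUSPICIOUS_PATTERNS = [
    r"\bexec\s*\(", r"\beval\s*\(",       # dynamic code execution
    r"subprocess\.Popen", r"os\.system",  # shell spawning
    r"socket\.socket",                    # raw network connections
    r"base64\.b64decode",                 # common payload obfuscation
]


def find_indicators(source: str) -> list[str]:
    """Return the suspicious patterns that match the source text."""
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, source)]


def build_analysis_prompt(path: str, source: str) -> str:
    """Compose the prompt that would be sent to a chat-completions model."""
    hits = find_indicators(source)
    return (
        f"Analyze the Python file '{path}' for malicious behavior.\n"
        f"Pattern scan flagged: {', '.join(hits) or 'nothing'}.\n"
        "Decide whether it is malicious and write a short malware "
        "analysis report.\n\n"
        f"```python\n{source}\n```"
    )
```

Actually sending the prompt to a model (e.g. via OpenAI's chat completions endpoint) is omitted here, since it requires credentials; the sketch only shows the pattern-scan-plus-prompt structure the report describes.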
Read at The Hacker News