
"According to GTIG, a growing number of attackers are using large language models to write, modify, or conceal code. For example, LLMs are used to rewrite malware during execution, reducing the likelihood that detection systems will recognize malicious files. The report cites examples of malware families that use AI for so-called just-in-time code generation, in which the malicious component is compiled only at execution time."
"Researchers also note that cybercriminals are learning how to circumvent AI security measures. For example, they pose as students researching security vulnerabilities or as participants in a hacking competition. Through this deception, they can sometimes circumvent AI models' built-in security measures while still receiving help in developing attack techniques. Meanwhile, a mature market for AI-supported tools is emerging in the digital underworld. Forums now offer ready-made services that use AI to conduct phishing, generate malicious scripts, or automate social engineering campaigns."
AI capabilities now let attackers write, modify, and conceal malicious code, including just-in-time code generation in which payloads are compiled only at execution time. Large language models are used to rewrite malware at runtime to reduce detection. Attackers also use deception to bypass AI safety measures, posing as students or competition participants to obtain help with exploit development. A mature underground market offers AI-powered services for phishing, malicious script generation, and automated social engineering, extending AI use across reconnaissance, victim recruitment, payload delivery, and data exfiltration. Some malware variants dynamically generate commands or translate their own code on the fly to evade security tools.
Read at Techzine Global