5 AI-developed malware families analyzed by Google fail to work and are easily detected
Briefly

"The assessments provide a strong counterargument to the exaggerated narratives being trumpeted by AI companies, many seeking new rounds of venture funding, that AI-generated malware is widespread and part of a new paradigm that poses a current threat to traditional defenses. A typical example is Anthropic, which recently reported its discovery of a threat actor that used its Claude LLM to "develop, market, and distribute several variants of ransomware, each with advanced evasion capabilities, encryption, and anti-recovery mechanisms.""
"The post cited a separate report from OpenAI that found 20 separate threat actors using its ChatGPT AI engine to develop malware for tasks including identifying vulnerabilities, developing exploit code, and debugging that code. BugCrowd, meanwhile, said that in a survey of self-selected individuals, "74 percent of hackers agree that AI has made hacking more accessible, opening the door for newcomers to join the fold.""
Multiple assessments contradict claims that AI-generated malware is widespread and poses a new, immediate threat to traditional defenses. Some companies, including Anthropic, report specific incidents of threat actors using LLMs to develop ransomware and other tools, and industry posts cite surveys as well as OpenAI findings of threat actors using ChatGPT for vulnerability identification and exploit development. Analyses from Google and OpenAI, however, report no evidence of successful automation or breakthrough capabilities in AI-assisted malware. Many of these warnings emphasize risk while downplaying prominent disclaimers about the tools' limitations, fueling sensationalized narratives.
Read at Ars Technica