
"Hugging Face, the repository that hosts more than a million machine learning models used by virtually every AI company on the planet, has been found to contain hundreds of malicious models capable of executing arbitrary code on the machines of anyone who downloads them. ClawHub, the public registry for OpenClaw's AI agent skills, has been infiltrated by a coordinated campaign that planted 341 malicious skills designed to steal credentials, open reverse shells, and hijack AI agents for cryptocurrency mining."
"The two most important software supply chains in artificial intelligence have been systematically compromised. Hugging Face has been aware of malicious models on its platform since at least 2024, when security firms JFrog and ReversingLabs independently identified models containing hidden backdoors. The problem has not been contained. It has scaled."
"Protect AI, which partnered with Hugging Face to scan the platform's model library, has examined more than four million models and identified approximately 352,000 unsafe or suspicious issues across 51,700 models. JFrog found more than 100 models capable of arbitrary code execution. The attack technique, known as "nullifAI," exploits Python's pickle serialisation format, the standard method for packaging machine learning models."
"Attackers embed malicious Python code at the start of the pickle byte stream and compress the file using 7z rather than the default ZIP format, which breaks Hugging Face"
Hugging Face hosts more than a million machine learning models used across the AI industry and has been found to contain hundreds of malicious models capable of executing arbitrary code on systems that download them. ClawHub, a registry for OpenClaw AI agent skills, has been infiltrated with 341 malicious skills designed to steal credentials, open reverse shells, and hijack AI agents for cryptocurrency mining. The attacks exploit the implicit trust developers place in shared repositories and ride the same industry infrastructure that accelerates development. Hugging Face has been aware of malicious models on its platform since at least 2024, yet the problem has scaled rather than been contained: scanning identified hundreds of thousands of unsafe or suspicious issues across tens of thousands of models, including more than 100 models enabling arbitrary code execution via a pickle-based technique.
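One standard mitigation for the pickle-based technique summarised above is to refuse any pickle that references a global callable at all, using the documented `Unpickler.find_class` override from the standard library. A minimal sketch (the `SafeUnpickler` name and `safe_loads` helper are illustrative, not a Hugging Face API):

```python
import io
import pickle

class SafeUnpickler(pickle.Unpickler):
    """Reject every global lookup, blocking pickles that smuggle in callables."""
    def find_class(self, module, name):
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def safe_loads(data: bytes):
    return SafeUnpickler(io.BytesIO(data)).load()

# Plain data structures round-trip normally
print(safe_loads(pickle.dumps({"weights": [0.1, 0.2]})))

# A pickle that references a callable is rejected before anything runs
class Evil:
    def __reduce__(self):
        return (print, ("owned",))

try:
    safe_loads(pickle.dumps(Evil()))
except pickle.UnpicklingError as exc:
    print("rejected:", exc)
```

In practice the industry response has also been to move away from pickle entirely toward inert, data-only serialisation formats for model weights.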
#ai-model-supply-chain #hugging-face-security #malicious-model-backdoors #credential-theft #agent-hijacking
Read at TNW | Security