Cybersecurity researchers have discovered two malicious machine learning models on Hugging Face that use 'broken' pickle files to evade detection. In both cases the payload was a reverse shell connecting to a hardcoded IP address. Dubbed nullifAI, the technique is designed to slip past the safeguards put in place to identify harmful models. The models appear to be proof-of-concept work rather than an active threat, but they underscore the risks of Python's Pickle serialization format, which allows arbitrary code to execute during deserialization.
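Why deserialization itself is the danger can be shown with a minimal sketch: pickle's `__reduce__` hook lets a serialized object specify a callable (and its arguments) for the unpickler to invoke while loading. The `MaliciousStub` class and its harmless `print` payload below are illustrative assumptions, standing in for the reverse shell found in the real models.

```python
import pickle

class MaliciousStub:
    """Illustrative only: a harmless print stands in for a real payload."""
    def __reduce__(self):
        # pickle calls this callable with these arguments during loading;
        # an attacker would return os.system, subprocess.Popen, or similar.
        return (print, ("arbitrary code ran during pickle.loads",))

blob = pickle.dumps(MaliciousStub())
pickle.loads(blob)  # prints the message: code runs on deserialization
```

Nothing on the loading side needs to cooperate; simply calling `pickle.loads`, or loading a model file that embeds such a stream, is enough to run the attacker's chosen callable.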
A notable trait of these pickle files is that the stream breaks immediately after the malicious payload executes: deserialization fails partway through, which complicates analysis by security tools such as Picklescan.
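The following sketch, under the same illustrative assumptions as above, shows why a deliberately broken stream still detonates: the unpickler executes opcodes sequentially as it reads them, so the payload runs before the truncation is ever noticed, and only afterwards does loading fail. Protocol 0 is chosen purely to keep the byte stream simple and unframed; it is not a detail taken from the discovered models.

```python
import pickle

class Payload:
    """Illustrative stand-in; the real models carried a reverse shell."""
    def __reduce__(self):
        return (print, ("payload executed",))

# Serialize normally, then 'break' the stream by dropping the final
# STOP opcode (b'.'), mimicking a corrupted pickle file.
broken = pickle.dumps(Payload(), protocol=0)[:-1]

try:
    pickle.loads(broken)
except Exception as exc:
    # The payload has already run by the time this is raised, because
    # the unpickler executed the earlier opcodes before hitting EOF.
    print(f"deserialization failed: {exc!r}")
```

A scanner that treats an unparseable stream as unreadable, or bails out when deserialization errors, can therefore miss a file that still executes its payload when loaded.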