Picklescan Bugs Allow Malicious PyTorch Models to Evade Scans and Execute Code
Briefly

"Three critical security flaws have been disclosed in an open-source utility called Picklescan that could allow malicious actors to execute arbitrary code by loading untrusted PyTorch models, effectively bypassing the tool's protections. Picklescan, developed and maintained by Matthieu Maitre (@mmaitre314), is a security scanner that's designed to parse Python pickle files and detect suspicious imports or function calls before they are executed."
"Picklescan, at its core, works by examining pickle files at the bytecode level and checking the results against a blocklist of known hazardous imports and operations to flag similar behavior. This approach, as opposed to allowlisting, also means that the tool cannot detect any new attack vector and that its developers must take into account all possible malicious behaviors."
Three critical flaws in Picklescan can allow malicious actors to execute arbitrary code by loading untrusted PyTorch models. Picklescan parses Python pickle files and checks for suspicious imports or function calls before execution. The pickle serialization format can automatically trigger execution of arbitrary Python code when a file is loaded, which makes it a significant security risk. The vulnerabilities, discovered by JFrog, can bypass the scanner, cause malicious model files to be marked as safe, and enable malicious code execution, facilitating supply-chain attacks. The scanner uses a bytecode-level blocklist approach, which can miss new attack vectors and requires developers to exhaustively enumerate malicious behaviors. One of the identified issues is tracked as CVE-2025-10155.
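The claim that loading a pickle can automatically trigger code execution follows from pickle's `__reduce__` protocol: an object can tell the pickler to reconstruct it by calling an arbitrary importable function with arbitrary arguments, and `pickle.loads` performs that call. A minimal, harmless demonstration (using `eval` on a constant expression as a stand-in for a real payload such as `os.system(...)`):

```python
import pickle

class Malicious:
    # pickle calls __reduce__ during serialization; the returned
    # (callable, args) pair is invoked during deserialization,
    # so merely loading the file executes attacker-chosen code.
    def __reduce__(self):
        # Stand-in payload: eval("6 * 7"). A real attack would
        # call something like os.system or subprocess functions.
        return (eval, ("6 * 7",))

blob = pickle.dumps(Malicious())
result = pickle.loads(blob)   # executes eval("6 * 7") at load time
print(result)
```

No method on `Malicious` is ever called explicitly by the loader; the execution happens as a side effect of deserialization itself, which is why scanning before loading matters.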
Read at The Hacker News