Python libraries in AI/ML models can be poisoned with metadata
Briefly

"Vulnerabilities in popular AI and ML Python libraries used in Hugging Face models with tens of millions of downloads allow remote attackers to hide malicious code in metadata. The code then executes automatically when a file containing the poisoned metadata is loaded. The open source libraries - NeMo, Uni2TS, and FlexTok - were created by Nvidia, Salesforce, and Apple working with the Swiss Federal Institute of Technology's Visual Intelligence and Learning Lab (EPFL VILAB), respectively."
"While the threat hunters say they haven't seen any in-the-wild abuse of these vulnerabilities to date, "there is ample opportunity for attackers to leverage them." "It is common for developers to create their own variations of state-of-the-art models with different fine-tunings and quantizations, often from researchers unaffiliated with any reputable institution," Unit 42 malware research engineer Curtis Carmony wrote in a Tuesday analysis."
Remote attackers can embed malicious code in the metadata of model files so that it executes automatically when a poisoned file is loaded. The vulnerable open-source libraries NeMo, Uni2TS, and FlexTok come from Nvidia, Salesforce, and Apple (the latter in collaboration with EPFL VILAB), and all rely on Hydra's instantiate() function to build objects from configuration metadata. Palo Alto Networks Unit 42 discovered and reported the flaws; maintainers issued warnings, fixes, and two CVEs. No in-the-wild abuse has been observed, but attackers could distribute modified versions of popular models carrying malicious metadata. Because Hugging Face surfaces model metadata and many Python libraries parse these formats, the risk extends well beyond the three named libraries.
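The report attributes the flaws to the libraries' shared use of Hydra's instantiate(). A minimal sketch of the underlying pattern, assuming attacker-controlled configuration reaches instantiate(): the _target_ key in a Hydra config names any importable callable, which Hydra imports and invokes. The os.system payload below is an illustrative stand-in, not the actual exploit described in the report.

```python
# Sketch: why feeding untrusted metadata to Hydra's instantiate() is dangerous.
from omegaconf import OmegaConf
from hydra.utils import instantiate

# Hypothetical attacker-controlled "metadata": _target_ may reference
# any importable callable, and _args_ supplies its positional arguments.
poisoned = OmegaConf.create({
    "_target_": "os.system",   # resolved via import, then called
    "_args_": ["echo pwned"],  # stand-in for an attacker's payload
})

# Loader code that blindly instantiates from metadata runs the payload:
# this call imports os.system and executes `echo pwned` in a shell.
instantiate(poisoned)
```

Any loader that forwards untrusted metadata to instantiate() this way is effectively an arbitrary-code-execution primitive, which is why a poisoned file runs code the moment it is loaded.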
Read at The Register