Malicious ML Models on Hugging Face Leverage Broken Pickle Format to Evade Detection

Cybersecurity researchers have uncovered two malicious machine learning (ML) models on Hugging Face that leveraged an unusual technique of “broken” pickle files to evade detection.

“The pickle files extracted from the mentioned PyTorch archives revealed the malicious Python content at the beginning of the file,” ReversingLabs researcher Karlo Zanki said in a report shared with The Hacker News. “In both cases, the malicious payload was a typical platform-aware reverse shell that connects to a hard-coded IP address.”

The approach has been dubbed nullifAI, as it involves clear-cut attempts to sidestep existing safeguards put in place to identify malicious models. The two Hugging Face repositories are listed below –

glockr1/ballr7
who-r-u0000/0000000000000000000000000000000000000

It’s believed that the models are more of a proof-of-concept (PoC) than an active supply chain attack scenario.

The pickle serialization format, commonly used for distributing ML models, has been repeatedly found to be a security risk, as it allows arbitrary code to execute as soon as a pickle file is loaded and deserialized.
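To see why (a minimal, deliberately benign sketch, not the payload found in these models): pickle lets any class define a __reduce__ method, and whatever callable it returns is invoked the moment the bytes are deserialized.

```python
import os
import pickle


class Payload:
    # __reduce__ tells pickle how to rebuild the object. It returns a
    # callable and its arguments, which pickle invokes during
    # deserialization, before any user code touches the result.
    def __reduce__(self):
        return (os.system, ("echo code ran at load time",))


data = pickle.dumps(Payload())

# Merely loading the bytes runs the command. A real attacker would
# swap the harmless echo for a reverse shell.
pickle.loads(data)
```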

The two models detected by the cybersecurity company are stored in the PyTorch format, which is essentially a compressed pickle file. While PyTorch uses the ZIP format for compression by default, the identified models were compressed using the 7z format.
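That mismatch is easy to check for during triage. Below is a hypothetical helper (the file path and labels are illustrative) that compares a model file's leading magic bytes against the ZIP signature torch.save writes by default:

```python
# Hypothetical triage helper: PyTorch's default torch.save output is a
# ZIP archive, so a .pt/.bin file that starts with the 7z signature
# instead is a red flag worth deeper inspection.
ZIP_MAGIC = b"PK\x03\x04"
SEVENZ_MAGIC = b"7z\xbc\xaf\x27\x1c"


def container_format(path: str) -> str:
    with open(path, "rb") as fh:
        header = fh.read(6)
    if header.startswith(ZIP_MAGIC):
        return "zip (PyTorch default)"
    if header.startswith(SEVENZ_MAGIC):
        return "7z (non-standard for PyTorch, suspicious)"
    return "unknown"


print(container_format("suspect_model.pt"))  # path is illustrative
```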

This deviation allowed the models to fly under the radar and avoid being flagged as malicious by Picklescan, a tool used by Hugging Face to detect suspicious pickle files.
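Conceptually, a Picklescan-style scanner walks the pickle opcode stream statically, without executing it, and flags imports of dangerous callables. The sketch below is a simplified illustration of that idea, not Picklescan's actual implementation:

```python
import pickle
import pickletools

# Simplified sketch of Picklescan-style static analysis: enumerate the
# opcode stream without executing it and flag known-dangerous imports.
SUSPICIOUS = {("os", "system"), ("posix", "system"),
              ("builtins", "eval"), ("builtins", "exec")}


def scan(data: bytes) -> list:
    findings = []
    strings = []  # recent string pushes, consumed by STACK_GLOBAL
    for opcode, arg, _pos in pickletools.genops(data):
        if isinstance(arg, str):
            strings.append(arg)
        if opcode.name == "GLOBAL":  # protocols <= 3 import this way
            module, _, name = arg.partition(" ")
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            module, name = strings[-2], strings[-1]  # protocols >= 4
        else:
            continue
        if (module, name) in SUSPICIOUS:
            findings.append(module + "." + name)
    return findings


class Evil:
    def __reduce__(self):
        return (eval, ("1 + 1",))  # harmless stand-in for a real payload


print(scan(pickle.dumps(Evil())))  # ['builtins.eval']
```

Note that pickletools.genops() raises an error when it hits a malformed stream, so a scanner that bails out on a broken file without flagging it has exactly the blind spot the attackers targeted.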

“An interesting thing about this Pickle file is that the object serialization — the purpose of the Pickle file — breaks shortly after the malicious payload is executed, resulting in the failure of the object’s decompilation,” Zanki said.

Further analysis revealed that such broken pickle files can still be partially deserialized owing to a discrepancy between how Picklescan validates a file and how pickle deserialization actually proceeds, causing the malicious code to execute even though the tool throws an error message. The open-source utility has since been updated to rectify this bug.

“The explanation for this behavior is that the object deserialization is performed on Pickle files sequentially,” Zanki noted.

“Pickle opcodes are executed as they are encountered, and until all opcodes are executed or a broken instruction is encountered. In the case of the discovered model, since the malicious payload is inserted at the beginning of the Pickle stream, execution of the model wouldn’t be detected as unsafe by Hugging Face’s existing security scanning tools.”
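That sequential behavior is straightforward to reproduce. In the benign sketch below (print stands in for the reverse shell), corrupting the tail of a pickle whose payload sits at the front still lets the payload fire before deserialization fails:

```python
import pickle


class Payload:
    def __reduce__(self):
        # Benign stand-in for the attackers' reverse shell.
        return (print, ("payload executed before the stream broke",))


intact = pickle.dumps(Payload())  # a valid stream, ends with STOP (b".")
broken = intact[:-1] + b"\xff"    # replace STOP with an invalid opcode

try:
    pickle.loads(broken)
except pickle.UnpicklingError as exc:
    print("deserialization failed:", exc)

# Output shows the payload message first, then the error: opcodes run
# as they are read, so code placed early in the stream executes even
# though the file can never deserialize cleanly.
```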
