AI Company Hugging Face Detects Unauthorized Access to Its Spaces Platform


Artificial Intelligence (AI) company Hugging Face on Friday disclosed that it detected unauthorized access to its Spaces platform earlier this week.

“We have suspicions that a subset of Spaces’ secrets could have been accessed without authorization,” it said in an advisory.

Spaces offers a way for users to create, host, and share AI and machine learning (ML) applications. It also functions as a discovery service to look up AI apps made by other users on the platform.

In response to the security event, Hugging Face said it is revoking a number of HF tokens present in those secrets and notifying users whose tokens were revoked via email.

“We recommend you refresh any key or token and consider switching your HF tokens to fine-grained access tokens, which are the new default,” it added.
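In practice, rotating a token means revoking the old one in account settings, generating a new fine-grained token, and then loading it from the environment rather than hardcoding it in application code or notebooks. A minimal sketch of that last step (the `HF_TOKEN` variable name is Hugging Face's conventional environment variable; the helper function itself is illustrative, not part of any official API):

```python
import os

def load_hf_token(env_var: str = "HF_TOKEN") -> str:
    """Read a (rotated) Hugging Face access token from the environment.

    Keeping the token out of source code means a rotation only requires
    updating the environment variable, not editing or redeploying code.
    """
    token = os.environ.get(env_var)
    if not token:
        raise RuntimeError(
            f"{env_var} is not set; generate a fine-grained token and export it"
        )
    return token
```

The returned value can then be passed to an API client at call time, so no copy of the secret is persisted alongside the application.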

Hugging Face, however, did not disclose how many users are impacted by the incident, which remains under investigation. It has also alerted law enforcement agencies and data protection authorities to the breach.

The development comes as the explosive growth of the AI sector has made AI-as-a-service (AIaaS) providers like Hugging Face an attractive target for attackers, who could exploit them for malicious purposes.

In early April, cloud security firm Wiz detailed security issues in Hugging Face that could permit an adversary to gain cross-tenant access and poison AI/ML models by taking over the continuous integration and continuous deployment (CI/CD) pipelines.

Previous research undertaken by HiddenLayer also unearthed flaws in the Hugging Face Safetensors conversion service that made it possible to hijack the AI models submitted by users and stage supply chain attacks.

“If a malicious actor were to compromise Hugging Face’s platform, they could potentially gain access to private AI models, datasets, and critical applications, leading to widespread damage and potential supply chain risk,” Wiz researchers noted in April.


 The Hacker News 

