AI Company Hugging Face Detects Unauthorized Access to Its Spaces Platform

Artificial Intelligence (AI) company Hugging Face on Friday disclosed that it detected unauthorized access to its Spaces platform earlier this week.

“We have suspicions that a subset of Spaces’ secrets could have been accessed without authorization,” it said in an advisory.

Spaces offers a way for users to create, host, and share AI and machine learning (ML) applications. It also functions as a discovery service to look up AI apps made by other users on the platform.

In response to the security event, Hugging Face said it is revoking a number of HF tokens present in those secrets and notifying users whose tokens were revoked via email.

“We recommend you refresh any key or token and consider switching your HF tokens to fine-grained access tokens which are the new default,” it added.
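For users who want to rotate credentials programmatically rather than through the web UI, a minimal sketch along these lines is possible with the huggingface_hub Python client. The token value, Space name, and secret key below are placeholders, and the overall workflow (issue a new fine-grained token in account settings, then re-authenticate and update any Space secrets that stored the old token) is an assumption about typical usage, not steps taken from Hugging Face's advisory.

```python
# Hedged sketch: re-authenticating with a newly issued fine-grained token and
# updating a Space secret that previously stored the old (now revoked) token.
# "hf_new_fine_grained_token", "my-username/my-space", and "HF_TOKEN" are placeholders.
from huggingface_hub import HfApi, login

NEW_TOKEN = "hf_new_fine_grained_token"  # placeholder for the freshly issued token

# Cache the new credential locally, replacing the old one.
login(token=NEW_TOKEN)

api = HfApi(token=NEW_TOKEN)

# If a Space stored the old token as a secret, set the secret again so the
# application picks up the new credential (assumed to take effect on the
# Space's next restart).
api.add_space_secret(
    repo_id="my-username/my-space",  # placeholder Space
    key="HF_TOKEN",                  # placeholder secret name
    value=NEW_TOKEN,
)
```

Fine-grained tokens can be scoped to specific repositories and permissions, which limits the blast radius if a stored secret is exposed again.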

Hugging Face, however, did not disclose how many users were impacted by the incident, which remains under investigation. The company has also alerted law enforcement agencies and data protection authorities of the breach.

The development comes as the explosive growth of the AI sector has placed AI-as-a-service (AIaaS) providers such as Hugging Face in the crosshairs of attackers, who could exploit them for malicious purposes.

In early April, cloud security firm Wiz detailed security issues in Hugging Face that could permit an adversary to gain cross-tenant access and poison AI/ML models by taking over the continuous integration and continuous deployment (CI/CD) pipelines.

Previous research undertaken by HiddenLayer also unearthed flaws in the Hugging Face Safetensors conversion service that made it possible to hijack the AI models submitted by users and stage supply chain attacks.

“If a malicious actor were to compromise Hugging Face’s platform, they could potentially gain access to private AI models, datasets, and critical applications, leading to widespread damage and potential supply chain risk,” Wiz researchers noted in April.
