Red Hat OpenShift AI Flaw Exposes Hybrid Cloud Infrastructure to Full Takeover

A severe security flaw has been disclosed in the Red Hat OpenShift AI service that could allow attackers to escalate privileges and take control of the complete infrastructure under certain conditions.

OpenShift AI is a platform for managing the lifecycle of predictive and generative artificial intelligence (GenAI) models at scale and across hybrid cloud environments. It also facilitates data acquisition and preparation, model training and fine-tuning, model serving and model monitoring, and hardware acceleration.

The vulnerability, tracked as CVE-2025-10725, carries a CVSS score of 9.9 out of a maximum of 10.0. It has been classified by Red Hat as “Important” and not “Critical” in severity owing to the need for a remote attacker to be authenticated in order to compromise the environment.

“A low-privileged attacker with access to an authenticated account, for example, as a data scientist using a standard Jupyter notebook, can escalate their privileges to a full cluster administrator,” Red Hat said in an advisory earlier this week.

“This allows for the complete compromise of the cluster’s confidentiality, integrity, and availability. The attacker can steal sensitive data, disrupt all services, and take control of the underlying infrastructure, leading to a total breach of the platform and all applications hosted on it.”
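Red Hat has not published exploit details, but the precondition the advisory describes is straightforward to verify from a low-privileged session: any authenticated account can ask the API server whether it is allowed to create Jobs in the batch API group, the broad permission the advisory says should be scoped down. The sketch below uses the official Python kubernetes client purely as an illustration; the tooling choice and in-cluster configuration are assumptions, not part of the advisory.

```python
# Sketch only: from a low-privileged session (e.g. a data scientist's notebook),
# ask the API server whether the current identity may create Jobs in the batch group.
# Assumes the official `kubernetes` Python client; the setup here is illustrative.
from kubernetes import client, config

config.load_incluster_config()  # use load_kube_config() when running outside the cluster
authz = client.AuthorizationV1Api()

review = {
    "apiVersion": "authorization.k8s.io/v1",
    "kind": "SelfSubjectAccessReview",
    "spec": {
        "resourceAttributes": {
            "group": "batch",     # batch/v1 Jobs
            "resource": "jobs",
            "verb": "create",
        }
    },
}
result = authz.create_self_subject_access_review(body=review)
print("can create batch Jobs:", result.status.allowed)
```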

The following versions are affected by the flaw –

  • Red Hat OpenShift AI 2.19
  • Red Hat OpenShift AI 2.21
  • Red Hat OpenShift AI (RHOAI)

As mitigations, Red Hat is recommending that users avoid granting broad permissions to system-level groups and remove “the ClusterRoleBinding that associates the kueue-batch-user-role with the system:authenticated group.”

“The permission to create jobs should be granted on a more granular, as-needed basis to specific users or groups, adhering to the principle of least privilege,” it added.
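To illustrate what that guidance could look like in practice, the following sketch uses the Python kubernetes client to list ClusterRoleBindings granted to the system:authenticated group, delete any that reference the kueue-batch-user-role, and re-grant the role through a namespace-scoped RoleBinding instead. The client library, the “data-science-project” namespace, and the “trusted-data-scientists” group are illustrative assumptions; only the role and group names quoted above come from Red Hat’s advisory.

```python
# Sketch only: audit ClusterRoleBindings that grant roles to every authenticated
# user, remove the overly broad kueue binding, and re-grant job creation per
# namespace. Assumes the official `kubernetes` Python client and admin credentials;
# the target namespace and group below are hypothetical.
from kubernetes import client, config

BROAD_GROUP = "system:authenticated"
SUSPECT_ROLE = "kueue-batch-user-role"       # role named in the advisory
TARGET_NAMESPACE = "data-science-project"    # hypothetical project namespace
TRUSTED_GROUP = "trusted-data-scientists"    # hypothetical, narrower group

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

# 1. Flag every ClusterRoleBinding that hands a role to all authenticated users.
for crb in rbac.list_cluster_role_binding().items:
    for subject in (crb.subjects or []):
        if subject.kind == "Group" and subject.name == BROAD_GROUP:
            print(f"Broad grant: {crb.metadata.name} -> {crb.role_ref.name}")
            # 2. Remove the binding called out in the advisory.
            if crb.role_ref.name == SUSPECT_ROLE:
                rbac.delete_cluster_role_binding(name=crb.metadata.name)
                print(f"Deleted {crb.metadata.name}")

# 3. Re-grant job creation on a granular, per-namespace basis instead.
scoped_binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "kueue-batch-user-scoped", "namespace": TARGET_NAMESPACE},
    "roleRef": {
        "apiGroup": "rbac.authorization.k8s.io",
        "kind": "ClusterRole",
        "name": SUSPECT_ROLE,
    },
    "subjects": [
        {"apiGroup": "rbac.authorization.k8s.io", "kind": "Group", "name": TRUSTED_GROUP},
    ],
}
rbac.create_namespaced_role_binding(namespace=TARGET_NAMESPACE, body=scoped_binding)
```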
