89% of Enterprise GenAI Usage Is Invisible to Organizations, Exposing Critical Security Risks, New Report Reveals

Organizations are either already adopting GenAI solutions, evaluating strategies for integrating these tools into their business plans, or both. To drive informed decision-making and effective planning, the availability of hard data is essential—yet such data remains surprisingly scarce.

The “Enterprise GenAI Data Security Report 2025” by LayerX delivers unprecedented insights into the practical application of AI tools in the workplace, while highlighting critical vulnerabilities. Drawing on real-world telemetry from LayerX’s enterprise clients, this report is one of the few reliable sources that details actual employee use of GenAI.

For instance, it reveals that nearly 90% of enterprise AI usage occurs outside the visibility of IT, exposing organizations to significant risks such as data leakage and unauthorized access.

Below, we highlight some of the report's key findings. Read the full report to refine and strengthen your security strategies, support data-driven decision-making for risk management, and make the case for resources to improve GenAI data protection measures.

To register for a webinar covering the report's key findings, click here.

Use of GenAI in the Enterprise is Casual at Most (for Now)

While the GenAI hype may make it seem as if the entire workforce has shifted its office operations to GenAI, LayerX finds actual use somewhat more lukewarm: approximately 15% of users access GenAI tools on a daily basis. That is not a percentage to ignore, but it is not the majority.

Yet. We concur with LayerX's analysis and predict this trend will accelerate quickly, especially since 50% of users currently use GenAI every other week.

In addition, LayerX finds that 39% of regular GenAI tool users are software developers. This means the highest potential for data leakage through GenAI involves source code and other proprietary code, along with the risk of pulling risky code into your codebase.

How is GenAI Being Used? Who Knows?

Since LayerX sits in the browser, the tool has visibility into the use of shadow SaaS. This means it can see employees using tools that were not approved by the organization's IT, or accessing tools through non-corporate accounts.

And while GenAI tools like ChatGPT are used for work purposes, nearly 72% of employees access them through their personal accounts. Even when employees do access them through corporate accounts, only about 12% of that usage goes through single sign-on (SSO). As a result, nearly 90% of GenAI usage is invisible to the organization, leaving it blind to 'shadow AI' applications and to the unsanctioned sharing of corporate information with AI tools.
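
To make this concrete, here is a minimal TypeScript sketch of how a browser extension's background script could record visits to known GenAI hosts and apply a rough heuristic for whether the session is backed by corporate SSO. The host list, the IdP domain, and the heuristic itself are illustrative assumptions, not a description of LayerX's product.

```typescript
// Minimal sketch (illustrative assumptions throughout): a Manifest V3 background
// service worker that logs navigations to known GenAI hosts and checks a crude
// heuristic for corporate SSO. Requires "webNavigation", "cookies", and matching
// host permissions in the extension manifest.

const GENAI_HOSTS = new Set(["chat.openai.com", "chatgpt.com", "gemini.google.com"]); // assumed list
const CORPORATE_IDP_DOMAIN = "idp.example.com"; // hypothetical corporate IdP domain

chrome.webNavigation.onCompleted.addListener(async (details) => {
  const url = new URL(details.url);
  if (!GENAI_HOSTS.has(url.hostname)) return;

  // Heuristic only: treat the presence of any IdP session cookie as a sign the
  // user signed in through corporate SSO. Real products use stronger signals.
  const idpCookies = await chrome.cookies.getAll({ domain: CORPORATE_IDP_DOMAIN });
  const likelySso = idpCookies.length > 0;

  // In practice this event would be reported to a central console for IT visibility.
  console.log(`GenAI access: ${url.hostname}; SSO-backed session (heuristic): ${likelySso}`);
});
```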

50% of Pasting Activity into GenAI Includes Corporate Data

Remember the Pareto principle? In this case, while not all users use GenAI on a daily basis, those who do paste into GenAI applications do so frequently, and often with potentially confidential information.

LayerX found that pasting of corporate data occurs almost 4 times a day, on average, among users who submit data to GenAI tools. This could include business information, customer data, financial plans, source code, etc.
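
As a rough illustration of what detecting such pastes might involve, below is a minimal TypeScript sketch of a content script that inspects paste events on GenAI pages and flags text matching simple corporate-data patterns. The host list and the patterns are illustrative assumptions, not an actual DLP rule set.

```typescript
// Minimal sketch (illustrative only): inspect paste events on known GenAI pages
// and flag text that matches simple corporate-data patterns. These patterns are
// examples for demonstration, not a production-grade DLP rule set.

const GENAI_HOSTS = ["chat.openai.com", "chatgpt.com", "gemini.google.com"]; // assumed list

const CORPORATE_DATA_PATTERNS: { name: string; pattern: RegExp }[] = [
  { name: "email address", pattern: /[\w.+-]+@[\w-]+\.[\w.]+/ },
  { name: "API-key-like token", pattern: /\b(sk|key|token)[-_][A-Za-z0-9]{16,}\b/i },
  { name: "source code", pattern: /\b(function|class|import|def|package)\b/ },
  { name: "financial language", pattern: /\b(revenue|forecast|quarterly)\b/i },
];

function classifyPaste(text: string): string[] {
  return CORPORATE_DATA_PATTERNS.filter(({ pattern }) => pattern.test(text)).map(
    ({ name }) => name
  );
}

if (GENAI_HOSTS.includes(window.location.hostname)) {
  document.addEventListener("paste", (event: ClipboardEvent) => {
    const pasted = event.clipboardData?.getData("text") ?? "";
    const matches = classifyPaste(pasted);
    if (matches.length > 0) {
      // A real control might block, redact, or report the paste instead of logging it.
      console.warn(`Potential corporate data pasted into a GenAI tool: ${matches.join(", ")}`);
    }
  });
}
```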

How to Plan for GenAI Usage: What Enterprises Must Do Now

The findings in the report signal an urgent need for new security strategies to manage GenAI risk. Traditional security tools fail to address the modern AI-driven workplace where applications are browser-based. They lack the ability to detect, control, and secure AI interactions at the source—the browser.

Browser-based security provides visibility into access to AI SaaS applications, unknown AI applications beyond ChatGPT, AI-enabled browser extensions, and more. This visibility can be used to apply DLP solutions to GenAI, allowing enterprises to safely include GenAI in their plans and future-proof their business.
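
For example, a minimal sketch of inventorying AI-enabled browser extensions might look like the following TypeScript, which uses the chrome.management API with a crude keyword match; the keyword list is an illustrative assumption, since real inventory tools rely on curated catalogs.

```typescript
// Minimal sketch (illustrative only): list installed extensions whose metadata
// suggests AI functionality. Keyword matching is a crude stand-in for the
// curated catalogs real inventory tools use. Requires the "management" permission.

const AI_KEYWORDS = ["ai", "gpt", "copilot", "assistant", "llm"];

chrome.management.getAll((extensions) => {
  const aiLike = extensions.filter((ext) =>
    AI_KEYWORDS.some((kw) =>
      new RegExp(`\\b${kw}\\b`, "i").test(`${ext.name} ${ext.description}`)
    )
  );
  for (const ext of aiLike) {
    console.log(`Possible AI-enabled extension: ${ext.name} (${ext.id})`);
  }
});
```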

To access more data on how GenAI is being used, read the full report.

