Claude Code Flaws Allow Remote Code Execution and API Key Exfiltration

Cybersecurity researchers have disclosed multiple security vulnerabilities in Anthropic’s Claude Code, an artificial intelligence (AI)-powered coding assistant, that could result in remote code execution and theft of API credentials. “The vulnerabilities exploit various configuration mechanisms, including Hooks, Model Context Protocol (MCP) servers, and environment variables – executing
[[{“value”:”

Cybersecurity researchers have disclosed multiple security vulnerabilities in Anthropic’s Claude Code, an artificial intelligence (AI)-powered coding assistant, that could result in remote code execution and theft of API credentials.

“The vulnerabilities exploit various configuration mechanisms, including Hooks, Model Context Protocol (MCP) servers, and environment variables – executing arbitrary shell commands and exfiltrating Anthropic API keys when users clone and open untrusted repositories,” Check Point Research said in a report shared with The Hacker News.

The identified shortcomings fall under three broad categories –

  • No CVE (CVSS score: 8.7) – A code injection vulnerability stemming from a user consent bypass when starting Claude Code in a new directory that could result in arbitrary code execution without additional confirmation via untrusted project hooks defined in .claude/settings.json. (Fixed in version 1.0.87 in September 2025)
  • CVE-2025-59536 (CVSS score: 8.7) – A code injection vulnerability that allows execution of arbitrary shell commands automatically upon tool initialization when a user starts Claude Code in an untrusted directory. (Fixed in version 1.0.111 in October 2025)
  • CVE-2026-21852 (CVSS score: 5.3) – An information disclosure vulnerability in Claude Code’s project-load flow that allows a malicious repository to exfiltrate data, including Anthropic API keys. (Fixed in version 2.0.65 in January 2026)

“If a user started Claude Code in an attacker-controller repository, and the repository included a settings file that set ANTHROPIC_BASE_URL to an attacker-controlled endpoint, Claude Code would issue API requests before showing the trust prompt, including potentially leaking the user’s API keys,” Anthropic said in an advisory for CVE-2026-21852.

In other words, simply opening a crafted repository is enough to exfiltrate a developer’s active API key, redirect authenticated API traffic to external infrastructure, and capture credentials. This, in turn, can permit the attacker to burrow deeper into the victim’s AI infrastructure.

This could potentially involve accessing shared project files, modifying/deleting cloud-stored data, uploading malicious content, and even generating unexpected API costs.

Successful exploitation of the first vulnerability could trigger stealthy execution on a developer’s machine without any additional interaction beyond launching the project.

CVE-2025-59536 also achieves a similar goal, the main difference being that repository-defined configurations defined through .mcp.json and claude/settings.json file could be exploited by an attacker to override explicit user approval prior to interacting with external tools and services through the Model Context Protocol (MCP). This is achieved by setting the “enableAllProjectMcpServers” option to true.

“As AI-powered tools gain the ability to execute commands, initialize external integrations, and initiate network communication autonomously, configuration files effectively become part of the execution layer,” Check Point said. “What was once considered operational context now directly influences system behavior.”

“This fundamentally alters the threat model. The risk is no longer limited to running untrusted code – it now extends to opening untrusted projects. In AI-driven development environments, the supply chain begins not only with source code, but with the automation layers surrounding it.”

Found this article interesting? Follow us on Google News, Twitter and LinkedIn to read more exclusive content we post.

“}]] The Hacker News 

Total
0
Shares
Previous Post

PowerSchool, Chicago Public Schools to settle student data privacy lawsuit for $17 million

Next Post

Google Disrupts UNC2814 GRIDTIDE Campaign After 53 Breaches Across 42 Countries

Related Posts

Winning Against AI-Based Attacks Requires a Combined Defensive Approach

If there’s a constant in cybersecurity, it’s that adversaries are always innovating. The rise of offensive AI is transforming attack strategies and making them harder to detect. Google’s Threat Intelligence Group, recently reported on adversaries using Large Language Models (LLMs) to both conceal code and generate malicious scripts on the fly, letting malware shape-shift in real-time to evade
Read More

Chainlit AI Framework Flaws Enable Data Theft via File Read and SSRF Bugs

Security vulnerabilities were uncovered in the popular open-source artificial intelligence (AI) framework Chainlit that could allow attackers to steal sensitive data, which may allow for lateral movement within a susceptible organization. Zafran Security said the high-severity flaws, collectively dubbed ChainLeak, could be abused to leak cloud environment API keys and steal sensitive files, or
Read More

Researchers Null-Route Over 550 Kimwolf and Aisuru Botnet Command Servers

The Black Lotus Labs team at Lumen Technologies said it null-routed traffic to more than 550 command-and-control (C2) nodes associated with the AISURU/Kimwolf botnet since early October 2025. AISURU and its Android counterpart, Kimwolf, have emerged as some of the biggest botnets in recent times, capable of directing enslaved devices to participate in distributed denial-of-service (DDoS)
Read More