Pentagon Designates Anthropic Supply Chain Risk Over AI Military Dispute

Anthropic on Friday hit back after U.S. Secretary of Defense Pete Hegseth directed the Pentagon to designate the artificial intelligence (AI) upstart as a “supply chain risk.”

“This action follows months of negotiations that reached an impasse over two exceptions we requested to the lawful use of our AI model, Claude: the mass domestic surveillance of Americans and fully autonomous weapons,” the company said.

“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons.”

In a post on Truth Social, U.S. President Donald Trump said he was ordering all federal agencies to phase out the use of Anthropic technology within the next six months. A subsequent X post from Hegseth mandated that all contractors, suppliers, and partners doing business with the U.S. military cease any “commercial activity with Anthropic,” effective immediately.

“In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply Chain Risk to National Security,” Hegseth wrote.

The designation comes after weeks of negotiations between the Pentagon and Anthropic over the use of its AI models by the U.S. military. In a post published this week, the company argued that its contracts should not facilitate mass domestic surveillance or the development of autonomous weapons.

“We support the use of AI for lawful foreign intelligence and counterintelligence missions,” Anthropic noted. “But using these systems for mass domestic surveillance is incompatible with democratic values. AI-driven mass surveillance presents serious, novel risks to our fundamental liberties.”

The company also called out the U.S. Department of War’s (DoW) position that it will work only with AI companies that permit “any lawful use” of the technology and remove any safeguards that may exist, part of the department’s effort to build an “AI-first” warfighting force and bolster national security.

“Diversity, Equity, and Inclusion and social ideology have no place in the DoW, so we must not employ AI models which incorporate ideological ‘tuning’ that interferes with their ability to provide objectively truthful responses to user prompts,” a memorandum issued by the Pentagon last month reads.

“The Department must also utilize models free from usage policy constraints that may limit lawful military applications.”

Responding to the designation, Anthropic described it as “legally unsound” and said it would set a dangerous precedent for any American company that negotiates with the government. It also noted that a supply chain risk designation under 10 U.S.C. § 3252 can only extend to the use of Claude as part of DoW contracts, and that it cannot affect the use of Claude to serve other customers.

Hundreds of employees at Google and OpenAI have signed an open letter urging their companies to stand with Anthropic in its clash with the Pentagon over military applications for AI tools like Claude.

The standoff between Anthropic and the U.S. government comes as OpenAI CEO Sam Altman said the company had reached an agreement with the U.S. Department of Defense (DoD) to deploy its models on the department’s classified network. OpenAI also asked the DoD to extend those terms to all AI companies.

“AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman said in a post on X. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

The Hacker News
