Pentagon Pressures Anthropic Over AI Terms

The Pentagon is reportedly threatening to cut ties with AI firm Anthropic over a dispute concerning the ethical safeguards for its Claude AI in defense applications.

Pentagon Threatens to End Anthropic Work in Feud Over AI Terms — Bloomberg Podcast on YouTube

The Pentagon is reportedly threatening to terminate its contracts with AI developer Anthropic amid a heated dispute over the usage terms for the company's Claude large language model. This escalating disagreement, first reported by a Bloomberg Podcast, centers on Anthropic's insistence on safeguards that would prevent its AI from being deployed for mass domestic surveillance or fully autonomous lethal weapons without human intervention.

The Department of Defense (DoD), however, is demanding unrestricted use of the model, arguing that such limitations impede critical national security operations. Officials are reportedly considering designating Anthropic a 'supply chain risk,' a drastic measure typically reserved for foreign entities or companies posing significant security threats.

This standoff comes at a critical juncture, as Anthropic's Claude is currently the only active AI model operating on classified DoD military networks. The company, which secured a $200 million contract in July 2025, has been praised by DoD users for its capabilities and has reportedly already relaxed many of its initial usage restrictions.


Gregory Allen, director of the Wadhwani Center for AI and Advanced Technologies at the Center for Strategic and International Studies, views the supply chain risk threat as an 'unreasonable escalation.' He argues that such a designation could severely damage Anthropic's commercial business and deter other innovative startups from taking on Pentagon AI work.

The Pentagon's position is rooted in the need for operational flexibility, especially when confronting adversaries already leveraging AI in warfare, such as Russia's use of autonomous AI drones in Ukraine. Officials argue that the ability to match or exceed competitor capabilities should not be subject to ongoing negotiation with private contractors.

The dispute between the Pentagon and Anthropic highlights a growing tension across the industry: balancing the transformative potential of AI against ethical and safety considerations. Its outcome could set a precedent for future collaboration between AI developers and military organizations, with consequences for both national security and the broader commercial AI landscape.