Dr Brianna Rosen
On 4 March, the Pentagon formally notified Anthropic that it had been deemed a supply chain risk to national security, an unprecedented move against an American company.
The designation followed Anthropic’s refusal to accept contract language permitting the use of its technology for “all lawful purposes,” with CEO Dario Amodei insisting on retaining two red lines prohibiting mass domestic surveillance and fully autonomous weapons systems. After intensive negotiations, US Secretary of Defense Pete Hegseth announced the Department of Defense (DoD) would transition away from Anthropic products within six months, even as reports surfaced that the Pentagon is relying extensively on Anthropic’s model Claude in its ongoing war with Iran.
The dispute has been widely characterised as a clash between ethics and national security. In reality, it points to deeper structural challenges. The Pentagon-Anthropic dispute reveals longstanding governance gaps in the integration of AI into military and intelligence operations — gaps that predate this administration and will outlast the present controversy. In the absence of clear institutional frameworks, private companies such as Anthropic have attempted to impose limits through usage policies that define how their models may be deployed. The dispute underscores the shortcomings of that approach. Contractual mechanisms are not a substitute for governance frameworks capable of keeping pace with the operational realities of AI-enabled warfare.
An unprecedented instrument without justification
The mechanism Secretary Hegseth invoked, 10 USC §3252, is a supply chain security statute designed to address foreign threats to the integrity of defence systems. It has historically been applied to adversary-linked vendors such as China’s Huawei. Its application to a domestic American company therefore represents a marked departure from past practice, and the evidentiary basis for treating a contractual disagreement over usage terms as equivalent to foreign compromise or sabotage has not yet been publicly established.
The Trump administration originally accepted Anthropic’s usage restrictions when the $200 million contract was awarded in July 2025. The Pentagon’s January 2026 Artificial Intelligence Strategy memorandum, however, changed the way the DoD works with contractors by directing the Department to incorporate a standard “any lawful use” clause into all contracts within 180 days. The memorandum reflects a broader push within the Department to focus on “accelerating America’s military AI dominance” to outpace China, even where safeguards are not fully established. It explicitly states that the DoD “must accept that the risks of not moving fast enough outweigh the risks of imperfect alignment.”
Still, other policy options were available to the administration in its dispute with Anthropic, including contract termination or competitive re-solicitation. Instead, the Pentagon invoked a national security supply chain designation as it finalised an agreement with Anthropic’s competitor, OpenAI. The designation suggests an attempt to rewrite the terms under which frontier AI companies may do business with the US government, which may have a chilling effect across industry that risks damaging public-private partnerships across the defence sector.
A governance vacuum that procurement cannot resolve
The more fundamental problem the dispute highlights is structural. Existing law leaves significant gaps in the governance of AI-enabled domestic surveillance and autonomous weapons systems — gaps that, in some cases, are open to contested interpretation. The January 2023 DoD Directive 3000.09, which requires lethal autonomous systems to undergo rigorous testing prior to deployment, exists as internal policy rather than statute. Updating such directives typically involves a lengthy policy process that is simply not designed to keep pace with rapidly advancing technological capabilities. Meanwhile, the use of AI in systems that fall below the threshold of lethal autonomy but nevertheless contribute to kinetic effects — including in decision support systems and target generation — is already well underway in warfare, including in Gaza, Ukraine, and Iran. Neither policy nor law has sufficiently grappled with the civilian harm implications of that operational reality.
The OpenAI agreement is unlikely to bridge these gaps. OpenAI accepted the “any lawful purposes” clause while negotiating safeguards that reportedly include restrictions on mass domestic surveillance, prohibitions on directing fully autonomous weapons systems, cloud-only deployments, and security-cleared engineers embedded within the Pentagon. The full scope of these provisions remains uncertain, as the contract has not been released publicly. But any safeguards were negotiated under significant time constraints, behind closed doors, and without congressional oversight. It is also notable that neither the Anthropic nor the OpenAI agreement prohibits mass surveillance of foreign nationals, a longstanding concern among allied partners given Snowden-era disclosures about the reach of US intelligence collection.
Consequences that extend far beyond Washington
Allied governments are now confronting the implications of the Pentagon-Anthropic dispute. The designation of Anthropic as a supply chain risk may create legal, operational, and financial challenges for NATO and Five Eyes partners that have integrated Anthropic models into shared platforms and joint programmes, raising questions about the legal status of continued use, who bears remediation costs, and the timeline on which Washington might ultimately require removal. The United Kingdom, for its part, faces potential exposure through Palantir, the primary vehicle through which Anthropic’s models reach the UK Ministry of Defence. Beyond these immediate questions, the episode bears on broader debates about allied defence interoperability and the conditions under which US technology partnerships can be relied upon — an issue that may now take on renewed urgency in allied capitals.
The dispute is also being closely observed by strategic competitors. Chinese state-affiliated commentary has framed the episode as evidence of structural instability in the American AI ecosystem, and as confirmation that China’s military-civil fusion model confers an institutional advantage the United States lacks. The public visibility of this breakdown — the competing company announcements, the litigation threats, and the internal contradictions of the designation itself — provides an unusually detailed window into where US military AI governance is contested and how quickly those arrangements can shift. This visibility constitutes a significant intelligence dividend. It also sends a message to middle powers weighing whether to adopt an American or Chinese technology stack. Ultimately, the only clear winner in this dispute may be China.
The United States is deploying frontier AI into consequential military and intelligence environments without the statutory frameworks or structured oversight processes that the scale and stakes of that deployment demand. The Pentagon-Anthropic dispute has made the governance gap surrounding military AI impossible to ignore. Policymakers in the United States and allied countries must now determine how it will be addressed.

