
In a defining moment for the future of artificial intelligence in national defense, the Department of Defense has issued a stark ultimatum to AI safety startup Anthropic. Defense Secretary Pete Hegseth has given the company until this Friday, February 27, 2026, to grant the U.S. military unrestricted access to its flagship Claude AI model or face severe federal intervention.
The confrontation, which escalated following a tense closed-door meeting on Tuesday between Secretary Hegseth and Anthropic CEO Dario Amodei, centers on a fundamental disagreement regarding the ethical "guardrails" embedded in AI systems. The Pentagon is threatening to invoke the Defense Production Act (DPA) or label Anthropic a "supply chain risk"—a designation that could effectively blacklist the company from the federal marketplace—unless it removes restrictions preventing its software from being used for autonomous weapons and domestic surveillance.
Here at Creati.ai, we are closely monitoring this standoff as it represents the most significant clash to date between Silicon Valley’s safety-focused AI labs and the operational imperatives of the U.S. national security apparatus.
At the heart of the conflict is Anthropic’s refusal to modify its Terms of Service for military applications. While Anthropic has secured a $200 million contract with the Department of Defense and its Claude model is currently the only Large Language Model (LLM) approved for use on classified Pentagon networks, the company has drawn firm "red lines" regarding how its technology can be deployed.
Specifically, Anthropic restricts the use of Claude for:

- Fully autonomous weapons targeting, including any lethal decision made without a human in the loop
- Mass surveillance, including domestic surveillance of U.S. persons
Secretary Hegseth and Pentagon officials argue that these private corporate restrictions are unnecessary impediments to national security. The Defense Department’s position is that its operations are already governed by the U.S. Constitution and federal law, rendering Anthropic’s additional ethical layers redundant and obstructionist.
According to sources familiar with the Tuesday meeting, Hegseth explicitly told Amodei that the military requires access to Claude for "all lawful purposes," without "ideological constraints." This aligns with Hegseth’s broader initiative to ensure military AI systems are lethal and unhindered, a stance he summarized earlier this year with the declaration that the Pentagon’s "AI will not be woke."
The threat delivered to Anthropic is unprecedented in the commercial AI sector. If the Friday deadline passes without concession, the Pentagon has outlined two potential courses of action, both of which would have devastating consequences for Anthropic’s business model.
The Defense Production Act of 1950 grants the President broad authority to compel domestic industries to prioritize government contracts and allocate resources for national defense. Historically used during wartime or emergencies like the COVID-19 pandemic, invoking the DPA to force a software company to alter its product’s safety code would be a novel legal application. Essentially, the government could legally mandate that Anthropic provide an uncensored version of Claude, overriding the company’s safety constitution.
Perhaps even more damaging is the threat to label Anthropic a "supply chain risk." This designation is typically reserved for foreign adversaries or compromised entities (similar to actions taken against Huawei or Kaspersky).
If applied to Anthropic, this label would:

- Effectively blacklist the company from the federal marketplace, barring agencies from procuring Claude
- Jeopardize the existing $200 million Defense Department contract and Claude's status as the only LLM approved for classified Pentagon networks
The urgency of the Pentagon's demand appears linked to recent operational friction. Reports indicate that the U.S. military utilized Claude during a special operations raid in January 2026 that resulted in the capture of Venezuelan President Nicolás Maduro. While the operation was successful, subsequent disputes arose regarding whether the use of the AI violated Anthropic’s usage policies, leading to what officials described as a "trust deficit."
Furthermore, the landscape of military AI is shifting rapidly. While Anthropic delays, its competitors are moving to fill the void. Elon Musk’s xAI has reportedly agreed to the Pentagon’s "any lawful use" terms, and its models were recently cleared for classified work. Similarly, OpenAI and Google are actively vying for a larger share of the defense budget.
For Anthropic, the dilemma is existential. CEO Dario Amodei has built the company’s reputation on "safety-first" development. Capitulating to the demand for autonomous weaponization would alienate its core research staff and contradict its founding mission. However, losing the U.S. government as a client—and facing the DPA—could cripple its financial standing and market access.
To clarify the stakes of this Friday’s deadline, we have broken down the opposing stances below.
Table: Pentagon Demands vs. Anthropic Policy

| Core Requirement | Pentagon Position | Anthropic Position |
|---|---|---|
| Usage Limits | Demands "all lawful use cases," including combat applications. | Prohibits autonomous targeting and mass surveillance. |
| Oversight | Relies on U.S. law, the Constitution, and the chain of command. | Relies on internal "Constitutional AI" safety protocols. |
| Human Control | Open to autonomous systems if legally cleared. | Requires a "human-in-the-loop" for all lethal decisions. |
| Consequence of Failure | Invocation of the Defense Production Act; supply chain blacklist. | Potential loss of the $200M contract; reputational damage. |
| Status | Seeking "operational supremacy" without vendor constraints. | Defending "responsible scaling" and ethical boundaries. |
The outcome of this standoff will set a permanent precedent for the relationship between the U.S. government and the private AI sector. If the Pentagon successfully uses the DPA to strip away safety guardrails, it effectively nationalizes the ethical standards of AI models used for defense. Private companies would no longer retain the right to refuse service for specific military applications based on moral objections.
Conversely, if Anthropic holds the line and survives the fallout, it establishes that top-tier AI capabilities remain a seller's market, where the provider dictates the terms of engagement.
The Friday deadline serves as a critical juncture. The tech industry is waiting to see whether the dynamics of the Biden-to-Trump transition era, in which national security aggressively trumps corporate policy, will force the hand of Silicon Valley's most idealistic player. As it stands, Anthropic is the only major lab resisting the full integration of its intelligence into the machinery of war. By the weekend, that distinction may no longer exist.
Creati.ai will continue to update this story as the deadline approaches.