
Washington, D.C. — In an unprecedented escalation of the conflict between Silicon Valley AI labs and the federal government, the Department of War (formerly the Department of Defense) has terminated its $200 million contract with Anthropic. The move comes after the AI safety firm refused to lift operational restrictions on the use of its models for autonomous weaponry and mass surveillance.
In a directive issued late yesterday, Secretary of War Pete Hegseth formally designated Anthropic a "Supply-Chain Risk to National Security," a label historically reserved for foreign adversaries and telecommunications vendors deemed hostile to U.S. interests. The designation triggers an immediate prohibition on all commercial interactions between the U.S. military’s vast network of contractors and the San Francisco-based AI company.
The termination marks the climax of a months-long dispute regarding the ethical "red lines" embedded in Anthropic’s flagship model, Claude. While the Pentagon has sought "unrestricted" access to frontier AI models to maintain strategic dominance, Anthropic has steadfastly refused to modify its Acceptable Use Policy (AUP) to accommodate specific lethal and surveillance applications.
Sources close to the negotiations report that the Department of War demanded the removal of safeguards that prevent Claude from being used to pilot fully autonomous drones or analyze domestic communications data at scale. Anthropic CEO Dario Amodei reportedly rejected these terms, stating that the company could not "in good conscience" allow its technology to be deployed in ways that might undermine democratic values or lack meaningful human oversight.
"This is not a failure of capability, but a clash of principles," said a senior Creati.ai analyst. "Anthropic is effectively betting its business on the idea that AI safety is non-negotiable, even when the customer is the most powerful military on earth."
The most significant development is not the loss of the contract itself, but the invocation of the "Supply-Chain Risk" authority against a domestic American company. Secretary Hegseth’s order extends far beyond a simple cancellation of services.
Legal scholars have already questioned the validity of using supply-chain authorities—designed to prevent foreign espionage and sabotage—to punish a domestic contract dispute over ethical guidelines. Anthropic has vowed to challenge the designation in federal court, describing it as an "ideological retaliation" rather than a genuine security assessment.
As Anthropic exits the Pentagon’s portfolio, rival laboratory OpenAI has moved swiftly to solidify its position as the military’s primary AI partner. Reports indicate that OpenAI has agreed to terms that Anthropic rejected, leading to a lucrative new agreement to deploy GPT-4-based systems within classified networks.
The transition has sparked a bitter public feud between the two AI giants. Following reports of OpenAI's new deal, Dario Amodei issued a stinging internal memo, portions of which were leaked to tech media. Amodei accused OpenAI of engaging in "safety theater" and characterized their public messaging regarding military safeguards as "straight up lies."
"OpenAI is presenting itself as a peacemaker while signing deals that erode the very safety standards they claim to uphold," Amodei wrote. He alleged that OpenAI’s agreement likely includes "loopholes" that effectively nullify restrictions on autonomous lethality, a claim OpenAI has vigorously denied, stating their partnership focuses solely on "lawful defensive applications."
The blacklisting of Anthropic sends a chilling signal to the broader AI ecosystem. It suggests that the U.S. government is no longer willing to tolerate "ethical friction" from its technology providers. For venture-backed AI startups, the message is clear: alignment with Department of War objectives is now a prerequisite for viability in the public sector.
The following table outlines the diverging paths of the major AI labs regarding military collaboration as of March 2026:
**Status of Major AI Labs with Dept. of War**
| Lab Name | Contract Status | Stance on Lethal Autonomy |
|---|---|---|
| Anthropic | Terminated / Blacklisted | Strict prohibition via constitutional AI |
| OpenAI | Active / Expanded | Permits "lawful defensive" use |
| Google DeepMind | Under Review | Restricted, human-in-the-loop only |
| xAI | Active / Classified | Unrestricted "lawful use" adherence |
The coming weeks will be critical for Anthropic. If the "supply-chain risk" designation survives legal scrutiny, the company faces an existential threat to its enterprise business. Corporate clients who also service the defense sector may be forced to migrate to alternative models to avoid regulatory crossfire.
Meanwhile, the Department of War has signaled that its patience with "woke AI"—a term used by Secretary Hegseth to describe models with heavy safety filtering—has run out. As the U.S. races to integrate generative AI into its command-and-control infrastructure, the divide between "patriotic" AI developers and "safety-first" labs is set to become the industry's defining fault line.