Trump Administration Appeals Court Ruling Blocking Pentagon's Anthropic Ban
The DOJ is appealing a federal judge's order that blocked the Pentagon's supply-chain risk designation against Anthropic over Claude's military use guardrails.
U.S. District Judge Rita Lin issued a preliminary injunction halting the Pentagon's designation of Anthropic as a national security supply-chain risk, ruling the move was likely unlawful retaliation for the company's refusal to remove AI safety guardrails for military use.
Anthropic has filed a court response denying that it ever agreed to allow the Pentagon to sabotage or disable its Claude AI tools, contradicting DoD claims and escalating a high-profile dispute over AI safety guardrails in US military applications.
A memo from Deputy Secretary of Defense Steve Feinberg confirms the Pentagon will designate Palantir's Maven AI as an official program of record, embedding its weapons-targeting technology across all US military branches by September 2026.
The US Department of War has raised fresh national security concerns about Anthropic in a court filing, citing the AI company's employment of a large number of foreign nationals, including workers from China, and arguing this increases adversarial risk under China's National Intelligence Law.
At a Google DeepMind town hall, VP Tom Lue and CEO Demis Hassabis told employees the company is actively expanding national security AI contracts with the Pentagon, having removed its previous pledge against weapons-related AI use.
The Pentagon's chief technology officer says the DoD is confident it can phase out Anthropic's Claude within the six-month deadline, deploying OpenAI's models and Google's Gemini as alternatives, even as military users and industry experts warn the transition will be complex.
The Trump administration filed a court brief arguing Anthropic's AI safety restrictions make it an unacceptable national security risk, defending the Pentagon's supply-chain-risk designation.
Senator Elissa Slotkin introduced legislation to codify into law that AI cannot autonomously decide to kill a target, cannot be used for mass surveillance of Americans, and cannot launch nuclear weapons — directly responding to the Pentagon-Anthropic conflict.
The U.S. Department of Defense has begun engineering work to replace Anthropic's Claude AI across its systems after labeling the company a supply-chain risk, while Anthropic sues the Pentagon in federal court challenging the unprecedented blacklist.
Anthropic's lawsuit against the Pentagon over its 'supply chain risk' designation gained new momentum as the ACLU and CDT filed an amicus brief, arguing the designation unlawfully punishes the company's First Amendment-protected AI safety advocacy.
Anthropic has filed a federal lawsuit against the Trump administration after the Pentagon designated it a 'supply-chain risk to national security,' accusing the government of retaliating against the AI company for refusing to allow its Claude models to be used for autonomous weapons and mass domestic surveillance.
Anthropic's legal and public dispute with the U.S. Department of Defense has intensified after the Pentagon's AI chief called the company's stance 'bananas.' The conflict centers on a supply-chain risk designation that Anthropic argues threatens its ability to operate independently.
Microsoft filed an amicus brief in federal court supporting Anthropic's lawsuit against the Pentagon, urging a judge to issue a temporary restraining order blocking the DOD's supply chain risk designation and warning the ban could 'hamper U.S. warfighters' by disrupting AI systems already in use.
Employees from OpenAI, Google DeepMind, and other AI companies have rushed to Anthropic's defense by filing an amicus brief in its lawsuit against the Department of Defense over AI safety restrictions.
Caitlin Kalinowski, head of OpenAI's robotics team, has resigned citing concerns that OpenAI's agreement with the Department of Defense lacked sufficient guardrails against domestic surveillance and lethal autonomous weapons, calling it a governance failure.
Anthropic CEO Dario Amodei vows to challenge in court the Pentagon's unprecedented decision to label the AI company a national security supply chain risk, the first time a US firm has received such a designation.
Anthropic CEO Dario Amodei is reportedly back at the negotiating table with Pentagon official Emil Michael, seeking a compromise AI contract even after the DoD blacklisted the company and struck a rival deal with OpenAI.
The U.S. Department of War terminated its $200M contract with Anthropic after the AI company refused to lift safety restrictions on autonomous weapons and mass surveillance, designating it an unprecedented supply-chain risk to national security.
OpenAI CEO Sam Altman confirms the company is amending its Department of Defense contract to add explicit limits on AI use for mass domestic surveillance and fully autonomous weapons, following intense public backlash.