
In a rare display of cross-industry solidarity that transcends corporate rivalries, employees from Google and OpenAI have joined forces to support Anthropic in its high-stakes standoff with the U.S. Department of Defense. On Friday, February 27, 2026, over 300 employees from Google DeepMind and more than 60 from OpenAI signed an open letter titled "We Will Not Be Divided," urging their respective leaderships to stand firm against the Pentagon's demand for unrestricted access to advanced AI models.
This collective action marks a pivotal moment in the ongoing debate over the militarization of artificial intelligence. It highlights a growing rift between the developers of frontier AI models, who seek to maintain ethical guardrails, and defense officials who view these technologies as critical assets in a "wartime arms race." As the Pentagon threatens to invoke the Defense Production Act to compel compliance, the tech workforce is signaling that ethical "red lines" regarding autonomous weapons and domestic surveillance are non-negotiable.
At the heart of this controversy is Anthropic’s refusal to modify its Terms of Service to accommodate the Pentagon's request for "unrestricted use" of its Claude models. Anthropic, led by CEO Dario Amodei, has consistently maintained two specific prohibitions in its government contracts:

- No use of its models for domestic surveillance of U.S. citizens.
- No use of its models in fully autonomous lethal weapons systems.
The Department of Defense, represented by Defense Secretary Pete Hegseth, has reportedly demanded that Anthropic waive these specific restrictions and allow for "all lawful purposes." While the Pentagon asserts it has no intention of conducting illegal surveillance, it argues that contract language restricting "lawful" use creates operational vulnerabilities and sets a dangerous precedent for vendor control over military capabilities.
The open letter from Google and OpenAI employees explicitly endorses Anthropic's position. "We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands," the letter states. The signatories argue that the government's pressure campaign is designed to fracture the industry's ethical consensus by playing the companies against one another.
The standoff escalated significantly in late February 2026 when the Pentagon reportedly issued an ultimatum to Anthropic. Failure to comply by the Friday deadline could result in the invocation of the Defense Production Act (DPA)—a Cold War-era law that allows the president to compel private companies to prioritize national defense contracts.
Furthermore, defense officials have threatened to label Anthropic a "supply chain risk" if it does not accede to the demands. This designation, typically reserved for foreign entities deemed hostile to U.S. interests (such as certain Chinese telecommunications firms), would effectively blacklist Anthropic from federal procurement and could damage its commercial reputation.
The open letter condemns these tactics as retaliatory. "The Department of War is threatening to invoke the Defense Production Act... all in retaliation for Anthropic sticking to their red lines," the text reads. The employees warn that if the Pentagon succeeds in coercing Anthropic, it will turn its attention to Google and OpenAI next, demanding the removal of their remaining safeguards under the guise of standardization.
A key theme of the employee letter is the accusation that the Pentagon is utilizing a "divide and conquer" strategy. By negotiating separately with Google, OpenAI, and Anthropic, the Department of Defense creates a prisoner's dilemma: each company fears that if it holds out for ethical standards, a competitor will capitulate and secure the lucrative government contracts.
"They're trying to divide each company with fear that the other will give in," the letter explains. "That strategy only works if none of us know where the others stand."
This fear is not unfounded. Reports indicate that OpenAI and Google have previously shown more flexibility in their negotiations regarding "unclassified" military use. In early 2024, OpenAI quietly removed the explicit ban on "military and warfare" from its usage policies, replacing it with a more general prohibition on "harm," ostensibly to allow for cybersecurity and disaster relief collaborations with the military. However, the current demand for unrestricted lethal autonomy and surveillance capabilities appears to be a bridge too far even for the workforces of these more permissive companies.
To understand the landscape of this conflict, it is essential to compare the current public stances of the major AI labs regarding military integration.
Table: Major AI Labs and Military Policy Stances
| Company | Core Stance on Military Use | Specific "Red Lines" | Status of Pentagon Relations |
|---|---|---|---|
| Anthropic | Strict conditional cooperation | Non-negotiable: no domestic surveillance; no autonomous lethal weapons | Hostile: facing threats of DPA invocation and "supply chain risk" labeling |
| OpenAI | Collaborative / evolving | Generally opposes autonomous weapons; policy shifted in 2024 to allow "national security" use | Active negotiation: CEO Sam Altman reportedly seeking safeguards similar to Anthropic's |
| Google DeepMind | Cautious / internal tension | AI Principles (2018): no weapons or surveillance violating human rights; bans technology whose principal purpose is harm | Strained: employees pressuring leadership (Jeff Dean) to uphold 2018 commitments |
| Microsoft | Strategic partner | Aligns with customer (DoD) on legality; provides infrastructure for classified models | Integrated: deeply embedded in defense infrastructure via the Azure/OpenAI partnership |
The current wave of activism draws direct parallels to Google’s "Project Maven" controversy in 2018. During that period, thousands of Google employees signed a petition protesting a Pentagon contract to use AI for analyzing drone footage, eventually forcing the company to let the contract expire and release a set of "AI Principles" that forbade the use of Google AI for weaponry.
However, the geopolitical climate of 2026 is markedly different. With the intensification of global AI competition and perceived threats from rival nations, the pressure to deploy "sovereign AI" capabilities has increased. The "We Will Not Be Divided" letter suggests that tech workers are acutely aware that the ethical victories of 2018 are being eroded.
"This letter serves to create shared understanding and solidarity in the face of this pressure," the employees wrote. The coalition includes engineers, researchers, and policy staff who argue that the deployment of AI in warfare requires strict human accountability—a feature that fully autonomous systems inherently lack.
The response from corporate leadership has been mixed but indicates the pressure is working. Following the release of the letter, reports surfaced that OpenAI CEO Sam Altman addressed staff, stating that he, too, is seeking an agreement with the Pentagon that includes red lines similar to Anthropic’s. This suggests that the "united front" requested by the employees might be forming at the executive level as well.
At Google, the internal pressure is directed at Jeff Dean, Chief Scientist of Google DeepMind. A separate internal letter signed by over 100 Google staff specifically urged him to ensure Google does not undermine Anthropic’s position.
The confrontation between Anthropic and the Pentagon, amplified by the support of Google and OpenAI employees, represents a critical juncture in the history of technology governance. It poses a fundamental question: Who ultimately decides how powerful AI is used—the government that funds it, or the creators who understand its risks?
If the Pentagon successfully uses the Defense Production Act to force Anthropic’s hand, it could establish a precedent that national security interests override private sector ethical commitments. Conversely, if the "united front" of tech workers holds, it could force the military to accept that even in warfare, the deployment of artificial intelligence must have boundaries.
As the Friday deadline passes and the industry awaits the Pentagon's next move, the solidarity shown by these 360+ employees serves as a reminder that the human element in AI development remains a potent political force.
This report is based on events occurring up to March 1, 2026.