
In a revelation that exposes the widening chasm between Washington’s political directives and the operational realities of modern warfare, reports confirm that the US military relied heavily on Anthropic’s Claude AI during Saturday’s massive airstrikes on Iran. This usage occurred mere hours after President Donald Trump issued a sweeping executive order banning the tool across all federal agencies.
The operation, which targeted key Iranian missile sites and command-and-control infrastructure, was executed with a precision partly attributed to advanced intelligence assessments processed by Claude. The incident highlights a stark contradiction: the very technology the Commander-in-Chief designated a "national security risk" on Friday night was, by Saturday morning, instrumental in executing one of the most significant military engagements of the decade.
The sequence of events leading up to the strikes illustrates the friction between rapid technological integration and bureaucratic governance. On Friday, February 27, 2026, the White House escalated its months-long feud with Anthropic. Citing the company’s refusal to remove ethical "red lines" regarding autonomous lethal force, President Trump ordered an immediate cessation of all government contracts with the San Francisco-based AI firm.
Defense Secretary Pete Hegseth reinforced this directive late Friday evening, formally designating Anthropic a "supply chain risk"—a classification historically reserved for foreign adversaries like Huawei. Yet, as the sun rose over the Middle East on Saturday, US Central Command (CENTCOM) was already deep in the operational phase of the strikes, utilizing the banned software to process real-time battlefield data.
The following table outlines the conflicting directives and actions that defined this chaotic 24-hour window:
Directive vs. Reality: The 24-Hour Conflict
| Policy Directive (Friday) | Operational Reality (Saturday) | Outcome |
|---|---|---|
| Presidential Order: immediate cessation of all Anthropic tools. | CENTCOM Usage: continued use of Claude for real-time intel. | Direct violation of the Executive Order during active combat. |
| DoD Classification: "Supply Chain Risk" and "National Security Threat." | Battlefield Reliance: used for target identification and scenario simulation. | Proven dependency on "banned" tech for mission success. |
| Vendor Status: all contractors ordered to sever ties. | Integration Level: deeply embedded in command workflows. | Immediate removal deemed operationally impossible. |
According to defense sources, the specific application of Claude during the Iran strikes went far beyond basic administrative tasks. The AI model was reportedly used to synthesize vast amounts of satellite imagery, signals intelligence (SIGINT), and open-source data to identify viable targets within Iran’s complex air defense network.
Military insiders suggest that Claude’s ability to process "unstructured" data gave planners a critical speed advantage. In the hours preceding the strike, the AI reportedly assisted in:

- identifying viable targets within Iran’s complex air defense network;
- running scenario simulations for the planned strikes;
- processing real-time battlefield data drawn from satellite imagery, SIGINT, and open-source feeds.

A sketch of what this kind of fusion step could look like appears after the list below.
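The reporting gives no detail on how such a pipeline is actually wired together, but the basic pattern it describes (pooling unstructured text from several intelligence sources and asking a model for one consolidated assessment) is easy to illustrate. The Python sketch below is a minimal hypothetical example built on Anthropic’s public SDK; the report names, prompt wording, JSON schema, and model choice are all assumptions for illustration, not a description of any actual CENTCOM system.

```python
# Hypothetical sketch: fusing unstructured multi-source reports into one
# structured assessment via the public Anthropic SDK. All report contents,
# field names, and the model choice are illustrative assumptions.
import json
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Stand-ins for the "unstructured" inputs the article describes:
# imagery analyst notes, SIGINT transcripts, and open-source snippets.
reports = {
    "imagery_notes": "Analyst free-text describing observed site activity...",
    "sigint_transcript": "Translated intercept fragments, unformatted...",
    "open_source": "News wire and social media excerpts, mixed languages...",
}

prompt = (
    "Fuse the following reports into one assessment. Return JSON with keys "
    "'summary', 'confidence' (low/medium/high), and 'conflicts' (a list of "
    "points where the sources disagree).\n\n"
    + "\n\n".join(f"## {name}\n{text}" for name, text in reports.items())
)

message = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model choice
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)

# Assumes the model returns bare JSON; a production pipeline would validate.
assessment = json.loads(message.content[0].text)
print(assessment["summary"])
```

The point of the sketch is the shape of the workflow, not the specifics: heterogeneous free-text inputs go in, and a single machine-readable assessment comes out fast enough to matter inside a planning cycle.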
The reliance on Claude for these lethal applications directly contradicts Anthropic’s own Terms of Service, which strictly prohibit the use of its models for "violent ends" or "weapons development." This suggests that the Pentagon may have been running a "jailbroken" or locally hosted version of the model, or simply ignoring the vendor’s ethical constraints in the heat of battle.
The ban issued by President Trump was not merely procedural but ideological. The administration has frequently criticized Anthropic for its "Constitutional AI" approach, which embeds safety principles that the White House views as "woke" impediments to American military dominance.
On Truth Social, the President lambasted the company as a "Radical Left" organization, arguing that its refusal to grant the Pentagon unrestricted access to its code constituted a betrayal. The administration’s position is clear: in a time of war, the US military must have absolute control over its digital arsenal, unencumbered by the moral qualms of Silicon Valley engineers.
This political stance culminated in the Friday ultimatum delivered by Secretary Hegseth. The demand was simple: Anthropic must waive its "red lines"—specifically those preventing autonomous targeting without human oversight—or face a total federal blacklist. When CEO Dario Amodei refused, the ban was signed.
The incident has triggered a quiet but intense crisis within the Pentagon. While the political leadership under Hegseth is aligned with the President’s ban, the uniformed leadership and operational commanders face a different reality. For them, AI tools like Claude have become as essential as GPS.
"You cannot simply 'uninstall' a foundational intelligence layer six hours before a strike," noted one defense analyst. "The White House treated this like deleting an app from a phone, but in reality, it is more like ripping out the wiring of a house while the lights are on."
This disconnect suggests that while the political will exists to pivot away from "ethically constrained" AI providers, the technical transition is far slower. The immediate fallout has been a rush by competitors to fill the void, with OpenAI reportedly finalizing a classified deal to deploy its models within DoD systems just hours after the Anthropic ban was announced.
The events of this weekend serve as a critical case study for the future of AI governance. The "Anthropic Paradox"—where a tool is simultaneously a banned security risk and a critical warfighting asset—exposes the fragility of current AI procurement policies.
Key takeaways for the defense industry include:

- Deep integration outlasts policy: a model embedded in command workflows cannot be removed on a political timeline.
- Bans without migration plans fail: transitioning away from an "ethically constrained" provider takes months, not hours.
- Vendor terms carry little weight in combat: commercial usage restrictions are unenforceable once a tool is inside operational systems.
- The market moves fast: competitors such as OpenAI will rush to fill any void a blacklist creates.
As the dust settles over the targets in Iran, the battle in Washington is just beginning. The administration is expected to launch an investigation into CENTCOM's defiance of the ban, potentially leading to courts-martial or resignations. However, the silent victory belongs to the algorithm: regardless of the policy, the AI was in the loop, and it worked.
The US strikes on Iran have reshaped the geopolitical landscape, but they have also drawn a new battle line in the domestic technology sector. The use of Anthropic’s Claude AI in direct violation of a presidential order proves that in the modern era, technological capability often overrides political authority.
For Creati.ai, this marks a turning point. The era of "dual-use" AI—where the same model writes poetry and plans airstrikes—may be coming to an end. In its place, we are witnessing the bifurcation of the industry into "civilian safe" AI and "military grade" tools, with the US government demanding that its silicon soldiers follow orders just as blindly as its human ones.