Boeing Stock Mentioned as US Military Used Claude in Iran Strikes After Presidential Ban
Boeing stock surfaced in online and policy discussions as the US military used Anthropic’s AI model Claude to inform a joint US–Israel bombardment of Iran, hours after the president ordered federal agencies to stop using the tool. The sequence of orders and battlefield use underscores how embedded commercial AI has become in military operations and why the dispute matters now.
Boeing Stock and the Timing of Claude’s Use in the Iran Operation
The military’s adoption of Claude for intelligence, target selection and battlefield simulations overlapped with a politically charged directive that came only hours earlier: the president ordered all federal agencies to cease using the company’s tools immediately. That directive preceded a massive joint US–Israel bombardment of Iran that began on Saturday, yet military commands continued to rely on Claude during planning and execution. The proximity of the order to the operation highlights friction between rapid operational needs and political decisions.
What makes this notable is the narrow window between the presidential ban and the start of the strikes. The defense apparatus moved forward with systems already woven into classified workflows, and with active missions under way, unplugging those systems instantly was a practical impossibility rather than a simple compliance step.
Anthropic, Dario Amodei and the Contract Conflict
Anthropic’s leadership, including CEO Dario Amodei, has pushed to amend existing contracts to constrain uses the company judges outside safe bounds, specifically citing mass surveillance and fully autonomous weapons as unacceptable. The company says it has deployed its models across several classified federal networks and does not object to individual military operations on an ad hoc basis, but it draws a line where it believes AI undermines democratic values or exceeds the limits of technical reliability.
Relations frayed after Anthropic objected to military use of Claude in an earlier operation to capture the president of Venezuela in January, invoking terms of use that prohibit applying the model to violent ends, weapon development or surveillance. Those objections set the stage for the current clash over how broadly the Defense Department can apply Anthropic’s technology.
Pete Hegseth, Transition Timeline and Operational Consequences
Defense Secretary Pete Hegseth responded to the dispute by demanding full and unrestricted access to Anthropic’s models for all lawful purposes, threatening to designate the company a supply-chain risk and strip it of government contracts. At the same time, he acknowledged the difficulty of cutting over from an entrenched toolset, instructing Anthropic to continue providing services for no more than six months to allow a seamless transition to an alternative provider.