Claude AI at the Center of a Pentagon Ultimatum After Anthropic Rejection
Anthropic has rejected a Pentagon demand to remove safety guardrails from Claude AI, saying “we cannot in good conscience accede to their request,” and the Defense Department has given the company until 5:01 p.m. ET today to comply or risk being labeled a “supply chain risk,” a designation that would bar it from classified national security work.
Claude AI at the center of a supply-chain threat
The Pentagon’s threat to designate Anthropic a supply chain risk would prevent the company from continuing classified work tied to a roughly $200 million government contract and would require other firms doing business with the department to certify they do not use Claude in their workflows. Secretary Hegseth set the deadline and has framed the demand as part of a broader push to change contracting language across the Defense Department.
The contract language the Pentagon wants
In January, Secretary Hegseth directed that all Defense Department contracts incorporate standard “any lawful use” language within 180 days, a move meant to strip out vendor-imposed guardrails that had restricted applications such as autonomous weapons use and mass surveillance of Americans. The Pentagon’s current notice nominally concerns the $200 million contract with Anthropic, but the directive would extend to future contracts across the department if implemented on that 180-day timeline.
Commercial footprint and the company response
Anthropic serves eight of the ten largest U.S. companies and more than 500 customers who spend over $1 million annually on its products, a commercial footprint that executives warn could be damaged if the company is made radioactive to government partners. Anthropic CEO Dario Amodei said in a public statement that he and the company “believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.”
A senior Pentagon official has acknowledged that the standoff with Anthropic has also been viewed internally as a way to set expectations with other AI firms negotiating with the department. That internal framing appears tied to the department’s push to standardize the “any lawful use” clause across contracts within the 180-day window set by Secretary Hegseth.
The immediate legal effect of a supply chain risk designation would be narrow and specific: it would cut Anthropic off from classified national security work tied to current contracts and require other defense contractors to certify they do not integrate Claude into systems used on government work. The designation is typically reserved for foreign firms tied to geopolitical adversaries; applying it to a U.S.-based company would be an unusual step with both contracting and commercial consequences.
Anthropic framed its refusal of the Pentagon’s terms as a matter of principle, telling the department it could not drop the safeguards. The Pentagon’s demand carries a hard, near-term deadline: Anthropic’s CEO was given until 5:01 p.m. ET today to comply.
What happens next is clear on the department’s schedule: Anthropic faces the 5:01 p.m. ET deadline to alter its safeguards, and the Defense Department intends to implement “any lawful use” language across its contracts within 180 days. Officials have positioned those two dates as the immediate milestones in the dispute.