AI standoff with the Pentagon raises new risks as Hegseth sets Friday deadline for Anthropic CEO

The dispute over how the US military can use Anthropic's Claude is no longer theoretical — a hard deadline has turned it into a near-term risk decision for both government and industry. The outcome matters because a forced rollback of guardrails would change what the DoD can do with AI models, who controls acceptable military use, and whether other firms will resist similar pressure.

AI uncertainty: a deadline that could rewrite access and guardrails

At stake is whether the Pentagon secures unfettered access to capabilities it says it needs, or whether safety-first design choices remain binding limits. The deadline compresses months of disagreement into a binary outcome that could produce immediate penalties, alter procurement relationships, and signal to researchers and companies how far they should accommodate military use.

What happened in the meeting and the immediate demand

US military leaders, including Pete Hegseth, the defense secretary, met with Anthropic executives on Tuesday to negotiate terms for the military’s use of the company’s Claude model. Hegseth gave Dario Amodei, the Anthropic CEO, until the end of the day on Friday to accept the department’s terms or face penalties. The Department of Defense has already integrated Claude into its operations but has warned it may sever the relationship if it encounters what top brass view as roadblocks.

Negotiation pressure points, prior deals and possible penalties

Defense officials have pushed for broad access to Claude's capabilities, while Anthropic has resisted allowing its product to be used for mass surveillance or for autonomous weapons systems that can kill people without human input. Senior officials have threatened punitive steps if Anthropic does not comply, including canceling a massive contract and designating the company a "supply chain risk." The DoD struck deals in July last year with several major AI firms, including Anthropic, Google and OpenAI, offering contracts worth up to $200m. Until this week, Claude was the only model permitted for use in the military's classified systems; on Monday the department signed a deal allowing military personnel to use another chatbot in classified systems. That chatbot has recently faced backlash over producing nonconsensual sexualized images of children.

Political pressure, industry responses and individuals pushing for a change

There is a broader push from the Trump administration to integrate AI into the military, and Donald Trump has repeatedly vowed the US will win a global AI arms race. Emil Michael, the Pentagon's chief technology officer and a former Uber executive, has publicly campaigned for Anthropic to "cross the Rubicon" and agree to the government's terms. Michael said last week that "if someone wants to make money from the government, from the US Department of War, those guardrails ought to be tuned for our use cases – so long as they're lawful."

Amodei has long spoken in favor of greater regulation of AI, and his company has backed a political action committee advocating for stronger safeguards over artificial intelligence. He also opposed Trump during the 2024 US presidential campaign.

For readers tracking the sector, this dispute is both a procurement fight and a test case for industry norms on safety versus military demand: how the balance lands will shape vendor behavior going forward.

  • DoD deals in July last year offered up to $200m to major AI firms, including Anthropic, Google and OpenAI.
  • Claude was the only model cleared for classified systems until a new agreement on Monday allowed another chatbot in classified use.
  • The meeting occurred a month after the US military reportedly used Claude to assist in the capture of Nicolás Maduro.
  • Hegseth gave Amodei a deadline of end of day Friday to accept the department’s terms or face penalties.
  • Possible penalties include canceling a massive contract and labeling Anthropic a "supply chain risk."
  • Anthropic resists use of Claude for mass surveillance and autonomous weapons that can kill without human input.

Key takeaways:

  • This deadline forces a rapid decision that could change the DoD’s access to Claude and set precedents for other vendors.
  • Pressure from the Pentagon and public political commitments to an AI arms race raise the stakes for corporate safety commitments.
  • Other firms have already accepted government terms; one defense official said OpenAI allowed its model for "all lawful purposes," and OpenAI did not immediately respond to a request for comment on its agreement.
  • The unfolding outcome will be an early indicator of whether safety-forward firms can maintain limits under direct military pressure.

The real question now is how Anthropic responds before Friday and whether the DoD follows through on threats that would have immediate procurement and reputational consequences. Details may still shift as negotiations continue.

It's easy to overlook, but the narrow choices made in the next few days could shape where the AI industry draws its red lines on military use for years.