AI showdown: Pentagon gives Anthropic a Friday ultimatum over guardrails
Anthropic is locked in a public dispute with the US Defense Department after its Claude model was linked to a US military operation in January, and Defense Secretary Pete Hegseth has given the company until Friday to loosen rules governing the technology or risk losing its government contract. The confrontation comes as questions mount about how companies set safeguards for AI tools used on classified networks.
How the row began
A row is simmering between the United States government and Anthropic after news agencies reported that Claude was used in a US military operation that resulted in the abduction of Venezuelan President Nicolás Maduro in January this year. US Defense Secretary Pete Hegseth set a Friday deadline for Anthropic to relax restrictions on how its tools can be used by the Pentagon, and warned the company could lose its government contract if it refuses. News agencies, quoting unnamed sources, said the development was disclosed on Tuesday.
AI guardrails at the center of the dispute
Anthropic has refused to back down over safeguards that explicitly prevent its technology from being used for US domestic surveillance and from programming autonomous weapons that can hit targets without human intervention. The company positions itself as a "responsible" developer and describes itself on its website as a "Public Benefit Corporation" committed to the "responsible development and maintenance of advanced AI for the long-term benefit of humanity."
What Claude and LLMs can do — and what they’re not supposed to do
Anthropic is best known for Claude, a popular large language model (LLM). An LLM is a type of AI that generates text, visual or audio output similar to human-created content after analysing massive datasets such as books, archives, websites, pictures and videos. For military and defence use, LLMs can summarise large volumes of text, analyse data, translate, transcribe and draft memos. In theory, LLMs can also support autonomous or semi-autonomous weapons systems that identify and hit targets without human instruction, though most AI companies have terms of service that prohibit such use.
Background on Anthropic and recent security concerns
Anthropic was founded in 2021 by former OpenAI executives and was the first AI developer whose tools were used in classified operations by the US Defense Department, headquartered at the Pentagon in Washington, DC. Last summer the Pentagon awarded defence contracts to four AI companies — Anthropic, Google, OpenAI and xAI — with each contract worth up to $200m. Anthropic was the first company approved for classified military networks, and it reportedly works with partners such as Palantir Technologies, a firm criticised for its links to the Israeli military.
Hacking, resignations and internal tensions
In November, Anthropic alleged that a Chinese state-sponsored hacking group had manipulated Claude Code, the company's coding tool, in an attempt to infiltrate about 30 targets globally, listing government agencies, chemical companies, financial institutions and tech giants among those hit; some of those attempts were successful. Earlier this month, Mrinank Sharma, an AI safety researcher at Anthropic, resigned. On February 9, Sharma wrote on his X account that "the world is in peril," adding he had repeatedly seen how hard it is to let values govern actions and that pressures exist to set aside what matters most.
The immediate consequence of the standoff is clear: Hegseth's Friday ultimatum, though reports have not specified the exact calendar date. The Pentagon's earlier contract awards remain in place pending the outcome, and Anthropic's stance on its guardrails will determine whether it keeps access to classified networks and its position on those defence contracts.