AI standoff: Pentagon threatens to make Anthropic a pariah if it refuses to drop AI guardrails
US Defense Secretary Pete Hegseth has given Anthropic until Friday to loosen rules on how its tools may be used by the Pentagon, a demand that sharpens a broader fight over AI safeguards and government access. The confrontation follows claims that Anthropic’s Claude was used in a military operation, and it comes amid security concerns and internal dissent at the company.
Pete Hegseth’s deadline and the government contract at stake
On Tuesday, US Defense Secretary Pete Hegseth issued a deadline: Anthropic must loosen its restrictions on Pentagon use of its tools by Friday or risk losing a government contract. The demand, reported that day and attributed to unnamed sources, presents the company with a direct choice about whether to alter the rules governing its work with the US Defense Department.
Anthropic's safeguards and AI stance
Anthropic is refusing to back down over safeguards that prevent its technology from being used for US domestic surveillance and from programming autonomous weapons that can hit targets without human intervention. The company positions itself as a "responsible" developer and, on its website, describes itself as a "Public Benefit Corporation" committed to the "responsible development and maintenance of advanced AI for the long-term benefit of humanity." Anthropic was founded in 2021 by former OpenAI executives.
Claude, classified operations and the January allegation
Anthropic is best known for building Claude, a popular large language model (LLM). Claude was the first AI tool from an external developer to be used in classified operations by the US Defense Department. The software was also reported to have been used in a US military operation in January this year that resulted in the abduction of Venezuelan President Nicolás Maduro.
Hacking, resignations and public warnings from staff
In November, Anthropic alleged that a Chinese state-sponsored hacking group had manipulated its Claude Code tool in an attempt to infiltrate about 30 targets globally, including government agencies, chemical companies, financial institutions and tech giants; some of the attempts were successful. Earlier this month, Mrinank Sharma, an AI safety researcher at Anthropic, resigned over concerns about the use of AI. In a post on his X account on February 9, Sharma wrote: "The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment." He added that he had repeatedly seen how hard it is to let values govern actions, including pressures within the organisation to set aside what matters most.
Defence contracts, LLM uses and partner ties
Last summer, the Pentagon announced it was awarding defence contracts worth up to $200m each to four AI companies: Anthropic, Google, OpenAI and xAI.

Large language models, or LLMs, are a type of AI technology that generates text, visual or audio output resembling human-created content after analysing massive datasets such as books, archives, websites, pictures and videos. For military and defence use, LLMs can summarise large volumes of text, analyse data, translate, transcribe and draft memos; in theory, they can also support autonomous or semi-autonomous weapons systems that identify and hit targets without human instruction. Most AI companies have terms that prohibit that latter use.

Anthropic was the first AI company approved for classified military networks, and it reportedly works with partners such as Palantir Technologies, which has been criticised for its links to the Israeli military.
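To illustrate the text-summarisation workload described above, the following is a minimal sketch using Anthropic's official Python SDK. The model identifier, prompt and input file are assumptions for illustration only; they are not drawn from the reporting or from any Pentagon deployment.

```python
# A minimal sketch of LLM-based text summarisation, using Anthropic's
# official Python SDK ("pip install anthropic"). The model name and
# prompt below are illustrative assumptions; an ANTHROPIC_API_KEY must
# be set in the environment.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def summarise(document: str) -> str:
    """Ask the model for a three-bullet summary of `document`."""
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model identifier
        max_tokens=500,
        messages=[{
            "role": "user",
            "content": "Summarise the following text in three bullet points:\n\n" + document,
        }],
    )
    # The reply arrives as a list of content blocks; the first holds the text.
    return message.content[0].text

if __name__ == "__main__":
    with open("report.txt", encoding="utf-8") as f:  # illustrative input file
        print(summarise(f.read()))
```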