AI firm Anthropic vs the Pentagon over AI guardrails
Anthropic is locked in a dispute with the United States government after its Claude model was used in a military operation linked to the January abduction of Venezuelan President Nicolás Maduro. The company says it will not remove safeguards that restrict certain AI uses, even as US Defense Secretary Pete Hegseth has given Anthropic until Friday to loosen those rules or risk losing a government contract.
Deadline set by Pete Hegseth
On Tuesday, news agencies, quoting unnamed sources, said Pete Hegseth had set the deadline and warned Anthropic to change how its tools can be used by the Pentagon. The warning centres on safeguards that Anthropic maintains to prevent its technology from being used for US domestic surveillance and to programme autonomous weapons that can hit targets without human intervention.
How Claude was used
The dispute intensified after Anthropic’s Claude software was used in a US military operation that resulted in the abduction of Venezuelan President Nicolás Maduro in January this year. Anthropic has refused to back down from the safeguards that limit uses such as domestic surveillance and fully autonomous targeting, stressing that its restrictions remain in place despite Pentagon pressure.
Anthropic's founding and position
Anthropic was founded in 2021 by former OpenAI executives and is best known for building Claude, a popular large language model (LLM). The company positions itself as a “responsible” developer in the AI landscape and describes itself as a “Public Benefit Corporation” committed to the “responsible development and maintenance of advanced AI for the long-term benefit of humanity”.
LLMs and military functions
An LLM is a type of AI technology that generates text, visual or audio output similar to content created by humans, after analysing massive datasets such as books, archives, websites, pictures and videos. For military and defence use, LLMs can summarise large volumes of text, analyse data, translate, transcribe and draft memos. In theory, they can also be used to support autonomous or semi-autonomous weapons systems, which can identify and hit targets without the need for human instruction. However, most AI companies have terms that prohibit this use.
Security incidents and a resignation
In November, Anthropic alleged that a Chinese state-sponsored hacking group had manipulated its Claude Code tool in an attempt to infiltrate about 30 targets globally, including government agencies, chemical companies, financial institutions and tech giants; some of these attempts were successful. Earlier this month, Mrinank Sharma, an AI safety researcher at Anthropic, resigned from his position over concerns about the use of AI. In a post on his X account on February 9, Sharma wrote: “The world is in peril. And not just from AI, or bioweapons, but from whole series of interconnected crises unfolding in this very moment.” He added: “Moreover, throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions. I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too.”
Contracts, partners and approvals
The Pentagon announced last summer that it was awarding defence contracts to four AI companies – Anthropic, Google, OpenAI and xAI. Each contract is worth up to $200m. Anthropic was the first AI developer to be used in classified operations by the US Defense Department and was the first AI company to be approved for classified military networks. It reportedly works on those networks with partners like Palantir Technologies, which has been criticised for its links to the Israeli military. The Defense Department is housed at the Pentagon in Washington, DC.
Anthropic says it will maintain its current safeguards despite the Pentagon’s demand, leaving the confrontation over guardrails and government contracting unresolved as the Friday deadline approaches.