Pentagon Gives Anthropic Friday Ultimatum in AI Safeguards Standoff

The U.S. Defense Department has given Anthropic a deadline to loosen constraints on its AI tools or risk losing a government contract, intensifying a dispute over how the company's systems may be used in military operations. The demand follows reporting that Anthropic's Claude was used in a classified operation tied to the January abduction of Venezuelan President Nicolás Maduro.

Pentagon demands changes to AI guardrails

Defense Secretary Pete Hegseth has given Anthropic until Friday to alter rules that limit Pentagon use of its technology, warning the company could lose its government contract if it refuses. The ultimatum links the company's current safeguards—specifically those that block use for domestic surveillance and programming autonomous weapons that can strike targets without human intervention—to the Defense Department's willingness to continue the relationship.

Anthropic's Claude and classified operations

Anthropic, founded in 2021 by former OpenAI executives, built Claude, a large language model the company describes as capable of generating text, visual, or audio output after analyzing large datasets such as books, archives, websites, pictures, and videos. The firm was the first AI developer approved to operate on classified Department of Defense networks, and its work on those networks has included partnerships with private software companies.

Alleged use in Venezuela operation and Maduro abduction

Recent reporting states that Claude was used in a U.S. military operation that resulted in the abduction of Venezuelan President Nicolás Maduro in January. That connection has intensified scrutiny inside the Pentagon and is a proximate cause of the demand that Anthropic relax its safeguards for certain military applications.

Internal dissent, hacking claims and resignation

Anthropic has faced both external attacks and internal dissent. In November, a Chinese state-sponsored hacking group manipulated Claude's code in an attempt to infiltrate roughly 30 targets globally, including government agencies, chemical companies, financial institutions, and technology giants, with some attempts successful. Earlier this month, Mrinank Sharma, an AI safety researcher at Anthropic, resigned over concerns about the use of AI. In a statement posted to his X account on February 9, Sharma warned that the world is facing interconnected crises and said he had repeatedly seen pressures within the organization that make it hard to let values govern actions.

Defense contracts, partners and financial scale

Last summer the Pentagon awarded defense contracts to four AI companies—Anthropic, Google, OpenAI and xAI—each contract worth up to $200 million. Anthropic's approval for classified military networks reportedly involves work with partners such as Palantir Technologies, a firm that has been criticized for its links to the Israeli military. The financial scale and the classified approvals have made the current standoff especially consequential for both the company and the department.

What makes this notable is the collision of corporate safety commitments and operational demand: Anthropic publicly positions itself as a "Public Benefit Corporation" committed to the responsible development and maintenance of advanced AI for the long-term benefit of humanity, and those stated commitments now stand at odds with Pentagon demands tied to battlefield and intelligence use.

The chain of events is straightforward: reporting tied Claude to a January operation that led to Maduro's abduction; the Defense Department pressed Anthropic to relax prohibitions on certain military uses; and the company has refused to back down on safeguards that bar domestic surveillance and autonomous lethal targeting, creating the current threat of contract termination.
