Anthropic Stock in Focus After Trump Order and OpenAI‑Pentagon Deal

Attention around Anthropic stock is suddenly central to a broader dispute between the White House, the Pentagon and leading AI companies after President Trump directed federal agencies to stop using Anthropic technology and OpenAI announced a separate agreement with the Pentagon. The moves underscore a standoff over how classified military networks may use advanced AI, and on what ethical terms.

OpenAI reaches deal to supply AI to classified US military networks

OpenAI said it had struck a deal with the Pentagon to supply AI to classified US military networks. CEO Sam Altman announced the move on Friday night and said the agreement includes explicit assurances that the technology will not be used for domestic mass surveillance or for autonomous weapon systems that kill without human input. Altman wrote that the Pentagon "agrees with these principles, reflects them in law and policy, and we put them into our agreement." He added that the contract covers prohibited uses that fall outside lawful or suitable cloud deployments.

Anthropic Stock under scrutiny after Trump orders agencies to cease use

The OpenAI-Pentagon announcement came shortly after the president said he would direct all federal agencies to "IMMEDIATELY CEASE" all use of Anthropic technology. Negotiations between Anthropic and the administration had broken down after Anthropic sought assurances that its technology would not be used for mass surveillance or for autonomous weapons systems that can kill people without human input. The Pentagon had demanded that Anthropic loosen the ethical guidelines on its AI systems or face severe consequences.

President lashes out and a pointed public statement follows

The president publicly criticized Anthropic on his social platform, writing: "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the [Pentagon], and force them to obey their Terms of Service instead of our Constitution." That statement came amid the administration's directive and helped crystallize the political dimension of the Pentagon's push for access to commercial AI models.

Industry reaction: employees and rival companies push back

In the days around the government actions, Anthropic drew support from competitors and their staff. Nearly 500 OpenAI and Google employees signed an open letter declaring "we will not be divided." The letter said, "The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused," and added, "They're trying to divide each company with fear that the other will give in." It remains unclear how OpenAI staff will respond to the company's decision to negotiate deployment in classified environments.

Altman’s memo reiterates safety red lines and plans for classified deployment

Altman sought to reassure OpenAI employees in a memo sent on Friday night. In that memo, which was obtained by media outlets, he wrote: "Regardless of how we got here, this is no longer just an issue between Anthropic and the [Pentagon]; this is an issue for the whole industry and it is important to clarify our stance." He added, "We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines." Altman also said the company would "see if there is a deal with the [Pentagon] that allows our models to be deployed in classified environments and that fits with our principles," and that OpenAI would ask for contracts to exclude uses that are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons.

Anthropic’s safety posture and the unresolved dispute

Anthropic presents itself as the most safety-forward of the leading AI companies, a positioning that shaped its push for contractual guarantees against domestic surveillance and fully autonomous lethal systems. The standoff followed months of friction with the administration, and how the dispute will ultimately be resolved remains unclear.