OpenAI Shares Contract Language as Anthropic Claude Faces Blacklist

OpenAI published contract language from its agreement with the Department of War and argued the deal contains stronger safety guardrails than the terms that led to Anthropic Claude being blacklisted, a disclosure intended to reshape how the government and AI labs interact going forward.

OpenAI’s posted contract excerpts

The company released clauses it said bar the use of its technology for mass domestic surveillance, powering autonomous weapons, and enabling high-stakes decision systems such as "social credit" scores. The post said the agreement preserves the company's control over its safety stack, requires cloud deployment, keeps cleared company personnel in the loop, and includes contractual protections layered on top of existing law.

Anthropic Claude and the blacklist

The blacklist arose after Anthropic refused to accept the military's terms of use for its frontier model; it was declared a supply chain risk following that refusal. The company later said that "no amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons" and vowed to challenge any supply chain risk designation in court.

Sam Altman’s public responses

OpenAI chief executive Sam Altman engaged publicly after the contract post, answering questions and reiterating that OpenAI believes its agreement offers a more expansive, multi-layered approach to protecting red lines than the prior agreement at issue. He said that Anthropic "seemed more focused on specific prohibitions in the contract, rather than citing applicable laws," and suggested Anthropic may have sought greater operational control.

De-escalation and the next steps

OpenAI framed its agreement in part as an attempt to de-escalate tensions between the Department of War and AI labs. It requested that the same contractual terms be made available to all AI labs and urged the government to try to resolve the dispute with Anthropic. OpenAI also argued that Anthropic should not be designated a supply chain risk and explained that the deal was intended to support deeper collaboration between the government and AI developers.

  • OpenAI shared contract language it says restricts domestic surveillance and autonomous weapons.
  • Anthropic was blacklisted after refusing the military's terms and plans to challenge the designation in court.
  • OpenAI says its deal aims to de-escalate and asked that similar terms be offered to other AI labs.

Analysis: The publication of contract excerpts signals a push for transparency around how commercial AI companies and the Department of War are defining operational and legal boundaries for classified deployments. If the government makes equivalent terms available and engages with Anthropic, the companies say the current friction could ease; if those steps do not occur, the dispute may proceed through legal and administrative channels whose details have not been publicly confirmed.