Pentagon Directive Commands Removal of Anthropic AI from Military Systems

The U.S. Defense Department has mandated the removal of Anthropic’s artificial intelligence products from military systems within 180 days, as revealed in a memorandum obtained by CBS News. Dated March 6, this internal directive was issued shortly after the Pentagon classified Anthropic as a supply chain risk.

The memorandum, signed by Chief Information Officer Kirsten Davies, states that Anthropic AI “presents an unacceptable supply chain risk” across all Department of Defense (DoD) systems. This directive highlights the significant actions military leaders must undertake to eliminate Anthropic’s technology from critical national security frameworks, including those involved in nuclear weapons operations, ballistic missile defense, and cybersecurity.

Timelines and Requirements

In addition to requiring the military to sever ties with Anthropic AI, the memo also stipulates that any contractor engaged with the Pentagon must cease using Anthropic products within the same time frame. Davies emphasized the potential risks posed by adversaries who might exploit vulnerabilities linked to Anthropic’s AI, which could result in catastrophic consequences for military operations.

  • Deadline for removal: 180 days
  • Systems affected: Nuclear, ballistic missile defense, and cyber warfare
  • Contractors must also comply with this directive

Exceptions may only be granted under specific circumstances. According to Davies, exemption requests necessitate a comprehensive risk mitigation plan and must relate directly to national security operations in cases where no alternative solutions exist.

Historical Context and Repercussions

The move is unprecedented: it is the first time the federal government has designated an American company a supply chain risk. Previous designations targeted foreign firms, such as the ban on the Chinese company Huawei during President Trump’s administration.

The clash between Anthropic and the Pentagon escalated following Anthropic’s requests for safeguards to prohibit military use of its AI for mass surveillance or fully autonomous weaponry. Anthropic’s CEO, Dario Amodei, stated that these requests were meant to uphold American values.

Meanwhile, the Pentagon has expressed a desire to deploy Anthropic’s AI, known as Claude, for all lawful purposes, claiming that concerns raised by the company have already been addressed under existing laws.

Current Use of Claude

Sources indicate that Claude is already in use in U.S. military operations, particularly in intelligence work related to conflicts such as the one involving Iran. Anthropic remains the only AI company with active deployments on classified Pentagon systems.

The recent discord between Anthropic and the Pentagon follows the conclusion of negotiations last month, after which Anthropic filed lawsuits against the government. The company alleges that the Pentagon’s actions constitute illegal retaliation for its policies restricting military uses of its AI.

White House spokesperson Liz Huston responded by stressing that national security should not be compromised by corporate interests. Meanwhile, retired Navy Admiral Mark Montgomery noted that AI-enabled operations are processing intelligence significantly faster, enhancing military effectiveness.

As the situation unfolds, the Defense Department’s directive to remove Anthropic AI emphasizes the complex interplay of technology, security, and policy in modern military operations.