Trump Orders Administration to Halt Use of Anthropic’s AI Immediately

Donald Trump has mandated an immediate halt to the use of Anthropic’s artificial intelligence (AI) technology across all U.S. federal agencies. This decision comes after Anthropic, a California-based startup, declined to remove restrictions on the military’s access to its AI model, Claude.

Trump’s Order and Anthropic’s Response

The president voiced his displeasure on his social media platform, Truth Social, stating, “We do not need it, we do not want it, and we will no longer work with them.” His criticism targeted Anthropic’s ethical restrictions on how its AI may be used.

Background on the Controversy

The conflict escalated after Defense Secretary Pete Hegseth issued an ultimatum demanding that AI providers lift usage restrictions on their models by a deadline that has since passed. Anthropic refused to comply, maintaining clear limits on how its technology may be deployed.

Ethical Considerations in AI

  • Anthropic aims to prevent its technology from being used for mass surveillance of U.S. citizens.
  • The startup opposes the development of fully autonomous lethal weapons without human oversight.

Dario Amodei, the CEO of Anthropic, defended the company’s position, arguing that advanced AI systems are not yet reliable enough for military control. He stated that deploying such systems without appropriate safeguards is unsafe for both military personnel and civilians.

Reactions from the Tech Community

In response to Trump’s decision, roughly 400 employees of Google and OpenAI publicly backed Anthropic in an open letter, urging their own companies’ leadership to stand together and resist the Pentagon’s demands on AI usage.

Union Support

Additionally, unions representing workers at Amazon, Microsoft, and Google called on their employers to reject the Pentagon’s requests. This collective action signals growing concern among tech workers over military directives related to AI.

Future Implications

Trump’s directive begins a six-month transition period during which the Department of Defense will phase out Anthropic’s tools. Founded in 2021 by former OpenAI employees, Anthropic has built its reputation on a safety-focused approach to AI development.

In 2026, the company plans to publish usage guidelines for Claude focused on restricting dangerous actions. Trump’s administration now faces the challenge of finding alternative AI solutions while addressing national security concerns.

Conclusion

The halt in using Anthropic’s AI raises significant questions about the intersection of technology, ethics, and national security. As this situation unfolds, it remains to be seen how the U.S. military will adapt its AI strategies moving forward.