Senator Proposes Bill to Restrict Pentagon’s Use of Lethal AI
Sen. Elissa Slotkin of Michigan has introduced the AI Guardrails Act, a proposal that would bar the military from using AI to surveil Americans or to authorize lethal strikes without human intervention.
What the bill would do
The bill would prohibit AI systems from making decisions to launch nuclear weapons and would also bar fully autonomous targeting and mass surveillance of U.S. persons.
Slotkin, a former CIA analyst, said during a recent Armed Services Committee hearing that the measure is needed because the use of AI in conflict is accelerating.
Lawmakers and context
Slotkin previously drew criticism from the prior administration after urging service members to refuse illegal orders.
She argued that Congress has failed to provide clear rules for military AI and called for bipartisan action to set legal guardrails.
Pentagon response
The Defense Department maintains that it does not intend to use AI for mass surveillance of Americans, and a department official said it is not seeking autonomous weapons that operate without human control.
The Pentagon says current practices already reflect those principles and that humans make final targeting decisions.
AI in recent operations
Filmogaz.com has reported widespread AI use in U.S. operations in Iran, where Palantir's Maven platform has been paired with Anthropic's Claude model to process intelligence and mapping data.
Central Command’s Adm. Brad Cooper said these tools speed analysis, helping commanders identify potential targets faster. He emphasized that humans retain final authority over strikes.
The U.S. bombing campaign in Iran has drawn scrutiny after a likely U.S. strike destroyed a girls’ primary school in Minab. Reports say at least 175 people were killed in that attack.
Tech firms and procurement disputes
Major AI firms, including OpenAI, Google, and xAI, along with defense contractor Anduril, provide or have agreements to supply AI systems for defense, and those partnerships have grown rapidly.
The Pentagon and Anthropic clashed over contract terms: Anthropic sought guarantees that its models would not be used for mass surveillance or in fully autonomous weapons.
CEO Dario Amodei said last month that the company could not accept terms it viewed as unconscionable.
Supply chain designation and legal fight
Defense Secretary Pete Hegseth declared Anthropic a supply chain risk, and the Trump administration ordered federal agencies to stop using Anthropic technology within six months.
Anthropic filed suit to contest the designation. The company argues the decision amounts to ideological punishment.
Broader implications
Slotkin’s move reflects wider concern about how to restrict the Pentagon’s use of lethal AI while preserving lawful, human-led operations. Lawmakers will now debate how to balance ethics, security, and innovation.
Congressional action could set the first statutory limits on military uses of advanced AI. The debate is likely to shape policy for years to come.