27 February 2026

Stop Killer Robots, the global coalition of 270+ civil society organisations calling for new international law on autonomous weapons, has responded to reports that the U.S. Department of Defense pressured Anthropic to adjust the terms of use for the company’s frontier large language model, Claude. Anthropic’s terms currently stipulate that the AI model may not be used in fully autonomous weapon systems – machines capable of selecting and killing targets without human control. The company has refused to acquiesce to the Pentagon’s demands, on the grounds that “frontier AI systems are simply not reliable enough to power fully autonomous weapons” and that the oversight mechanisms needed to protect civilian lives and military personnel “don’t exist today.” Although this particular dispute concerns a large language model, the same reservations should apply to all AI-enabled targeting systems.

Stop Killer Robots warns that the fact that Anthropic has made this caveat about the reliability of frontier AI models is a signal the world cannot afford to ignore. This is a technology company telling its government that it cannot responsibly provide what is being asked of it because the capability to do so safely does not exist, and neither do the guardrails to prevent catastrophic harm if it did.

If, as Anthropic acknowledges, the guardrails don’t exist for autonomous weapons, then the same is true for AI-enabled decision support systems, against which the company has not drawn a comparable line. The focus on fully autonomous weapons deflects from the realities of how militaries and companies are already using AI technologies. AI-enabled decision support systems, in which humans remain only nominally in the decision-making process, reduce human beings to proxy data points – weight, gender, age, social associations and so on – sorted and processed by machines to generate targeting recommendations. When operators defer to those outputs, or accept them within extremely short timeframes rather than exercising independent judgement, the line between a system that recommends a strike and one that executes it becomes dangerously thin. Automation bias is not a theoretical concern; it is an operational reality that diminishes human decision-making and accountability.

Diplomatic discussions on autonomous weapons have been ongoing for over a decade. Later this year, at the Convention on Conventional Weapons (CCW) Review Conference at the United Nations in Geneva, States will have a concrete opportunity to act, to put guardrails in place on autonomous weapons systems and commence negotiations on new international law. That opportunity must not be squandered. The vast majority of States have now explicitly expressed their support for a new legally binding instrument, and the urgency to move forward cannot be overstated.

It is also essential that there is a strong international response to the integration of AI into the use of force in all its forms. Anything less will leave the most dangerous developments ungoverned and the most vulnerable people unprotected.

Nicole Van Rooijen, Executive Director of Stop Killer Robots, explains:

“We are at a pivotal moment for humanity. When the companies building these technologies are themselves refusing to deploy them on safety grounds, it must raise alarm bells for governments and people everywhere. The standards Anthropic has chosen to maintain are a bare minimum of responsible conduct, not cause for celebration. And yet even those basic standards are already under pressure from the most powerful military in the world.”

“The rate of technological development will not wait for diplomacy to find its feet. Every year without binding international law, the gap between what these systems can do and our ability to govern them grows wider. Those who will pay the price are ordinary people: civilians in conflict zones and citizens in both authoritarian and democratic States, who will face these dehumanising technologies as their use becomes normalised and human rights and democratic values are eroded.”

“This moment demands political and moral leadership of the highest order. States must come to the table this year not just to talk, but to act. The time for kicking the can down the road has passed; the moment has arrived, and what is needed now is new law.”

Repost from Stop Killer Robots global coalition: https://www.stopkillerrobots.org/news/press-release-stop-killer-robots-responds-to-the-anthropic-pentagon-standoff