
From 2-6 March, states met in Geneva to discuss autonomous weapons systems. It was the first meeting of the Group of Governmental Experts (GGE) on Lethal Autonomous Weapons (LAWS) of 2026, which takes place within the framework of the Convention on Certain Conventional Weapons (CCW) – the multilateral forum currently addressing autonomy in weapons systems. As states deliberated changes to elements of the ‘rolling text’, events beyond the room demonstrated that meaningful human control over the use of force is already being weakened to devastating humanitarian effect.
On Saturday, 28 February, just two days ahead of this UN meeting, the Wall Street Journal reported that the United States had used Anthropic’s large language model Claude to assist with target recommendation in its strikes on Iran. It was reported that in the first 24 hours, over 1,000 targets were struck in the joint US-Israeli attacks. As with the AI databases Lavender and Gospel, decision support systems deployed by Israel in Gaza, Claude was used to analyse swathes of data to generate targets at speed. While Lavender is not a large language model like Claude, the targets it generated were reportedly reviewed in mere seconds by military personnel, who then authorised strikes based on these AI-generated recommendations. Digital dehumanisation is inherent to AI-enabled targeting: human beings are reduced to data points, sorted and scored into target profiles, and processed for elimination. Attempts to normalise this digital dehumanisation at scale are unacceptable. Information continues to emerge on how exactly Claude assisted strikes, and on the level of human review involved before strikes were authorised. What is clear is that the line between a system that recommends a target with minimal human involvement and one that chooses targets and executes strikes without meaningful human control – in other words, an autonomous weapon – is getting very thin.
In the week preceding the GGE meeting, the pressure on industry to discard safeguards in order to enable mass AI-driven lethality was also made clear. Tech firm Anthropic, which held a $200 million contract with the Pentagon to use its technologies in military applications, was recently deemed a “supply chain risk” by the US government after refusing to allow its technologies to be used in autonomous weapons and mass domestic surveillance tools. The company said that its technology could not be safely or reliably used for either purpose.
Following this designation, all federal agencies were instructed to stop using Anthropic’s tools. Anthropic’s competitor, OpenAI, then picked up the contract, stating that it would indeed allow the Pentagon to use its tools for “all lawful purposes” – a promise that lacks the specificity to ensure adherence to the laws of war and that fails to demonstrate commitment to the protection of civilians. What this sequence reveals is the extraordinary pressure that AI companies are under from governments, and the extraordinary speed at which safeguards can be discarded or transferred when commercial and military interests align.
Meanwhile, at the United Nations, the GGE discussion on autonomous weapons continued. States discussed textual changes to the GGE Chair’s ‘rolling text’, the document that could form the basis of negotiations on an autonomous weapons treaty. After over 40 states expressed their support last year for moving to negotiations on the basis of the ‘rolling text’, this week even more states joined this call – including a group of African states – bringing the total to over 70. Stop Killer Robots hosted a mid-week side event, “Inflection point at the CCW: Taking action on a decade of work”. The event and a complementary briefing paper considered the decade of the forum’s work on autonomous weapons and produced a set of recommendations – prime among them launching the negotiation of a legally binding instrument to prohibit and regulate autonomous weapons.
The current CCW mandate to discuss autonomous weapons comes to a close this year, and as the GGE Chair said at the end of this week’s meeting, the conclusion of the three-year mandate “heightens expectations”. Decisions on what steps states take next will be made at the November 2026 Review Conference of the CCW, where negotiations on an autonomous weapons treaty could be launched. Innocent people are already paying the price of AI-enabled digital dehumanisation in Iran and Gaza, as well as in border control and policing elsewhere. States must not only consider the Chair’s ‘rolling text’, which has been helpful in structuring discussions and building the consensus under which this diplomatic forum operates, but also the existing and ongoing real-world impacts of increasing autonomy and AI-enabled military tools.
The urgency in governing these systems has never been clearer. Belgium reflected on a decade of discussions at the CCW, stating that “the world around us has changed significantly. The technological developments have made things that were once the domain of science fiction today’s reality… We should not forget that as we speak, people are being attacked by AI-driven autonomous systems…. At some point, this bubble we have built needs to produce something that connects with the real-world and with a real-world impact”.
This is a pivotal moment, a paradigm shift happening before our eyes. Do we accept a world where human beings are reduced to data points, processed by machines, and then killed? If we allow this to happen without challenge, we will have accepted a fundamental shift – technology deployed not for the benefit of humanity, but for our oppression, control, and death. It is time for our governments to draw legal and ethical lines to divert humanity away from the dangerous slide towards automated killing we are currently on.
Reposted from the Stop Killer Robots global campaign: https://www.stopkillerrobots.org/news/reports-of-ai-enabled-targeting-in-iran-bring-real-world-impacts-to-the-un/