From 2–6 March 2026, governments convened in Geneva for the first 2026 session of the UN Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS), held under the Convention on Certain Conventional Weapons (CCW). During the meeting, states examined proposed revisions to the Chair’s “rolling text”, a document that could eventually form the basis for negotiations on an international treaty governing autonomous weapons. At the same time, developments beyond the conference room served as a stark reminder of how immediate and pressing these issues have become.
On 28 February, during the opening wave of airstrikes conducted by the US–Israeli coalition against Iran, a US strike hit the Shajareh Tayyebeh girls’ school in Minab, an attack for which President Trump later sought to deny responsibility. Reports suggest that more than 165 Iranians were killed on the first day of the conflict, including many schoolchildren, and civilian casualties have continued to climb as the fighting shows no sign of abating.
In the aftermath of the attack, numerous reports have questioned whether artificial intelligence played a role in US and Israeli targeting decisions. Although it has not been conclusively established that AI systems identified the Minab school as a target, there are indications that AI-enabled tools—such as the US military’s Maven Smart System—may have contributed to strike planning and the creation of target lists. The rapid pace and scale of the attacks in Iran strongly point to the use of AI-supported targeting.
Further investigations have brought wider concerns about the role of AI in military operations to the forefront. A joint investigation released on 10 March 2026 by Airwars and The Independent identifies what may be the first civilian death linked to an AI-assisted US military strike to be officially acknowledged: that of 20-year-old construction student Abdul-Rahman al-Rawi, killed in Iraq in February 2024 during US airstrikes against Iranian-backed militias that officials said involved AI-supported targeting technologies such as Project Maven. While the US military later admitted that al-Rawi had been unintentionally killed and sent a letter of condolence to his family, a spokesperson has since stated that the military has “no way of knowing” whether the specific strike involved artificial intelligence. The case has heightened concerns about transparency, accountability, and the expanding role of AI in targeting decisions.
Whatever role AI systems played in the Minab strike, their involvement would not absolve the US military of responsibility for what Human Rights Watch has described as a war crime. At a minimum, the strike suggests a failure to take the precautions required under international humanitarian law. At the same time, the potential role of algorithmic systems highlights growing concern about the increasing reliance on automated tools in military decision-making and the risks this poses to civilian protection.
AI-driven targeting also introduces the danger of what has been termed “digital dehumanisation”, where people are reduced to data points processed within complex algorithms that may struggle to distinguish civilians from legitimate military objectives. Without meaningful human control—and effective oversight of targeting decisions—the increasing integration of AI into warfare risks accelerating violence while eroding mechanisms of accountability.
Recent developments also illustrate the mounting pressure on technology companies to support AI-enabled military applications. The Pentagon recently labelled the AI company Anthropic a supply-chain risk after it refused to remove safeguards preventing its technology from being used in autonomous weapons systems or domestic surveillance. Following this decision, President Trump directed federal agencies to discontinue use of Anthropic’s Claude system, demonstrating how quickly protective measures can come under pressure when military and commercial priorities intersect.
The fact that the Minab school was struck in the opening wave of attacks suggests it had been designated a high-priority target. Yet reports indicate that the site had been used by Iran’s Revolutionary Guard Corps only prior to 2016, raising concerns that the intelligence used to justify the strike relied on information roughly a decade out of date. Without greater transparency about how targets are selected and verified, similar incidents are likely to recur.
We are approaching a pivotal moment. Militaries are embedding AI into their operations at the very moment powerful states are telling us international law no longer really matters. That risks a toxic combination of digital dehumanisation, the abdication of human responsibility, and the delegation of power over life and death to automated systems. This is a dangerous and dystopian direction of travel. That is why we must all press our governments and AI companies to set clear legal and ethical limits, and to apply them to all.
Jane Kinninmont, CEO of UNA-UK and member of the Stop Killer Robots UK Coalition.