Campaigners from Stop Killer Robots are at the REAIM summit (4-5 February 2026) in A Coruña, Spain, which brings together state representatives, academics, civil society and industry to discuss the ethical and responsible use of AI in the military domain.

The previous two iterations of REAIM concluded with a Call to Action (The Hague, 2023) and a Blueprint for Action (Seoul, 2024). Neither of these outcome documents presented legally binding rules for the development and use of AI nor advised that new legal safeguards were necessary to ensure responsible development and use of military AI. Campaigners expect the same from this year’s outcome document – despite the summit’s focus on moving from principles to implementation and on the importance of meaningful human control. 

The 2026 REAIM summit follows the recent Doomsday Clock announcement, which listed disruptive technologies, such as autonomous weapons and the military AI discussed at REAIM, among the reasons why the global community is closer to catastrophe than ever. Stop Killer Robots campaigners attending REAIM are highlighting that truly responsible military AI governance must emphasize legal responses.

Elizabeth Minor, Head of Policy at Stop Killer Robots, said:

“REAIM presents an opportunity for state representatives to take stock of the harmful trajectory of AI weapons and tools and ensure that more AI-enabled suffering on the battlefield does not come to pass. 

The use of unregulated military AI in Gaza and Ukraine clearly shows the dangers of these weapons and tools — and states’ lack of movement toward legal red lines is gravely insufficient. Our political leaders must create laws to protect civilians from increasingly automated harm on the battlefield. The unenforceable guidelines expected to result from this summit aren’t enough.

This issue extends beyond conflict. Technologies used on the battlefield are often transferred to policing and border management, making anyone a potential victim of automated harm.

If REAIM seeks to create norms around the responsible development and use of AI, the profit-driven interests of arms developers and highly militarised states must be secondary to protecting civilians and strengthening international law.

As states move toward new international law on autonomous weapons, the military AI weapons discussed at REAIM must come under the same legal scrutiny. Any weapon or military tool that reduces a person to a data point for killing is unacceptable. States must outlaw machines making life-or-death decisions. We need legal safeguards now.”

Cross-posted from the Stop Killer Robots global coalition: www.stopkillerrobots.org/news/reaim-2026/