Speaking at the United Nations last month, Ukrainian President Volodymyr Zelensky issued a stark warning: artificial intelligence is driving “the most destructive arms race in human history.” He called for urgent global rules to govern the use of AI in weapons, saying the challenge was “just as urgent as preventing the spread of nuclear weapons.”

Zelensky’s words reflect a growing and deeply troubling reality. Across the battlefields of Ukraine, autonomous systems powered by AI are no longer theoretical. They are being tested and deployed in ways that could permanently change the nature of warfare.

The rise of autonomous weapons

A recent report from the BBC revealed how both Russian and Ukrainian forces are using AI to identify, track, and attack targets. Ukraine’s deputy defence minister confirmed that AI now analyses more than 50,000 video feeds from the front line every month, helping the military to identify enemy positions in real time.

Developers are also designing drones capable of locking onto and striking targets autonomously, requiring little more than a single command from a soldier. Russian forces, meanwhile, have deployed AI-enabled drones that can find and attack targets without sending or receiving signals, a design that avoids jamming but opens up troubling ethical questions.

Once activated, such systems could operate entirely on their own, locating, selecting, and engaging targets without human oversight. As one Ukrainian developer explained, “All a soldier will need to do is press a button… the drone will do the rest.”

While AI can enhance military decision-making, its integration into weapons raises profound ethical and legal concerns. There are fears that autonomous systems will violate the rules of war, putting civilians at risk.

“How will they avoid harming civilians, or distinguish soldiers who want to surrender?” the BBC asks. These are not technical details; they are life-and-death questions. If a machine cannot tell the difference between a combatant and a civilian, or recognise an act of surrender, violations of international humanitarian law are inevitable.

Ukrainian developers have acknowledged this risk. One manufacturer said his company deliberately disables automatic firing functions, fearing that AI systems are not yet capable of safely distinguishing friend from foe.

What is happening in Ukraine signals a broader global trend: the accelerating militarisation of AI. As nations compete to develop increasingly autonomous systems, the risk of escalation, miscalculation, and civilian harm grows.

Zelensky’s warning should be a wake-up call for the international community: if the world fails to act, the AI arms race could spin beyond human control.

The need for global rules

Governments must act now to ensure that humans remain responsible for decisions over life and death. Preventing the spread of fully autonomous weapons will require urgent global rules that guarantee meaningful human control over the use of force.

Without strong global rules, the AI arms race unfolding in Ukraine may become the model for future wars.