On 7 June, Prime Minister Rishi Sunak announced that the UK will convene the first global summit on AI safety. The aim of the summit is to ‘bring together key countries, leading tech companies and researchers to agree safety measures to evaluate and monitor the most significant risks from AI.’ Notably absent from this initial list of proposed participants is civil society. The summit is scheduled for this autumn, although no exact date has been set and further details have yet to be announced.
The Government must ensure that the summit is inclusive of all stakeholders, including civil society. The expertise that civil society and those with lived experience of AI harms have to offer is critical to the success of any initiative to address the harms associated with AI and prevent further digital dehumanisation.
Defence and security is one of the areas that new technologies, and AI in particular, are affecting most profoundly, and it must be placed high on the summit agenda. Since 2013, the Stop Killer Robots campaign, an international coalition of non-governmental organisations, has been calling for an international treaty on autonomous weapons, with prohibitions and regulations to reject the automation of killing and retain human control over the use of force.
The Prime Minister says that the UK has a global opportunity to ensure this technology is developed and adopted safely and responsibly. Yet so far, along with the US and Russia, the UK has opposed efforts supported by more than 90 states, campaigners and the International Committee of the Red Cross to push for a treaty that would prohibit autonomous weapon systems (AWS) that select targets and attack them with violent force without meaningful human control.
Progress on regulating AWS is being thwarted not only by these states but also by other highly militarised states dominated by tech giants such as Google DeepMind, Anthropic and Palantir. There is an urgent need for global cooperation, which is why the Latin American and Caribbean nations that earlier this year made significant progress in the push to prohibit and regulate AWS must be present at the summit.
In mid-July, the UK will host the first UN Security Council meeting on the potential threats of AI to international peace and security. This is a positive opportunity to get the issue of AWS onto the Security Council’s agenda. Two of the anticipated briefers, UN Secretary-General António Guterres and Demis Hassabis, CEO of Google DeepMind, have already called for a ban on killer robots. In 2018, Google DeepMind was one of more than 160 AI-related companies and organisations from 36 countries to sign a pledge to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.”
We hope this is a sign that the UK is taking the issue more seriously, and that it may indicate the UK is moving towards supporting an international process to regulate autonomous weapons.
Recognising that AI in weapon systems is becoming a major issue, in January this year the House of Lords established a special committee on AI in Weapon Systems. As the development of AWS gathers pace and concerns grow, a summit that is silent on this issue would miss a crucial dimension and lack relevance.