On Wednesday, Ministers from more than 28 nations met in Seoul for the second ministerial conference on AI safety. The summit, co-hosted by the UK and the Republic of Korea, saw commitments from governments and companies, alongside updates on the work carried out since the first AI Safety Summit at Bletchley Park last November.

The UK reported on the first six months' work of the government-backed AI Safety Institute, which included opening an office in San Francisco to work alongside experts from Silicon Valley. The Institute also completed its first testing of emerging AI models and formed partnerships with France, Singapore and Canada.

In the last six months, similar institutes have also been launched by the governments of the United States, Canada, Japan, Singapore and the Republic of Korea, as well as by the EU.

The conference also saw 16 companies sign safety commitments, pledging to improve AI safety and to refrain from releasing new models if the risks are too high.

The next AI ministerial summit will be held in France in early 2025. The 2025 meeting will focus on defining severe risks and the mitigations that need to be implemented.

While the conference emphasised the need for ethical and controlled development, with meaningful human control, the impact of AI on weapon systems was not addressed in the minister's statement to the House of Commons. With the United Nations Secretary-General calling on states to negotiate a treaty on autonomous weapons by 2026, this was a missed opportunity, particularly as the majority of states now support such a treaty.

A full copy of the statement can be found on Hansard.