US, UK, EU sign international AI treaty

The European Union, the United States, the United Kingdom and others signed an international artificial intelligence treaty on Thursday, the Council of Europe said.

It said that the agreement was the first international legally binding treaty on the use of AI systems.

What else did the Council of Europe say?

“We must ensure that the rise of AI upholds our standards, rather than undermining them,” Council of Europe Secretary-General Marija Pejcinovic Buric said.

She said that the text was an “open treaty with a potentially global reach,” and urged more countries to sign it and those that had already done so to ratify it.

The council’s statement said that the treaty “provides a legal framework covering the entire lifecycle of AI systems.”

“It promotes AI progress and innovation, while managing the risks it may pose to human rights, democracy and the rule of law,” it stressed.

The treaty was opened for signature at a conference of Council of Europe justice ministers in the Lithuanian capital, Vilnius.

It comes just months after EU ministers gave final approval to the bloc’s own Artificial Intelligence Act, which aims to regulate the use of AI in “high-risk” sectors.

Who else signed the treaty?

Besides the EU, the US and the UK, the treaty was also signed by Andorra, Georgia, Iceland, Norway, Moldova, San Marino and Israel.

Also involved in negotiating the treaty were Argentina, Australia, Canada, Costa Rica, the Vatican, Japan, Mexico, Peru and Uruguay.

The Council of Europe is an organization based in Strasbourg, France, that is tasked with upholding human rights. It has 46 member states, including the 27 member states of the EU.

NGO raises concern over ‘enforceability’ of provisions

Francesca Fanucci, a legal expert at the Hague-based NGO European Center for Not-for-Profit Law (ECNL), told the Reuters news agency that the agreement had been “watered down” into a broad set of principles.

“The formulation of principles and obligations in this convention is so overbroad and fraught with caveats that it raises serious questions about their legal certainty and effective enforceability,” she said.

Fanucci also pointed to exemptions for AI systems used for national security purposes, as well as to the limited scrutiny applied to private companies compared with the public sector.