UK, US, EU Sign First International AI Treaty

The UK has signed the world’s first international treaty on artificial intelligence — alongside the European Union, the United States, and seven other countries.

The agreement commits signatories to adopting or maintaining measures that ensure the use of AI is consistent with human rights, democracy, and the law. These measures should protect the public against risks inherent to AI models, such as biased training data, and against risks of their misuse, such as the spread of misinformation.

The Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law was opened for signatures during a conference of Council of Europe Ministers of Justice in Vilnius, Lithuania on September 5. Current signatories include:

  • Andorra
  • EU
  • Georgia
  • Iceland
  • Israel
  • Norway
  • Republic of Moldova
  • San Marino
  • U.K.
  • U.S.

The treaty adds to a growing set of international agreements that aim to curb AI risks, including the Bletchley Declaration, which was signed by 28 countries in November 2023.

More signatories are anticipated from other states that negotiated the treaty. These include the other 39 Council of Europe member states and the nine non-member states of Argentina, Australia, Canada, Costa Rica, the Holy See, Japan, Mexico, Peru, and Uruguay.

Lord Chancellor Shabana Mahmood represented the U.K. through her signature. She said in a statement, “Artificial Intelligence has the capacity to radically improve the responsiveness and effectiveness of public services, and turbocharge economic growth. However, we must not let AI shape us — we must shape AI.

“This convention is a major step to ensuring that these new technologies can be harnessed without eroding our oldest values, like human rights and the rule of law.”

SEE: UK, G7 Countries to Use AI to Boost Public Services

Council of Europe Secretary General Marija Pejčinović Burić said in a press release, “We must ensure that the rise of AI upholds our standards, rather than undermining them. The Framework Convention is designed to ensure just that.

“I hope that these will be the first of many signatures and that they will be followed quickly by ratifications, so that the treaty can enter into force as soon as possible.”

The treaty was adopted by the Council of Europe Committee of Ministers on May 17 this year. For it to enter into force, five signatories, including at least three Council of Europe member states, must ratify it. Entry into force will occur three months after the fifth ratification, on the first day of the following month.

It is separate from the EU’s AI Act, which entered into force last month, as the Council of Europe is a 46-member organisation distinct from the EU, and non-EU states are able to sign.

The feasibility of an AI treaty was first examined in 2019, and drafting work passed to the Council’s Committee on Artificial Intelligence in 2022, ahead of the treaty’s formal adoption on May 17 this year.

What does the treaty require signatories to do?

To protect human rights, democracy, and the rule of law, the Framework Convention requires signatories to:

  1. Ensure AI systems respect human dignity, autonomy, equality, non-discrimination, privacy, transparency, accountability, and reliability.
  2. Provide information about decisions made using AI and allow people to challenge the decisions or the use of the AI itself.
  3. Offer procedural safeguards, including complaint mechanisms and notice of AI interactions.
  4. Conduct ongoing risk assessments for human rights impacts and establish protective measures.
  5. Allow authorities to ban or pause certain AI applications if necessary.

The treaty covers the use of AI systems by public authorities, like the NHS, and private companies operating in the parties’ jurisdictions. It does not apply to activities relating to national security, national defence matters, or research and development unless they have the potential to interfere with human rights, democracy, or the rule of law.

According to the U.K. government, the treaty will work to enhance existing laws and measures, such as the Online Safety Act. The government also intends to work with regulators, devolved administrations, and local authorities to ensure the treaty’s requirements can be implemented.

SEE: UK Government Announces £32m of AI Projects

It is up to the “Conference of the Parties,” a group composed of official representatives of the Parties to the Convention, to determine the extent to which the treaty’s provisions are being implemented and make recommendations.

The UK’s moves towards safe AI

The treaty states that it aims to promote AI progress and innovation even as it regulates the technology. The U.K. government has been attempting to maintain this balance in its own actions.

In some respects, the government has signalled that it will take a firm line with AI developers. It was announced in July’s King’s Speech that the government will “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.”

This supports Labour’s pre-election manifesto, which pledged to introduce “binding regulation on the handful of companies developing the most powerful AI models.” After the speech, Prime Minister Keir Starmer also told the House of Commons that his government “will harness the power of artificial intelligence as we look to strengthen safety frameworks.”

SEE: Delaying AI’s Rollout in the U.K. by Five Years Could Cost the Economy £150+ Billion, Microsoft Report Finds

The U.K. also established the first national AI Safety Institute in November 2023 with the primary goals of evaluating existing AI systems, performing foundational AI safety research, and sharing information with other national and international actors. Then, this April, the U.K. and U.S. governments agreed to work together on developing safety tests for advanced AI models, moving forward on plans made by their respective AI Safety Institutes.

Conversely, the U.K. government has promised tech companies that the incoming AI Bill will not be overly restrictive and has seemingly held fire on its introduction. The government had been expected to include the bill among the named pieces of legislation announced as part of the King’s Speech, but did not.