
Update following the 9 December 2023 agreement on the AI Act

15/12/2023

On Friday 8 December, after 36 hours of trilogue negotiations, the European institutions finally reached agreement on the AI Act. A joint press conference was held on 9 December, where the two lead MEPs, the Spanish Secretary of State for digitalisation and artificial intelligence, and the responsible EU Commissioner presented the key elements of the agreement.

We have included some highlights below.

  • Systems covered: Only finished systems and models will be included in the scope of the legislation – systems which are under development will be exempted.
  • Definition of AI: The definition of artificial intelligence will be based on the OECD definition, ensuring consistency with international standards.
  • Adjustment of fine levels: The fine levels have been adjusted and now range from 7.5 million euro or 1.5% of global turnover up to 35 million euro or 7% of global turnover, depending on the infringement.
  • Free/Open Source Software: There will be a general exemption from the regulation, but there are still requirements to respect copyright and to provide for transparency concerning data used for training the models.
  • General Purpose AI Models (GPAI, or so-called "foundation models"): Although France, Germany and Italy were against the regulation of foundation models (large general purpose AI models which may find many different uses, such as large language models and chatbots like ChatGPT), the trilogue has resulted in these models also being included in the scope of the legislation. Creators of foundation models will be obliged to draw up technical documentation, respect copyright and provide for transparency concerning the data used to train the models. The Commission may adapt these requirements to take account of technical progress.
  • Obligations for high-risk systems (so-called top-tier models): Fundamental rights impact assessments are mandatory, and there are also requirements relating to model assessments, systemic risk assessments and cybersecurity.
  • Prohibited practices: A list of prohibited uses is included. Manipulative techniques (including social scoring) and uses such as biometric categorisation, predictive policing, emotion recognition in the workplace or educational institutions, and untargeted scraping of facial images from the internet or CCTV footage are banned to varying extents.
  • Especially on emotion recognition: Emotion recognition was not originally on the high-risk list. There was disagreement on banning it in certain areas; under the agreement, some such practices (notably in the workplace and educational institutions) will be prohibited, while the rest are considered high risk.
  • National security exemption: Exemptions are introduced, but with safeguards against abuse. It will be possible to use so-called crime analytics applications that do not link to any individuals but rather analyse trends (linked to existing facts and possible criminal investigations). Any technology predicting who might commit a crime or who might reoffend will be banned.
  • Remote biometric identification: Such practices are allowed for certain catalogue offences and under judicial control. These systems may also be used for identification after the fact.

Next steps (from European Commission press release):

The political agreement is now subject to formal approval by the European Parliament and the Council. Following this, the AI Act will enter into force 20 days after publication in the Official Journal. A transitional period of two years will apply before the act becomes fully applicable, except for the prohibitions, which will apply after 6 months, and the rules on General Purpose AI, which will apply after 12 months.

To bridge the two-year transitional period, the Commission will launch an AI Pact, committing AI developers to implement key obligations on a voluntary basis prior to entry into force of the AI Act.

A dedicated AI Office will be established within the European Commission to oversee the implementation and enforcement of the act at EU level, while market surveillance will be carried out at national level. The AI Office is expected to be put in place relatively quickly.

Authors
Lars Erik Steinkjer
Partner
E-mail lst@wr.no
Gry Hvidsten
Partner
E-mail ghv@wr.no
Kristina Nesset Kjerstad
Managing Associate
E-mail knk@wr.no
