
Towards safe, reliable and human-centred AI

15.02.2024

The EU’s Artificial Intelligence (AI) Act marks a global first, introducing rules for the use and provision of AI systems. At its heart is a commitment to fostering trust in AI to unlock and maximise the vast social and economic possibilities offered by these technologies.


Europe is closer than ever to having the world’s first comprehensive legal framework for artificial intelligence. A draft final text of the AI Act was leaked on 22 January 2024 – a month and a half after the Council of the European Union and the European Parliament announced that they had reached political agreement on the rules. In this article, we touch upon the legislative status of the AI Act and its key elements.

Legislative status of the AI Act

The AI Act was proposed by the European Commission in April 2021. On 2 February 2024, the Council of the European Union, acting through the Permanent Representatives Committee (COREPER), which consists of the member states’ representatives, approved the final text.

The European Parliament is expected to hold a first vote in committee in mid-February, followed by a plenary vote in March or April.

If both the Council and the Parliament confirm the final text, the AI Act will be published in the Official Journal of the European Union and will enter into force on the twentieth day following its publication. This is expected to happen before the summer. The Act will apply in the EU two years after its entry into force, with specific application dates for different provisions ranging from six to 36 months after entry into force.

Norwegian government representatives have indicated that the Act is EEA relevant. Businesses in Norway that provide, use, import or distribute AI systems should therefore familiarise themselves with the rules under the AI Act and prepare for compliance.
Our Technology and Digitalisation team is following the legislative developments closely and will be happy to answer your questions.

Key elements of the AI Act

Scope and application

The Act applies primarily to providers of AI systems (whether established in the EU or not), deployers of AI systems within the EU (or of systems whose output is used in the EU), importers and distributors of AI systems, and product manufacturers that place an AI system on the market or put it into service together with their product.

There are certain exemptions in the AI Act for AI systems used for military, defence or national security purposes, for law enforcement and judicial cooperation, and for scientific research and development.

Moreover, the Regulation does not apply to AI systems which are still under development or to AI systems released under free and open-source licences (with certain exceptions for high-risk AI systems and for systems tested in real-world conditions). AI systems used by natural persons in the course of a purely personal, non-professional activity are also exempt from the scope of the AI Act.

The majority of the provisions in the AI Act are aimed at prohibiting certain AI systems and regulating high-risk AI systems. Lower-risk AI systems, such as simple chatbots, are subject to no obligations other than limited transparency obligations in some cases.

Prohibited AI systems

The AI Act prohibits the provision and use of AI systems which:

  • deploy subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques;
  • exploit vulnerabilities of individuals or specific groups of persons due to their age, disability or a specific social or economic situation;
  • categorise individuals based on their biometric data to deduce or infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation;
  • evaluate or classify individuals or groups over a certain period of time based on their social behaviour or personal characteristics (social scoring); 
  • use biometrics to remotely identify individuals in ‘real-time’ in public spaces for law enforcement purposes unless strictly necessary for specified purposes (targeted victims search, threat prevention, localisation, identification or prosecution of suspects of certain criminal offences);
  • are used for predictive policing, based solely on the profiling of individuals or on assessing their personality traits and characteristics; 
  • create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage; and
  • are used to infer individuals’ emotions in the workplace or educational institutions. 

High-risk AI systems 

While the above AI systems are prohibited, high-risk AI systems are allowed if certain criteria are fulfilled. Examples of high-risk AI systems are systems used:

  • as a safety component in products or which are products subject to legislation specified in Annex II of the Act (relating e.g. to machinery, toys, radio equipment, medical devices, civil aviation, marine equipment, rail interoperability, or motor vehicles);
  • with biometrics for remote biometric identification, biometric categorisation or emotion recognition;
  • as safety components to manage and operate critical digital infrastructure, road traffic and the supply of water, gas, heating and electricity;
  • in the educational sector to determine access or admission, evaluate learning outcomes, assess the appropriate level of education for individuals or monitor and detect prohibited behaviour of students;
  • in the context of employment, to recruit individuals, make decisions affecting employees’ work terms, allocate tasks or monitor and evaluate employees’ performance;
  • to evaluate individuals’ eligibility for essential public assistance benefits and services, or to grant, reduce, revoke or reclaim such benefits and services;
  • to evaluate individuals’ creditworthiness or establish their credit score;
  • to evaluate and classify emergency calls or to prioritise dispatch;
  • to assess individual risk and pricing for life and health insurance;
  • in law enforcement to assess the risk of individuals becoming victims or offenders, as polygraphs and similar tools, to evaluate the reliability of evidence, or to profile individuals;
  • by immigration authorities as polygraphs and similar tools, to assess specific risks posed by individuals, to examine immigration applications and the reliability of evidence, or to detect, recognise or identify individuals;
  • by judicial authorities to research and interpret facts and the law;
  • to influence the outcome of an election or referendum, or the voting behaviour of individuals. 

Obligations relating to high-risk AI systems

The main requirements for providers of high-risk AI systems are as follows:

  • establish, implement and document a risk management system;
  • use only training, validation and testing data which meet certain quality criteria;
  • draw up technical documentation before the high-risk system is placed on the market or put into service;
  • ensure that systems have capability for the automatic recording of events (logging); 
  • ensure that systems’ operation is sufficiently transparent to enable deployers to interpret the output and use it appropriately;
  • ensure that systems can be effectively overseen by natural persons;
  • design and develop systems in such a way that they achieve an appropriate level of accuracy, robustness and cybersecurity.

The main requirements for deployers (i.e. professional users) of high-risk AI systems are as follows:

  • take suitable technical and organisational measures to ensure appropriate use of the system;
  • assign human oversight; 
  • ensure input data is relevant and sufficiently representative in view of the intended purpose of the system;
  • monitor the operation of the system and, if they identify a serious incident, immediately inform first the provider, and then the importer or distributor and the relevant authorities;
  • keep the logs automatically generated by the system, to the extent such logs are under their control, for a period appropriate to the intended purpose of the system and of at least six months;
  • perform Data Protection Impact Assessments (DPIAs) to the extent required under the GDPR;
  • where the deployer is a public body or a private body operating public services, or where the deployer uses AI systems for credit scoring or to perform risk assessment and pricing towards individuals seeking health or life insurance, perform fundamental rights impact assessments; and
  • where the deployer is an employer implementing an AI system in the workplace, inform the affected workers.

Importers and distributors shall verify that the provider has complied with its obligations under the AI Act before placing a high-risk AI system on the market. Importers and distributors are also subject to additional obligations which are described in Article 26 and Article 27 of the Act respectively. 

Providers and deployers of General Purpose AI (GPAI) systems and certain other AI systems are subject to additional transparency obligations, including the following:

  • ensuring that AI systems intended to directly interact with individuals are designed and developed to inform individuals that they are interacting with an AI system;
  • deployers of emotion recognition systems or biometric categorisation systems shall inform affected individuals about their operation;
  • providers of AI systems (including GPAI) generating synthetic audio, image, video or text content shall ensure that outputs are marked as artificially generated or manipulated; and
  • deployers of AI systems that generate or manipulate image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated.

The AI Act also contains a separate chapter on GPAI, regulating the classification of these systems and the obligations of providers of GPAI models.

Consequences of non-compliance

Non-compliance with the provisions of the AI Act may lead to fines of up to EUR 35 million or up to 7% of a company’s total worldwide annual turnover for the preceding financial year, whichever is higher.

Authors
Lars Erik Steinkjer
Partner
E-mail: lst@wr.no
Gry Hvidsten
Partner
E-mail: ghv@wr.no
Ekin Ince Ersvaer
Paralegal
E-mail: eie@wr.no
