4 – Artificial Intelligence Act: Safe, reliable and human-centred artificial intelligence

Artificial intelligence is increasingly becoming a part of businesses' day-to-day operations and our everyday lives. Many businesses use artificial intelligence chatbots, while others use artificial intelligence in credit checks and recruitment processes. The European Union aims to be a global leader in the development and use of safe, reliable and human-centred artificial intelligence.

The rules focus on ensuring trust in artificial intelligence, as trust is considered essential to fully unlock the social and economic potential of artificial intelligence. The European Commission's proposed regulation on artificial intelligence is the world's first comprehensive legal framework in this area.

In this article, we will outline the key provisions in the Artificial Intelligence Act ("AI Act") and also touch upon the European Commission's recent proposal for an Artificial Intelligence Liability Directive ("AI Liability Directive").

Legislative status of the AI Act

The AI Act was proposed by the European Commission in April 2021. On 6 December 2022, the Council adopted its general approach (common position) on the AI Act proposal. Once the European Parliament finalises its amendments, the Council and the Parliament will begin negotiations on the final text of the AI Act. The AI Act is still at a relatively early stage and may therefore be subject to change before it enters into force.

As for the EEA countries, including Norway, the AI Act is considered to be EEA-relevant. However, Norway's current position is that it is too early to conclude whether the proposal is acceptable.

Scope and application of the AI Act

The AI Act lays down harmonised rules for the provision and use of artificial intelligence systems ("AI systems"). It defines AI systems as software that (i) is developed with machine learning approaches, logic- and knowledge-based approaches, and/or statistical approaches (including search and optimisation methods) and (ii) can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments it interacts with.

The proposal regulates providers of AI systems and entities using them in a professional capacity; it does not cover the use of an AI system in the course of a personal, non-professional activity. The AI Act applies to:

  1. providers placing on the market or putting into service AI systems in the European Union, irrespective of whether those providers are established within the EU or in a third country;
  2. users of AI systems located within the EU; and
  3. providers and users of AI systems that are located in a third country, where the output produced by the system is used in the EU.

The AI Act applies to all sectors, except for AI systems developed or used exclusively for military purposes.

What is the essence of the AI Act?

The proposal takes a risk-based approach: the higher the risk posed by the use of AI, the more strictly its use is regulated. The AI Act therefore categorises AI systems into four risk levels: unacceptable risk, high risk, limited risk, and minimal or no risk.

1. Unacceptable risk: Prohibited AI systems

The AI Act expressly prohibits the provision and use of the following AI systems:

  • An AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;
  • An AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
  • AI systems used or provided by public authorities for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following:
    • Detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected;
    • Detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity.

  • Use of 'real-time' remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement is prohibited, unless and in as far as such use is strictly necessary for one of the following objectives:
    • The targeted search for specific potential victims of crime, including missing children;
    • The prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack;
    • The detection, localisation, identification or prosecution of perpetrators or suspects of certain criminal offences.

2. High risk

High-risk AI systems are systems that pose a high risk to the health, safety or fundamental rights of natural persons. Rather than providing an exhaustive list of high-risk AI systems, the AI Act sets out classification rules.

Accordingly, an AI system shall be considered high-risk where both of the following conditions are fulfilled:

  • The AI system is intended to be used as a safety component of a product, or is itself a product that is covered by the EU legislation listed in Annex II of the AI Act. These products include but are not limited to recreational craft and personal watercraft, lifts and safety components for lifts and cableway installations.
  • The product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment pursuant to the EU legislation listed in Annex II.

In addition to the AI systems fulfilling the foregoing criteria, the AI systems listed in Annex III of the AI Act are also considered high-risk. These AI systems include, among others, AI systems intended to be used for the 'real-time' and 'post' remote biometric identification of natural persons and AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity.

According to the European Commission, other examples of high-risk AI systems include technologies used in educational or vocational training that may determine access to education and the professional course of someone's life (e.g. scoring of exams); in employment, management of workers and access to self-employment (e.g. CV-sorting software for recruitment procedures); and in migration, asylum and border control management (e.g. verification of the authenticity of travel documents).

Such high-risk AI systems are only allowed if certain requirements are fulfilled. The main requirements are as follows:

  • Establishment of a risk management system covering the entire lifecycle of the AI system;
  • Data governance, including requirements on the quality of the training, validation and testing data sets;
  • Technical documentation and record-keeping (automatic logging of events);
  • Transparency and provision of information to users;
  • Human oversight;
  • An appropriate level of accuracy, robustness and cybersecurity.

3. Limited risk: Transparency obligations

There are additional transparency obligations for certain AI systems:

  • AI systems intended to interact with natural persons should be designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use.
  • Users of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed thereto of the operation of the system.
  • Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful ('deep fake'), shall disclose that the content has been artificially generated or manipulated.

4. Minimal risk

Most AI systems are considered to pose minimal risk to individuals and society, and AI systems that are not covered by the foregoing rules are not subject to special obligations under the AI Act.

Consequences of non-compliance

Non-compliance with the provisions of the AI Act may lead to fines of up to EUR 30 000 000 or up to 6% of a company's total worldwide annual turnover for the preceding financial year, whichever is higher.

AI Liability Directive

In addition to proposing the AI Act, the European Commission published its proposal for the AI Liability Directive on 28 September 2022. This is another step in the EU's work towards safer, more reliable and more human-centred artificial intelligence.

The AI Liability Directive aims to close an important gap by clarifying how damage caused by an AI system is to be dealt with. The autonomous behaviour and complexity of AI systems pose particular challenges in determining where liability lies in cases of damage.

In order to address these challenges, the AI Liability Directive gives users a right to evidence. If a user requests information (to be used in a claim) from the provider of an AI system and the provider refuses to provide the requested information, national courts will be empowered to order the disclosure of such information.

Moreover, the AI Liability Directive creates a rebuttable presumption of a causal link between the fault of the defendant and the output produced by the AI system or the failure of the AI system to produce an output, where all of the following conditions are met:

  • the claimant has demonstrated, or the court has presumed, the fault of the defendant with respect to non-compliance with a duty of care laid down in EU or national law directly intended to protect against the damage that occurred;
  • it can be considered reasonably likely that the fault has influenced the output produced by the AI system or the failure of the AI system to produce an output;
  • the claimant has demonstrated that the output produced by the AI system or the failure of the AI system to produce an output gave rise to the damage.

The proposal is still at a very early stage and the foregoing might be subject to change throughout the EU legislative process.

Artificial intelligence in Norway

Artificial intelligence has become an increasingly important topic in Norway, as in other European countries. For example, the Norwegian Data Protection Authority has initiated a 'sandbox for responsible artificial intelligence' in order to promote the development and implementation of ethical and responsible artificial intelligence from a privacy perspective. The Authority has expressly noted that it would like the sandbox to represent a wide range of organisations in Norway, including start-ups and private and public entities of different sizes.

Participants in the sandbox receive advice and guidance on the regulatory framework surrounding artificial intelligence. A sandbox project typically lasts three to six months, during which the Authority assists the participating organisations with data protection impact assessments, the implementation of privacy by design, and assessments of the balance between necessity and potential adverse effects on user privacy. During this period, the Authority may also carry out informal inspections to highlight the requirements applicable to the relevant organisation's use of artificial intelligence.

The Norwegian Labour and Welfare Administration is one of the organisations that have participated in the sandbox. Its project addressed the Administration's development of an AI tool to predict sick leave at an individual level.

Applications for the new round of sandbox projects are open until 1 February 2023 and are to be submitted to the Norwegian Data Protection Authority (the application form is only available in Norwegian).

***

Our Technology and Digitalisation team is following the legislative developments relating to artificial intelligence closely and will be happy to answer your questions.

Read previous articles:

  • 3 – Digital Markets Act: Fairer digital markets (Technology and digitalisation, 2022)

    The European Union's regulation on contestable and fair markets in the digital sector, the Digital Markets Act ("DMA"), has entered into force in the EU. Today, a small number of very large online platforms globally greatly influence the framework for innovation, consumer choice and competition in the digital markets, and certain large platforms therefore act as so-called gatekeepers. By establishing duties and prohibitions for such gatekeepers, the new rules seek to ensure fair competition in digital markets and to give users greater freedom of choice. The DMA also enables the European Commission to carry out market investigations and sanction non-compliance in ways heavily influenced by EU competition law enforcement.

  • 2 – Digital Services Act: A safer digital space (Technology and digitalisation, 2022)

    The European Union's new Digital Services Act aims to create a safer digital space for citizens and businesses. The regulation seeks to provide for greater democratic control and supervision of digital platforms, and to reduce the risk of manipulation, disinformation and illegal content.

  • 1 – A Europe Fit for the Digital Age (Technology and digitalisation, Intellectual Property, Data Protection, 2022)

    The European Union has recently been active in terms of legislative developments relating to technology and digitalisation. News about the Digital Services Act, the Digital Markets Act, the European Union's new Artificial Intelligence Act, the Data Act and the Data Governance Act emerges frequently, and there is a new development almost every week.