4 – Artificial Intelligence Act: safe, reliable and human-centred artificial intelligence

Artificial intelligence is increasingly becoming part of businesses' day-to-day operations and our everyday lives. Many businesses use artificial intelligence chatbots, while others use artificial intelligence in credit checks and recruitment processes. The European Union aims to be a global leader in the development and use of safe, reliable and human-centred artificial intelligence.

The rules focus on ensuring trust in artificial intelligence, as this is considered essential to fully unlock its social and economic potential. The European Commission's proposed regulation on artificial intelligence is the first comprehensive legal framework of its kind in the world.

In this article, we will outline the key provisions in the Artificial Intelligence Act ("AI Act") and also touch upon the European Commission's recent proposal for an Artificial Intelligence Liability Directive ("AI Liability Directive").

Legislative status of the AI Act

The AI Act was proposed by the European Commission in April 2021. On 6 December 2022, the Council adopted its position ("general approach") on the proposal. Once the European Parliament finalises its amendments, the Council and the Parliament will begin negotiations on the final text. The AI Act is therefore still at a relatively early stage and may be subject to change before it enters into force.

As for the EEA countries, including Norway, the AI Act is considered to be EEA-relevant. However, Norway's current position is that it is too early to conclude whether the proposal is acceptable.

Scope and application of the AI Act

The AI Act lays down harmonised rules for the provision and use of artificial intelligence systems ("AI systems"). It defines AI systems as software that (i) is developed with machine learning approaches, logic- and knowledge-based approaches and/or statistical approaches, including search and optimisation methods, and (ii) can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with.

The proposal regulates providers of AI systems as well as entities making use of such systems in a professional capacity. It does not cover the use of an AI system in the course of a personal, non-professional activity. The AI Act applies to:

  • Providers placing AI systems on the market or putting them into service in the EU, irrespective of whether those providers are established within the EU or in a third country;
  • Users of AI systems located within the EU;
  • Providers and users of AI systems located in a third country, where the output produced by the system is used in the EU.

The AI Act applies to all sectors, with the exception of AI systems developed or used exclusively for military purposes.

What is the essence of the AI Act?

The proposal takes a risk-based approach, meaning that the higher the risk posed by the use of AI, the more strictly its use is regulated. The AI Act therefore categorises AI systems into four risk levels: unacceptable risk, high risk, limited risk, and minimal or no risk.
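Purely as an illustration, this four-tier structure can be sketched as a simple data model in Python (an unofficial sketch of our own; the enum and its labels are shorthand, not terms defined in the AI Act):

    from enum import Enum

    class AIRiskLevel(Enum):
        """The four risk levels of the AI Act's risk-based approach."""
        UNACCEPTABLE = 1  # prohibited AI practices
        HIGH = 2          # allowed only if strict requirements are fulfilled
        LIMITED = 3       # subject to transparency obligations
        MINIMAL = 4       # no special obligations under the AI Act

    # The lower the value, the stricter the regulatory regime.
    for level in AIRiskLevel:
        print(f"{level.value}. {level.name}")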

1. Unacceptable risk: Prohibited AI systems

The AI Act expressly prohibits the provision and use of the following AI systems:

  • An AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;
  • An AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
  • AI systems used or provided by public authorities for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following:
    • Detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected;
    • Detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity.

  • The use of 'real-time' remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives:
    • The targeted search for specific potential victims of crime, including missing children;
    • The prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack;
    • The detection, localisation, identification or prosecution of perpetrators or suspects of certain criminal offences.

2. High risk

High-risk AI systems are systems that pose a high risk to the health, safety or fundamental rights of natural persons. Rather than providing an exhaustive list of such systems, the AI Act sets out certain classification rules.

Accordingly, an AI system shall be considered high-risk where both of the following conditions are fulfilled:

  • The AI system is intended to be used as a safety component of a product, or is itself a product, that is covered by the EU legislation listed in Annex II of the AI Act. These products include, but are not limited to, recreational craft and personal watercraft, lifts and safety components for lifts, and cableway installations.
  • The product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment pursuant to the EU legislation listed in Annex II.

In addition to the AI systems fulfilling the foregoing criteria, the AI systems listed in Annex III of the AI Act are also considered high-risk. These AI systems include, among others, AI systems intended to be used for the 'real-time' and 'post' remote biometric identification of natural persons and AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity.
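Schematically, these classification rules amount to a two-route test: an AI system is high-risk either if both Annex II conditions are fulfilled, or if the system is listed in Annex III. The following is a minimal, non-authoritative Python sketch (the parameter names are our own paraphrases):

    def is_high_risk(annex_ii_safety_component: bool,
                     third_party_assessment_required: bool,
                     listed_in_annex_iii: bool) -> bool:
        """Simplified sketch of the AI Act's high-risk classification rules.

        Route 1: the system is a safety component of (or is itself) a product
        covered by the EU legislation in Annex II AND that product must
        undergo a third-party conformity assessment.
        Route 2: the system is listed in Annex III of the AI Act.
        """
        route_1 = annex_ii_safety_component and third_party_assessment_required
        return route_1 or listed_in_annex_iii

    # Example: a CV-sorting recruitment tool is listed in Annex III and is
    # therefore high-risk even though it is not an Annex II safety component.
    print(is_high_risk(False, False, True))  # True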

According to the European Commission, other examples of high-risk AI systems include technologies used in education or vocational training that may determine access to education and the professional course of someone's life (e.g. scoring of exams); employment, management of workers and access to self-employment (e.g. CV-sorting software for recruitment procedures); and migration, asylum and border control management (e.g. verification of the authenticity of travel documents).

Such high-risk AI systems are only allowed if certain requirements are fulfilled. The main requirements are as follows:

  • Establishment and maintenance of a risk management system;
  • Data governance, including quality requirements for the data used to train, validate and test the system;
  • Technical documentation demonstrating compliance;
  • Record-keeping through automatically generated logs;
  • Transparency and provision of information to users;
  • Human oversight;
  • Appropriate levels of accuracy, robustness and cybersecurity.

3. Limited risk: Transparency obligations

There are additional transparency obligations for certain AI systems, as illustrated in the sketch following the list below.

  • AI systems intended to interact with natural persons should be designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use.
  • Users of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed to the system of its operation.
  • Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful ('deep fake'), shall disclose that the content has been artificially generated or manipulated.
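For the first of these obligations, a chatbot might, for example, disclose its nature in its opening message. A minimal, hypothetical sketch (the wording of the disclosure is our own, not prescribed by the AI Act):

    def chatbot_opening_message(user_name: str) -> str:
        """Sketch of an opening message informing the user that they are
        interacting with an AI system, unless this is already obvious from
        the circumstances and the context of use."""
        disclosure = "Please note that you are chatting with an AI-powered assistant."
        return f"Hello {user_name}! {disclosure} How can I help you?"

    print(chatbot_opening_message("Kari"))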

4. Minimal risk

Most AI systems are considered to pose minimal risk to individuals and society, and AI systems that are not covered by the foregoing rules are not subject to any special obligations under the AI Act.

Consequences of non-compliance

Non-compliance with the provisions of the AI Act may lead to fines of up to EUR 30 000 000 or up to 6 % of a company's total worldwide annual turnover for the preceding financial year, whichever is higher.
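By way of illustration, since the cap is the higher of the two amounts, the 6 % alternative only becomes relevant for companies with a worldwide annual turnover above EUR 500 million (a simple arithmetic sketch):

    def maximum_fine_eur(worldwide_annual_turnover_eur: float) -> float:
        """Upper limit of fines under the proposed AI Act: EUR 30 000 000 or
        6 % of total worldwide annual turnover for the preceding financial
        year, whichever is higher."""
        return max(30_000_000.0, 0.06 * worldwide_annual_turnover_eur)

    # Example: a company with EUR 1 billion in turnover faces a cap of
    # EUR 60 million, since 6 % of its turnover exceeds EUR 30 million.
    print(maximum_fine_eur(1_000_000_000))  # 60000000.0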

AI Liability Directive

In addition to proposing the AI Act, the European Commission published its proposal for the AI Liability Directive on 28 September 2022. This is a further step in the EU's work towards safer, more reliable and more human-centred artificial intelligence.

The AI Liability Directive aims to close an important gap by clarifying how damage caused by an AI system is to be dealt with. The autonomous behaviour and complexity of AI systems pose particular challenges in determining where liability lies when damage occurs.

In order to address these challenges, the AI Liability Directive gives claimants a right of access to evidence. Accordingly, if a claimant requests information (to be used in a claim) from the provider of an AI system and the provider refuses to provide the requested information, national courts should be empowered to order the disclosure of that information.

Moreover, the AI Liability Directive creates a rebuttable presumption of a causal link between the fault of the defendant and the output produced by the AI system, or the failure of the AI system to produce an output, where all of the following conditions are met (see the sketch after this list):

  • the claimant has demonstrated, or the court has presumed, the fault of the defendant with respect to non-compliance with a duty of care laid down in EU or national law directly intended to protect against the damage that occurred;
  • it can be considered reasonably likely that the fault has influenced the output produced by the AI system or the failure of the AI system to produce an output;
  • the claimant has demonstrated that the output produced by the AI system or the failure of the AI system to produce an output gave rise to the damage.
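Logically, the presumption is a conjunction of these three conditions, as the following non-authoritative sketch shows (the parameter names are our own paraphrases of the conditions above):

    def causal_link_presumed(fault_demonstrated_or_presumed: bool,
                             fault_likely_influenced_output: bool,
                             output_gave_rise_to_damage: bool) -> bool:
        """Sketch of the rebuttable presumption in the proposed AI Liability
        Directive: the causal link is presumed only where all three
        conditions are met, and the defendant may still rebut it."""
        return (fault_demonstrated_or_presumed
                and fault_likely_influenced_output
                and output_gave_rise_to_damage)

    # Example: if the claimant cannot show that the output (or the failure
    # to produce one) gave rise to the damage, the presumption does not apply.
    print(causal_link_presumed(True, True, False))  # False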

The proposal is still at a very early stage and the foregoing might be subject to change throughout the EU legislative process.

Our Technology and Digitalisation team is following the legislative developments relating to artificial intelligence closely and will be happy to answer your questions.

Read more articles in the series:

  • Technology and Digitalisation

    2022

    3 – Digital Markets Act: Fairer digital markets

    The EU's new regulation on open and fair markets in the digital sector, the Digital Markets Act ("DMA"), has entered into force in the EU. Today, a small number of large platforms globally exert considerable influence over the framework for innovation, consumer choice and competition in digital markets. Certain particularly large platforms act as so-called gatekeepers. By laying down obligations and prohibitions for such gatekeepers, the new rules seek to ensure fair competition in digital markets and to give users greater freedom of choice. The DMA also gives the European Commission the power to impose sanctions, which are strongly influenced by the sanctions available to the EU in the field of competition law.

  • Technology and Digitalisation

    2022

    2 – Digital Services Act: A safer digital space

    The EU's new regulation, the Digital Services Act, seeks to create a safer digital space for citizens and businesses. The regulation is intended to facilitate a greater degree of democratic control over and supervision of online platforms, and to reduce the risk of manipulation, disinformation and illegal content.

  • Technology and Digitalisation, Intellectual Property, Privacy

    2022

    1 – A Europe Fit for the Digital Age

    The EU has been an active driving force in developing the regulatory framework for technology and digitalisation, and news about the Digital Services Act, the Digital Markets Act, the Artificial Intelligence Act, the Data Act and the Data Governance Act emerges continually.