Who is liable when the use of AI leads to harm?

21/09/2023

Artificial intelligence (AI) is revolutionizing the way we perform tasks, from diagnosing patients to driving cars or answering exam questions. The increasing use of AI holds immense potential and can simplify and improve everything from basic everyday tasks to more important or critical ones. However, AI can also have unintended consequences.

A self-driving car hitting a pedestrian, AI software diagnosing a patient incorrectly, or hiring software with a built-in unlawful bias that affects who is ultimately hired could all be among these unintended consequences.

Traditionally, human beings have been held accountable for their own actions. With AI systems, which can themselves make decisions based on complex algorithms and vast amounts of data, it becomes more challenging to determine who should bear responsibility and be held liable. This raises an important ethical and legal question: Who should be held liable when the use of AI leads to harm?

Starting points

Under Norwegian tort law, three basic conditions must be met for someone to claim compensation. There must be a basis of liability, a financial loss, and a direct causal link between the basis of liability and the financial loss. If the claim for compensation is based on a contractual relationship, there must also be a breach of contract.

As a general rule, bodily injury is assessed under the rules of non-contractual liability, for example if a self-driving car injures a pedestrian. In cases where the parties have entered into an agreement and damage arises in connection with that agreement, the rules of contractual liability will govern the determination of liability. In general, it is worth noting that contractual liability rules are strict, as long as one can pinpoint an objective deviation from what the AI system is supposed to perform according to the contract. The contract is, however, a decisive factor when determining the scope of the liability. Liability may be relevant, for example, if an AI system acquired to review large volumes of documents has a flaw that causes essential details to be overlooked, resulting in financial loss for the buyer of the system.

What form of liability?

When evaluating liability in the use of AI, the central question is whether there will be a basis for liability and, if so, for whom. Under Norwegian law, the starting point for non-contractual liability is that the party causing harm must have acted negligently. However, in some situations, one may invoke strict liability for AI-induced harm, such as in the use of cars and other products.

Product liability is a particularly relevant strict liability basis that manufacturers, product owners, and users of AI systems should be aware of (see the Product Liability Directive, Directive 85/374/EEC). Product liability holds manufacturers responsible for harm caused by defects in a product. As of now, software and the like are not included in the definition of "product", but the EU Commission has proposed amending the Product Liability Directive to include products such as software. The definition would then encompass both physical objects containing AI and pure AI systems.

For liability to arise under the Norwegian act relating to product liability, the product must have a "safety defect" (nw: sikkerhetsmangel). If the injured party can demonstrate that the product lacked the safety features that could reasonably be expected, the manufacturer will be liable under the act, and the injured party is entitled to compensation. This is a form of strict liability, meaning it does not matter whether the manufacturer should have acted differently; it is sufficient that the product in question has a safety defect. If the manufacturer or owner of the AI system is not strictly liable for the harm caused by the system, negligence may be a relevant basis for liability.

Who should have acted differently?

To be held liable for negligent harm, someone must have demonstrated subjective fault. The question is whether that someone could and should have acted differently to prevent the harm from occurring.

Today, most AI systems are not autonomous actors; they operate within the framework of the rules and instructions they are programmed with. As yet, there is no legal basis for holding such systems liable in and of themselves, and potential liability must be attributed to the people who have used, developed, implemented, and maintained the AI system. This may include users, developers, engineers, designers, and managers responsible for the systems. Determining who should be held responsible is an assessment that will largely depend on where one can pinpoint a fault or a defect – or more specifically, what led to the harm that occurred. Can the fault be traced back to the user not following the instructions of the AI system accurately? Does the user manual contain errors, making the manufacturer liable? Was the AI system coded in a way that made a defect unavoidable, making the developer liable? What if no errors were made at any stage, but harm still occurred?

A problem with AI is that algorithms built on large amounts of data can make decisions that are impossible to explain or rationalize – so-called "black-box" AI. Furthermore, such algorithms can evolve and further develop themselves as users feed the system with input. The difficulties in pinpointing where the fault occurred will therefore become more significant, and identifying what triggered the harm, not to mention establishing a causal connection, may become an impossible task. Hence, there is a rapidly growing need for legislation that specifically addresses the issue of liability for AI-induced harm, so that substantial resources are not spent on determining who should be held responsible.

The EU's proposal for a new AI Liability Directive may help alleviate some of the headaches that arise alongside these questions. The proposal provides common EU rules on the production of evidence, the burden of proof, and a presumption of causality in the case of fault. The directive is still under consideration in the Council and the European Parliament, so it will be some time before it comes into effect in the EU – and even longer before it is implemented in Norwegian law.

The potential for AI to cause harm concerns us all

Although current legislation and regulations can be used to allocate liability among the parties in a damage incident, rapid technological development may lead to situations where the regulations are not adequately tailored to the various questions that arise when the use of AI causes harm. Politicians and fundamental societal institutions have a responsibility to develop flexible and adaptable regulations. However, it is likely that lawmakers will be unable to address every scenario that may occur, and some incidents may therefore leave developers, owners, and users of AI uncertain about who should be held liable in the specific case.

Especially in the process of designing and developing AI, it is essential to ensure that systems have robust security mechanisms and adhere to ethical standards. In addition, owners of AI systems should make potential weaknesses in the systems visible to end-users to raise awareness of various risk points.

However, not all responsibility lies with the individuals behind the AI systems. Users of AI systems must also adopt a conscious attitude towards what responsible use of AI requires. Users cannot blindly rely on the output of an AI system but must critically evaluate and, where necessary, verify it before making a decision. It is important to emphasize that adequate training in and understanding of AI technologies and systems are critical for users to make informed decisions and use the systems as safely as possible. The various parties involved in the development and use of AI will therefore share responsibility for how the technology is managed. Everyone must contribute to the safe and secure development and use of AI in the future, even after the new EU regulations are in place.

Authors
Guro S. Kyrkjebø Nybø
Associate
E-mail gny@wr.no
Solveig Hodnemyr
Associate
E-mail sol@wr.no