
UDC 347.5


European Union civil liability frameworks in the age of artificial intelligence: Assessing current regimes and future prospects

M. Ampovska

Goce Delcev University,

10A, ul. Krste Misirkov, Stip 2000, Republic of North Macedonia

For citation: Ampovska, Marija. 2024. "European Union civil liability frameworks in the age of artificial intelligence: Assessing current regimes and future prospects". Vestnik of Saint Petersburg University. Law 2: 466-482. https://doi.org/10.21638/spbu14.2024.210

This study intends to evaluate the suitability of the EU's liability regimes in solving AI-related concerns by carefully examining the current legal environment. It also examines the consequences and viability of using these frameworks to consider the special features of AI technology. Furthermore, the article offers insight into the evolving nature of AI liability inside the EU and its potential by thoroughly examining current regulations and evaluating the suggested revisions and proposals for the future. Some of the questions that are the focus of attention in the article are the burden of proof and its allocation in cases involving AI-caused damages; the standard of proof required to establish liability in AI-related cases, that is, whether a preponderance of evidence, clear and convincing evidence, or a higher standard should be applied in different scenarios; and the difficulties in establishing causation in AI-related damages. The method used in this paper consists of a comparative analysis, based on the hypothesis (bearing in mind that there is no established court practice in the EU with regard to this matter), of how different jurisdictions in the EU handle liability and proof standards in AI-related cases, highlighting any emerging trends or the most suitable doctrines and practices. These questions certainly open a discussion of the potential future developments in AI liability law, considering the rapid advancements in AI technology and how legal standards might need to adapt as AI systems become more sophisticated. By focusing on these practical aspects of AI liability, this research can offer valuable insights into how the legal system can effectively address the challenges posed by AI technology and set the legal bases for fair and just outcomes for all parties involved.

Keywords: artificial intelligence, civil liability, burden of proof, standard of proof, causation, damage.

1. Introduction

In European Union (EU) legal sources1, artificial intelligence or AI system (the abbreviation AI will be used in the text) is defined as "a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments"2.

1 In the proposal for an EU regulatory framework on artificial intelligence (AI) in April 2021, the Commission proposes to establish a legal definition of "AI system" in EU law, which is largely based on a definition already used by the OECD (Recommendation of the Council on Artificial Intelligence. Accessed May 25, 2024. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449).

© St. Petersburg State University, 2024

In addition, it is also noted in the legal instruments that AI systems are designed to operate with varying levels of autonomy3, while presenting specific characteristics not found in previously known systems, products or services. In 2020, the European Council advocated addressing these characteristics, defining them as the opacity, complexity, bias, a certain degree of unpredictability and partially autonomous behavior of some AI systems, in an attempt to ensure their compatibility with fundamental rights and to facilitate the enforcement of legal rules4.

From a legal point of view, it is important that the notion of an AI system is clearly defined, given that the determination of what constitutes such a system is crucial for the allocation of legal responsibilities under any AI liability framework.

In today's evolving landscape, AI has emerged as a game-changing force with the potential to reshape various industries, economies and societal structures. As AI systems become increasingly integrated into areas such as vehicles, healthcare diagnostics, predictive policing and financial algorithms, questions regarding liability have taken center stage. "Nonetheless, today AI-based robots and algorithms can and do inflict physical and non-physical damages upon us as a society and as individuals, and the legal approach to handle these damages is highly disputed" (Lior 2020, 1046).

This paper examines the liability challenges of the AI era, such as the burden of proof, the standard of proof, and causation. It also addresses the substantive and personal scope of liability, concentrating specifically on civil liability. The analysis centers on the damage incurred by all potentially affected individuals or entities. Throughout this discourse, the term "damaged persons" is employed to encompass the broad spectrum of those who may experience harm within the context of the discussed liability.

One example of damaged persons is consumers. Consumers anticipate consistent levels of safety and the protection of their rights, regardless of whether a product or system utilizes AI technology. Nevertheless, certain aspects of AI, such as its opacity, unpredictability and continuous self-learning, can pose challenges in implementing and enforcing existing laws. Therefore, it is essential to assess the adequacy of current legislation in addressing AI-related risks, consider potential adjustments to existing laws, or explore the necessity of crafting new legislation to effectively address these issues5.

2 AI in US law is defined by the National Artificial Intelligence Initiative Act of 2020 as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to: a) perceive real and virtual environments; b) abstract such perceptions into models through analysis in an automated manner; and c) use model inference to formulate options for information or action (H. R. 6216 — National Artificial Intelligence Initiative Act of 2020. Accessed May 25, 2024. https://www.congress.gov/bill/116th-congress/house-bill/6216).

3 Proposal for an EU regulatory framework on artificial intelligence (AI), fn. 1 (Recommendation of the Council on Artificial Intelligence. Accessed May 25, 2024. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449).

4 "Proposal for a regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts". European Commission. 2021. Accessed May 25, 2024. https://eur-lex.europa.eu/legal-content/EN/ ALL/?uri=CELEX:52021PC0206.

5 "White Paper on Artificial Intelligence: A European approach to excellence and trust". European Commission. 2022. Accessed May 25, 2024. https://commission.europa.eu/system/files/2020-02/commission-white-paper-artificial-intelligence-feb2020_en.pdf.

The existing legal framework consists of the Product Liability Directive6 (PLD, the abbreviation used in this article) and of national liability rules on fault, strict or vicarious liability that can be applied to damage resulting from the use of emerging digital technologies such as AI (Geistfeld, Karner, Koch 2023, 23). Liability is the legal responsibility of a person to be punished, forced to compensate, or otherwise subjected to a sanction by the law (Lehmann, Breuker, Brouwer 2004, 290). In the case of fault liability, this legal responsibility is based on the presence of a certain type of fault of a person, the wrongdoer or tortfeasor, while in the case of strict liability it is based on the presence of a risk for third parties, a risk that originates from the use of objects or the performance of activities falling under the standard of "dangerous objects/activities". Besides the general rule on strict liability, strict liability can also be established in specific cases provided by law, such as the case of vicarious liability.

In its assessment of existing liability regimes in the wake of emerging digital technologies, the New Technologies Formation of the Expert Group has concluded that the liability regimes in force in the Member States ensure at least basic protection of victims whose damage is caused by the operation of such new technologies. However, the specific characteristics of these technologies and their applications — including complexity, modification through updates or self-learning during operation, limited predictability, and vulnerability to cyber-security threats — may make it more difficult to offer these victims a claim for compensation in all cases where this seems justified7.

Regarding liability rules specifically created for damage resulting from the use of AI, the conclusion is that they are not present in the laws of Member States, with the exception of those jurisdictions that allow experimental use of highly or fully automated vehicles8. However, in the absence of liability rules specifically applicable to damage resulting from the use of emerging digital technologies such as AI, the harmful effects of their operation can be compensated under the existing, so-called traditional, laws on damages in contract and in tort in each Member State. This applies to all fields of application of AI and other emerging digital technologies.

It is notable that the EU is crafting new legislation that addresses the liability resulting from the use of AI. In September 2022, the European Commission published two proposals: one for a directive on adapting non-contractual civil liability rules to artificial intelligence and one for adapting the existing PLD to the challenges of digital technologies. The aim of the first proposal is to complement and modernize the EU liability framework by introducing new rules specific to damages caused by AI, as the Commission seeks to introduce a new liability regime to ensure greater legal certainty, thereby enhancing consumer trust in AI and ensuring successful innovations across the EU9. According to the European Commission (EC), however, the "current national liability rules, in particular based on fault, are not suited to handling liability claims for damage caused by AI-enabled products and services".

6 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations, and administrative provisions of the Member States concerning liability for defective products. Accessed May 25, 2024. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:31985L0374.

7 "Liability for artificial intelligence and other emerging digital technologies". European Union. 2019. Accessed May 25, 2024. https://op.europa.eu/en/publication-detail/-/publication/1c5e30be-1197-11ea-8c1f-01aa75ed71a1/language-en.

8 Ibid., 3.

9 "Briefing EU legislation in process Artificial intelligence liability directive". European Parliament 2022. Accessed May 25, 2024. https://www.europarl.europa.eu/RegData/etudes/BRIE/2023/739342/EPRS_ BRI(2023)739342_EN.pdf.

In particular, the European Commission acknowledged that certain instances of injuries caused by AI may find themselves within what can be referred to as "compensation gaps" under national legal frameworks. As a result, they might not offer victims a degree of liability protection that is on par with what they would receive in analogous cases where AI is not involved (Nunez Duffourc, Gerke 2023, 1).

2. Basic research

2.1. Overview of the current liability regimes and their application in cases of AI-inflicted damages

As we already mentioned, a person suffering damage from the use of AI systems can obtain compensation through the fault-based liability regime, the strict liability regime or the vicarious liability regime, in the context of traditional liability regimes (the product liability regime is explained later in the text under a separate title). A party causing damage to another party, where fault is present, is obliged to indemnify it. For damage due to objects or activities that increase the risk of damage to their surroundings, liability is established regardless of fault and is indicated as strict liability. The law can also provide for liability regardless of fault in other cases of damage. If we turn the focus to the general prerequisites for establishing liability, we start from the theory of the law of obligations, which requires three elements: the presence of damage, the presence of a wrongful act that caused the damage and the existence of causation between them (Ampovska 2020, 160).

2.1.1. Fault liability regime

The fault-based liability regime requires that the injured party (the claimant) prove that the defendant caused the damage intentionally or negligently. When it comes to EU Member States, the majority of them apply an objective standard when assessing fault. This means that an applicable standard of care that the defendant should have fulfilled has to be identified, and the claimant has to prove that it was not fulfilled. "In the language of negligence, the issue would be whether the product or some of its key components were negligently designed, manufactured, deployed, secured, maintained, updated, monitored, marketed, operated, used, etc" (Fernandez Llorca et al. 2023, 614).

Under the general burden of proof rule, the injured party, in principle, is responsible for establishing the wrongdoer's fault (which does not apply to legal systems that recognize the system of presumed fault, as in the case of the Croatian legal system), while the causal relationship between the tortfeasor's conduct and/or the thereby created risks and the victim's harm is universally seen as a minimum condition to shift damage to the tortfeasor and establish their legal obligation for compensation. In terms of differentiating wrongfulness and fault, wrongfulness, commonly employed in Germanic countries, is an objective concept that denotes improper conduct, i. e., behavior deemed incorrect according to the law. On the other hand, fault is a subjective concept employed to attribute blame for specific misconduct. Systems lacking this distinction essentially follow a similar rationale by combining objective and subjective aspects under a single term, such as "fault" in France, or by incorporating supplementary mechanisms, like "duties of care" in English and United States law (Erdelyi, Erdelyi 2021, 1315).

The legal doctrine reviews whether the existing fault-based liability regime can be applied to AI. These algorithms can learn from massive amounts of data and, once the data are internalized, they resemble humans in that they can make decisions experientially or intuitively. These characteristics are believed to be the reason that intention and causation, as legal institutes, may not be applicable to AI liability (Bathaee 2018, 891).

Recognizing the challenges that AI systems pose for claimants in establishing fault liability, certain authors believe that these difficulties can be overcome by adjusting the fault-liability regime in a way that makes it more claimant-friendly. Reversing the burden of proof while maintaining the fault-based liability regime is one strategy to make it easier for people to file damage claims. A rebuttable presumption of fault or causality might aid claimants in getting compensation and lessen information disparities between the injured party and the wrongdoer. A presumption regime may be connected to a wide range of factual circumstances that give rise to various kinds of risks and damages, including the liability of parents for harm caused by their children, employers for employees acting on their behalf, building owners, and people engaging in hazardous activities (Buiten, de Streel, Peitz 2023, 3).

The diverse fault liability frameworks across Europe are likely to produce varying outcomes concerning harm caused by AI systems. However, making adjustments to fault liability exclusively in this particular domain demands specific justification. It is not inherently clear why victims of harm from one specific source should receive preferential treatment compared to victims experiencing the same harm from another source. Actions like reducing or reversing the burden of proving fault, for instance, require careful consideration and thorough cost-benefit analyses when compared to alternative scenarios. A person involved in a conventional car accident may find it challenging to comprehend, without further explanation, why a victim with identical injuries caused by an autonomous vehicle should be in a better position under tort law, assuming both base their claims on the wrongdoings of the tortfeasor. This becomes particularly significant in cases where a more traditional technology is gradually being replaced by an AI alternative, leading to an extended period of coexistence for both technologies (Geistfeld, Karner, Koch 2023, 65). It is the opinion of certain theoreticians that the fault-based liability rules outlined above, despite their detailed divergence, are also applicable to autonomous and automated systems unless a jurisdiction has opted to implement an exclusive strict liability regime instead, with the notion that increasing automation in this context typically leads to adjustments of the standard of care (Geistfeld, Karner, Koch 2023, 45).

Recognizing that some activities or technologies, including AI, may not fit neatly into traditional liability paradigms, and subsequently adapting the burden of proof within a fault-based liability system in order to address unique circumstances and facilitate just and equitable outcomes in certain cases, is a method that was already introduced in earlier legal models created in the EU.

The Principles of European Tort Law (PETL)10, for example, propose a blanket clause in Art. 4:201 para. 1: "The burden of proving fault may be reversed in light of the gravity of the danger presented by the activity".

10 "Principles of European Tort Law". European group of tort law 2005. Accessed May 25, 2024. http:// egtl.org/PETLEnglish.html.

In the view of the European Group on Tort Law (EGTL) that authored the principles, the danger required for a reversal of the burden of proving fault is one of intermediate intensity, between the "normal" risk which is inherent to any human activity and the extraordinary or "abnormally" high risk which triggers strict liability. In this context, there is a recognition that AI technology introduces a unique set of challenges. The level of danger posed by AI systems may not fit neatly into the traditional categories of "normal" human activity or "abnormally" high risk. Due to this new and changing technical landscape, certain legal discussions and proposals, such as those addressing AI-related damage, consider changing the burden of proof. This change does not mean that strict liability will be enforced, but rather that the fault-based liability regime may need to be modified to consider the advantages and difficulties that AI technology brings with it (Geistfeld, Karner, Koch 2023, 57).

2.1.2. Strict liability regime

A strict liability regime exists in the laws of all Member States, and the concept on which this regime is based involves the establishment of liability regardless of fault, or even where there is no fault on the part of the liable person. The basis for this liability is therefore found in the "risk theory", the most widely accepted theory in legal doctrine. Within this theory, there is an understanding that a person is permitted to use dangerous objects or to pursue a risk-prone activity for his/her own purposes and, consequently, this person is obliged by law to compensate for the loss if such risk should materialize.

What is of significance for our research, however, is the attitude adopted in the laws of the Member States towards risk-based liability. On the one hand, there are legal systems that introduce risk-based liability through singular instances. This is the case in the Germanic legal systems (Austria, Germany, Liechtenstein and Switzerland), where risk-based liability is regulated exclusively by special legislation that covers particular dangerous objects or activities. On the other hand, we have the example of Croatia, the Czech Republic, Hungary, Estonia, Slovenia and others, which provide a general clause of strict liability in their legal systems and set the basis for the application of the standards "dangerous thing" and "dangerous activity" by the national courts (Geistfeld, Karner, Koch 2023, 73-75).

Applying all of the above to our question of whether a strict liability regime can be applied to AI-inflicted damages leads to the understanding that Member States will differ with regard to the possibility of extending strict liability by analogy to AI systems or AI technologies as dangerous things or activities. In this regard, "civil law jurisdictions that do not foresee a general risk-based liability clause, but which have nevertheless introduced at least some instances thereof linked to specific, peculiar risks, will invariably face the problem of incompleteness" (Geistfeld, Karner, Koch 2023, 74). When it comes to the legal systems that contain the general clause of risk-based liability, however, theoreticians see no obstacles from a legal point of view to applying it to AI, as long as the courts find that the AI technology or the AI system falls under the standards "dangerous thing" or "dangerous activity", with the notion that "due to the wide range of possible applications of AI, it is clear from the outset, though, that not all of them may be deemed sufficiently dangerous to qualify as an obvious candidate for risk-based liability" (Geistfeld, Karner, Koch 2023, 70).

2.1.3. Vicarious liability regime

The legal doctrine of vicarious liability, also known as the respondeat superior doctrine ("let the master answer"), applies where damage has occurred and a legal three-party relationship is formed. Here an auxiliary (in this case the AI entity) carries out an order from its superior or principal which inflicts damage on a third party or a third party's property (Lior 2020, 1096). "Vicarious liability under respondeat superior is a form of liability without fault — the imposition of liability on an innocent party for the tortious conduct of another based upon the existence of a particularized agency relationship. As such, it is an exception to our fault-based liability system and is imposed only where the principal has control or the right to control the physical conduct of the agent such that a master/servant relationship can be said to exist" (Lior 2020, 1097). To determine the applicability of vicarious liability in a particular case, the typical legal test employed by courts assesses whether the agent was acting "in the course of the agency" at the time when the harm occurred. The purpose of this test is to differentiate between actions conducted by the agent for which the principal will not be held responsible and those for which the principal will bear liability. In the context of AI, this differentiation becomes obsolete because AI agents lack the capacity to perform actions that would extend beyond the principal's liability scope. This is due to their single-purpose nature and unwavering commitment to their programmed tasks (Lior 2020, 1096).

Vicarious liability is a rather diverse concept in a European comparison. Therefore, when we talk about the possibility of applying the vicarious regime in cases of AI-inflicted damages, we have to bear in mind that among EU Member States some jurisdictions are very restrictive in tort law and only in rather exceptional cases attribute the conduct of an auxiliary to her/his principal, whereas other countries are much more generous in this respect. Further differences appear with respect to the expected relationship between the auxiliary and the principal (such as employment), or the actual context in which harm was caused by the former (Geistfeld, Karner, Koch 2023, 12). In addition, an opinion has been introduced in legal theory that vicarious liability will be most suitable for autonomous AI, stating that: "AI supervised by humans will pose the least problems for intent and causation tests, whereas autonomous AI will require liability schemes based on negligence, such as those used in agency law for the negligent hiring, training, or supervision of an agent. When the AI operates under human supervision the degree of transparency may shed light on the creator or user of the AI's intent. When the AI is permitted to operate autonomously, the creator or user of the AI should be held liable for his negligence in deploying or testing the AI" (Bathaee 2018, 932).

Some theoreticians support the application of the respondeat superior doctrine in the cases of autonomous AI entities, in light of the black-box problem. On one hand, some authors limit this application to certain circumstances, "when the AI operates autonomously in a mission-critical setting or one that has a high possibility of externalizing the risk of failure on others", based on their conclusion that vicarious liability will be less appropriate in "less dangerous or mission-critical settings" (Lior 2020, 1099). On the other hand, there are authors who do not agree with these approaches and stress that advocating for no-liability or lowering the liability bar (in the form of negligent supervision by the principal) will lead to problematic results in the AI industry that will eventually prevent it from internalizing its inflicted damages and improving its practices. These views of the doctrine indicate that the institute of vicarious liability is connected to the existence of a primarily liable party (the actual tortfeasor or wrongdoer), but in the relationship of a human principal and an AI agent, the AI entity cannot be found liable, which makes the principal not vicariously but primarily liable. And although this is not important from the injured person's perspective, it will have legal consequences for the right to claim reimbursement of the damages that the principal has paid to the injured party for the damage caused by the AI. Bearing all of the above in mind, this doctrine claims:

When we discuss an AI agent, which lacks the ability to assume responsibility over its actions, the only entity we can claim as responsible is the human principal or principals pulling its strings. Thus, concepts of primary and vicarious liability should be treated differently in the AI agent context than in the case of a human agent. To prove vicarious liability, there is no obligation to point to an entity that is primarily liable, especially in the AI context where we know the AI lack the capability to be held liable. The human principal or principals will be named as liable and in fact they will be held primary liable for the actions of their AI agents (Lior 2020, 1098).

2.2. The notion of causation and standard of proof in light of AI damages

When it comes to causation, there is a classical distinction that exists in legal theory between causation in fact and legal causation. Understanding what occurred (i. e., what caused what) in a case is the problem of causality. Legal experts typically take this kind of factual interpretation for granted and believe it can be easily accomplished by common sense. On the other hand, "legal causation is the set of criteria that should be applied either when a clear common sense factual interpretation of the case is missing or when, despite having a clear causal interpretation of the case, legal policy considerations should be applied (e. g., foreseeability) and this results in adopting a causal interpretation that is different from the factual causal one. Typical examples of cases where a legal causal interpretation must be used because a factual interpretation is missing, are so-called cases of overdetermination" (Lehmann, Breuker, Brouwer 2004, 281).

The evidence and the burden of proof have their influence only on legal causation and only from a procedural standpoint. They do not necessarily alter the type of information that a judge takes into account while determining the causal relationships in a case. Problems with the evidence or the burden of proof may, at most, affect the quantity or format of the data presented to a court for reaching a decision (Lehmann, Breuker, Brouwer 2004, 286).

All legal jurisdictions narrow down the extent of liability that would otherwise be determined exclusively through causation. In this context, there is considerable inconsistency in approaches and terminology across different jurisdictions and periods. However, limitations are typically implemented through two fundamental methods: either by constraining causation itself or by restricting the overall scope of liability.

The first method, which involves limiting causation, operates with a broad concept of fault, assigning liability for all damages resulting from a particular conduct. Simultaneously, it treats causation as a normative concept rather than a purely natural one. In this view, whether causation exists is not solely determined by the establishment of a cause-and-effect relationship but also involves additional value judgments. This approach permits the use of concepts like the theory of adequacy to exclude liability for atypical or remote damage, that is, damage arising from an entirely coincidental and objectively unforeseeable interplay of circumstances that the tortfeasor could not have reasonably controlled.

The second method, centered on limiting the scope of liability, understands causality in the natural sense of the term. Drawing on prediction theory, it constrains the scope of liability by confining the duty of care to foreseeable harms — those harms that the defendant could reasonably have been expected to avoid. Consequently, fault-based liability can only be attributed to foreseeable harms in either approach (Erdelyi, Erdelyi 2021, 1316-1317).

In legal theory, most causation doctrines fail when black-box AI is involved, because the causation inquiry will focus on what is foreseeable to the creator or user of the AI (Bathaee 2018, 922) and excludes other damages, leaving the claimant unable to prove causality. Consequently, where causality is lacking, liability cannot be established.

Causality, or the lack thereof, is closely correlated with the capacity of data-driven AI systems to react to unanticipated circumstances, tied to unpredictability and generalization capabilities, and to remain robust when some interventions alter the statistical distribution of the inputs in ways usually undetectable to humans, such as adding undetectable noise, or when substantial alterations affect the system's outputs, such as adding colors to photos or flipping the letters of a text (Fernandez Llorca et al. 2023, 620).

The procedural starting point when it comes to proving causation is the standard of proof. An applicable standard of proof is the degree of conviction that the judge must have in order to be satisfied that the burden of proof has been met. "The standard of proof varies, as it is known, according to the liability regime. Therefore, the plaintiff's defense is not the same if the claim is based on fault, in which case the plaintiff bears the burden of proof of the negligence, as, if it is based on strict liability rules, in which case, the plaintiff should only prove the cause, the damage and the causal link" (Navas 2020, 82). Among the jurisdictions in the EU, there are significant differences with respect to this procedural threshold, that is, the degree of conviction required by the judges to satisfy the burden of proof, and these differences affect whether something can successfully be proven in court. Legal theory distinguishes two types of jurisdictions (Geistfeld, Karner, Koch 2023, 10).

The first type of jurisdiction uses the standard of the preponderance of the evidence (for example Cyprus, Ireland, Malta, and the Nordic countries). When the standard of proof applied in civil procedures is the preponderance of the evidence, it is met when a proposition is shown to be more than 50 % likely to be true. Established by a preponderance of the evidence means evidence that shows that the fact sought to be proven is more probable than not. In other words, a preponderance of the evidence means evidence that, when considered and compared to the evidence opposed to it, carries greater persuasive weight and leads to a belief, in one's judgment, that the proposition sought to be proven is more likely true than not true. Several theorists have argued that this 50 %+ standard is too weak. There are circumstances in which a court should find that the defendant is not liable, even though the evidence presented makes it more than 50 % likely that the plaintiff's claim is true (Smith 2021, 183).

In the second type of jurisdiction the degree to which the fact finder must be persuaded is much higher, making it correspondingly much more difficult to prove something. This is the case with most of the procedural laws in continental Europe. At certain points of the development of the law, the standard of proof that was required was "certainty", though today this standard has been reduced to a "high degree of probability" or a "substantial likelihood" that actually requires the judge to be fully convinced without setting exact percentages of probability (for example Austria and the Czech Republic) (Geistfeld, Karner, Koch 2023, 29).

This distinction has a direct bearing on how a case will turn out. For instance, if the claimant must demonstrate that an AI system was to blame for the loss, but the evidence only indicates a 51 % likelihood that this was the case, the claimant will win the case completely in the first type of jurisdiction (subject to the other requirements of liability) and lose it completely in the second: full compensation will be awarded in the first group of countries and no compensation in the second (Geistfeld, Karner, Koch 2023, 10). Apart from these two types of jurisdictions, there are also jurisdictions that merely emphasize the discretion of the court to come to a conclusion without defining a specified standard of proof (Geistfeld, Karner, Koch 2023, 29).
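The all-or-nothing effect of these procedural thresholds can be stated schematically. The following is a stylized illustration only: the figure of 0.9 is an assumed stand-in for a "high degree of probability", which, as noted above, national laws do not quantify in exact percentages.

\[
R(p) =
\begin{cases}
D, & p \geq t \\
0, & p < t
\end{cases}
\qquad
t_{\text{preponderance}} = 0.5, \quad t_{\text{high probability}} \approx 0.9 \ (\text{assumed})
\]

Here p denotes the probability, on the evidence, that the AI system caused the loss, D the full amount of the damage, t the threshold applied in the forum, and R(p) the compensation awarded. With p = 0.51, as in the example above, R equals D where t = 0.5 and 0 where t is approximately 0.9, which is the divergence in outcomes described by Geistfeld, Karner and Koch.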

Because the law governing a liability claim in a cross-border situation defaults to the laws of the country where the damage occurs, similar AI products or services deployed in multiple EU member states might be subject to varying liability regimes and burden of proof regulations, even if they result in identical types of harm. Consequently, businesses encounter legal ambiguity arising from outdated and unclear EU and national liability rules, and individuals harmed by AI products struggle to secure compensation within the EU11.

2.3. Liability for AI-inflicted damages under the product liability regime

Directive 85/374/EEC of July 25, 1985, on the approximation of the laws, regulations and administrative provisions of EU Member States concerning liability for defective products (also known as the Product Liability Directive, or PLD12) is based on the principle that the producer is liable for damages caused by a defect in a product he has put into circulation. The product liability regime provided in this directive is a risk-based, that is, strict liability. Under the PLD, individuals who have been harmed by a product are required to demonstrate that the product was defective and that this defect directly caused their injury. Proving a defect can be especially challenging for consumers when dealing with technically complex products. To address this issue, national courts have developed various mechanisms to alleviate the burden of proof in such cases. These mechanisms may include imposing disclosure obligations on the product's manufacturer or allocating the costs associated with obtaining expert opinions. Although it is officially stated that its regime continues to serve as an effective tool and contributes to enhancing consumer protection, innovation and product safety, some key concepts and rules adopted in 1985 are challenged by the potential risks of emerging digital technologies (Navas 2020, 78).

11 Briefing EU legislation in process Artificial intelligence liability directive 2022, 3.

12 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products. Accessed May 25, 2024. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:31985L0374.

For example, under the PLD, products are defined as movable goods, even when incorporated into another movable or immovable object (Art. 2). "AI systems challenge this notion of product. Firstly, because in AI systems, products and services interact and it is difficult to shape a crystal-clear distinction. Secondly, it is also questionable if software is covered by the legal concept of product or just as a product component part. Thirdly, if updates and upgrades or other data feeds are included in the concept of 'product' or, finally, whether the legal answer is different depending on handling with embedded or non-embedded software". In addition, legal theory provides the example of a robot, with the notion that it can be considered a movable good and categorized as a product. Where the robot is a computer program embedded in a good, there is no problem with applying the norms of liability "...based on damages caused by products, but with a 'virtual robot', e. g. a stand-alone-software, the question that arises is whether those rules can be applied" (Navas 2020, 78).

Today, the possibility and the applicability of the product liability regime in cases of damages caused by AI technology are not doubted in legal theory. The claimant can base his claim for damages on the PLD which implies that the producer of AI is obliged to take a diligent level of care in designing, testing and employing AI-based solutions. AI systems move the center of power from consumers to producers. When using a technological product that does not rely on AI, the user has control over the mechanical device since the manufacturer controls the product's safety features and offers the interfaces between the product and the user. Users will have considerably less influence over AI systems. Accidents will consequently depend less on the degree of caution exercised by the individual user. For injured parties to get compensation, the producer's or manufacturer's liability will likely take on a greater significance as the user's liability gradually fades into the background (Buiten, de Streel, Peitz 2023, 12-13). The term manufacturers refers to developers of AI or AI components. The providers are the developers who place systems on the market under their own name, though there is significant overlap between the two terms. In addition to manufacturers, the PLD can apply to other economic operators, such as related service providers (Hacker 2023, 6).

However, in order to amend substantive and procedural product liability law and align it with the newest developments of the digital era, the European Commission published a proposal for a new directive on liability for defective products in September 2022 (hereinafter the PLD Proposal13).

The PLD Proposal sets a wider definition of "product" (Art. 4 (1)) and a broader scope of liable parties (Art. 4 (16) and 7) than the existing PLD. To adapt to the digital age, the proposal covers: software (including software updates) — whether embedded or standalone, including AI systems; digital manufacturing files — enabling the automated control of machinery or tools, such as 3D printers; digital services — where these are necessary for products to function as components of the product with which they are interconnected or integrated (e. g. navigation services in an autonomous vehicle)14.

13 Proposal for a Directive of the European Parliament and of the Council on liability for defective products. Accessed January 11, 2024. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52022PC0495.

14 "Briefing EU legislation in process on new product liability directive". European Parliament. 2023. Accessed May 25, 2024. https://www.europarl.europa.eu/RegData/etudes/BRIE/2023/739341/EPRS_ BRI(2023)739341_EN.pdf.

The PLD offers limited guidance when it comes to applying the concept of "defect" to autonomous AI systems. When an AI system is designed to function autonomously, a critical question arises: does any instance of harm automatically constitute a defect, or is it accepted that a well-functioning AI system may still cause damage (Buiten, de Streel, Peitz 2023, 14-15)? If some level of failure is deemed acceptable, the next question becomes, what level of failure is considered tolerable? These inquiries become even more intricate when we consider AI systems with self-learning capabilities, as it becomes challenging to differentiate between harm resulting from the AI's autonomous decisions and harm caused by a genuine defect.

The PLD Proposal allocates responsibility between producers and operators and sets the standard of care for producers. Art. 7 of the PLD Proposal lists the types of economic operators that can be held liable for defective products by introducing a layered approach to liability depending on the different qualifications of the economic operator. Among the list of economic operators are: the manufacturer of a product or component, the provider of a related service, the authorized representative, the importer, and the fulfillment service provider or the distributor. The manufacturer should be liable for damage caused by a defect in their product or components. An innovation introduced in the revised PLD is that any economic operator who has substantially modified the product outside the control of the manufacturer is liable for any defect; such a party is then considered a manufacturer15. This could be considered a good solution, especially keeping in mind that a model that places liability solely on the producer, even where the defect is not strictly a manufacturing defect and some individually identified persons or a research team have been involved in the design, may disincentivize investment (Navas 2020, 81).

The PLD Proposal avoids reversing the burden of proof completely, as this was deemed to potentially expose manufacturers to excessive liability risks. Nevertheless, the proposal does provide certain measures to alleviate the burden of proof. For example, it introduces a rebuttable presumption that establishes a causal link between the defendant's fault and the outcomes (or lack thereof) produced by the AI system. This provision is intended to help address the unique challenges associated with establishing liability in cases involving AI systems, striking a balance between consumer protection and manufacturers' liability concerns (Buiten, de Streel, Peitz 2023, 16).

The method used here, introduced in Art. 9 of the PLD Proposal, alleviates the burden of proof for the injured person by establishing a presumption of defectiveness and causal link under certain conditions. Defectiveness is presumed when: a manufacturer fails to comply with the obligation to disclose information; a product does not comply with mandatory safety requirements; or damage is caused by an obvious product malfunction. On the other hand, a causal link is presumed when: damage is typically consistent with the defect in question; or technical or scientific complexity causes excessive difficulty in proving liability (e. g. "black box" AI systems). In these cases, the manufacturer retains the right to contest the existence of difficulties in achieving the burden of proof or to rebut the presumptions16. Under the PLD Proposal, victims still need to demonstrate that the output produced by the AI system, or the failure of the AI system to produce an output, gave rise to the damage.

15 Ibid., 1.

16 Ibid., 7.

2.4. The road ahead

Whether in fault-based or product liability frameworks, the difficulty of demonstrating causation for AI systems has been openly acknowledged in the literature. According to the Expert Group on Liability and New Technologies, it can be complex, time-consuming, and expensive to investigate the steps that led to a particular outcome for AI, such as comprehending how input data led to output data. This difficulty is highlighted concerning product liability, as victims find it challenging to identify faults and demonstrate causation due to the complexity and opacity of new digital technology. Considering these difficulties, professionals and academics suggest numerous solutions to alleviate the burden of proof placed on victims. Alternatives suggested in the doctrine include strict liability, reversing the burden of proof, and introducing rebuttable presumptions (Fernandez Llorca et al. 2023, 616-617).

There is an ongoing reform of the EU liability framework that applies to AI, and the reform is twofold. On the one hand, it consists of the reform of the Product Liability Directive, presented in the PLD Proposal. At the same time, the EC has published a proposal for an AI Liability Directive (AILD in the text that follows)17.

The AILD is a draft directive from the European Commission, complementary to the proposal for a new product liability directive, that aims to enhance protection for harm caused by AI systems by reducing the burden of proof in compensation claims brought under national fault-based liability systems. To be more precise, the directive aims to establish a rebuttable 'presumption of causality', simplifying the burden of proof for victims in demonstrating that an AI system caused damage. Additionally, the directive would empower national courts to mandate the disclosure of evidence concerning high-risk AI systems suspected of being responsible for harm. Thus, with the burden of proof firmly placed on the shoulders of AI beneficiaries, the main objective of the AI directive is to make it as simple as possible for ordinary people harmed by malfunctioning AI deployed by businesses, including large organizations, to seek compensation.

The AILD provides Member States with some room for interpretation within their legal frameworks. While it establishes EU-wide rules for the presumption of causality, it does not standardize regulations on which party bears the burden of proof or the level of certainty required for the standard of proof. These aspects remain under the jurisdiction of Member States within their national laws. Additionally, the directive adopts a minimum harmonization approach, allowing claimants to invoke more favorable rules available under national law, such as reversals of the burden of proof in fault-based regimes or national strict liability, especially in cases involving damage caused by AI systems18.

Although the PLD typically prohibits EU Member States from enacting national laws that deviate from the standards outlined in the directive (Art. 3), the AILD generally permits Member States to enact more stringent national laws to regulate non-contractual liability for damages caused by AI that goes beyond the scope of the PLD (Art. 1 (4), Recital 11). Consequently, EU Member States retain substantial discretion in formulating national regulations governing liability for injuries resulting from AI systems (Nunez Duffourc, Gerke 2023, 1).

17 Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive). 2022. Accessed May 25, 2024. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52022PC0496.

18 Briefing EU legislation in process Artificial intelligence liability directive 2022, 9.


The absence of clear definitions for certain crucial concepts that are to be interpreted based on national laws and subject to the discretion of national judges poses a potential risk of divergent approaches. Notably, terms like 'fault' and 'standard of care (duty of care)' or 'user' present significant challenges in interpretation. The concern is underscored by the fact that determining whether the standard of 'reasonably likely' is met relies on a subjective evaluation by national judges, conducted on a case-by-case basis. This situation has the potential to undermine legal certainty and contribute to fragmentation across the EU, influenced by variations in national tort law traditions (Dheu, De Bruyne, Ducuing 2023). A complete reversal of the burden of proof could be applied if a strict liability regime is introduced in legal regulation for high-risk AI systems. The list of high-risk AI systems is provided in Annex III of the AI Act19.

There are serious issues with the existing choice of two directives, one targeting product liability and the other explicitly addressing AI. This is significant because it relates to the separation of the purportedly distinct scopes of the PLD and the AILD proposal. The revised PLD covers software more generally than the AILD Proposal, which is limited to AI. However, some businesses misrepresent their use of AI by claiming to employ AI when they are actually merely using common software or marginally more complicated algorithms. This might influence victims to select the incorrect compensation scheme (Hacker 2023, 3).

Notably, the AILD introduces regulations that are explicitly linked to the AI Act and is underpinned by the same fundamental concept. It distinguishes liability based on the category of system risk at play, classifying it as either high-risk or non-high-risk. Thus, in relation to non-high-risk AI systems, the presumption of causality applies only if the court considers it excessively difficult for the claimant to prove a causal link. For high-risk AI systems, however, five requirements are laid down, and the presumption of causality applies only where non-compliance with one of them is established.

Unquestionably, the European Commission is making commendable progress in identifying, assessing, and resolving liability issues brought on by AI systems. However, the legal doctrine has stressed that the case of black-box AI has been left behind. The notion given on behalf of some theoreticians is that if injuries brought on by black-box medical AI are not covered under either EU Member States' strict or fault-based product liability laws for manufacturers or fault-based medical liability laws for healthcare providers, then the goal of the proposed PLD and AILD of reducing the fragmentation of national laws governing AI liability to provide stakeholders with legal certainty is not entirely achieved. The EU can only truly benefit from harmonized measures at the EU level once the EC finds and implements further measures to mitigate potential liability gaps for "black-box" medical AI, greatly strengthening the environment for the deployment and development both of the law and of the medical devices (Nunez Duffourc, Gerke 2023, 5).

19 The Annex is available at: Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. 2021. Accessed May 25, 2024. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206.


3. Conclusions

It has been determined that Member State laws do not contain liability frameworks developed expressly for damage caused by the use of AI, except for those countries that permit the use of partially or entirely automated vehicles for testing purposes. However, the damages inflicted using AI can be compensated under existing so-called traditional laws (liability regimes) in each Member State. This applies to all fields of application of AI and other emerging digital technologies, in the absence of liability rules specifically applicable to damage resulting from the use of emerging digital technologies like AI. The experts assess that the existing regimes offer "at least basic protection", although it is recognized that the specific characteristics of AI systems will present considerable challenges and difficulties, primarily for the injured person.

Both fault-based and risk-based liability can be applied in cases of AI-inflicted damages, with certain restrictions for legal systems that do not have a general clause on risk-based liability and therefore cannot apply strict liability to AI technologies.

The fault liability regime in its traditional form faces numerous challenges when it comes to the application of its basic institutes to AI-related cases. In addition, we can also conclude that the differences between legal systems do not suit the purpose of unified and equal treatment of similar injuries in terms of the right to obtain compensation. The standard of proof in civil proceedings differs significantly among the Member States of the EU. Because different countries have different requirements for plaintiffs to satisfy, these variations could result in different conclusions in cases involving similar AI-caused damages. In some places, the standard of proof may be so high that the party claiming damages must prove that AI caused the damage with a high degree of certainty or likelihood, while in others the preponderance standard will apply. Consequently, identical AI products or services deployed across multiple EU Member States may become subject to differing liability regimes and burden of proof requirements, even when they result in identical types of damages. This is one of the circumstances that emphasizes the urgent need for harmonization and a uniform approach to AI liability within the EU. It would be more equitable for all parties concerned if liability standards and burden of proof rules were uniform, as this would increase legal predictability and clarity. It would guarantee that anyone facing comparable AI-related damages receives the same treatment, no matter where in the EU they are located. Establishing a unified framework for liability and burden of proof requirements in the context of the developing AI landscape is essential for achieving legal cohesion and safeguarding the rights of all EU citizens.

The application of the vicarious liability regime to AI-inflicted damages is also a complex and evolving field, influenced by the unique characteristics of AI systems and the legal landscape in different jurisdictions. As AI technology continues to advance and infiltrate various aspects of society, the adaptation of legal doctrines like vicarious liability to these novel circumstances will be an ongoing challenge. Balancing the need for accountability with the recognition of AI's limited capacity for autonomous decision-making will be a critical consideration in shaping future legal frameworks.

The challenges of regulating AI liability are being recognized in the proposal for a new directive on the liability of defective products, published by the European Commission in September 2022. The proposal intends to alleviate the burden of proof for victims in particular circumstances. In addition, Art. 9 of the revised PLD eases the burden of proof for the injured person by establishing a presumption of defectiveness and causal link under certain conditions.

The PLD Proposal and the AILD Proposal, both published on September 28, 2022, are considered to be the final cornerstones of the EU Commission's approach to AI regulation. Analysis of their content shows that this approach is in fact dual and controversial. It is stressed that the AILD Proposal seeks to harmonize procedural questions, such as disclosure of evidence and the burden of proof, across Member States' national liability regimes for the purposes of AI liability, while largely tying these instruments to violations of the AI Act. On the other hand, "[independent] of the AI Act, the PLD Proposal suggests a general update of classical product liability, with a specific view, however, toward digital products more generally and AI more specifically" (Hacker 2023, 3). Both proposals follow the AI Act when it comes to differentiating high-risk and non-high-risk systems.

In essence, the PLD and related legal frameworks are struggling to keep pace with the unique challenges posed by autonomous AI systems. The law must evolve to provide clarity and define standards for these situations, ensuring that accountability and liability are appropriately attributed when autonomous AI systems are involved. Without a clear delineation between a system's intended operation and defects, the line between harm from autonomous AI decisions and harm due to defects remains blurred, raising complex legal and ethical questions that require thoughtful consideration and resolution.

In conclusion, the special qualities of AI systems and their growing autonomy pose serious issues for the existing liability regimes and their application to AI damages. Both proposals from the European Commission are a forward-looking and adaptive response to these difficulties. They seek to strike a compromise between encouraging innovation and guaranteeing accountability in the AI era by instituting a tiered system of liability, stringent liability for high-risk AI, and a focus on transparency and traceability. These recommendations are a big step in the right direction, as comprehensive and well-defined liability frameworks are needed as AI continues to change many features of our lives.

But it is also our conclusion and our opinion that, since the two legislative intervention proposals are presented as directives, the level of harmonization will be minimal, especially for the AI liability regime. A legal instrument like a directive opens the path to harmonization and sets rules for the entire EU, but it also gives the Member States latitude in how they implement it. This is especially important for the proposed AI system liability regime, as additional latitude is expressly provided for in its provisions. Additionally, the AILD Proposal is predicated on concepts such as damage and liability, which vary substantially amongst legal systems, meaning that the result of applying the AI Liability Directive may differ from one Member State to another and fall short of the anticipated harmonization.

References

Ampovska, Marija. 2020. "Differing unjust enrichment and damages in theory and practice under Macedonian law". Balkan Social Science Review 16: 157-173.

Bathaee, Yavar. 2018. "The artificial intelligence black box and the failure of intent and causation". Harvard Journal of Law & Technology 31 (2): 890-938.

Buiten, Miriam, Alexandre de Streel, Martin Peitz. 2023. "The law and economics of AI liability". Computer Law & Security Review 48: 1-20.

Dheu, Orian, Jan De Bruyne, Charlotte Ducuing. 2023. "The European Commission's approach to extra-contractual liability and AI — A first analysis and evaluation of the AI liability directive and the revised product liability directive". Computer Law & Security Review 51. Accessed May 25, 2024. https://www.sciencedirect.com/science/article/abs/pii/S0267364923001048.

Erdelyi, Olivia, Gábor Erdelyi. 2021. "The AI liability puzzle and a fund-based work-around". Journal of Artificial Intelligence Research 70: 1309-1334.

Fernandez Llorca, David, Vicky Charisi, Ronan Hamon, Ignacio Sánchez, Emilia Gómez. 2023. "Liability regimes in the age of AI: A use-case driven analysis of the burden of proof". Journal of Artificial Intelligence Research 76: 613-644.

Geistfeld, Mark, Ernst Karner, Bernhard Koch. 2023. "Comparative study on civil liability for artificial intelligence". Tort and Insurance Law 37: 1-185.

Hacker, Philipp. 2023. "The European AI liability directives — Critique of a half-hearted approach and lessons for the future". Computer Law & Security Review 51: 1-42.

Lehmann, Jos, Joost Breuker, Bob Brouwer. 2004. "Causation in AI and law". Artificial Intelligence and Law 12 (4): 279-315.

Lior, Anat. 2020. "AI entities as AI agents: Artificial intelligence liability and the AI respondeat superior analogy". Mitchell Hamline Law Review 46 (5): 1040-1100.

Navas, Susana. 2020. "Producer liability for AI-based technologies in the European Union". International Law Research 9 (1): 77-84.

Nunez Duffourc, Mindy, Sara Gerke. 2023. "The proposed EU directives for AI liability leave worrying gaps likely to impact medical AI". Digital Medicine 6: 1-6.

Smith, Martin. 2021. "Civil liability and the 50 %+ standard of proof". The International Journal of Evidence & Proof 25 (3): 183-199.

Received: August 8, 2023
Accepted: January 19, 2024

Author's information:

Marija Ampovska — PhD in Law, Associate Professor; marija.ampovska@ugd.edu.mk
