
LEGAL MECHANISMS TO REGULATE CIVIL LIABILITY FOR ACTIONS OF ARTIFICIAL INTELLIGENCE IN THE RUSSIAN FEDERATION AND EUROPEAN UNION LAW

KSENYA KONDRATEVA, National Research University Higher School of Economics (Perm, Russia)

TIMUR NIKITIN, Geo Track Technologies Inc. (Dover, DE, United States)

https://doi.org/10.17589/2309-8678-2021-9-3-60-82

In this article, the authors discuss existing ideas about liability for the actions of artificial intelligence based on the fault-based and strict approaches to defining the elements of civil liability in the Russian Federation and the European Union. These approaches have drawbacks, consisting, first of all, in the excessive limitation of the development of innovation and in low efficiency in achieving the goals of civil liability and implementing its functions. The risk-based approach to determining the elements of civil liability for the actions of artificial intelligence proposed by the authors is intended to neutralize the named drawbacks. Based on an analysis of the spheres of application of artificial intelligence and of the technology itself, the risk-based approach allows a more efficient and flexible determination of the subject of liability and of its types and limits, ensuring a balance between the development of innovation and the goals of civil liability. As a result of the study, the authors' definition of a risk-based approach to civil liability for the actions of artificial intelligence is given, its features and elements are disclosed, and its advantages over existing approaches to civil liability are demonstrated.

Keywords: artificial intelligence; liability; emerging technologies; law and technology; theory of law.

Recommended citation: Ksenya Kondrateva & Timur Nikitin, Legal Mechanisms to Regulate Civil Liability for Actions of Artificial Intelligence in the Russian Federation and European Union Law, 9(3) Russian Law Journal 60-82 (2021).

Table of Contents

Introduction

1. Strict (Non-Fault) and Fault Liability Approach of a Legal Mechanism to Regulate Civil Liability for Actions of Artificial Intelligence

2. Risk-Based Approach of a Legal Mechanism to Regulate Civil Liability for Actions of Artificial Intelligence in the Russian Federation and European Union Law

Conclusion

Introduction

Presidential Decree No. 490 of 10 October 2019 adopted the National Strategy for the Development of Artificial Intelligence (hereinafter referred to as AI) for the period up to 2030.1 Its development is named a condition for the entry of the Russian Federation into the group of leaders in the development and implementation of artificial intelligence technologies and, as a result, for the technological independence and competitiveness of the country.

Order of the Government of the Russian Federation No. 2129-r of 19 August 2020 approved the Concept of Regulation of Relations in the Field of AI and Robotics Technologies (hereinafter referred to as the Concept),2 defining basic approaches to transforming the system of regulation of public relations in the field of artificial intelligence technologies, as well as identifying legal barriers and obstacles to the development and application of AI systems.

Section 1, paragraph 3, of the Concept defines the principle of regulating social relations arising from the development and application of artificial intelligence systems as a risk-based, interdisciplinary approach to regulatory influence, involving the adoption of restrictive rules for particular cases of AI application.

1 Указ Президента Российской Федерации от 10 октября 2019 г. № 490 «О развитии искусственного интеллекта в Российской Федерации» (вместе с «Национальной стратегией развития искусственного интеллекта на период до 2030 года») // Собрание законодательства РФ. 2019. № 41. Ст. 5700 [Decree of the President of the Russian Federation No. 490 of 10 October 2019. On the Development of Artificial Intelligence in the Russian Federation (together with the National Strategy for the Development of Artificial Intelligence for the Period up to 2030), Legislation Bulletin of the Russian Federation, 2019, No. 41, Art. 5700].

2 Распоряжение Правительства Российской Федерации от 19 августа 2020 г. № 2129-р «Об утверждении Концепции развития регулирования отношений в сфере технологий искусственного интеллекта и робототехники на период до 2024 года» // СПС «КонсультантПлюс» [Order of the Government of the Russian Federation No. 2129-r of 19 August 2020. On Approval of the Concept for the Development of Regulation of Relations in the Field of Artificial Intelligence Technologies and Robotics for the Period up to 2024, SPS "ConsultantPlus"] (Apr. 20, 2021), available at http://www.consultant.ru/document/cons_doc_LAW_360681/.

In the provision under consideration, the Concept indicates that further work is required on the mechanisms of civil, administrative and criminal liability in the event of harm caused by AI systems, including with regard to identifying the persons who will be held responsible for their actions, introducing, if necessary, non-fault civil liability, and the possibility of using means to redress the harm caused by the actions of artificial intelligence and robotics (e.g. liability insurance, the establishment of compensation funds, etc.).

The topic of artificial intelligence is receiving attention not only in Russia. In the European Union, attempts have been made since 2015 to design a special mechanism for the legal regulation of artificial intelligence: researchers call the Digital Single Market Strategy for Europe the starting point in this field.3

From that moment on, the legal landscape in the studied area has been rapidly and intensively taking shape in the EU: on 25 April 2018 the Strategy for Artificial Intelligence (hereinafter referred to as the EU Strategy) was adopted,4 a High-Level Expert Group on Artificial Intelligence was created, which on 27 November 2019 presented a report on liability for artificial intelligence and other emerging digital technologies,5 and the European Commission's White Paper On Artificial Intelligence - A European Approach to Excellence and Trust (hereinafter referred to as the White Paper) was adopted.6 The European legislator emphasizes that although EU legislation as a whole is fundamentally applicable to relations in the field of artificial intelligence, it is important to assess the possibility of changing it.7

1. Strict (Non-Fault) and Fault Liability Approach of a Legal Mechanism to Regulate Civil Liability for Actions of Artificial Intelligence

However, before proceeding to the task set by the Concept of "working out the mechanisms of civil liability," it should be noted that the doctrine of civil liability itself requires rethinking.

3 Кашкин С.Ю., Покровский А.В. Искусственный интеллект, робототехника и защита прав человека в Европейском союзе // Вестник Университета имени О.Е. Кутафина (МГЮА). 2019. № 4. C. 67 [Sergei Iu. Kashkin & Alexander V. Pokrovskii, Artificial Intelligence, Robotics and Human Rights Protection in the European Union, 4 Courier of Kutafin Moscow State Law University (MSAL) 64, 67 (2019)].

4 Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions: Artificial Intelligence for Europe, COM/2018/237 final (May 2, 2021), available at https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe.

5 European Commission, Liability for Artificial Intelligence and Other Emerging Digital Technologies, Report from the Expert Group on Liability and New Technologies - New Technologies Formation (2019) (May 2, 2021), available at https://op.europa.eu/en/publication-detail/-/publication/1c5e30be-1197-11ea-8c1f-01aa75ed71a1/language-en/format-pdf.

6 European Commission, White Paper on Artificial Intelligence, 19 February 2020, COM(2020) 65 final, at 14 (May 2, 2021), available at https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf.

7 Id.

O.A. Kuznetsova, pointing out the need to change the methodological paradigm in the study of civil liability, reasonably notes that

in the established paradigm of legal liability in the civil sphere, anomalies appear more and more often - inexplicable facts that "undermine" the basic tenets of the leading theory or, frankly speaking, directly contradict them.8

The reason for such a remark is that

civilists try by hook or by crook to explain that all "atypical" types of civil liability correspond to the signs of legal responsibility, and thereby "cram" them into the construction of responsibility, or construct the concept of a special, curtailed, atypical responsibility. Thus, it is argued, for example, that when damage is caused by a source of increased danger, its owner always acts culpably, because he "overlooked" this source; parents are responsible not without guilt, but for their guilt in the form of improper upbringing and supervision, which is also a kind of indirect causal relationship between the child's behavior and the harm, etc. Or it is explained that non-fault responsibility, despite the absence of all the obligatory signs of responsibility, is still a responsibility, but with a peculiarity.9

One of the named anomalies is the non-fault or, as it is called in the acts of the European Union, strict concept of civil liability.

The essence of the concept under consideration boils down to the fact that the composition of the offense is truncated: the subjective side of the person brought to justice is excluded from it. Its existence is justified by the fact that the tortfeasor "should seek with maximum intensity the ways to prevent this harm,"10 "in order to stimulate in every possible way the further improvement of ... technology, the further growth of safety,"11 and "to encourage the owners of sources of increased danger to take appropriate measures."12

8 Кузнецова О.А. Гражданско-правовая ответственность: необходимость смены методологической парадигмы // Проблемы взыскания убытков в российском правопорядке: сборник статей VI ежегодной международной научно-практической конференции «Коршуновские чтения» [Olga A. Kuznetsova, Civil Liability: The Need to Change the Methodological Paradigm in Problems of Recovery of Damages in the Russian Legal Order: Collection of Articles of VI Annual International Scientific and Practical Conference "Korshunov Readings"] 10 (Iulia S. Kharitonova ed., 2016).

9 Id.

10 Флейшиц Е.А. Обязательства из причинения вреда и неосновательного обогащения [Ekaterina A. Fleishits, Liabilities from Harm and Unjust Enrichment] 137 (1951).

11 Шварц Х.И. Значение вины в обязательствах из причинения вреда [Khanan I. Schwartz, The Value of Guilt in the Obligations of Causing Harm] 48 (1939).

12 Яичков К.К. Система обязательств из причинения вреда в советском праве // Вопросы гражданского права [Konstantin K. Iaichkov, The System of Obligations from Causing Harm in Soviet Law in Civil Law Issues] 169 (Ivan B. Novitskii ed., 1957).

However, the possibility of bringing a person to civil liability without establishing all the elements of the subjective side of the offense, and doing so with the aim of encouraging the improvement of science and technology and the search for ways to avoid causing harm, conflicts with the theory of law, which performs a methodological function in all legal sciences and is "able to determine the most general limits of scientific research, to exclude certain decisions from circulation on the basis of their inconsistency with the methodological principles adopted in this science."13

The concept of legal responsibility is immanently connected with the concept of an offense, the essence of which is that a delinquent causes harm to society by encroaching on a benefit protected by the latter. At the heart of such harm is an arbitrarily broken communication link between the entitled subject and the offender, which is the basis of any legal relations. According to the communicative approach to law, whose provisions are close to the authors of this study, legal communication is characterized by continuity, "meaning that the rights and obligations of the subjects of legal relations should not be interrupted against the will of the participants in legal communication."14 If this communication is severed arbitrarily, society deems it necessary to react to such actions and to take measures aimed at restoring the broken communication by bringing the offender to legal responsibility.

In accordance with the foregoing, researchers rightly conclude that legal responsibility in its ontological status is a condemnation of the delinquent for the legal communication he arbitrarily severed.

Such condemnation pursues as its goals the protection of law and order and the education of citizens in the spirit of respect for the law,15 and performs the functions of prevention, punishment and compensation.16

From the point of view of the theory of law, neither the goals nor the functions of legal responsibility can be achieved if the offender did not experience (or should not have experienced) a certain intellectual process, performing a mental operation in the event of the distortion or breakdown of legal communication. Without such an intellectual and volitional element, the condemnation of the offender loses all meaning: society cannot censure a person who has not expressed his will to violate the norm.

Obviously, in light of the above, if even the objective concept of guilt found in civil law, justly criticized,17 which defines guilt through behavior and sees a violation in

13 Кузнецова О.А. Проблемы учения о гражданско-правовой ответственности // Lex Russica. 2017. № 5(126). С. 12 [Olga A. Kuznetsova, Problems of the Doctrine of Civil Liability, 5(126) Lex Russica 11, 12 (2017)].

14 Поляков А.В. Общая теория права: проблемы интерпретации в контексте коммуникативного подхода [Andrei V. Poliakov, General Theory of Law: Problems of Interpretation in the Context of the Communicative Approach] 789 (2016).

15 Лазарев В.В. Общая теория права и государства [Valerii V. Lazarev, General Theory of Law and the State] 162 (2001).

16 Id.

17 Кузнецова О.А. Случай как основание исключения гражданско-правовой ответственности // Вестник Пермского университета. Юридические науки. 2013. Вып. 1(19). C. 148 [Olga A. Kuznetsova, Case as a Basis for Excluding Civil Liability, 1(19) Perm University Bulletin 145, 148 (2013)].

the failure to take necessary measures, has flaws, then the possibility of bringing a person to civil liability entirely regardless of guilt enters into direct methodological contradiction with the theory of law.

Bringing a person to civil liability for the very fact that he owns this or that object corresponds neither to the essence (condemnation), nor the goals (education), nor the functions (prevention and punishment) of legal liability.

Theorists of law directly point out that bringing to legal responsibility pursues "the goal of ousting alien relations from the life of our society."18

It seems that the logical law of concept formation is applicable here: if a phenomenon does not have the features included in the content of a concept, it cannot be attributed to the class of phenomena covered by the scope of that concept.

In this regard, the idea expressed in the Concept of a possible "revision, if necessary, of strict (non-fault) civil liability," the very existence of which is highly controversial in civil law science, does not seem entirely justified from a methodological point of view.

Using the stated conclusions about the doctrinal flaws of the concept of strict (non-fault) liability as a starting point, we proceed to an analysis of its existing legislative embodiment and of civil law scholars' proposals on its applicability to civil liability for the actions of artificial intelligence.

In current Russian legislation, the concept of strict (non-fault) liability is most clearly reflected in Article 1079 of the Civil Code of the Russian Federation (hereinafter referred to as the Civil Code, the Code), which imposes on the owners of sources of increased danger the obligation to compensate the victim for harm. Where harm is caused through the interaction of several sources of increased danger, the liability of their owners is joint and several. As circumstances that, at the discretion of the court, may serve as grounds for terminating the obligation to compensate for harm in full or in part, the law names only force majeure, the removal of the source of increased danger from the owner's possession unlawfully and through no fault of his own, and the conduct of the victim himself.

The said provision of the Code does not contain a closed list of sources of increased danger; their attributes are disclosed in paragraph 18 of the Ruling of the Plenum of the Supreme Court of the Russian Federation No. 1 of 26 January 2010 "On the Application by Courts of Civil Legislation Regulating Relations Under Obligations Due to Harm to the Life or Health of a Citizen." According to it, a source of increased danger should be recognized in any activity whose implementation creates an increased likelihood of harm due to the impossibility of full human control over it, as well as in activities involving the use, transportation or storage of objects, substances and other items of production, economic or other purpose possessing the same properties.

18 Алексеев С.С. Механизм правового регулирования в социалистическом государстве [Sergei S. Alekseev, The Mechanism of Legal Regulation in a Socialist State] 26 (1966).

It is specifically indicated to the courts that, taking into account the special properties of the objects, substances or other items used in the course of an activity, the court has the right to recognize other activities as a source of increased danger.

An analysis of Russian legal doctrine shows that researchers are inclined to apply the provisions of Article 1079 of the Civil Code to civil liability for the actions of artificial intelligence. T.A. Bubnovskaia concludes that

artificial intelligence is a source of increased danger, and harm caused to third parties (for example, in a cyberattack) will entail liability even in the absence of the fault of the tortfeasor.19

A.A. Antonov argues that the provisions of Article 1079 of the Civil Code should be supplemented with an indication of devices "appearing as a result of the rapid development of science and technology and representing previously unknown sources of increased danger," which include drones.20 The same researcher subsequently directly pointed out that

the use of norms regulating a special object of property rights - animals, can be compared with objects of artificial intelligence, since the latter can also behave autonomously and even aggressively21

and for this reason proposes to enshrine this in Article 1079 of the Code,

since the need to supplement the list in the norm under consideration with present and future sources of increased danger, such as fighting dogs, electric vehicles or AI systems such as remotely controlled vehicles, has already arisen in theory and practice.

A similar legislative approach has been taken by the European legislator. The preamble to Directive 85/374/EEC22 (hereinafter referred to as the PLD) explicitly states that liability without fault on the part of the producer is the only means

19 Бубновская Т.А. Гражданско-правовая ответственность при использовании беспилотных автомобилей // Транспортное право. 2019. № 3. C. 6 [Tatiana A. Bubnovskia, Civil Liability When Using Self-Driving Cars, 3 Transport Law 6, 6 (2019)].

20 Антонов А.А. Некоторые аспекты ответственности за вред, причиненный источником повышенной опасности // Юрист. 2019. № 12. С. 26 [Alexander A. Antonov, Some Aspects of the Liability for Harm Caused by a Source of Increased Danger, 12 Lawyer 25, 26 (2019)].

21 Антонов А.А. Искусственный интеллект как источник повышенной опасности // Юрист. 2020. № 7. С. 72 [Alexander A. Antonov, Artificial Intelligence as a Source of Increased Danger, 7 Lawyer 69, 72 (2020)].

22 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products (May 2, 2021), available at https://eur-lex.europa.eu/legal-content/GA/TXT/?uri=celex:31985L0374.

of adequately solving the problem, peculiar to our age of increasing technicality, of a fair apportionment of the risks inherent in modern technological production.

Liability is borne by the producer, a term defined very broadly, provided that the injured party proves the damage, the defect, and the causal relationship between the defect and the damage (Art. 4 PLD).

Damage means any damage to the life or health of the victim, as well as property damage in excess of 500 euros (other than damage to the defective product itself) caused to property that was used for personal purposes or intended for such purposes.

A defect in a product is considered established if the product did not provide the expected level of safety, taking into account all circumstances, including the presentation of the product, the use to which it could reasonably be expected to be put, and the time when the product was put into circulation (Art. 6 PLD).

An expert group set up by the European Commission to investigate the issues of civil liability for the actions of artificial intelligence concludes that strict liability of the producer should play a key role in redressing the harm caused, no matter what form the artificial intelligence technology takes, since it allows the victim to successfully receive compensation for the harm suffered.23

However, the outlined methodological shortcomings inherent in the strict (non-fault) concept of civil liability explain its main disadvantage: the threat of imposing the obligation to compensate for harm on the owner of the artificial intelligence technology will have a deterrent effect on the activities of developers, acting as a barrier to the development of innovation. This becomes especially obvious in relation to joint developments, since the creators of the technology will be jointly and severally liable for the damage caused.

Particularly unfortunate in this regard are the provisions of paragraph 4 of Article 14 of the Law of the Russian Federation "On Protection of Consumer Rights,"24 which establish the liability of the producer (executor) for harm caused to the life, health or property of the consumer in connection with the use of materials, equipment, tools and other means necessary for the production of goods (performance of work, provision of services), regardless of whether the level of scientific and technical knowledge made it possible to reveal their special properties, a position consistently confirmed in the practice of the Supreme Court of the Russian Federation.25

23 Liability for Artificial Intelligence, supra note 5.

24 Закон Российской Федерации от 7 февраля 1992 № 2300-I «О защите прав потребителей» // Собрание законодательства РФ. 1996. № 3. Ст. 140 [Law of the Russian Federation No. 2300-I of 7 February 1992. On Consumer Protection, Legislation Bulletin of the Russian Federation, 1996, No. 3, Art. 140].

25 Постановление Пленума Верховного Суда Российской Федерации от 28 июня 2012 № 17 «О рассмотрении судами гражданских дел по спорам о защите прав потребителей» // СПС «КонсультантПлюс» [Resolution of the Plenum of the Supreme Court of the Russian Federation No. 17 of 28 June 2012. On Consideration by the Courts of Civil Cases in Disputes on the Protection of Consumer Rights, SPS "ConsultantPlus"] (May 1, 2021), available at http://www.consultant.ru/document/cons_doc_LAW_131885/.

Noteworthy in this regard is the provision of subparagraph (e) of Article 7 of the PLD, which establishes a legal solution directly opposite to that of the Russian legislator: the producer is exempted from liability if the state of scientific and technical knowledge at the time when the product was put into circulation was not such as to enable the existence of the defect to be discovered.

In foreign legal doctrine, this threat has been called the chilling effect of liability law,26 and researchers see its solution in the application of non-fault-based compensation schemes (NFCS).27

Non-fault-based compensation schemes are designed to make it easier for victims to receive compensation for harm by removing the need to find the person at fault and to establish a causal link between his actions (or inaction) and the harm caused, thereby resolving the shortcomings of the institution of civil liability named by these authors.28 The authors of the proposed solution conclude that measures to prevent harm, so clearly expressed in the strict (non-fault) concept of civil liability, are the flip side of innovation;29 however, this fact, in our opinion, is not a reason for abandoning the institution of civil liability, since, unlike the latter, NFCS cannot achieve the goals of protecting law and order and of education. NFCS simply do not have the necessary tools for this.

The other existing approach to civil liability is based on fault; its essence boils down to the fact that, for civil liability to arise, it is necessary to establish the composition of the offense using the elements identified by the theory of law: object, subject, objective side and subjective side. Characteristic in the light of this work is the assertion of legal theorists that "the composition of the offense becomes the only, sufficient basis for legal liability, helps to determine its nature, scope and limits."30

For the purposes of this work, the most interesting is the subjective side of the offense, meaning the mental attitude of a person to what he has done, which consists in "the nature of the offender's assessment of his actions and his foresight of the socially dangerous consequences of his behavior."31

Despite the prevalence in the Civil Code of norms that consolidate the objective concept of the offender's guilt, Articles 538 and 777 of the Code consolidate the

26 Woodrow Barfield et al., The Cambridge Handbook of the Law of Algorithms 447 (2021); Maurice Schellekens, Self-Driving Cars and the Chilling Effect of Liability Law, 31(4) Comput. L. Secur. Rev. 506, 507 (2015).

27 Maurice Schellekens, No-Fault Compensation Schemes for Self-Driving Vehicles, 10(2) L. Innov. Tech. 314 (2018).

28 Id. at 319.

29 Id. at 316.

30 Матузов Н.И., Малько А.В. Теория государства и права: курс лекций [Nikolai I. Matuzov & Alexander V. Malko, Theory of State and Law: A Course of Lectures] 579 (2020).

31 Id. at 582.

existence of a subjective concept of guilt in the form of intent or negligence of the inflictor of harm, that is, the mental attitude of a person to what he has done. It is this interpretation that follows from the analysis of law enforcement practice, reflected, inter alia, in the Resolution of the Ninth Arbitration Court of Appeal of 13 July 2020 in case No. A40-163767/2019, and from its doctrinal interpretation.32

The concept of objective (behavioral) guilt prevailing in the Code is criticized in the legal literature and, in our opinion, with good reason. Researchers note that

the behavioral theory of guilt has serious defects: firstly, it is impossible to define within it the concept of the forms of guilt - intent and negligence; secondly, in its content the behavioral understanding of guilt coincides with the concept of inaction, mixing the objective and subjective sides of the offense; thirdly, it is completely unacceptable for tort offenses; fourthly, for the same offense a person can be found not guilty in criminal proceedings and guilty in civil proceedings.33

Professor O.A. Kuznetsova makes a well-founded conclusion that


the behavioral concept of guilt has a decisive flaw for this study: the understanding of innocence inherent in it is unacceptable for extra-contractual liability, since it is impossible to take or fail to take measures for the proper performance of a tort obligation: it arises at the moment the harm is caused. The proper performance of a tort obligation is the actual compensation of the victim for the harm. In the Civil Code, the issue of the concept of innocent infliction of harm in the commission of a tort is not resolved at all.34

Interesting in light of the above are the provisions of paragraph 19 of the National Strategy for the Development of Artificial Intelligence, which enshrines the list of basic principles for the development and use of AI technologies, observance of which is mandatory during its implementation. These include, among others: the protection of human rights and freedoms; the inadmissibility of using artificial intelligence for the purpose of deliberately causing harm to citizens and legal entities, and the prevention and minimization of the risks of negative consequences of using artificial intelligence technologies; transparency: the explainability of the work of artificial intelligence and of the process by which it achieves results; non-discriminatory access of users of products created using artificial intelligence technologies

32 Kuznetsova 2013, at 146.

33 Id. at 148.

34 Id. at 145.

to information about the algorithms of artificial intelligence used in these products, as well as support for competition.

It seems that the provision prohibiting direct and indirect intent as the form of the mental attitude of the inflictor of potential harm to citizens and legal entities should be assessed positively, since a prohibition of accidental harm could negatively affect the prospects for the development of artificial intelligence, placing potential AI creators in a vulnerable legal position.

The foregoing, in our opinion, clearly demonstrates that the non-fault concept prevailing in civil law should be changed by bringing it closer to the fault-based concept of liability.

However, the latter approach also has drawbacks: the need to prove all elements of the offense, although fully consistent with the theory of law, can significantly complicate the injured person's receipt of compensation for harm, owing to the peculiarities of artificial intelligence technologies. These peculiarities include the increasing autonomy of AI and the difficulty of explaining its actions, as well as the qualitatively different risks that artificial intelligence can carry.

The White Paper emphasizes that the lack of transparency of artificial intelligence complicates the detection and proof of possible violations of the law, including provisions aimed at protecting fundamental rights, the attribution of liability, and the fulfillment of the conditions necessary for victims to receive compensation for the harm caused.35

These same ideas are developed in the report of the expert group, which notes that the more complex new digital technologies become, the less those who use their functions or encounter them can understand the processes that may harm themselves or others. Algorithms are often no longer more or less readable code, but a self-taught black box that we may be able to test for its effects, but not sufficiently to understand. This makes it increasingly difficult for victims to identify such technologies even as a possible source of harm, let alone to prove why they caused it. And after the victim has successfully claimed damages from one tortfeasor, the latter may face similar difficulties.36

In turn, the problem of AI autonomy lies in the fact that the actions of AI come to depend less on the direct management or control of a person and more on the independent choice of goals, often based on self-learning processes. This problem, combined with the fact that autonomy may depend not only on the actions of the developer but also on other persons (for example, a person providing services for the maintenance of an artificial intelligence system), raises the question of the subjects of liability, the forms of their guilt, and the types and limits of civil liability.

35 White Paper on Artificial Intelligence, supra note 6, at 14.

36 Liability for Artificial Intelligence, supra note 5.

In addition, the fault-based concept of civil liability does not in itself allow the objective differences in the risks inherent in various artificial intelligence systems to be taken into account. It is obvious that artificial intelligence technologies used in an area such as health care carry greater risks to legally protected benefits than those used in the distribution of advertising.

It seems that the fault-based concept of civil liability will not allow us to fully answer the questions of who the participants in social relations complicated by artificial intelligence are, to determine the degree of their involvement in the processes of development, creation, use, possession and disposal of artificial intelligence technology, or to determine the forms of guilt and the type of civil liability.

In previous studies, we posed the following questions: How would the classic model of a civil offense, built on the principle of fault, work in relation to the liability of an AI developer? What actions of the developer should be considered culpable? When should a developer be held liable for harm regardless of fault?37

It seems that the idea of a risk-based approach to civil liability that we are developing, which is discussed below, can largely neutralize the indicated shortcomings and help find answers to the questions posed.

2. Risk-Based Approach of a Legal Mechanism to Regulate Civil Liability for Actions of Artificial Intelligence in the Russian Federation and European Union Law

The need to rely on a risk-based approach as the basis for future changes in legal regulation is noted in the White Paper.38

The legal definition of a risk-based approach (hereinafter referred to as RBA) is contained in Article 8.1 of the Federal Law "On the Protection of the Rights of Legal Entities and Individual Entrepreneurs in the Exercise of State Control (Supervision) and Municipal Control."39 It is understood as a method of organizing and exercising state control (supervision) in which the choice of the intensity (form, duration, frequency) of control measures and of measures to prevent violations of mandatory requirements is determined by the attribution of the activities of a legal entity or individual entrepreneur,

37 Алексеев А.О. и др. Подходы к гражданско-правовой ответственности разработчика технологий искусственного интеллекта: на основе классификации технологий // Информационное общество. 2020. № 6. С. 48 [Alexander O. Alekseev et al., Approaches to Civil Liability of a Developer of Artificial Intelligence Technologies: Based on the Classification of Technologies, 6 Information Society 47, 48 (2020)].

38 White Paper on Artificial Intelligence, supra note 6, at 14.

39 Федеральный закон от 26 декабря 2008 № 294-ФЗ «О защите прав юридических лиц и индивидуальных предпринимателей при осуществлении государственного контроля (надзора) и муниципального контроля» // Собрание законодательства РФ. 2008. № 52 (ч. 1). Ст. 6249 [Federal Law No. 294-FZ of 26 December 2008. On the Protection of the Rights of Legal Entities and Individual Entrepreneurs in the Implementation of State Control (Supervision) And Municipal Control, Legislation Bulletin of the Russian Federation, 2008, No. 52 (Part 1), Art. 6249].

and (or) of the production facilities used by them in carrying out such activities, to a certain risk category or a certain hazard class (category).
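Purely as an illustration of how this statutory logic operates, the following is a minimal sketch of our own (not drawn from the Federal Law itself); the category names and inspection intensities are assumptions made for the example, not statutory figures.

```python
# Hypothetical sketch of the RBA logic in Article 8.1: the intensity
# (form, frequency) of control measures is determined by the risk
# category assigned to the activity. All values below are illustrative
# assumptions, not figures from the Federal Law.

CONTROL_INTENSITY = {
    "high risk": {"form": "on-site inspection", "inspections_per_year": 2},
    "medium risk": {"form": "documentary check", "inspections_per_year": 1},
    "low risk": {"form": "no scheduled checks", "inspections_per_year": 0},
}

def control_measures(risk_category: str) -> dict:
    """Select the intensity of control measures from the assigned category."""
    return CONTROL_INTENSITY[risk_category]

print(control_measures("medium risk"))
# {'form': 'documentary check', 'inspections_per_year': 1}
```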

Despite the existence of subjective, objective and dualistic concepts of risk in legal doctrine,40 modern scholars define risk as an inalienable property of the relations that are the subject of civil law regulation, expressed in the possibility of adverse property consequences, which are distributed between the parties through various civil law mechanisms of the law of obligations or imposed on other persons when required by the goal of correcting actual inequality or by the need to support some activity of public interest.41

In economic doctrine, risk is understood as the possibility of the occurrence (realization), for the subject, of a random event caused by objectively existing uncertainty, manifested in adverse consequences characterized by a negative deviation of the actual result or event from the expected one.42

An analysis of the above views shows that both economists and lawyers understand risk as the possibility of adverse consequences; this definition can therefore be used below in formulating the authors' concept of a risk-based approach to civil liability.

In economic theory, it is noted that the idea of a risk-based approach to control over the activities of organizations is that full control is economically impractical, and the efforts of regulatory bodies should be focused on the most significant violations of legislation and on areas with a high level of violations.43

This definition, which reflects the essence of the economic content of the RBA, should be rethought using the insights of the theory of law, because the latter not only summarizes the results of research in the branch sciences but also raises them to a new, higher level of abstraction: "precisely in the sphere of abstractions, distanced from the accidental and temporary, concepts and categories are constructed that express the essential and necessary in the reality being studied."44

S.S. Alekseev understood the mechanism of legal regulation as the whole set of legal means, taken in unity, with the help of which legal influence on social relations is ensured.45

40 Болобонова М.О. История становления и развития категории «риск» в гражданском праве России // Гражданское право. 2016. № 6. C. 35 [Maria О. Bolobonova, History of the Formation and Development of the Category "Risk" in the Civil Law of Russia, 6 Civil Law 34, 35 (2016)].

41 Мартиросян А.Г. К вопросу о риске в гражданском праве Российской Федерации // Современное право. 2009. № 9. C. 64 [Artem G. Martirosian, On the Issue of Risk in the Civil Law of the Russian Federation, 9 Modern Law 60, 64 (2009)].

42 Кунин В.А., Упорова И.В. Риск-ориентированный подход контрольно-надзорной деятельности: международный опыт и особенности применения в российских условиях // Экономика и управление. 2019. № 2(160). C. 60 [Vladimir A. Kunin & Irina V. Uporova, Risk-Oriented Approach of Control and Supervisory Activity: International Experience and Peculiarities of Application in Russian Conditions, 2(160) Economics and Management 59, 60 (2019)].

43 Id.

44 Kuznetsova 2016, at 12.

45 Alekseev 1966, at 30.

According to the scientist, the mechanism of legal regulation consists of three main stages (regulation of public relations, the operation of legal norms and the implementation of subjective rights and obligations) and corresponding elements (legal norms, legal relations, acts of implementation of subjective legal rights and obligations).46

These provisions should be taken as a starting point along with the previously formulated concept of risk.

As indicated earlier in this work and in our previous studies,47 the existing approaches to civil liability for the actions of artificial intelligence have drawbacks: strict (non-fault) liability has a deterrent effect on the activities of developers, acting as a barrier to innovation (the chilling effect of liability law), while the fault-based concept is unable to provide effective legal regulation because it does not take into account the peculiarities of artificial intelligence technologies. Features such as opacity in "decision-making" and the increasing autonomy of AI carry the main characteristic that is difficult for the named concepts to cover, namely the introduction of qualitatively different risks depending on the sphere of application of artificial intelligence.

Let us illustrate the last point with an example: a fighting dog, classified by researchers and judicial practice as a source of increased danger,48 carries a threat of harm and is not under the full control of a person no matter where it is kept; the same is true of a vehicle, i.e. a device designed for the transport by road of people, goods or equipment installed on it.49

In turn, an artificial intelligence technology designed to compile the match calendar of La Liga50 will differ significantly in the level of risk it introduces from an artificial intelligence technology that controls fighter aircraft.51

46 Alekseev 1966, at 34.

47 Alexander Alekseev et al., Classification of Artificial Intelligence Technologies to Determine the Civil Liability, 1794(1) J. Phys. Conf. Ser. 012001 (2021) (Jun. 23, 2021), available at https://iopscience.iop.org/article/10.1088/1742-6596/1794/1/012001/pdf.

48 Мохов А.А., Копылов Д.Э. Псовые как объекты гражданских прав // Юридический мир. 2006. № 12. C. 37 [Alexander A. Mokhov & D.E. Kopylov, Dogs as Objects of Civil Rights, 12 Legal World 25, 37 (2006)].

49 Федеральный закон от 10 декабря 1995 г. № 196-ФЗ «О безопасности дорожного движения» // Собрание законодательства РФ. 1995. № 50. Ст. 4873 [Federal Law No. 196-FZ of 10 December 1995. On Road Safety, Legislation Bulletin of the Russian Federation, 1995, No. 50, Art. 4873].

50 Nick Friend, La Liga to employ AI to optimise fixture scheduling, SportsPro Media, 14 January 2019 (Apr. 10, 2021), available at https://www.sportspromedia.com/news/la-liga-fixtures-artificial-intelligence.

51 Детинич Г. Искусственный интеллект обучили управлять группой боевых истребителей F-16 в воздушном бою // 3D News. 23 марта 2021 г. [Gennadii Detinich, Artificial Intelligence Was Trained to Control a Group of F-16 Combat Fighters in Air Combat, 3D News, 23 March 2021] (May 5, 2021), available at https://3dnews.ru/1035586/iskusstvenniy-intellekt-obuchili-upravlyat-gruppoy-boevih-istrebiteley-f16-v-vozdushnom-boyu.

Consequently, the key element, in our opinion, of a risk-based approach to the regulation of civil liability for the actions of artificial intelligence is the division of the spheres of AI application according to the category of risk inherent in the corresponding category of AI.

Thus, the risk-based approach to civil liability for the actions of artificial intelligence should be understood as a mechanism of legal regulation of public relations complicated by AI, in which its elements (legal norms, legal relations, and acts of implementation of subjective legal rights and obligations) are determined by assigning the sphere of AI application to a specific risk category.

The above definition requires the disclosure of the elements of a risk-based approach to civil liability for the actions of artificial intelligence, which are legal norms, legal relations, and acts of implementation of subjective legal rights and obligations (see Figure 1).

Fig. 1. Elements of a risk-based mechanism of legal regulation of civil liability for the actions of artificial intelligence

Legal norms

By these, in accordance with the approach generally accepted in Russian legal doctrine, we mean generally binding rules of conduct, for deviation from which civil liability is established. As examples of such norms, one can cite the provisions of the National Strategy of the Russian Federation for the Development of Artificial Intelligence, the Concept, the White Paper, etc.

In the considered element of the risk-based approach, it is necessary to determine the legal facts associated with the emergence, change or termination of legal relations in the field of civil liability for the actions of artificial intelligence, the subjects of those legal relations, and the scope of their rights and obligations.

As an example of a legal fact connected with the emergence of such legal relations, one can point to the failure to equip the artificial intelligence technology with an objective control device ("black box"); with their change, the establishment of the fact of gross negligence of the victim in the harm caused; and with their termination, compensation for the damage caused.

The subjects of these legal relations are the developer, the producer, the authorized body, the seller, the person providing services for the operation and (or) maintenance of the technology, the owner, and the user.

In our opinion, the participation of an authorized body is possible as a person confirming the compliance of the created artificial intelligence technology with safety requirements and project documentation. Such participation is necessary for the riskiest areas and/or artificial intelligence technologies. As a legal consequence of the issuance of a law enforcement act in relation to an AI technology that does not meet safety requirements and design documentation, the state could be brought to subsidiary liability for the debts of the inflictor of harm.

We believe that, due to the fundamental differences between artificial intelligence technologies, the scope of the rights and obligations of all the named subjects cannot be covered within the framework of this work and should be formulated for each of them separately, through a separate study that takes into account both the artificial intelligence technologies and their areas of application.

However, as a general consideration, we consider it necessary to highlight, as one of the fundamental obligations of the developer, the need to ensure the storage of the information used in creating and changing the parameters of the artificial intelligence technology, as well as access to it for interested persons. One key feature of the obligations of the inflictors of harm may be their joint and several liability for the harm caused, while the obligation to compensate for harm imposed on the developer may be limited to the amount of remuneration received by him under the relevant contract for the creation of the artificial intelligence technology.

Among promising sources of legal norms, one can name the adoption of mandatory standards or standard instructions for certain artificial intelligence technologies used in certain areas.

Legal relations

These should be understood as the social relations developing in the sphere of civil liability for the actions of artificial intelligence, the essence of which is formed by the subjective rights and obligations of the persons involved. In accordance with the above type of legal norms, aimed primarily at encouraging the development of social relations, the content of such legal relations should first of all be reduced to the imposition of active legal obligations, expressed in the performance of positive actions, and to subjective rights consisting in the possibility of demanding appropriate behavior from the obliged persons.

In the context of this study, such a norm might be the consolidation of the obligation of the developer of an artificial intelligence with a high degree of autonomy to notify the authorized body of its readiness for acceptance.

Acts of implementation of subjective legal rights and obligations

It is proposed to understand them as documents in written or electronic form aimed at creating, changing or terminating obligations in the field of civil liability.

Such acts, in turn, are divided into:

1.1. Regulatory acts. With their help, on the basis of the norms of law, individual regulation of social relations in general, and in the field of civil liability for the actions of artificial intelligence in particular, is achieved. As an example, an agreement on the creation of an artificial intelligence technology can be cited. In the context of this work, the most interesting are the provisions that allow the individualization of liability.

1.2. Enforcement acts. With their help, the implementation of subjective legal rights and obligations in the field of civil liability for the actions of artificial intelligence is ensured on the basis of the authoritative, coercive activities of state bodies. As an example, one can name a judicial act on the recovery of funds from the tortfeasor in favor of the victim of the actions of an artificial intelligence technology.

These elements of the mechanism of a risk-based approach to civil liability should be subordinated to certain principles, by which theorists understand the initial normative guiding foundations enshrined in law that characterize its content: principles are what permeates the law, revealing its content in the form of initial, cross-cutting "ideas," its main premises and regulatory guidelines.52

Like any mechanism of legal regulation, the risk-based approach to civil liability as a mechanism for the legal regulation of public relations complicated by AI should be based, in addition to intersectoral principles such as legality,53 equality, and the observance and protection of human and civil rights and freedoms,54 and the principles of civil law, on the following special principles:

1. Proportionality. The principle under consideration implies that the restrictions imposed by the rules of law on social relations complicated by AI, through the establishment of duties and prohibitions, should be proportionate to the goal pursued. In order for the elements of the RBA mechanism to comply with the principle of proportionality, it is proposed to answer a series of questions in sequence. A negative answer to any of the questions means that the test for compliance of an element with the principle of proportionality is failed, and the assessed (tested) prohibition should be excluded from

52 Алексеев С.С. Проблемы теории права: основные вопросы общей теории социалистического права: Курс лекций: в 2 т. Т. 1 [Sergei S. Alekseev, Problems of the Theory of Law: Basic Questions of the General Theory of Socialist Law: A Course of Lectures. In 2 vols. Vol. 1] 102 (1972).

53 Alekseev 1966, at 32.

54 Тарасов Д.Ю. Принципы правового регулирования экономических отношений как условие достижения эффективности норм права // Правопорядок: история, теория, практика. 2016. № 2(9). C. 84 [Denis Iu. Tarasov, Principles of Legal Regulation of Economic Relations as a Condition for Achieving the Effectiveness of the Rule of Law, 2(9) Law: Order, History, Theory, Practice 82, 84 (2016)].

the legal regulation mechanism (see Figure 2). It is important to note that, for the purposes of this work, the examples given are not intended to demonstrate the implementation of the sanction of a legal norm, but rather to illustrate the conceptual basis of the test in question.

[Figure 2 depicts a flowchart in which the questions are asked in sequence, each answered "yes" before moving on: "Is this measure suitable to achieve the goal that is being pursued?" followed by "Is this measure necessary to achieve the goal that is being pursued?"]

Fig. 2. Test of the norm of law for compliance with the principle of proportionality.

1.1. Question number 1: "Is this measure suitable for achieving the goal that is being pursued?"

The answer to this question is intended to assess the effectiveness of the adopted element through the presence of a causal relationship between it and the desired legal effect of the mechanism of legal regulation of civil liability.

Let us illustrate this with an example. A legislative goal has been set to reduce the number of cases in which funds awarded to individuals as compensation for harm caused by vehicles driven by artificial intelligence systems remain unpaid. To achieve this goal, a legislative decision is proposed to oblige the developer of the corresponding AI system, before its implementation in the vehicle, to make a security payment into a fund specially created for this purpose. Assessing this proposal, the legislator needs to evaluate it from the point of view of the principle of proportionality and answer the question of how suitable a measure imposing significant costs on the developer of the technology is for reducing the number of instances of harm. In the course of such an analysis, the legislator should come to the conclusion that the answer to the question posed is negative, if only because the proposed measure is addressed to a person whose technology, although it forms part of a vehicle controlled by an artificial intelligence system, does not cover the operation of the vehicle as a whole. Making the developer responsible is therefore inappropriate. A negative answer to this question indicates that the measure does not meet the criterion of proportionality and should not be included in the rule of law as an element of the legal regulation mechanism.

1.2. Question number 2 reads as follows: "Is this measure necessary to achieve the goal that is being pursued?"

The answer to this question is intended to assess the feasibility of introducing into the mechanism of legal regulation the element being assessed, which has successfully passed the previous phase of the test for compliance with the principle of proportionality.

Let us illustrate this with an example. A legislative goal has been set to reduce the number of instances of harm to the life and health of citizens caused by vehicles controlled by artificial intelligence systems. To achieve this goal, a legislative decision is proposed to establish a mandatory standard for the materials of the structural elements of vehicles controlled by artificial intelligence systems, obliging the manufacturer to change the existing technological solutions used in the manufacture of vehicles in order to reduce the number of instances of harm. In the course of such an analysis, the legislator, taking into account all factors (the results of experiments carried out with the new materials of the structural elements of vehicles, the amount of additional costs incurred, the time it would take manufacturers to change technology, etc.), should come to a conclusion as to whether there is a need for the proposed legislative solution. A negative answer to this question indicates that the measure does not meet the criterion of proportionality and should not be included in the rule of law as an element of the legal regulation mechanism.

1.3. Question number 3 reads as follows: "How justifiably does this measure restrict other rights and legitimate interests in achieving the goal that is being pursued?"

The answer to this question is intended to assess the negative consequences, in the form of encroachment on other legally protected interests, that would follow the introduction into the legal regulation mechanism of the element under assessment, which has successfully passed the previous phases of the test for compliance with the principle of proportionality.

Let us illustrate this with an example. A legislative goal has been set: to achieve a uniform appearance for all types of vehicles controlled by artificial intelligence systems. To achieve this goal, a legislative decision is proposed to oblige all owners of such vehicles to paint them a single color. In evaluating this proposal against the principle of proportionality, the legislator must answer the question of how justifiably the measure restricts other rights and legitimate interests in order to achieve the goal being pursued. In the course of such an analysis, the legislator should conclude that the answer to the question posed is negative, if only because the proposed measure violates the principle of inadmissibility of arbitrary interference in private affairs established by Article 1 of the Civil Code. Obtaining a negative answer to this question indicates that the measure does not meet the criterion of proportionality and should not be included in the rule of law as an element of the legal regulation mechanism.
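Taken together, the three questions form a strictly sequential, conjunctive test: a single negative answer at any stage removes the measure from the mechanism of legal regulation. A minimal sketch of this decision procedure follows (purely illustrative; the names and the boolean inputs are our assumptions, not elements of the proposed mechanism):

from dataclasses import dataclass

@dataclass
class Measure:
    description: str
    suitable: bool    # Q1: is there a causal link to the desired legal effect?
    necessary: bool   # Q2: is the measure needed to achieve the goal?
    justified: bool   # Q3: is the restriction of other rights justified?

def passes_proportionality_test(m: Measure) -> bool:
    # The test is conjunctive: any negative answer fails the whole test.
    return m.suitable and m.necessary and m.justified

# The developer security-payment proposal from the first example fails at Q1,
# so it must be excluded from the mechanism of legal regulation.
proposal = Measure("security payment by the AI developer",
                   suitable=False, necessary=True, justified=True)
assert not passes_proportionality_test(proposal)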

2. Flexibility as a principle of the mechanism of a risk-based approach to civil liability for the actions of artificial intelligence. The principle under consideration means that the elements of the mechanism should change as AI development technologies change, through an appropriate reassessment. Thus, until the emergence of "strong" artificial intelligence technologies possessing an independent property interest,55 it seems premature to raise the question of the legal personality of AI.

In addition to the named elements and principles of a risk-based approach to civil liability, an important part of it is the categorization of the risks that artificial intelligence technologies bear.

Within the framework of such a categorization of risks, artificial intelligence technologies should be broken down into hazard classes (risk categories).


After that, it is necessary to determine the risk factors, understood as prerequisites or essential components whose presence is sufficient for events or actions to occur whose consequence may be harm to legally protected benefits.

Further, such risk factors should be divided into two groups, where the first affects the amount of damage caused, and the second affects the likelihood of consequences.

The consequence of such mental operations should be a risk matrix, in which the final risk value is presented as the probability of negative consequences occurring, multiplied by the amount of harm caused and the weight of the benefit protected by law.

55 Alekseev et al. 2020, at 49.
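Written out as a worked formula (the notation here is ours, added purely for illustration), the final risk value for a given sphere of AI application would be

\[ R = p \cdot H \cdot w, \]

where \(p\) is the probability of negative consequences occurring, \(H\) is the amount of harm caused, and \(w\) is the weight of the benefit protected by law. For example, under the assumed values \(p = 0.001\), \(H = 1{,}000{,}000\) and \(w = 2\), the resulting risk value is \(R = 2{,}000\).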

This categorization will make it possible, on the one hand, to reduce the overall level of legal burden on participants in public relations complicated by artificial intelligence and, on the other hand, to increase the efficiency of the risk-based approach to civil liability for its actions.
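As a minimal illustration of how such a matrix might assign hazard classes, the sketch below computes the risk value and buckets it; the function names, threshold values, and sample figures are our assumptions, not part of the article's proposal:

def risk_value(probability: float, harm: float, weight: float) -> float:
    # Final risk value: probability of negative consequences occurring,
    # multiplied by the amount of harm caused and the weight of the
    # legally protected benefit.
    return probability * harm * weight

def hazard_class(r: float) -> str:
    # Threshold values are purely illustrative.
    if r >= 10_000:
        return "high risk"
    if r >= 1_000:
        return "medium risk"
    return "low risk"

# Example: an AI application with a 0.1% chance of causing 1,000,000 units
# of harm to a benefit weighted at 2 falls into the medium-risk category.
print(hazard_class(risk_value(probability=0.001, harm=1_000_000, weight=2)))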

The practical consequence of introducing a risk-based approach to civil liability for the actions of artificial intelligence should be, on the one hand, the saturation of high-risk zones with legal norms that impose active duties on participants, expand the list of such participants by including an authorized body called upon to check the compliance of AI with safety requirements and design documentation, and broaden the list of legal facts entailing the onset of civil liability, and, on the other hand, a correspondingly lighter regulatory design in areas falling into the medium- and low-risk categories.

Conclusion

Analysis of strict (non-fault) liability, fault liability, and the risk-based approach within the legal mechanism to regulate civil liability for actions of artificial intelligence in Russian Federation and European Union law led to the following conclusions.

1. Strict (non-fault) liability does not fully comply with the theory of law, since holding a person civilly liable for the mere fact of possessing an object corresponds neither to the essence (conviction), nor to the goals (upbringing), nor to the functions (prevention and punishment) of legal responsibility.

2. Analysis of the Russian legal doctrine showed that researchers are inclined to apply the provisions of Article 1079 of the Civil Code to civil liability for the actions of artificial intelligence.

3. Methodological shortcomings inherent in the strict (non-fault) concept of civil liability explain its main disadvantage: the threat of imposing on the owner of the artificial intelligence technology the obligation to compensate for the harm caused has a deterrent effect on developers' activity, acting as a barrier to the development of innovations (the chilling effect of liability law).

4. The fault concept of civil liability, although fully consistent with the theory of law, can significantly complicate the injured person's recovery of compensation for harm caused, owing to the peculiarities of artificial intelligence technologies. These peculiarities, in the light of the question raised, include first of all the autonomy of AI and the difficulty of explaining the actions it performs, as well as the qualitatively different risks that artificial intelligence can carry.

5. The alternatives to civil liability proposed by foreign scholars in the form of no-fault compensation schemes arose in response to the shortcomings of the existing concepts of civil liability for the actions of artificial intelligence identified in this study, but such schemes cannot achieve the goals of protecting law and order and educating citizens.

6. A risk-based approach to civil liability for the actions of artificial intelligence is a mechanism of legal regulation of social relations complicated by AI, in which its elements (legal norms, legal relations, acts of implementation of subjective legal rights and obligations) determine the assignment of the sphere of AI application to certain risk categories.

7. A legal norm, as an element of the risk-based approach for the actions of artificial intelligence, provides generally binding rules of conduct, deviation from which gives rise to civil liability.

8. Legal relations, as an element of the risk-based approach for the actions of artificial intelligence, have as their essence the subjective rights and obligations of the persons involved.

9. Acts of implementation of subjective legal rights and obligations, as an element of the risk-based approach to the actions of artificial intelligence, are documents in written or electronic form directed at the creation, change, or termination of obligations in the field of civil liability. Within this element, it is necessary to single out regulatory acts, by means of which, on the basis of the norms of law, individual regulation of public relations in general and in the field of civil liability for the actions of artificial intelligence in particular is carried out, as well as law-enforcement acts, by means of which the exercise of subjective legal rights and obligations in the field of civil liability for the actions of artificial intelligence is ensured on the basis of the authoritative, coercive activity of state bodies.

10. Along with cross-sectoral and civil law principles, the risk-based approach to civil liability is based on the principles of proportionality and flexibility.

11. The principle of proportionality implies that the introduced norms restrict rights in public relations complicated by AI only to the extent necessary. To ensure that the elements of the mechanism are consistent with this principle, answering a series of questions is proposed. A negative answer to any of the questions means that the element fails the test for compliance with the principle of proportionality, and the assessed (tested) measure should be excluded from the mechanism of legal regulation.

12. Flexibility as a principle of the mechanism of the risk-based approach to civil liability for the actions of artificial intelligence means that its elements change in response to changes in AI development technologies, through an appropriate reassessment.

13. An important part of the risk-based approach to civil liability for the actions of artificial intelligence is the categorization of the risks that artificial intelligence technologies bear. Within the framework of such a categorization of risks, artificial intelligence technologies should be broken down into hazard classes (risk categories). The spheres of AI application can be chosen as the basis for this categorization. The final risk value should be calculated as the probability of the occurrence of negative consequences, multiplied by the amount of harm caused and the weight of the benefit protected by law.

14. Such categorization will, on the one hand, reduce the overall level of legal burden on participants in public relations complicated by artificial intelligence and, on the other hand, increase the efficiency of the risk-based approach to civil liability for its actions.

References

Алексеев С.С. Механизм правового регулирования в социалистическом государстве [Alekseev S.S. The Mechanism of Legal Regulation in a Socialist State] (1966).

Alekseev A. et al. Classification of Artificial Intelligence Technologies to Determine the Civil Liability, 1794(1) J. Phys. Conf. Ser. 012001 (2021). https://doi.org/10.1088/1742-6596/1794/1/012001

Barfield W. et al. The Cambridge Handbook of the Law of Algorithms (2021). https://doi.org/10.1017/9781108680844

Schellekens M. No-Fault Compensation Schemes for Self-Driving Vehicles, 10(2) L. Innov. Tech. 314 (2018). https://doi.org/10.1080/17579961.2018.1527477

Schellekens M. Self-Driving Cars and the Chilling Effect of Liability Law, 31(4) Comput. L. Secur. Rev. 506 (2015). https://doi.org/10.1016/j.clsr.2015.05.012

Information about the authors

Ksenya Kondrateva (Perm, Russia) - Associate Professor, Department of Civil and Business Law, National Research University Higher School of Economics (38 Studencheskaia St., Perm, 614070, Russia; e-mail: kskondrateva@hse.ru).

Timur Nikitin (Dover, DE, United States) - Lawyer, Geo Track Technologies Inc. (8 The Green, STE A, Dover, DE, 19901, United States; e-mail: nikitintimur@gmail.com).
