
Journal of Siberian Federal University. Humanities & Social Sciences 2022 15(8): 1094-1107

DOI: 10.17516/1997-1370-0912 EDN: TZPSKY

UDC 343.222.43:004.056.5:004.89(47+57)(510)

Criminal Liability for Actions of Artificial Intelligence: Approach of Russia and China

Pang Dongmei a and Nikolay V. Olkhovik* b, c

a Law Institute, Institute of Intellectual Property,
Chinese-Russian Center of Comparative Law of Henan University,
Henan, People's Republic of China

b Tomsk State University

Tomsk, Russian Federation

c Research Institute of the Federal Penitentiary Service of Russia, Moscow, Russian Federation

Received 14.10.2021, received in revised form 10.12.2021, accepted 02.05.2022

Abstract. In the era of artificial intelligence (AI), it is necessary not only to define precisely in national legislation the extent of protection of personal information and the limits of its rational use by others, to improve data algorithms, and to create ethics committees to control risks, but also to establish precise liability (including criminal liability) for violations related to AI agents.

Under the existing criminal law of Russia and of the People's Republic of China, AI crimes can be divided into three types: crimes that can be regulated by existing criminal laws; crimes that are regulated inadequately by existing criminal laws; and crimes that cannot be regulated by existing criminal laws.

The solution to the problem of criminal liability for AI crimes should depend on the capacity of the AI agent to influence a human's ability to understand the public danger of a committed action and to guide his act or omission. If a machine integrates with an individual but does not influence his ability to recognize or to make decisions, the individual is liable to prosecution. If a machine partially influences a human's ability to recognize or to make decisions, engineers, designers and the units of combination should be prosecuted according to the principle of relatively strict liability. When an AI machine integrates with an individual and controls his ability to recognize or to make decisions, the individual should be released from criminal prosecution.

Keywords: artificial intelligence; criminal liability; prosecution; principle of relatively strict liability.

This article was prepared within the project of the National Social Science Fund of the People's Republic of China "Institute of the General Part of Russian Criminal Law" (project number: 16AFX008) and within the program of major projects in the sphere of philosophy and social science at Henan University (project number: 2019ZDXM005). In addition, this article is funded by the National Foundation for Study Abroad.

© Siberian Federal University. All rights reserved

* Corresponding author. E-mail address: lawtsuolkhovik@mail.ru

Research area: law.

Citation: Pang, Dongmei, Olkhovik, N.V. (2022). Criminal liability for actions of artificial intelligence: approach of Russia and China. J. Sib. Fed. Univ. Humanit. Soc. Sci., 15(8), 1094-1107. DOI: 10.17516/1997-1370-0912


Introduction

With the development of cybernetics, artificial intelligence (AI) is becoming an ever more important object of legal regulation (Kibal'nik, Volosyuk, 2018) and legal research (Shestak, Voevoda, 2019) worldwide, including in Russia and China.

Federal Law № 172 of 28 June 2014 "On strategic planning in the Russian Federation", the Russian Federation Presidential Decree of 7 May 2018 № 204 "On national objectives and strategic concerns of the development of the Russian Federation for the period up to 2024", the Presidential Decree of 9 May 2017 № 203 "On the Strategy of the development of the information society in the Russian Federation for 2017-2030", the Presidential Decree of 1 December 2016 № 642 "On the Strategy of scientific and technological development of the Russian Federation", and the Presidential Decree of 10 October 2019 № 490 "On the development of artificial intelligence in the Russian Federation" define the directions of informatization in the Russian Federation. The latter establishes the National Strategy for AI Development in the Russian Federation up to 2030.

The President of the Russian Federation V. Putin, addressing scientists, engineers and representatives of high-technology business, stressed the necessity of holding leadership in the AI sphere: "Comfortable and safe cities, accessible and qualified medicine, education, modern logistics and reliable traffic infrastructure, space exploration, exploration of the World Ocean and, finally, national defense capability: the development of all these spheres depends largely on our success in the AI sphere now and in the nearest future. To ignore these changes, to reject them, means to devaluate and lose existing opportunities, which can be great today but tomorrow can rapidly become obsolete or be zeroed out. Artificial intelligence is a resource of extreme power. Those who own it will break forth," underlined the President of the Russian Federation (Putin, 2019). The main principles of the development and use of AI technologies, observance of which is compulsory in carrying out the National Strategy for AI Development in the Russian Federation, are the protection of human rights and freedoms, safety, transparency, technological sovereignty, integrity of the innovation cycle, rational economy and maintenance of a competitive environment.

On 5 March 2017, Premier of the Chinese State Council Li Keqiang, in the Report on the government's work, pointed firmly at supporting AI development, underlining: "we will accelerate the elaboration and transformation of new materials, artificial intelligence, electronics, biopharmaceutics, fifth-generation mobile communications and other technologies" (Li Keqiang, 2017). On 8 July 2017 the Chinese State Council adopted the "New Generation Artificial Intelligence Development Plan", which clearly formulated requirements for the creation of a legal, ethical and policy framework for AI. In this official document it was underlined that the important principles of legislation on AI are the principle of protection of human interests, the principle of clarity and the principle of liability. Consequently, the formation of a corresponding principle of criminal liability is an important part of the creation of criminal regulation related to artificial intelligence. On 20 July 2017 the Chinese State Council published this plan, suggesting "to build initial advantages of AI development in China, to accelerate the creation of an innovative country and a world scientific and technological power" (http://www.gov.cn/zhengce/content/2017-07/20/content_5211996.htm).

In both Russia and China, attempts have been made to use artificial intelligence even in the judicial sphere. Chairman of the Council of Judges of the Russian Federation V. Momotov, speaking at the plenary meeting of an international conference in Qatar on "Prospects for the use of artificial intelligence in the judicial system of the Russian Federation", underlined that the introduction of artificial intelligence into the judicial system may provide: a) improvement of the quality and effectiveness of judicial activity based on decision-support systems for courts, for example, sentencing-support systems in criminal proceedings, natural language processing systems that recognize the general sense of a text and distinguish its key theses, speech and video recognition for marking up audio and video records of court trials, and computer-assisted preparation of draft judicial acts; b) more efficient judicial protection of the rights and legal interests of citizens, organizations and state authorities (based on the use of intellectual assistants for trial participants and the extension of remote participation in trials through biometric identification of citizens); c) reduction of conflicts and raising of legal consciousness by means of expert systems for predictive analysis of trial outcomes; d) creation of systems of predictive analysis of changing caseloads depending on legislative changes (Momotov, 2019).

President of the Supreme People's Court of China Zhou Qiang, speaking at the "Intellectual Courts" forum on problems of the rule of law on the Internet at the third World Internet Conference in 2016, said that he was going "to promote actively the application of artificial intelligence in the judicial sphere" (Deng Heng, 2017). Simultaneously, technological companies began to join their efforts with courts all over the country to promote the application of AI in justice. For example, the high-technology company iFLYTEK Co., Ltd. and the high people's courts of three provinces and one city (Jiangsu, Zhejiang, Anhui, Shanghai) signed an agreement on deep strategic cooperation in the Yangtze River Delta in the sphere of "Artificial intelligence + court" (http://www.ah.xinhuanet.com/2018-06/07/c_1122949446.htm). Within this cooperation, the Shanghai High People's Court elaborated the "Shanghai High People's Court intelligent assistive case-handling system for criminal cases" (System 206) (Yan Jianqi, 2018).

At the same time, Russian and Chinese legal communities conduct comprehensive research on legal issues related to artificial intelligence. Analyzing the existing literature of Russia and China, the authors show that AI research in the legal sphere is today concentrated on the following aspects: 1) definition of the notion of artificial intelligence (Arhipov, Naumov, 2017; Ponkin, Red'kina, 2018; Vasil'ev, Shpopper, Mataeva, 2018); 2) study of AI's influence on the legal system as a whole and on its separate branches (Yu Chengfeng, 2018; Li Sheng, 2018); 3) study of AI legal capacity (Yastrebov, 2017; Wu Xiyu, 2018; Shestak, Volevodz, 2019); 4) possible AI influence on branch law, for example, the legal basis of liability for damage caused during the operation of an autonomous car (Churilov, 2018), doctrinal criminal law questions (Kibal'nik, Volosyuk, 2018), the subject of AI crime and criminal liability (Mosechkin, 2019), legal issues related to the application of criminal legislation to crimes against property (Lin Shaowei, 2018; Jiang Bixin, Zheng Lihua, 2018; Wu Yunfeng, 2018), the system of punishment (Liu Xianquan, 2018; Hisamova, Begishev, 2019) and others. There are not so many specialized studies related to the problem of criminal liability for AI crimes.

Criminal risks of AI application

AI based on digital technologies is one of the defining tendencies of the future society. AI is usually divided into two types: strong AI and weak AI. Strong AI, having powerful mathematical and logical abilities, relies on big data, the capacity to learn and algorithms, and moves towards a human-like model; regarding this AI type, in the future we should consider the problem of its structured coexistence with humans in society. Weak AI, by contrast, is focused on narrow tasks and carries out particular human tasks under data guidance; an autopilot, for example, is typical weak AI (Zheng Ge, 2017). Both strong and weak AI need big data, and this entails risks and problems. Consequently, with the rapid development and widespread application of AI in different spheres of life, negative problems arise, such as the social danger associated with excessive or incorrect use of AI technologies and the social damage associated with "autonomous" AI decisions made with the use of "computer-assisted instruction" and "deep learning" technologies.

The problem that influences humans the most is that artificial intelligence may progress to such a degree that it forms its own consciousness and can damage itself or a human. Scientists have predicted that "during the AI era people will lose their sacred status, they will become animals captured by robots, and they can be murdered by robots at the robots' own wish" (Yuval Noah Harari, 2017). Analyzing real cases in which robots damaged themselves or other people, we can see that such a problem exists. For example, on 12 November 2013 in Austria there was an incident of "suicide" of a robot cleaner, as a result of which the owner's house burnt down; the reason for the Roomba's "suicide" was reported to be unbearably difficult housework (www.tanling.com/archives/1921.html). On 18 November 2016, at the International High-Tech Fair in China (Shenzhen), the first incident of a robot injuring a person in China took place as a result of a sudden malfunction: the robot named "Fabo", in the absence of proper instructions, broke the glass of an exhibition booth and injured a visitor (Yao Wanqin, 2019).

Obviously, criminal risks created in human society by AI technologies are increasing quite fast. At present (i.e. in the era of weak AI), criminal risks related to artificial intelligence can be divided into two categories. The first is criminal risks emerging from the use of intellectual robots by individuals, including cases when an individual, by negligence, did not foresee that intellectual robots could cause damage, but these actions had serious consequences for society, or when an individual anticipated that robots could cause damage but carelessly counted on preventing it, and these actions had socially dangerous results. The second is criminal risks related to the use of intellectual robots with the intention to commit a crime.

Serious and dangerous consequences during the normal use of intellectual robots are not unusual. For example, in 2016 in Handan, Hebei, China, a self-driving Tesla in "highway driving" mode failed to distinguish another car according to its program settings, and as a result the driver died. In 2018 in Arizona, U.S., a pedestrian was struck by an autonomous Uber car. In 2015 an employee of Volkswagen was grabbed by a robot and crushed against a metal plate while working on a production line; he suffered greatly and died from his injuries in hospital (http://news.mydrivers.com/1/437/437018.htm).

There are also many cases in which a person uses an intellectual robot to commit a crime. For example, a person uses a pilotless vehicle with an IR camera to search for cannabis farms and then manipulates pilotless aerial vehicles to commit cannabis theft. There are also cases when people equipped pilotless aerial vehicles with guns, loudspeakers and other devices, manipulating them to rob pedestrians (http://digi.163.com/14/0418/17/9Q4OQ6IQ00162OUT.html). AI technologies can help in the military sphere to clear mines and to provide security for people, but if they are used by terrorists, they can lead to extremely dangerous consequences.

Therefore, AI development and the extensive use of autonomous devices lead to technical and ethical problems, including problems of the legal regulation of relations between a robot and a human, and between a robot and a robot (Cukanova, Skopenko, 2018). Based on real cases of artificial intelligence inflicting injuries on humans, as well as cases that may happen in the future, lawyers should consider the use of criminal norms to prevent AI offences against human interests.

Legal regulation of AI crimes in Russia and China

In this era of weak AI, intellectual robots may act only as an extension of the human body and mind, performing actions within programs elaborated for the realization of human will and consciousness. At the current level of development of AI technologies, the existing criminal norms of Russia and China can regulate the majority of crimes related to artificial intelligence, but some provisions of the Criminal Code of the Russian Federation and the Criminal Code of the People's Republic of China are too blurred. These provisions need to be explained by means of judgments, or new criminal acts arising in the AI era need to be criminalized; it is necessary to restore the balance between the rapid development of technologies and the relative stability of the Criminal Code. Under the existing criminal law of Russia and of the People's Republic of China, AI crimes can be divided into three types: crimes that can be regulated by existing criminal laws; crimes that are regulated inadequately by existing criminal laws; and crimes that cannot be regulated by existing criminal laws.

1. AI crimes that can be regulated by existing criminal laws. These are AI crimes that can actually be regulated by the existing criminal laws of the Russian Federation and of the People's Republic of China, or that can be regulated with only a judicial explication aimed at a strict definition of the sphere and application of provisions of the Criminal Code. For example, in "the first case of AI use for committing a crime in China" (http://www.xinhuanet.com/local/2017-09/26/c_1121726167.htm), criminals, using the capacity of an intellectual robot to learn, programmed it to distinguish effectively the identification codes of image data. This skill helped the criminals to gain access to the personal accounts and passwords of users on different websites (http://www.sohu.com/a/202973604_65917). Essentially this is illegal access to the personal information of citizens; it is regulated by part 3 of art. 253 of the Criminal Code of the People's Republic of China ("theft or illegal access to personal information through other means"), which is why these actions can be qualified as illegal access to personal information. In the Russian Federation, illegal access to computer information containing personal information on private life, committed willfully and knowingly for mercenary or personal purposes and causing damage to the rights and legal interests of citizens, is subject to accumulative sentencing under art. 272 and art. 137 of the Criminal Code of the Russian Federation.

2. Some traditional crimes have acquired new characteristics in the AI context, and criminal law cannot cope with them effectively. In this era of weak AI, the behavior of intellectual robots is controlled by programs elaborated and created by humans; these programs carry out human will and consciousness and are a continuation of the human body and mind. In contrast to traditional criminal tools, they are an extension not only of the human body but also of human intelligence, and this "intelligence" leads to certain changes in the initial patterns of human behavior. Previously, in the case of a traffic accident that happened while driving a car (supposing that the driver is fully or partially responsible for the accident), when the accident was caused by the quality of the car, responsibility for the product's quality was carried by the automobile manufacturer or designer; when the accident was caused by a driving violation, responsibility was carried by the driver. Nowadays an autonomous car still requires the participation of a driver in the process of driving, which is why the driver may be held responsible for an accident according to the mentioned model of responsibility. However, suppose the autopilot develops to the stage of fully automatic driving, in other words, driving without the participation of a driver, and this autopilot causes an accident. If the accident is caused by the quality of the autonomous car, we can still charge the automobile manufacturer or designer with it; but if the accident is caused by a driving violation, we cannot charge the driver with it, because the autonomous car belongs to weak intelligent robots and cannot be responsible for its actions due to the lack of its own will and consciousness. Criminal liability for a traffic accident, which should be imposed on a driver, cannot be transferred to an autonomous car. In this case, is it possible to make the automobile manufacturer or the user of the autonomous car liable for the traffic accident? If the answer is positive, are they liable for the quality of the car or for the traffic accident? There is no clear regulation of these questions in the existing criminal legislation of Russia and China.

3. AI crimes that cannot be regulated by existing criminal laws present a serious danger to society. These actions cannot be regulated by the current Criminal Code of the Russian Federation or the Criminal Code of the People's Republic of China for several reasons.

Firstly, the actus reus (elements of crime) provided by criminal law does not cover new forms of behavior in the AI era. For example, with the development of AI technology and its combination with biology and neurology, AI prosthetic devices have been produced that help a variety of disabled persons to solve their problems and reduce their suffering. If a person destroys an AI prosthetic device that functions well with a human body, this can cause great physical and mental suffering to its owner. Consequently, if we consider such an AI prosthetic device merely as the property of a person, so that its damage is just a destruction of property, this does not make sense; meanwhile, if we consider the AI prosthetic device as a part of the human body, it seems highly probable that damage to the prosthetic device leads to a health problem of the person. However, according to the Criminal Code of the Russian Federation and the Criminal Code of the People's Republic of China, willful damage to health is considered a crime only when the bodily injuries amount to petty bodily harm (in Russia) or minor personal injury (in China); damage to an AI prosthetic device that did not cause a short-term health problem or a petty loss of ability to work may not be considered minor personal injury or petty bodily harm, and so could not be considered a crime. At the same time, with the development of AI technologies, AI prosthetic devices are becoming cheaper, and their price may not reach the threshold of considerable damage required as the actus reus (elements of crime) of deliberate destruction or damage of property. In other words, the existing criminal laws of Russia and China do not define liability for damage to the AI prosthetic devices of other people.

Secondly, the current Criminal Code of the Russian Federation and the Criminal Code of the People's Republic of China contain no elements of crime covering some new forms of behavior that have appeared in the AI era. For example, Microsoft created the AI chatter bot Tay (designed as a 19-year-old girl), which had the capacity to learn from interacting with human users. Some users figured out how it worked and, exploiting the mechanism of its learning, began tweeting politically incorrect phrases to it. Microsoft shut down the service the next day because of inflammatory and offensive tweets, including racist ones. Microsoft had released this chatter bot for entertainment, but the deliberately offensive behavior of other users turned Tay into an "instrument" of racist statements. According to the Criminal Code of the Russian Federation and the Criminal Code of the People's Republic of China, the extensive use of such statements can be considered a crime. Is it a corresponding deliberate crime when a person deliberately teaches an AI bot to pronounce such words? In addition, if the creators of AI robots did not manage to establish functions to stop this behavior (i.e. the AI robots cannot do it automatically), is it possible to consider these creators guilty of committing a crime by negligence? If so, how should we build and improve the corresponding legislation in this sphere?

To sum up, in the course of AI development more and more similar questions arise that should be answered, which is why it is necessary to improve the provisions of the current Criminal Code of the Russian Federation and the Criminal Code of the People's Republic of China to meet the requirements of the AI era.

Influence of the existence of AI on the traditional definition of criminal liability

Criminal law should respond to the changes of the time. However, faced with various criminal risks related to AI technologies, current criminal legislation can hardly manage all the new problems. It is therefore necessary to create a new conception of criminal law and to improve legislation and judicial practice, adjusting them to the requirements of the time. Several scientists suppose that there is no purpose in discussing the criminal liability and legal capacity of intellectual robots, because they are just an additional instrument for a human to deal with work (Shi Fang, 2018). This viewpoint is based on the fact that intellectual robots today are weak AI robots; they are used as part of a process created and designed by a human to embody human will, and therefore only users can be subjected to criminal liability. In our opinion, whether an intellectual robot is the subject of a crime or an instrument of a crime, it can influence the definition of criminal liability. This is revealed in the following.

Firstly, intellectual robots used in the commission of a crime influence criminal liability for the committed crime. The judgment of the Supreme People's Procuratorate of the People's Republic of China of 18 April 2008 on the classification of the use of another person's credit card in a cash machine (ATM) states that this action is covered by section 3 of part 1 of art. 196 of the Criminal Code of the People's Republic of China ("use of credit cards belonging to another person without their knowledge or consent") and is considered credit card fraud. Concerning this judgment, some scientists noted that since a cash machine (ATM) cannot be deceived, because it has no consciousness, it cannot make a mistake of perception and cannot dispose of property because of such a mistake; therefore, the use of credit cards belonging to other people without their knowledge or consent should be considered theft (Zhang Mingkai, 2009). Other scientists stated that the main reason for classifying this crime as credit card fraud is that a cash machine (ATM), after programming, acts as business staff operating on behalf of the financial institution for financial processing: considering that business staff can be defrauded, a computer-programmed cash machine can certainly be a fraud target (Liu Xianquan, 2017). In other words, computer-programmed cash machines differ from regular technical devices in that they have the recognition function of the human brain, which can influence the definition of criminal liability in cases when they are involved in a crime. It seems that as people move beyond the weak AI era, intellectual robots with the capacity for deep learning will acquire the functions of the human brain, and this will certainly greatly influence the criminal liability of guilty persons.

Secondly, intellectual robots used as crime instruments influence the criminal liability of guilty persons. The distribution of criminal liability between designers and users can vary with the growth of the "intellect" of intellectual robots when robots are instruments for committing crimes. For example, when an ordinary car is not defective but its driver violates traffic rules and causes a serious accident, the designer of the car is not liable for this crime; the user of the car (i.e. the driver) is. Meanwhile, an autonomous (self-driving or robotic) car is controlled by an autopilot program (which determines, for example, the route or direction of the car); there are only passengers and no driver in it. In the case of a serious accident related to the operation of the program itself (including a violation of traffic rules by the autonomous car), only the designer of the autonomous vehicle can be liable for the accident. It follows that in the AI era the risks related to AI technologies are growing quickly, and intellectual robots, in contrast to ordinary auxiliary instruments, influence the detection and classification of the criminal liability of wrongdoers.

It seems that the conception of criminal law, criminal legislation and judicial practice should adapt to the tendencies of the time so that criminal law, being "the last line of defense of society", can play its role in guaranteeing the stable and sound development of society. The conception and system of criminal law differ in every social formation, which reflects the fact that criminal law corresponds to the tendencies of society's development. Today we live in the weak AI era. Intelligent robots with the capacity for deep learning play an important role in all aspects of social life and influence, and will continue to influence, the development of social forms. That is exactly why we should adapt the concept of criminal law to ensure the stable development of AI technologies, to prevent and control the criminal risks related to them, and ultimately to achieve the purposes of protecting human interests and contributing to social progress. This is the essence of the idea of criminal law in the AI era.

In the AI era we should take a forward-looking view of the criminal law concept. Since the technical backgrounds and social conditions underlying criminal legislation in the AI era are changing quickly (Zhao Li, 2015; Begishev, 2018), we should predict possible criminal risks today and think over a response strategy.

The solution of the problem of liability for AI crimes

As mentioned above, strategies of AI development have been enacted in Russia and in China. These official documents underline that the important principles of AI legislation are the principle of human rights protection, the principle of clarity and the principle of responsibility. Consequently, the construction of a principle of criminal liability is an important part of criminal norms related to AI.

Summarizing the opinions of Chinese scholars on AI liability (Chen Xingliang, 2010; Xia Chenting, 2019; Liu Honghua, 2019), the authors support the introduction of strict liability into the system of criminal law and the application of relatively strict liability, within the rule of law, to AI crimes.

1. Introduction of strict liability

In Chinese criminal law, imputation of liability is generally understood as the criminal liability of a person for committing a crime (Feng Jun, 1996), i.e. imputation of liability is the general effect of the correlation between a crime and criminal liability. AI crimes, as a negative consequence of technical progress, are both a product of industrial society and a typical effect of technology-related risk in society. The gravity of the social risk created by AI is beyond the human ability to detect and solve the problem, which is why the necessity of criminal regulation of AI crimes is obvious. Nevertheless, artificial intelligence, as a highly intelligent "person", may not only create and carry out unlawful risks but is also able to evade created risks by means of effective control; this means that if an artificial intelligence commits a crime, it will be liable for all negative consequences. There are, however, theoretical challenges to imposing criminal liability on artificial intelligence; consequently, it is essential to create special principles of imputation of criminal liability for AI crimes, in accordance with the characteristics of the social risk, to solve the general problems of regulating AI crimes.

The general content of the traditional principle of imputation of criminal liability lies in the fact that criminal liability is grounded in the guilt of a person, i.e. the guilt of a person is the starting point for subjecting him to criminal liability. Deliberate intent and negligence, as the content of guilt, are not only individual elements of a crime but also a condition for the determination of criminal liability. Consequently, a subject is not liable where he is personally innocent. Faced with the numerous complicated social risks related to technical development, the opinions of Chinese scholars on this problem were divided: some considered it necessary to break the limits of the principle of criminal liability on a provisional basis and to turn criminal law toward the prevention and control of risks in order to provide security (Hao Yanbing, 2012); others noted that it was necessary to follow the principle of guilt so that criminal law guarantees freedom (Lao Dongyan, 2014). Despite the disagreements among criminal law scholars, social practice and legal progress develop according to their internal patterns, and criminal law, as the last line of defense and control of society, gradually enriches its content with the changes of the time. In this case the principle of strict liability (Li Lifeng, 2009) becomes strategic.

AI crimes, as an effect of technical risks, are characterized by the co-existence of the present and the uncertainty of risk in modern society. AI products may achieve or surpass human intelligence by means of deep learning, which can considerably increase the expectation of uncontrolled risks (Ma Changshan, 2018). Besides, since the use of "black box" technology in AI agents makes their algorithms non-transparent, end users do not know how AI agents reach their decisions. Reconstructing the decision-making process of AI agents is an uneconomic and even impossible task, whereas under the traditional principle of fault-based liability the determination of this element of guilt is important and irreplaceable in the process of imposing criminal liability. In that context, given the uncontrolled risks related to AI crimes and the non-transparency of the algorithms of AI behavior, the application of the principle of strict liability better meets the technical and regulatory requirements of the AI era.

2. Application of the principle of relatively strict liability in the context of the rule of law

Application of the principle of relatively strict liability in the context of the rule of law means that the sphere and degree of application of strict liability are limited by direct regulation in the legislation, i.e. given the differences between strong AI and weak AI and the level of combination of AI and an individual, it is necessary to establish the principle of fault-based liability amended by the principle of relatively strict liability.

Today the majority of Russian and Chinese scholars, holding to the "instrument" and "agent" positions (Wu Handong, 2017; Mosechkin, 2019), consider that the designer and the user of an AI agent should be liable for its actions: if a weak AI machine is used for committing a deliberate crime, the user should be found guilty on the basis of his personal guilt. Other scholars noted that although a weak AI machine cannot itself commit deliberate crimes, since it lacks full consciousness and the individual capacity to identify and control its actions in the context of criminal liability, it is nevertheless not equivalent to traditional technical instruments. Weak AI, having the capacity for deep learning, may use "black box" technology for making its decisions and consequently has certain individual characteristics, which leads to individual "intellectual" differences between AI units that can go beyond the intentions of their designers (Li Zhengquan, 2019). In these cases criminal liability for crimes committed by AI through negligence demands the introduction of the principle of relatively strict liability to explain the allocation of liability between AI creators, AI manufacturers, AI owners and AI users. As an example we can take traffic accidents involving autonomous cars: a) if the user of the car did not participate in driving it, his liability is excluded and, according to the principle of relatively strict liability, the owner, manufacturer or programmer of the autonomous car is liable for the accident; since this case refers to the problem of liability for manufactured products, the corresponding parties should be solidarily liable; b) if the user participated partially in driving the car in autopilot mode but did not take effective and adequate measures to slow down the car and prevent socially dangerous consequences, he should be solidarily liable for the accident jointly with the owner, manufacturer and programmer. The parties concerned may exclude or define their shares of liability according to their corresponding and reasonable arguments.

As to the imputation of criminal liability for the actions of strong AI units: as some scholars predict, strong AI will have remarkable capacities to recognize and control its actions and, consequently, a corresponding criminal capacity, so criminal liability for committing a crime would follow the principle of fault-based liability. Although some consider that the appearance of strong AI may lead to the end of mankind (Hawking, http://www.sohu.com/a/137173188_609518), people's efforts to improve life through science and technology have not stopped, nor have the efforts to improve the laws regulating the development of science and technology. In this context, "the purpose of the legislator is not to establish a definite order, but to create conditions in which a well-ordered structure can build itself and be permanently reconstructed" (Friedrich von Hayek, 1997). We are sure that legislators have enough wisdom to restrain the development of strong AI in the context of the protection of human rights, which is why the discussion of criminal liability for strong AI units is not excessive.

It is possible to predict that in the future the combination of human and AI will be a permanent model. When it comes to the imputation of criminal liability in the case of integration of a human and AI, we should distinguish the three following situations. First, when an AI unit is joined to a human but does not influence his ability to recognize and to judge, the unit should be treated as an auxiliary instrument belonging to weak AI (for example, a system improving paramedical aid), and the individual bears criminal liability. Second, when AI is combined with a human and partially influences his ability to recognize and to judge, the AI agent is theoretically still weak AI, but its judgments direct and influence the commission of a crime by the person; here it is necessary to charge the manufacturers and engineers of the AI machine according to the principle of relatively strict liability. Third, when AI is integrated with a human and dominates and controls the person's ability to recognize and to judge, such AI machines are theoretically strong and can bear criminal liability. In this case the strong AI unit should be held directly liable in its own capacity according to the principle of fault-based liability, and the individual should be released from criminal prosecution.
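The three situations described above amount to a simple decision rule. Purely as an illustrative formalization (the tier names, party labels and function below are our own shorthand, not terms of any statute or of the cited literature), the scheme can be sketched as follows:

```python
from enum import Enum, auto

class Tier(Enum):
    NO_INFLUENCE = auto()       # AI joined to a human, no effect on recognition/judgment
    PARTIAL_INFLUENCE = auto()  # weak AI partially shapes recognition/judgment
    FULL_CONTROL = auto()       # strong AI dominates recognition/judgment

def liable_parties(tier: Tier) -> list[str]:
    """Map each integration tier to the parties that, on the article's
    argument, should bear criminal liability (labels are illustrative)."""
    if tier is Tier.NO_INFLUENCE:
        # The AI unit is a mere auxiliary instrument: the individual is liable.
        return ["individual"]
    if tier is Tier.PARTIAL_INFLUENCE:
        # Relatively strict liability for those who built the combined unit.
        return ["manufacturer", "engineer"]
    # FULL_CONTROL: the strong AI unit itself is liable; the individual is released.
    return ["strong AI unit"]
```

The sketch makes explicit that the allocation of liability turns on a single variable: the degree to which the AI unit displaces the human's capacity to recognize and to judge.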

Conclusion

AI development has attracted the attention of many countries and international organizations in the era of big data. On the one hand, artificial intelligence has the capacity to analyze data, to learn and to perform hard work that cannot be done by humans; on the other hand, there are reasonable legal risks related to it. Obviously, AI technology represents not only a direction of future technological development but also a driving force of future legal research. We should keep up with the times and improve current legislation in step with the development of AI technologies.

References

Kibal'nik, A.G., Volosyuk, P.V. (2018). Iskusstvennyj intellekt: voprosy ugolovno-pravovoj doktriny, ozhidayushchie otvetov [Artificial Intelligence: Pending Criminal Doctrine Questions]. In Vestnik Nizhegorodskoj akademii MVD Rossii [Bulletin of the Nizhny Novgorod Academy of the Ministry of Internal Affairs of Russia], 4 (44), 175-176.

Shestak, V.A., Volevodz, A.G. (2019). Sovremennye potrebnosti pravovogo obespecheniya iskusstvennogo intellekta: vzglyad iz Rossii [Current needs of the legal support of artificial intelligence: a view from Russia]. In Vserossijskij kriminologicheskij zhurnal [All-Russian Criminological Journal], 13 (2), 197-200.

Vladimir Putin: V oblasti iskusstvennogo intellekta sohranyat' liderstvo prosto neobhodimo (2019). [Vladimir Putin: In the field of artificial intelligence, maintaining leadership is simply necessary]. Available at: https://www.1tv.ru/news/2019-11-10/375457-vladimir_putin_v_oblasti_iskusstvennogo_intellekta_sohranyat_liderstvo_prosto_neobhodimo (accessed 5 February 2020).

Doklad prem'er-ministra Li Kecyana o rabote pravitel'stva ot imeni Gosudarstvennogo soveta na pyatoj sessii Nacional'nogo sobraniya narodnyh predstavitelej 12-go sozyva (2017). [Report by Prime Minister Li Keqiang on the work of the government on behalf of the State Council at the fifth session of the National Assembly of People's Representatives of the 12th convocation]. In ZHen'min'zhibao: Gazeta [People's Daily: Newspaper], 17 March.

Uvedomlenie Gosudarstvennogo soveta po pechati i rasprostraneniyu plana razvitiya iskusstvennogo intellekta novogo pokoleniya (2017). [Notification of the State Council for the Press and Dissemination of the Next Generation Artificial Intelligence Development Plan]. Available at: http://www.gov.cn/zhengce/content/2017-07/20/content_5211996.htm (accessed 9 January 2020).

Vystuplenie predsedatelya Soveta sudej RF V. V. Momotova na plenarnom zasedanii po teme "Perspektivy ispol'zovaniya iskusstvennogo intellekta v sudebnoj sisteme Rossijskoj Federacii", g. Katar (2020). [Speech by the Chairman of the Council of Judges of the Russian Federation V. V. Momotov at the plenary meeting on the topic "Prospects for the use of artificial intelligence in the judicial system of the Russian Federation", Qatar]. Available at: http://www.ssrf.ru/news/lienta-novostiei/36912 (accessed 5 May 2020).

Deng, Heng (2017). Ispol'zovanie tekhnologij iskusstvennogo intellekta i sudebnye innovacii [The use of artificial intelligence technology and judicial innovation]. In Narodnyj sud: gazeta [People's Court: newspaper], 14 December.

Sovmestnoe prodvizhenie sistemy «Iskusstvennyj intellekt + sud» IFLYTEK s Vysshimi narodnymi sudami v del'te reki YAnczy [IFLYTEK jointly advancing the "AI + Court" system with the Higher People's Courts in the Yangtze River Delta]. Available at: http://www.ah.xinhuanet.com/2018-06/07/c_1122949446.htm (accessed 10 January 2020).

Yan, Jianqi (2018). Raskrytie «206»: Perspektiva iskusstvennogo intellekta v sisteme sudov - 164 dnya issledovanij i razrabotok vspomogatel'noj intellektual'noj sistemy v vozbuzhdenii ugolovnyh del v SHanhae [Disclosure "206": The Prospect for Artificial Intelligence in the Court System - 164 days of research and development of the auxiliary intellectual system in criminal proceedings in Shanghai]. In Verhovenstvo prava narodov [Rule of law of peoples], 2, 38-43.

Arhipov, V.V., Naumov, V.B. (2017). Iskusstvennyj intellekt i avtonomnye ustrojstva v kontekste prava: o razrabotke pervogo v Rossii zakona o robototekhnike [Artificial intelligence and autonomous devices in the context of law: on the development of the first Russian law on robotics]. In Trudy SPIIRAN [Proceedings of SPIIRAS], 6, 46-62.

Ponkin, I.V., Red'kina, A.I. (2018). Iskusstvennyj intellekt s tochki zreniya prava [Artificial intelligence in terms of law]. In Vestnik Rossijskogo universiteta druzhby narodov. Seriya: Yuridicheskie nauki [Bulletin of the Peoples' Friendship University of Russia. Series: Law], 22 (1), 91-109.

Vasil'ev, A.A., Shpopper, Mataeva, M.H. (2018). Termin «Iskusstvennyj intellekt» v rossijskom prave: doktrinal'nyj analiz [The term "Artificial Intelligence" in Russian law: doctrinal analysis]. In Yurlingvistika [Jurlinguistics], 7-8, 35-44.

Yu, Chengfeng (2018). «Smert'» zakonov: krizis yuridicheskoj funkcii v epohu iskusstvennogo intellekta [The "death" of laws: a crisis of legal function in the era of artificial intelligence]. In Vestnik Vostochno-kitajskogo politiko-yuridicheskogo universiteta [Bulletin of East China Political and Law University], 2, 5-20.

Li, Sheng (2018). Pravovaya transformaciya v kontekste iskusstvennogo intellekta [Legal Transformation in the Context of Artificial Intelligence]. In Pravovoj obzor [Legal Review], 1, 98-107.

Yastrebov, O.A. (2017). Diskussiya o predposylkah dlya prisvoeniya robotam pravovogo statusa «elektronnyh lic» [Discussion on the prerequisites for assigning robots the legal status of "electronic persons"]. In Voprosy pravovedeniya [Jurisprudence], 1 (39), 189-202.

Wu, Xiyu (2018). Teoriya i praktika sudebnogo primeneniya II - o pravosub"ktnosti II [Theory and practice of the judicial application of AI - on the legal personality of AI]. In Zhejiang social Sciences, 6, 60-66.

Churilov, A.Yu. (2018). Pravovye osnovy otvetstvennosti za vred, prichinennyj pri ekspluatacii avtonomnogo avtomobilya [Legal basis for liability for damage caused by the operation of an autonomous car]. In Legal Concept = Pravovaya paradigma [Legal Concept = Legal Paradigm], 17 (4), 30-34.

Mosechkin, I.N. (2019). Iskusstvennyj intellekt i ugolovnaya otvetstvennost': problemy stanovleniya novogo vida sub"ekta prestupleniya [Artificial intelligence and criminal liability: problems of the formation of a new type of subject of crime]. In Vestnik Sankt-Peterburgskogo universiteta. Pravo [Bulletin of St. Petersburg University. Law], 10 (3), 461-476.

Lin, Shaowei (2018). Vliyanie iskusstvennogo intellekta na korporativnoe pravo: problemy i otvety [The effect of artificial intelligence on corporate law: problems and answers]. In Vestnik Vostochno-kitajskogo politiko-yuridicheskogo universiteta [Bulletin of the East China Political and Law University], 3, 61-71.

Jiang, Bixin, Zheng, Lihua (2018). Internet, bol'shie dannye, iskusstvennyj intellekt i nauchnoe za-konotvorchestvo [Internet, Big Data, Artificial Intelligence and Scientific Lawmaking]. In Pravovoj zhurnal [Law journal], 5. 1-7.

Wu, Yunfeng (2018). Primenenie ugolovnogo zakona na primere prestuplenij protiv sobstvennosti v epohu iskusstvennogo intellekta: dilemma i vyhod [The application of the criminal law as an example of crimes against property in the era of artificial intelligence: a dilemma and a way out]. In YUrisprudenciya [Jurisprudence], 5, 165-180.

Liu, Xianquan (2018). Rekonstrukciya sistem ugolovnoj otvetstvennosti i nakazanij v epohu iskusstvennogo intellekta [Reconstruction of criminal liability and punishment systems in the era of artificial intelligence]. In Politika i pravo [Politics and Law], 3, 89-99.

Liu, Xianquan (2018). Pervonachal'noe issledovanie rekonstrukcii sistem ugolovnoj otvetstvennosti i nakazanij v epohu iskusstvennogo intellekta [An initial study of the reconstruction of criminal liability and punishment systems in the era of artificial intelligence]. In Zakonnost' i obshchestvo [Law and society], 3, 206-209.

Hisamova, Z.I., Begishev, I.R. (2019). Ugolovnaya otvetstvennost' i iskusstvennyj intellekt: teoretich-eskie i prikladnye aspekty [Criminal liability and artificial intelligence: theoretical and applied aspects]. In Vserossijskij kriminologicheskij zhurnal [All-Russian Criminological Journal], 13 (4), 564-574.

Zheng, Ge (2017). Iskusstvennyj intellekt i budushchee prava [Artificial Intelligence and the Future of Law]. In CITIZEN AND LAW, 12, 14-15.

Harari, Y.N. (2017). Homo Deus: A Brief History of Tomorrow. New York, Harper Collins Publishers Inc., 318.

Pervyj v mire robot-samoubijca: samosozhzhenie robota-uborshchika iz-za ustalosti ot raboty (2018). [The world's first suicide robot: self-immolation of a cleaning robot due to work fatigue]. Available at: https://www.tanling.com/archives/1921.html (accessed 19 December 2019).

Yao, Wanqin (2019). Pravovye riski II v epohu bol'shih dannyh i ih predotvrashchenie [Legal risks of AI in the era of big data and their prevention]. In Obshchestvennye nauki Vnutrennej Mongolii [Social Sciences of Inner Mongolia], 2, 87.

Kak sleduet razdelit' otvetstvennost' za chastye avarii, svyazannye s avariyami vozhdeniya? (2018). [How should responsibility for frequent accidents related to driving accidents be shared?]. Available at: http://www.sohu.com/aZ232993198_455835 (accessed 10 January 2020).

Pervyj «sluchaj ubijstva robota» proizoshel na zavode Volkswagen v Germanii (2015). [The first "robot killing case" occurred at a Volkswagen factory in Germany]. Available at: http://news.mydrivers.com/1/437/437018.htm (accessed 10 January 2020).

Prestupniki s ispol'zovaniem bespilotnogo letatel'nogo apparata nashli cel' i sovershili ograblenie [Criminals using an unmanned aerial vehicle found a target and committed a robbery]. Available at: http://digi.163.com/14/0418/17/9Q40Q6IQ001620UT.html (accessed 10 January 2020).

Cukanova, E.YU., Skopenko, O.R. (2018). Pravovye aspekty otvetstvennosti za prichinenie vreda ro-botom s iskusstvennym intellektom [Legal Aspects of Responsibility for Causing Damage by an Artificial Intelligence Robot]. In Matters of Russian and International Law, 8 (2A), 42-48.

Policiya Shaosina raskryla pervyj sluchaj prestupleniya s ispol'zovaniem II (2017). [Shaoxing police revealed the first AI crime case]. Available at: http://www.xinhuanet.com/local/2017-09/26/c_1121726167.htm (accessed 10 January 2020).

Informaciya o prestuplenii, svyazannom s iskusstvennym intellektom, vpervye raskryta: vasha lichnaya informaciya byla takim sposobom skomprometirovana [Artificial intelligence crime information first revealed: your personal information was compromised in this way]. Available at: http://www.sohu.com/a/202973604_65917 (accessed 10 January 2020).

Shi, Fang (2018). Otricanie ugolovnoj pravosub"ektnosti II [Denial of the criminal legal personality of AI]. In YUridicheskaya nauka [Legal science], 6, 69.

Zhang, Mingkai (2009). Ugolovno-pravovoj analiz po delu Xu Ting [Criminal law analysis of the Xu Ting case]. In Kitajskoe i inostrannoe pravo [Chinese and foreign law], 1, 35.

Liu, Xianquan (2017). Nachala ugolovnogo prava po finansovym prestupleniyam [The Beginnings of Criminal Law on Financial Crimes]. Shanghai, Shanghai people's publishing house, 509.

Zhao, Li (2015). Zakonodatel'stvo o kiberbezopasnosti dolzhno byt' perspektivnym [Cybersecurity law should be promising]. In Fa-zhi-ri-bao, 7 July.

Begishev, I.R., Hisamova, Z.I. (2018). Kriminologicheskie riski primeneniya iskusstvennogo intellekta [Criminological risks of using artificial intelligence]. In Vserossijskij kriminologicheskij zhurnal [All-Russian Criminological Journal], 12 (6), 767-775.

Chen, Xingliang (2010). Doktorial'noe ugolovnoepravo [Doctoral criminal law], Beijing, China, People's University Publishing House, 130.

Xia, Chenting (2019). Issledovanie puti vmeneniya ugolovnoj otvetstvennosti v epohu iskusstvennogo intellekta [The study of the imputation of criminal liability in the era of artificial intelligence]. In Pravovoe obshchestvo [Legal Society], 1, 48-52.

Liu, Honghua (2019). O pravovom statuse iskusstvennogo intellekta [On the legal status of artificial intelligence]. In Politika i pravo [Politics and Law], 1, 16-20.

Feng, Jun (1996). The Theory of Criminal Responsibility. Law Press China, 227.

Hao, Yanbing (2012). Riskovoe ugolovnoe pravo: v centre vnimaniya prestupleniya s usechennym sostavom [Criminal risk law: the focus on crimes with truncated elements], Beijing: China, Political and Law University Press, 66.

Lao, Dongyan (2014). Riskovoe obshchestvo i izmenchivaya ugolovno-pravovaya teoriya [Risk Society and Volatile Criminal Law Theory]. In Kitajskoe i inostrannoe pravo [Chinese and Foreign Law], 1, 92-93.

Li, Lifeng (2009). Study of Mens Rea in American criminal law. Beijing: China, Publishing house of the Chinese political and law University. 658.

Ma, Changshan (2018). Social'nye riski II i ih pravovoe regulirovanie [Social risks of AI and their legal regulation]. In YUridicheskaya nauka [Jurisprudence], 6, 49-52.

Wu, Handong (2017). Institucional'nye mekhanizmy i pravovoe regulirovanie v epohu iskusstvennogo intellekta [Institutional mechanisms and legal regulation in the era of artificial intelligence]. In YUridicheskaya nauka [Jurisprudence], 5, 130-133.

Li, Zhengquan (2019). Teoriya yuridicheskoj otvetstvennosti v epohu iskusstvennogo intellekta - na primere intellektual'nyh robotov [The theory of legal responsibility in the era of artificial intelligence: the example of intelligent robots]. In Vestnik Dalyan'skogo politekhnicheskogo universiteta. Obshchestvennye nauki [Bulletin of the Dalian Polytechnic University. Social Sciences], 5, 82.

Guiding AI to Benefit humanity and the environment (2017). Available at: http://www.sohu. com/a/137173188_609518 (accessed 10 January 2020).

Hayek, F. von (1997). The Constitution of Liberty. Translated by Deng Zhenglai. Life·Reading·New Knowledge Sanlian Joint Bookstore, 201.
