
RUDN Journal of Law

2021. Vol. 25. No. 3. 673—692 http://journals.rudn.ru/law

LAW AND DIGITAL TECHNOLOGY

DOI: 10.22363/2313-2337-2021-25-3-673-692

Research Article

The concept of integrating artificial intelligence into the legal system

Yulia A. Gavrilova

Volgograd State University, Volgograd, Russian Federation

gavrilova_ua@volsu.ru

Abstract. The article is devoted to the integration of artificial intelligence into the legal system. In the digital age, human life is inextricably linked with digital technologies. Legal regulation of the development and application of artificial intelligence has a complex influence on the legal system of Russian society. The issue is therefore of high scientific and practical significance and meets the strategic needs of the legal policy of the Russian Federation. The purpose of the article is to formulate the main elements of the concept of integrating artificial intelligence into the legal system. The research methods employed are the formal-legal method, analogy, extrapolation, the cultural-historical method, modeling and forecasting. The results of the study can be outlined as follows. We consider the humanistic approach to the domestic legal system the most suitable; within this approach, artificial intelligence is naturally and imperceptibly integrated into the human environment as "smart" intelligence that performs the functions of "smart" regulation. Legal regulation of embodied (robotic) and swarm (collective) artificial intelligence should be introduced with reasonable caution and predictability, through technical standards and controlled legal experiments, and only after the widest possible ethical expertise. When forming the concept of integrating artificial intelligence into the legal system, a number of fundamental factors must be taken into account: continuity of doctrinal legal knowledge, differentiation of legal regimes, and consideration of the cultural and civilizational code, psychology and mentality of the society in which such legal regulation is developed and implemented.

Key words: human, artificial intelligence, integration, legal system, legal doctrine, legal regime, national legal order, smart regulation, digital society

Conflicts of interest. The author declares no conflict of interest.

Article received 26th February 2021

Article accepted 15th July 2021

© Gavrilova Yu.A., 2021

This work is licensed under a Creative Commons Attribution 4.0 International License: https://creativecommons.org/licenses/by/4.0

For citation:

Gavrilova, Yu.A. (2021) The concept of integrating artificial intelligence into the legal system. RUDN Journal of Law. 25 (3), 673—692. DOI: 10.22363/2313-2337-2021-25-3-673-692


Introduction

Russian legal scholars show considerable interest in the issues of artificial intelligence. Among them are E.A. Voynikanis, G.A. Gadzhiev, A.A. Kartskhiya, P.M. Morkhat, A.V. Neznamov, I.V. Ponkin, A.V. Popova, A.I. Redkina, V.I. Shershulsky, O.A. Yastrebov and others. They investigate various aspects of this phenomenon, including the legal problems of artificial intelligence development and the challenges and risks its widespread functioning poses for human society. Scholars are unanimous about the importance of applying artificial intelligence for mankind; they predict longer life expectancy, a better understanding of the world and the universe, further spiritual development of humans, etc. All these are optimistic scenarios.

However, the implementation of these technologies raises doubts both about its humanizing effect and about the declared effectiveness of artificial intelligence in creating a better living space for people.

To begin with, the human world is fragmented and is often replaced by virtual or hybrid forms. The economy and public administration are moving into the digital environment. Digital regulators are beginning to claim the role of universal regulators, degrading the value of other regulators (legal, moral, etc.). People's lives and the existence of entire countries and regions are under threat, not only because of total "digital" control over the private and public spheres, but also because of the threat of deadly weapons being used to resolve global and regional military-political conflicts.

Finally, the future of traditional labor relations is uncertain due to the rapidly developing automation of production, which entails ambiguous socio-economic consequences, above all the disappearance of traditional types of employment and, as a result, large-scale unemployment.

This article aims to introduce the main elements of the concept of integrating artificial intelligence into the legal system, based on humanistic perspectives of societal development. Such elements have already been highlighted in monographs and dissertations; however, such publications are not yet numerous. We understand the integration of artificial intelligence as socially equitable legal regulation of public relations involving the use of artificial intelligence, accompanied by comprehensive expert support, assessment of the risks that might occur, and efforts to minimize the negative consequences of its implementation in the life of society.

The recently published documents — the Strategy for the Development of Artificial Intelligence for the Period until 2030, approved by Decree of the President of the Russian Federation No. 490 of 10.10.2019 (hereinafter referred to as the Strategy), and the Concept for the Development of Regulation of Relations in the Field of Artificial Intelligence and Robotics Technologies until 2024, approved by Order of the Government of the Russian Federation No. 2129-r of 19.08.2020 (hereinafter referred to as the Concept) — have given rise to a wide discussion involving all interested parties.

It should be emphasized that these documents, despite their unquestionable significance, are of a framework and programmatic nature and need substantial normative specification. This can be achieved both by forming a fundamentally new legal reality and by developing the potential of generally accepted legal constructions: special regulation, legal fiction, analogy, subsidiary law enforcement, subject of law, person, legal liability, etc.

In this regard, the author does not seek to assess the efforts of the European Union, the United States, South Korea, Japan, Germany and other advanced states in this sphere; analysis of this kind has already been carried out in many domestic legal studies. Instead, it seems more appropriate to focus on the main elements of the concept of integrating artificial intelligence into the legal system. As a result, artificial intelligence technologies will become an integral part of the modern technical and technological way of life and an organic complement to the legal system of society, which represents the origins of natural and human intellect.

Artificial intelligence and development of legal doctrine

Legal doctrine is the key factor in comprehending the place and role of artificial intelligence in legal regulation. Legal science offers two cardinal strategies for determining the legal meaning of artificial intelligence in the modern period: revolutionary and evolutionary.

A revolutionary strategy means a radical scrapping of existing scientific and theoretical models. Arguments are given about the impossibility of adapting current normative-legal schemes and accepted scientific terminology to the pace of digitalization (Mamychev & Miroshnichenko, 2019:132). Projects and plans are proposed for the formation of a new type of public relations and a new digital reality (Panchenko & Romashov, 2018:107). The psychological mechanism of such a strategy includes the claim that the point of technological "singularity" is approaching, axiological pessimism, and the substitution of legal futurology for legal reality.

The evolutionary strategy offers an optimistic plan for forming a trusted, safe and comfortable environment for the coexistence of humans and artificial intelligence (subsection 4 of section 1 of the Concept). Forecasting should be reasonable and sufficient for the state to develop the legal system under conditions of artificial intelligence engagement (instead of legal futurology). Such development should rely on the traditional concepts and institutions of law, established doctrinal approaches and problem-solving methods for integrating artificial intelligence into the legal system (Yastrebov, 2018:325).

Within this strategy, the integration of artificial intelligence appears as an extension of the scope of legal phenomena and concepts, contributing to the clarification and interpretation of their legal content in judicial practice, and to the classification, comparison, generalization and explanation of new features of artificial intelligence from the perspective of consistency with existing legal knowledge.

From modern philosophical and general scientific discussions, it is known that there is no general concept of intelligence. Most scholars seem to share this opinion. It is asserted that natural human intelligence involves three levels: verbal, sensory and cognitive (Estep, 2006:223). Various artificial intelligence systems are created according to similar "patterns": talking androids, bots, robotics, etc.

Artificial intelligence does not stem from the laws of nature and is not associated with biosocial evolution; it is designed by humans for certain applied goals and cast into a certain technical form. However, some functions of natural human intelligence can be programmed into artificial intelligence, though only to the extent of human ambition in scientific knowledge and within the limits of its formalization.

This correlates with the opinion expressed in recent scientific publications that technologizing the concept of artificial intelligence in legal studies is unpromising and useless because it cannot add value to law. It is necessary either to legalize the technical definition of artificial intelligence or to seek a relevant special legal term that clearly describes it (Spitsyn & Tarasov, 2020:106).

This opinion is quite reasonable and aims at protecting the fundamental anthropocentric paradigm in law. However, it must be acknowledged that human beings, things and objects are no longer separated from each other in the modern digital world, and technology has become an inseparable part of human life. Therefore, reluctance to include technological features in the legal definition of artificial intelligence means dismissing that part of the anthropocentric aspect that is predetermined by such technologies. Norms regulating technological innovations have never been rejected by law, so the main task of legal science is to find the correct balance between high technology and legal norms, as well as to determine the legal limits of the feasibility of artificial intelligence technologies.

GOST R 43.0.8-2017 follows this path; it defines artificial intelligence as modeled (artificially reproduced) intellectual activity of human thinking. A human is thereby considered a technical operator of information processes with machine participation, whose own cognitive resources can be supplemented by machine activation (hybrid intelligence) or by machine simulation of human mental activity (artificial intelligence).

At the same time, legal doctrine should answer the fundamental question of the digital age concerning the legal personality of artificial intelligence. We recognize the right to different approaches to the problem; however, in our opinion, the legal personality of artificial intelligence is a speculative concept. Just as a human being remains the authentic holder of legal personality, artificial intelligence remains only a technology. If we allow legal personality of artificial intelligence systems, we can speak only tentatively of the secondary and derivative (artificial) character of the legal personality of such systems. In fact, a human being can transfer a part of their legal personality to an artificial intelligence, and can likewise terminate its operation when applying such technology.

Paragraph 49 of the Strategy and subsection 9 of section 2 of the Concept propose legislative rules allowing only "step-by-step" delegation of decision-making to artificial intelligence in specified cases affecting the constitutional rights of citizens and the defense and security of the state. The objective expediency of such delegation is currently unclear and generally debatable, since the danger of restricting the constitutional rights of an individual can be found in any sphere of modern human life.

In any case, judicial, prosecutorial and investigative functions should not be delegated to artificial intelligence, whereas notarial and lawyer functions can be transferred, but only on certain conditions. These concern cases of incorrect advice given when providing legal assistance, errors in databases that entail incorrect interpretation of laws and confuse the client, etc. We believe that in all such situations responsibility for the decisions made is borne not by the artificial intelligence but by the human being who confers partial legal personality on it. Whether this person is the head of a government body, a civil servant or another kind of official is formally a matter of legislative discretion, yet it creates ethical problems because the decision is not completely under human control.

The conceptual basis for developing this thesis is the long-established institution of delegation of authority in public law and, in private (primarily civil) law, the institution of representation. Transactions and legally significant actions generate rights and obligations directly for the represented person if they are performed by the representative on behalf of and in the interests of that person. In the absence or abuse of powers, the represented person's direct written approval of the transaction is required; if he or she does not approve the deal, the unauthorized person is deemed liable (Articles 182 and 183 of the Civil Code of the Russian Federation). In public law, however, this entails the unconditional cancellation of decisions and acts of the unauthorized body (official).

In the case of artificial intelligence there can be no such unconditional responsibility, and it is likewise impossible to automatically cancel such a decision, since artificial intelligence acts as an inanimate (lifeless) technology. Therefore, it is probably necessary to modify the institution of actions in the interests of other persons and to specify how management actions are to be performed with the help of artificial intelligence. Decisions made by artificial intelligence should be treated as preliminary drafts: a human being must either approve the decision, thus giving it legal force, or reject it. Only such an approach corresponds to the legal essence of relations involving artificial intelligence as a technology controlled by a human being.
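To make the proposed "draft decision" model concrete, the following minimal sketch (in Python, with hypothetical names; the article prescribes no particular implementation) shows how an AI-generated decision could remain a non-binding draft until a human official explicitly approves or rejects it.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum
from typing import Optional


class Status(Enum):
    DRAFT = "draft"          # produced by the AI system, carries no legal force
    APPROVED = "approved"    # confirmed by a human official, acquires legal force
    REJECTED = "rejected"    # denied by a human official


@dataclass
class DraftDecision:
    """A decision proposed by an AI system; legally it is only a draft."""
    subject: str
    proposal: str
    rationale: str
    status: Status = Status.DRAFT
    reviewed_by: Optional[str] = None
    reviewed_at: Optional[datetime] = None

    def approve(self, official: str) -> None:
        # Legal force is conferred only by the human act of approval.
        self.status = Status.APPROVED
        self.reviewed_by = official
        self.reviewed_at = datetime.utcnow()

    def reject(self, official: str) -> None:
        self.status = Status.REJECTED
        self.reviewed_by = official
        self.reviewed_at = datetime.utcnow()


# Usage: the AI system proposes, the human official disposes.
draft = DraftDecision(subject="license application No. 42",
                      proposal="grant the license",
                      rationale="all statutory criteria formally satisfied")
draft.approve(official="head of the licensing authority")
print(draft.status)   # Status.APPROVED
```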

A few words should be said about the proposal to treat artificial intelligence as a quasi-subject of law, a so-called "electronic person" (Popova, 2020:121). This issue can be solved only in conjunction with the concept of human legal personality. We think that the constructions of an individual and a legal entity should remain intact, since they have been elaborated by the joint legal thought of mankind. In the case of the "electronic person" metaphor, it should be clearly understood that this is only a projection of an individual or legal entity, with artificial legal personality delegated to it, into a virtual digital environment. It is introduced conditionally to speed up and standardize the implementation of socially significant activities. Virtual companies, digital personalities, etc. are structures that act as a substitute, representative form of human economic activity designed to maximize the utility and efficiency of business with the help of digital technologies. In this case, digital tools express only one of the levels of modern legal ontology, and they should be perceived as such within legal doctrine.

Artificial intelligence in the context of different legal regimes

The doctrinal justification of the concept of integrating artificial intelligence into the legal system should take into account the categorical framework of legal regulation. F.V. Fetyukov suggests an effective approach that considers the type (order) of legal regulation when analyzing the development prospects of public relations related to human cloning (Fetyukov, 2020:890). Such an approach can also be applied when regulating public relations concerning the legal status of artificial intelligence; however, it requires further fine-tuning and more detailed differentiation.

First of all, it is impossible to speak about artificial intelligence only within the framework of general permission or general prohibition (Alekseev, 1989:132—183), since this leaves out the potential of the permissive-prescriptive regime (Cherdantsev, 2002:345), which is relevant for securing the public powers of state bodies regulating the sphere of artificial intelligence application. Indeed, the high risk of this area makes it necessary to allow only what is directly prescribed by statute for the vital interests of the individual, society and the state (paragraphs 48-51 of the Strategy). We will slightly modify the terminology and speak of the regimes of legal permission, legal restriction and legal prohibition, keeping in mind that the types and regimes of legal regulation are categories of the same order.

Secondly, there are different types of artificial intelligence depending on the criterion applied: the mode of software execution (imperative or declarative paradigm), the nature of the contextual environment (open or closed), the possibility of hardware or physical implementation in the material world, etc. Among these types, we distinguish mainly software-algorithmic, embodied and swarm (collective) artificial intelligence.

Each of them may be governed by a different legal regime. The algorithmic component is present in all the mentioned types of artificial intelligence, but it differs in its execution model, and this determines the specifics of the operation of one or another legal regime.

The regime of legal permission creates favorable conditions for achieving a socially useful result of activity. In digital technologies it is applied to the simplest classical and "closed" algorithms, where the developer defines both the goal and the strict sequence of steps to be followed (the imperative programming paradigm).

These are, for example, data mining and processing in the course of monitoring legislation and law enforcement practice. A step-by-step algorithm applying various criteria is formulated in the Methodology for Monitoring Law Enforcement in the Russian Federation, approved by Decree of the Government of the Russian Federation No. 694 of 19.08.2011. The list of these criteria is not exhaustive, however, and they can be programmed both through modification of the program and through the use of artificial intelligence. Obviously, automated schemes for performing such operations are controlled by a human being insofar as they cause no damage and bring only benefit; under these circumstances they are implemented by humans in the form of legal permissions.
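As an illustration of the "closed", strictly imperative kind of algorithm described above, the sketch below walks a fixed sequence of checks over the text of a legal act; the criteria are simplified placeholders, not the actual list from the Methodology, and every step is predetermined by the developer.

```python
# A deliberately "closed" imperative routine: the goal and every step are fixed in advance.
CRITERIA = [
    ("obsolete_reference", "ussr"),          # placeholder criterion for illustration only
    ("undefined_term", "digital platform"),  # placeholder criterion for illustration only
]


def monitor_act(text: str) -> list:
    """Apply a strict, human-defined sequence of checks and return the findings."""
    findings = []
    normalized = " ".join(text.lower().split())   # step 1: normalize the text
    for name, marker in CRITERIA:                 # step 2: apply each criterion in order
        if marker in normalized:
            findings.append(name)
    return findings                               # step 3: hand the results to a human reviewer


print(monitor_act("The act still refers to a USSR standard on digital platform services."))
```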

The regime of legal permission can also be used for developing and applying "smart home" or "smart office" intelligent systems, with the possibility of individual restrictions explicitly named in the statute. This phenomenon is called "sensor networks" in the American doctrine and "ambient intelligence" in the European doctrine. It must be noted, however, that there is a distinct terminological difference between American and European domestic usage of the terms "smart" intelligence and "smart" regulation.

When speaking of "smart" intelligence systems that demonstrate a certain degree of autonomy in relation to a human being and their traditional environment, it should be clarified that these systems are defined as small, light and low-cost sets of sensor devices connected to distributed wireless networks and communicating with a human being and their environment. Their task is to capture the current context comfortable for the user, sometimes to predict the future context, and to independently perform a number of technical operations that create comfortable conditions for living, entertainment and recreation (light control, switching utility appliances on and off, planning household chores, etc.).

The positive role of "smart" systems is to detect events and recognize the context of some typical human actions, to support easy and simple access to all functions, to adjust to the emotional sphere of an individual, to adapt and respond to the changing context of the user, and to be autonomous (Fu et al., 2018:115).
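A minimal sketch of the kind of context rule such ambient systems apply; the sensor names and thresholds are illustrative assumptions, not taken from the cited work.

```python
from datetime import datetime


def ambient_rule(lux: float, motion: bool, now: datetime) -> dict:
    """Derive simple actuation commands from the sensed context (illustrative only)."""
    evening = now.hour >= 19 or now.hour < 6
    return {
        # Turn the light on only when it is dark, someone is present and it is evening.
        "light_on": motion and evening and lux < 30.0,
        # No one detected for this reading: put utility appliances on standby.
        "appliances_standby": not motion,
    }


print(ambient_rule(lux=12.0, motion=True, now=datetime(2021, 3, 1, 21, 15)))
```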

Existing legal norms, which require some clarification, can be used to improve the legal regulation of "smart" intelligent systems. In particular, it is necessary to revise the list of technically complex goods approved by Decree of the Government of the Russian Federation No. 924 of 10.11.2011 (as amended on 27.03.2019). The list should include the wording "intelligent systems that interact with the physical environment, without explicit contact with the consumer, by controlling installed devices".

At the same time, the application of "smart" intelligent systems does not guarantee the absence of technical and software failures, faulty design and/or engineering defects. These can result, for example, in gas equipment switching on at the wrong time or under unspecified circumstances, causing harm to the life, health and property of the consumer. In this case, however, the issue of legal consequences is already resolved by statute.

In accordance with paragraph 4 of Article 14 of the Statute of the Russian Federation "On Consumer Rights Protection" No. 2300-1 of 07.02.1992 (as amended on 08.12.2020), the manufacturer, seller or contractor is responsible for damage caused by the use of materials, equipment, tools and other means necessary for the production of goods (performance of work, provision of services). Liability arises regardless of whether the level of scientific and technical knowledge made it possible to identify their specific properties. Consequently, artificial intelligence organically fits into the construction of this article as "other means used to provide services".

Here is another example. The traditional institution of a source of increased danger (Article 1079 of the Civil Code of the Russian Federation) can confidently be applied to the legal regulation of unmanned vehicles. The owner or other lawful possessor of an autonomous vehicle is responsible for the damage caused by such a vehicle. As for the responsibility of developers, programmers, engineers and manufacturers discussed in the legal literature, a recourse action on the claim of the owner or lawful possessor is possible depending on the degree of their fault. The constructions "withdrawal from possession", "operation" of an autonomous vehicle, etc. will be clarified; new constructions such as "programming of an unmanned vehicle", "building the route of an unmanned vehicle", "beginning of movement" and "end of movement" will be introduced. However, the basis for the legal regulation of responsibility remains traditional.

The regime of legal restriction creates unfavorable conditions for certain activities in order to fulfil socially significant tasks: protecting human rights and ensuring legality, law and order, and public and state security. In our opinion, classical software-algorithmic intelligence is generally harmless and falls under the regime of legal permission, whereas embodied artificial intelligence increases risks and poses new challenges to the development of law, legislation and the legal system.

This is because embodied artificial intelligence represents a revision of the "bodiless" (purely mathematical) form of algorithmic programming of the early stages of computer science and the introduction of body morphology, movement and plasticity into this process. That is how robots and anthropomorphic "mobile" devices appear. Their significant difference from classical artificial intelligence lies in the presence of touch sensors ("reading" the environment) and the possibility of independent physical movement in space in order to perform the assigned task using the sensory information they acquire. Such knowledge is inextricably linked with action, i.e. it is "embodied" knowledge.

While the question of the so-called legal personality of artificial intelligence did not arise and made no sense in relation to classical algorithmic artificial intelligence because of its "formlessness" and dematerialization, the question of the legal personality of embodied artificial intelligence systems can theoretically be raised. This is determined by several factors.

First, the fundamental principle of embodied artificial intelligence is its ability to structure its own input sensory space through the identification of statistical dependencies (relationships); information is received by sensitive receptors and is then measured using quantitative information methods. This makes it possible to optimize the management of neural and cognitive processes in a proactive manner and to "select" only relevant data from the environment, just as perceptual experience is generalized in human cognition. It follows that the statistical structure of sensory inputs is the most important component of the learning and development of such a system (Sporns & Pegors, 2004:74-75).

Secondly, mapping the geometric model of the surrounding space and dividing (clustering) it into certain areas are carried out using the robot's pervasive computer vision, which distinguishes the path of movement and emerging obstacles in order to avoid them. This model of behavior, comparable to the human function of spatial orientation, is not explicitly encoded; it is the result of dynamic feedback between the robot and the environment and of visual homing mechanisms (Hafner, 2004:180).

Third, dealing with uncertainty and incompleteness of knowledge about the real world is a major challenge for robotics in a real, unrestricted and uncontrolled environment. Here the robot (embodied artificial intelligence) faces the difficulties of cognitive reasoning in many environments that are complex and unknown to it. A possible way out is probabilistic assessment of reality, similar to human value judgments: a priori knowledge and models are combined with inference expressed not as binary values of 0 or 1, but as degrees of probability of an event (0, 0.1, 0.2, ..., 1), a more comprehensible process that reduces the complexity of perceiving the environment (Bellot et al., 2004:186).
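To illustrate the kind of graded probabilistic assessment referred to above, here is a minimal Bayesian update of the robot's belief that an obstacle lies ahead; the sensor reliability figures are illustrative assumptions, not values from the cited work.

```python
P_DETECT_GIVEN_OBSTACLE = 0.9   # assumed sensor sensitivity (illustrative)
P_DETECT_GIVEN_CLEAR = 0.1      # assumed false-alarm rate (illustrative)


def update(prior: float, detected: bool) -> float:
    """Combine prior knowledge with one sensor reading using Bayes' rule."""
    if detected:
        numerator = P_DETECT_GIVEN_OBSTACLE * prior
        denominator = numerator + P_DETECT_GIVEN_CLEAR * (1.0 - prior)
    else:
        numerator = (1.0 - P_DETECT_GIVEN_OBSTACLE) * prior
        denominator = numerator + (1.0 - P_DETECT_GIVEN_CLEAR) * (1.0 - prior)
    return numerator / denominator


belief = 0.2                             # a priori degree of belief in "obstacle ahead"
for reading in (True, True, False):      # a short stream of sensor readings
    belief = update(belief, reading)
print(round(belief, 2))                  # a graded degree of belief, not a hard 0 or 1
```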

Fourth, sensory structures, spatial orientation and probabilistic assessments may allow an embodied artificial intelligence system to form its own internal model of the world. The difficulty, however, is that this model of reality is interpreted through the model of the system's own body. It turns out that sensory information modulates the picture of reality, and the constant introduction of its updated version into the robot's program (rewriting) leads to better machine learning and will raise the question of so-called "machine" consciousness in the future (Holland, 2004:37).

At the same time, the optimism of foreign scientists regarding the fate of embodied artificial intelligence, based on individual local successes, is premature. The technological solutions presented generally demonstrate only an imitation, an artificially modeled image of the complex mechanism of the structure and functioning of human consciousness.

Along with embodied artificial intelligence, the regime of legal restriction should extend to "open", i.e. self-learning, self-programming and self-improving, algorithms. Here the developer formulates only the goal, while the ways and means of achieving it are determined by the algorithm itself (the declarative programming paradigm).

For example, the search for texts of legal documents in reference legal systems through a set of keywords (instructions from the user) can be supplemented by a pop-up hint (an element of artificial intelligence) based on analysis of the previous requests of the user or of other persons. To do this, however, the machine must, as they say, learn and develop these abilities.
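A minimal sketch of such a learned hint, assuming a plain log of past queries rather than the internals of any particular reference legal system:

```python
from collections import Counter

# A toy log of past queries by this user and others (illustrative data).
QUERY_LOG = [
    "consumer rights protection",
    "consumer rights refund",
    "civil code article 1079",
    "consumer rights protection",
]


def suggest(prefix: str, log: list, k: int = 2) -> list:
    """Suggest the most frequent past queries that start with the typed prefix."""
    matches = Counter(q for q in log if q.startswith(prefix.lower()))
    return [query for query, _ in matches.most_common(k)]


print(suggest("consumer", QUERY_LOG))   # the "hint" is learned from previous requests
```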

Hence, a substantial difficulty arises: the possible loss of full or partial human control and the problem of the transparency of decisions made by such autonomous or semi-autonomous artificial intelligence systems. As the degree of autonomy increases, the probability of full human control over their actions only decreases; this requires adequate legal regulation.

The options for solving this problem are as follows. First, constraint programming, which makes it impossible for artificial intelligence to act in contradiction with the goals and objectives of a human being. Restrictions can be imposed on the chosen means of achieving the goal, on multi-purpose projects, on the amount of hardware capacity, etc. Legislative consolidation of this duty under threat of sanctions will only strengthen the role of such technological restrictions.
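The sketch below shows this general idea in its simplest form: a proposed action is carried out only if it satisfies every human-imposed constraint. The constraints themselves are illustrative assumptions, not requirements drawn from the Strategy or the Concept.

```python
# Human-imposed constraints that any proposed action must satisfy (illustrative only).
CONSTRAINTS = [
    lambda action: action["cpu_hours"] <= 100,          # cap on hardware capacity
    lambda action: action["purpose"] == "monitoring",   # only the purpose set by the human
    lambda action: not action["affects_constitutional_rights"],
]


def permitted(action: dict) -> bool:
    """An action is allowed only if no human-imposed constraint is violated."""
    return all(check(action) for check in CONSTRAINTS)


proposal = {"cpu_hours": 40, "purpose": "monitoring",
            "affects_constitutional_rights": False}
print(permitted(proposal))   # True: the proposal stays within the human-set limits
```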

Secondly, the programmed self-destruction of an artificial intelligence system. For military purposes it is used to prevent exported military equipment from being turned against its own state by means of a "friend-or-foe" code. As a civil technology, self-destruction can be triggered if programmed restrictions cannot be implemented in the current environment, or if there is an attempt to overcome such restrictions (removal of protection by the system itself, malicious hacking, etc.). Legislative consolidation of self-destruction may be made conditional on the absence of violations of the rights and legitimate interests of other persons and of public interests (Part 3 of Article 55 of the Constitution of the Russian Federation).

Third, the "red button", i.e. a software function that allows the developer (at the code level) or the user (at the interface level) to independently decide to terminate the operation of an artificial intelligence system if its functioning causes, or creates a real threat of, harm to a human being and/or to civil rights, freedoms, public and state interests. Obviously, such a right of the developer or user must be specified by statute and, for obvious reasons, cannot be limited or excluded by the terms of a contract.
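A minimal sketch of such a "red button", assuming a simple long-running loop rather than any particular system architecture:

```python
import threading
import time

red_button = threading.Event()   # can be set by the developer or the user


def ai_worker() -> None:
    """A long-running AI task that checks the kill switch on every cycle."""
    while not red_button.is_set():
        # ... one step of the system's normal work would go here ...
        time.sleep(0.1)
    print("operation terminated by the red button")


worker = threading.Thread(target=ai_worker)
worker.start()
time.sleep(0.5)       # the system runs until a human decides otherwise
red_button.set()      # the human presses the red button
worker.join()
```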

Fourth, the "black box" model, based on the presumption that it is hardly possible to fully justify and explain a decision made by an autonomous or semi-autonomous artificial intelligence. In this case it is necessary, at least, to record the decision-making process and the set of actions performed (in the form of a unique log file or transaction log), and then to make efforts at the expert level to explain the decision. At the same time, such a model leaves doubts about its social and legal effectiveness because of its hypothetical character. Moreover, the opinions expressed in the literature on legislative consolidation of an individual's right to receive information about the reasons and grounds for a legal decision concerning him or her made by an "open" artificial intelligence system invite skepticism (Kuteynikov et al., 2020:147). Consolidation of such a subjective right is a fiction; it would be unenforceable given the insufficient level of scientific and technical knowledge about the machine learning of such algorithms.
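A minimal sketch of such a decision log, assuming a plain append-only file; the record fields are illustrative, not a prescribed format.

```python
import json
import time

LOG_PATH = "decision_log.jsonl"   # hypothetical append-only transaction log


def log_decision(inputs: dict, decision: str, model_version: str) -> None:
    """Append one record of an automated decision for later expert review."""
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "decision": decision,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(record, ensure_ascii=False) + "\n")


log_decision({"applicant": "anonymized-001", "score": 0.83},
             decision="refer to human review",
             model_version="demo-0.1")
```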

Fifth, the creation of dual expert systems in which a mother system controls a child agent. The child intelligent system constantly moves in the surrounding information environment and corrects the dynamics of its behavior on the basis of environmental data, while the mother system performs remote monitoring of the autonomous unit over broadband wireless communication. The purpose of such dual systems is to prevent the design of prototypes, or changes to their knowledge base, that are unacceptable and violate social norms (Stefanuk et al., 2020:460).
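The supervisory idea can be sketched in its barest form as follows: the child agent proposes a change to its knowledge base, and the mother system vetoes anything outside an allowed set. The whitelist rule is an illustrative assumption, not the mechanism described in the cited work.

```python
ALLOWED_TOPICS = {"navigation", "energy_saving"}   # illustrative whitelist


class MotherSystem:
    """Remote supervisor that vetoes unacceptable knowledge-base changes."""

    def review(self, proposed_fact: dict) -> bool:
        return proposed_fact.get("topic") in ALLOWED_TOPICS


class ChildAgent:
    def __init__(self, supervisor: MotherSystem):
        self.supervisor = supervisor
        self.knowledge_base = []

    def learn(self, fact: dict) -> bool:
        # Every update is sent to the mother system before it takes effect.
        if self.supervisor.review(fact):
            self.knowledge_base.append(fact)
            return True
        return False


agent = ChildAgent(MotherSystem())
print(agent.learn({"topic": "navigation", "rule": "avoid the obstacle"}))   # True: accepted
print(agent.learn({"topic": "weapon_use", "rule": "engage the target"}))    # False: vetoed
```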

Swarm (collective) artificial intelligence is a community of artificial intelligences, including robots, that interact locally with neighboring intelligent systems on the basis of certain principles. In this way a coordinated collective behavior of the community emerges; it has the quality of integrity (emergence) arising from the joint efforts of all. The technology grew out of biological approaches that studied the behavior of ant colonies, bee swarms, schools of fish and flocks of birds. Hence such characteristics of swarm artificial intelligence as autonomy, self-organization (decentralization), adaptability, stability and scalability (Farooq & Di Caro, 2008:101).

The main difficulty in programming swarm intelligence is to find optimal trajectories of interaction among such artificial members of the community; balanced against a shared environment, the desired collective behavior (reliable and flexible) can be obtained from simple individual rules of interaction (Trianni et al., 2008:164).
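As a minimal illustration of how simple local rules produce collective behavior, the sketch below moves each agent only toward the average position of its nearby neighbors; no agent has a global plan. The neighborhood radius and step size are illustrative assumptions.

```python
import random

RADIUS = 2.0   # illustrative neighborhood radius
STEP = 0.1     # illustrative step size


def swarm_step(positions: list) -> list:
    """Each agent drifts toward the mean of its nearby neighbors (a purely local rule)."""
    updated = []
    for x, y in positions:
        near = [(px, py) for px, py in positions
                if (px - x) ** 2 + (py - y) ** 2 <= RADIUS ** 2]
        cx = sum(px for px, _ in near) / len(near)
        cy = sum(py for _, py in near) / len(near)
        updated.append((x + STEP * (cx - x), y + STEP * (cy - y)))
    return updated


swarm = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(20)]
for _ in range(100):
    swarm = swarm_step(swarm)   # clusters emerge without any central controller
print(swarm[:3])
```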

The application of the principles of swarm artificial intelligence entails ambiguous consequences for society and the individual. Paragraph 30 of the Strategy names the achievement of complete algorithmic imitation of individual and collective biological organisms (a "swarm of bees", an "anthill") as one of the priorities of state scientific and technical policy in the context of the possible implementation of the idea of universal (strong) artificial intelligence. This thesis, however, seems controversial and at the moment insufficiently justified. We suggest thinking seriously about whether a modern human being needs so-called "strong" artificial intelligence at all.

The more freedom artificial intelligence is given and the more it learns, the more likely it is to escape human control. It is becoming increasingly difficult for a human to compete with collective artificial intelligence communities that process large amounts of legal information with great speed and accuracy. Is not the search for its technological models connected with humanity's voluntary surrender of leadership in the modern world and, in particular, with the transformation of law into a secondary "software" application to digital codes?

The legal risks of applying swarm artificial intelligence are extremely high, even when the embodied approach is used. In engineering and technical research there is a tendency to centralize the idea of a collective swarm, to single out within it the leader of a global project, the owner of the main "recipe", and to lock the management structure into a hierarchy.

For example, the model of a qualitative cognitive agent includes, according to a number of authors, a knowledge representation model, behavioral planning, a subsystem of agent interaction, and a general system of component management (Kulinich, 2018:102-103). A top-level management algorithm has been proposed that switches control between the basic algorithms of reinforcement learning, random walk, and rule-based planning, which allows an individual robot to purposefully (but without a human!) solve various tasks in the environment specified for it and under changing environmental conditions (Rovbo, 2019:44).

We believe that in the current circumstances the law should provide guarantees against the uncontrolled use of swarm intelligent systems within the regime of restrictions and prohibitions. Guided by the generally recognized norms and principles of international law, swarming technologies of drones and other devices for military purposes should obviously be prohibited, while the development and pilot operation of such systems for civilian purposes should be carried out in closed environments under the control of specialists. For example, this may be limited to the local scope of an enterprise (robotic production of household appliances) or a common training ground used by students acquiring digital competencies (Bokova, 2020:286).

Hence, the corresponding legal regulation of swarm artificial intelligence can today be carried out within the framework of experimental legal regimes and developed for introduction into Russian civil circulation through technical standards and regulations.


Artificial intelligence and national legal order

In the digital age, the problem of artificial intelligence has universal significance for humanity. Human and artificial intelligence differ on many criteria, above all on the question of values, since philosophy generally assumes that values are a phenomenon of exclusively human, social and cultural nature. It is rightly noted that the value foundations of law cannot be completely "copied" by artificial intelligence because their essential bases do not match: human creative activity versus algorithms and machine operations (Rafalyuk, 2020:860). We call artificial intelligence "friendly" if it is able to reproduce the values that we have created for ourselves in society.

However, teaching human values to artificial intelligence is an almost unsolvable task at the current stage of development of the neurosciences and cybernetics. It is fundamentally different, for example, from teaching the rules of navigating a certain space. Hence, values can only be "put" into robots as programs and uploaded to their databases as "knowledge"; this imposes high ethical requirements on the creators of such programs. If we teach artificial intelligence to independently formulate goals and to classify useful (important) information in accordance with these goals, i.e. to extract knowledge, then we accept that it can formulate its own goals and objectives, which may not conform with ours. In that case there is only one step from "friendly" to "hostile" artificial intelligence.

Each legal order retains its own specifics in the digital age, depending on national and cultural identity. Attitudes to artificial intelligence are influenced by the philosophical worldview of the population and of scientists, by traditions, psychology, religious norms and other factors, so it is necessary to take into account the peculiarities of the society in which this problem is discussed and modeled. The boundaries between different legal systems often lie within their fundamental values.

There are two major axiological research directions for the development of artificial intelligence and robotics in national legal orders. One of them, more typical of the Western and European legal orders, is human cyborgization, i.e. the expansion of the capabilities of the human body through the incorporation of programmable technological components. These include Elon Musk's "brain chips", medical implants, and products of biomedical technologies and genetic engineering. The general problem of this direction is the difficulty of distinguishing between therapeutic and experimental (risk) technologies; the boundary between them is fluid and requires careful ethical and legal discussion.

The second direction, widespread in the countries of Southeast Asia, is the anthropomorphization of robots, in which artificial intelligence mechanisms and technologies are improved through innovations simulating biology. The idea of "animating" things, of endowing technics and technologies with "soul" and "intelligence", inherent in Eastern cultures, is embodied in humanoid robots as representatives of this approach (Seredkina, 2010:138—141). This second path is closer to the Russian legal order, although, as noted above, positive therapeutic bioengineering is also popular in Russian society, and this sphere of public relations has long awaited detailed legal regulation.

When choosing a direction, the main criterion of the necessity and usefulness of artificial intelligence is the improvement of the quality of life and social well-being of humans (Rybakov, 2021:31).

The development of artificial intelligence transforms traditional views of the domestic law-making process, drawing it into the general trend of digitalization of public administration. Nevertheless, we are very critical of ideas of direct regulation of public relations through self-changing digital codes, "machine-readable" law, digital microdirectives, etc. The main problem raised by such ideas is the asserted similarity between law as a regulator of public relations and code as a regulator of technical processes. This similarity rests on their common algorithmic basis; it is argued that law, too, is to a certain extent an algorithm (Zenin et al., 2020:99).

Complete isomorphism between law and code is impossible because these phenomena arise at different levels of matter: the mechanical (technical) and the social. Natural human language is available to everyone as a means of communication and of grasping the meaning of legal rules. Replacing the human-language paradigm of law with a unified international digital legal language could lead to the elitism of that language and of its specific producers, the programmers. Code sequences of characters would be difficult or impossible for an ordinary citizen to understand.

At the same time, law exists as a means of social governance for ordinary people. Moreover, the same problems of protecting the security and confidentiality of personal data from virus attacks, technical failures, etc. remain topical for this new language. This is especially noticeable when code unpacks incorrectly and the equipment "freezes", and we still turn to the system administrator for help. It turns out that "whoever owns the code owns the law". Is this the formulation of a new paradigm of law?

The option of developing and adopting statutes in the traditional "paper" and human format is more realistic and reliable; however, the mechanism of their execution may well be partially "digitized". It is necessary to improve the software and optimize the information and communication technologies that accompany law-making procedures. On these grounds we can fully agree with M.L. Davydova that so-called "smart" regulation in the legislative process is nothing other than an analogue of the most rational, effective and goal-oriented human law-making (Davydova, 2020:27).

The issue of recognizing multiword and complex expressions becomes relevant when legislative texts are "digitized" using popular technologies of vector representation of words (Loukachevitch & Parkhomenko, 2018:112). Modern legal reference databases often contain direct cross-references between interrelated legal documents, but in practice it is also necessary to reveal the implicit links between them. This simplifies the procedure of statutory interpretation and involves constructing and describing labeled datasets for marking up the corpus of legal texts and tracing the relationships between them (Devyatkin et al., 2020:229).
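A minimal sketch of the underlying idea of vector representation, using toy hand-made vectors instead of embeddings trained on a real legal corpus: documents (or expressions) whose vectors point in similar directions are treated as implicitly related.

```python
import math

# Toy vectors standing in for trained word or document embeddings (illustrative only).
VECTORS = {
    "consumer protection act": [0.9, 0.1, 0.2],
    "consumer rights ruling": [0.8, 0.2, 0.3],
    "road traffic regulation": [0.1, 0.9, 0.4],
}


def cosine(a: list, b: list) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


query = "consumer protection act"
for document, vector in VECTORS.items():
    if document != query:
        # A high similarity score suggests an implicit link worth showing to the lawyer.
        print(document, round(cosine(VECTORS[query], vector), 2))
```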

The needs of the time dictate the necessity of organizing and processing big data in the legislative process, including citizens' personal data, which are most often poorly structured (Djukova et al., 2019:116) or entirely unstructured (Nevzorova & Nevzorov, 2019:130). In the Russian Federation the result of this analytical activity must be marked-up and structured datasets uploaded to and stored on publicly accessible national digital platforms (paragraph 37 of the Strategy). For this reason, the large-scale introduction of citizens' personal data into commercial circulation in order to accelerate digital economic development (Magomedova et al., 2020:1017—1020) seems the least preferable option; Russian society does not appear ready for it yet.

Finally, the introduction of artificial intelligence into justice, a popular topic today, presupposes solving a fundamental ethical problem: can a robot be a moral agent? Implementation of the concept of lawful and fair justice by a robot requires that it clearly understand such ethical categories as free will, responsibility, social role, moral concept, and the distinction between "right" and "wrong", that is, the main ethical categories that have been debated for centuries.

The robot judge must be able to experience emotions and empathize with people, as well as possess social communication skills. In addition, a number of key issues of justice rest on the concept of the "I", i.e. the presence of self-consciousness in the robot judge (Karpov, 2020:62). Yet the available technological groundwork for these problems is very small. Therefore, there is still no sufficient reason to believe that a robot can properly replace a human in carrying out activities with such a high mission.

Conclusion

Thus, the main elements of the concept of integrating artificial intelligence into the legal system are the following:

1) preservation and further evolution of the traditional basis of legal regulation in the form of "paper" statutes, the practice of their application, and the generally recognized conceptual and categorical apparatus of legal science, thereby ensuring the continuity of legal development;

2) establishing the features of legal regimes depending on the type of artificial intelligence;

3) critical comprehension and adaptive inclusion of foreign experience in the legal regulation of artificial intelligence into the domestic legal order, taking into account domestic value approaches.

The watershed between the projects of humanistic digitalization and of digital dehumanization of society is, among other issues, the problem of general (strong) artificial intelligence. However, the scientific community does not currently have a unified, well-established view of what cognitive architecture it should have, how to classify its models, or on what mechanisms and principles of operation it can be built. Consequently, as long as we are talking about weak (highly specialized) artificial intelligence, any prospect of digital "enslavement" of humans by the machine is hardly justified and is most likely an exaggeration. Only a reasonable and responsible human approach to integrating artificial intelligence into the legal system will make it possible to realize the slogan: the "digit" for the human being, not the human being for the "digit".

References / Список литературы

Alekseev, S.S. (1989) General permissions and general prohibitions in Soviet law. Moscow, Yuridicheskaya literatura Publ. (in Russian).

Bellot, D., Siegwart, R., Bessiere, P., Tapus, A., Coue, C. & Diard, J. (2004) Bayesian Modeling and Reasoning for Real World Robotics: Basics and Examples. In: Iida F., Pfeifer R., Steels L. & Kuniyoshi Y. (eds.). Embodied Artificial Intelligence. Lecture Notes in Computer Science. Vol. 3139. Springer, Berlin, Heidelberg. pp. 186—201. Doi: 10.1007/978-3-540-27833-7_14.

Bokova, L.N. (2020) Legal regime of creation of a secure digital educational environment. RUDN Journal of Law. 24 (2), 274—292. Doi: 10.22363/2313-2337-2020-24-2-274-292. (in Russian).

Cherdantsev, A.F. (2002) Theory of State and Law: Textbook for universities. Moscow, Yurait-M Publ. (in Russian).

Davydova, M.L. (2020) "Smart regulation" as a basis for improving modern law-making. Journal of Russian Law. (11), 14—29. Doi: 10.12737/jrl.2020.130. (in Russian).

Devyatkin, D., Sofronova, A. & Yadrintsev, V. (2020) Revealing Implicit Relations in Russian Legal Texts. In: Kuznetsov S.O., Panov A.I. & Yakovlev K.S. (eds.). Artificial Intelligence. RCAI 2020. Lecture Notes in Computer Science. Vol. 12412. Springer, Cham. pp. 228—239. Doi: 10.1007/978-3-030-59535-7_16.

Djukova, E.V., Masliakov, G.O. & Prokofyev, P.A. (2019) Logical Classification of Partially Ordered Data. In: Kuznetsov S.O. & Panov A.I. (eds.). Artificial Intelligence. RCAI 2019. Communications in Computer and Information Science. Vol. 1093. Springer, Cham. pp. 115—126. Doi: 10.1007/978-3-030-30763-9_10.

Estep, M. (2006) Self-Organizing Natural Intelligence. Issues of Knowing, Meaning, and Complexity. Springer, Dordrecht.

Farooq, M. & Di Caro, G.A. (2008) Routing Protocols for Next-Generation Networks Inspired by Collective Behaviors of Insect Societies: An Overview. In: Blum C. & Merkle D. (eds.). Swarm Intelligence. Natural Computing Series. Springer, Berlin, Heidelberg. pp. 101—160. Doi: 10.1007/978-3-540-74089-6_4.

Fetyukov, F.V. (2020) Development of legislation on human cloning: world experience and a promising legal model for modern Russia. RUDN Journal of Law. 24 (4), 881—900. Doi: 10.22363/2313-2337-2020-24-4-881-900. (in Russian).

Fu, B., Mettel, M.R., Kirchbuchner, F., Braun, A. & Kuijper, A. (2018) Surface Acoustic Arrays to Analyze Human Activities in Smart Environments. In: Kameas A. & Stathis K. (eds.). Ambient Intelligence. AmI 2018. Lecture Notes in Computer Science. Vol. 11249. Springer, Cham. pp. 115—130. Doi: 10.1007/978-3-030-03062-9_10.

Hafner, V.V. (2004) Agent-Environment Interaction in Visual Homing. In: Iida F., Pfeifer R., Steels L. & Kuniyoshi Y. (eds.). Embodied Artificial Intelligence. Lecture Notes in Computer Science. Vol. 3139. Springer, Berlin, Heidelberg. pp. 180—185. Doi: 10.1007/978-3-540-27833-7_13.

Holland, O. (2004) The Future of Embodied Artificial Intelligence: Machine Consciousness? In: Iida F., Pfeifer R., Steels L. & Kuniyoshi Y. (eds.). Embodied Artificial Intelligence. Lecture Notes in Computer Science. Vol. 3139. Springer, Berlin, Heidelberg. pp. 37—53. Doi: 10.1007/978-3-540-27833-7_3.

Karpov, V.E. (2020) Can a Robot Be a Moral Agent? In: Kuznetsov S.O., Panov A.I. & Yakovlev K.S. (eds.). Artificial Intelligence. RCAI 2020. Lecture Notes in Computer Science. Vol. 12412. Springer, Cham. pp. 61—70. Doi: 10.1007/978-3-030-59535-7_5.

Kulinich, A. (2018) Architecture of a Qualitative Cognitive Agent. In: Kuznetsov S., Osipov G. & Stefanuk V. (eds.). Artificial Intelligence. RCAI 2018. Communications in Computer and Information Science. Vol. 934. Springer, Cham. pp. 102—111. Doi: 10.1007/978-3-030-00617-4_10.

Kuteynikov, D.L., Izhaev, O.A., Zenin, S.S. & Lebedev, V.A. (2020) Algorithmic transparency and accountability: legal approaches to solving the problem of the "black box". Lex russica. 73 (6), 139—148. Doi: 10.17803/1729-5920.2020.163.6.139-148. (in Russian).

Loukachevitch, N. & Parkhomenko, E. (2018) Recognition of Multiword Expressions Using Word Embeddings. In: Kuznetsov S., Osipov G. & Stefanuk V. (eds.). Artificial Intelligence. RCAI 2018. Communications in Computer and Information Science. Vol. 934. Springer, Cham. pp. 112—124. Doi: 10.1007/978-3-030-00617-4_11.

Magomedova, O.S., Koval, A.A. & Levashenko, A.D. (2020) Trade in data: different approaches, one reality. RUDN Journal of Law. 24 (4), 1005—1023. Doi: 10.22363/2313-2337-2020-24-4-1005-1023. (in Russian).

Mamychev, A.Yu. & Miroshnichenko, O.I. (2019) Modeling the future of law: problems and contradictions of legal policy in the field of regulation of artificial intelligence systems and robotic technologies. Legal policy and legal life. (2), 125—133. (in Russian).

Nevzorova, O. & Nevzorov, V. (2019) Ontology-Driven Processing of Unstructured Text. In: Kuznetsov S.O. & Panov A.I. (eds.). Artificial Intelligence. RCAI 2019. Communications in Computer and Information Science. Vol. 1093. Springer, Cham. pp. 129—142. Doi: 10.1007/978-3-030-30763-9_11.

Panchenko, V.Yu. & Romashov, R.A. (2018) The digital state — the conceptual basis of the global world order. State and Law. (7), 99—109. Doi: 10.31857/S013207690000235-0. (in Russian).

Popova, A.V. (2020) Legal aspects of artificial intelligence ontology. State and Law. (11), 115—127. Doi: 10.31857/S102694520012531-5. (in Russian).

Rafalyuk, E.E. (2020) The law of the future: searching for new truths or conserving traditional values? Trans. into Engl. by A.I. Nikolaeva. RUDN Journal of Law. 24 (4), 843—863. Doi: 10.22363/2313-2337-2020-24-4-843-863. (in Russian).

Rovbo, M. (2019) Hierarchical Control Architecture for a Learning Robot Based on Heterogenic Behaviors. In: Kuznetsov S.O. & Panov A.I. (eds.). Artificial Intelligence. RCAI 2019. Communications in Computer and Information Science. Vol. 1093. Springer, Cham. pp. 44—55. Doi: 10.1007/978-3-030-30763-9_4.

Rybakov, O.Yu. (2021) Quality of life, human well-being, the value of law in the conditions of digital reality. In: Man, society, law in the conditions of digital reality. Collection of articles. Moscow, Rusains Publ. pp. 15—31. (in Russian).

Рыбаков О.Ю. Качество жизни, благополучие человека, ценность права в условиях цифровой реальности // Человек, общество, право в условиях цифровой реальности. Сборник статей. М.: Русайнс, 2021. С. 15—31. Seredkina, E.V. (2010) Analysis of cyborgization and anthropomorphization programs in the context of high-tech philosophy. Perm National Research Polytechnic University. Culture, history, philosophy, law. (3), 137—146. (in Russian).

Середкина Е.В. Анализ программ киборгизации и антропоморфизации в контексте философии «хай-тек» // Вестник Пермского государственного технического университета. Культура, история, философия, право. 2010. № 3. С. 137—146. Spitsyn, I.N. & Tarasov, I.N. (2020) Artificial Intelligence in the Administration of Justice: Theoretical Aspects of the Legal Regulation (Articulation of the Issue). Actual Problems of Russian Law. 15 (8), 96—107. Doi: 10.17803/1994-1471.2020.117.8.096-107. (in Russian). Спицин И.Н., Тарасов И.Н. Использование искусственного интеллекта при отправлении правосудия: теоретические аспекты правовой регламентации (постановка проблемы) // Актуальные проблемы российского права. 2020. Т. 15. № 8. С. 96—107. Doi: 10.17803/1994-1471.2020.117.8.096-107. Sporns, O. & Pegors, T.K. (2004) Information-Theoretical Aspects of Embodied Artificial Intelligence. In: Iida F., Pfeifer R., Steels L., Kuniyoshi Y. (eds.). Embodied Artificial Intelligence. Lecture Notes in Computer Science. T. 3139. Springer, Berlin, Heidelberg. pp. 74—85. Doi: 10.1007/978-3-540-27833-7_5. Stefanuk, V.L., Zhozhikashvily, A.V. & Savinitch, L.V. (2020) Intelligent Systems with Restricted Autonomy. In: Kuznetsov S.O., Panov A.I. & Yakovlev K.S. (eds.). Artificial Intelligence. RCAI 2020. Lecture Notes in Computer Science. Vol. 12412. Springer, Cham. pp. 460—471. Doi: 10.1007/978-3-030-59535-7_34. Trianni, V., Nolfi, S. & Dorigo, M. (2008) Evolution, Self-organization and Swarm Robotics. In: Blum C. & Merkle D. (eds.). Swarm Intelligence. Natural Computing Series. Springer, Berlin, Heidelberg. pp. 163—191. Doi: 10.1007/978-3-540-74089-6_5. Yastrebov, O.A. (2018) Artificial Intelligence in the Legal Space. RUDN Journal of Law. 22 (3), 315—328. Doi: 10.22363/2313-2337-2018-22-3-315-328. (in Russian). Ястребов О.А. Искусственный интеллект в правовом пространстве // Вестник Российского университета дружбы народов. Серия: Юридические науки. 2018. Т. 22. № 3. С. 315—328. Doi: 10.22363/2313-2337-2018-22-3-315-328. Zenin, S.S., Kuteynikov, D.L., Izhaev, O.A. & Yapryntsev, I.M. (2020) Law Making in the Conditions of Algorithmization of Law. Lex russica. 73(7), 97—104. Doi: 10.17803/17295920.2020.164.7.097-104. (in Russian).

Зенин С.С., Кутейников Д.Л., Ижаев О.А., Япрынцев И.М. Правотворчество в условиях алгоритмизации права // Lex russica. 2020. Т. 73. № 7. С. 97—104. Doi: 10.17803/17295920.2020.164.7.097-104.

About the author:

Yulia A. Gavrilova — Candidate of Legal Sciences, Associate Professor of the Department of Theory and History of Law and State, Law Institute, Volgograd State University; 100, Universitetsky ave., Volgograd, 400062, Russian Federation; ORCID ID: 0000-0002-8055-4710; e-mail: gavrilova_ua@volsu.ru

