
SYSTEMATIC DISCUSSION OF ARTIFICIAL INTELLIGENCE'S PERSONALITY IN CRIMINAL LAW AND THE PUNISHABILITY FOR IMPROPER OMISSION BY THE GROUP ACTING AS THEIR GUARANTOR POSITION

CHOU-YI HSU 1,2,* and JIANG-JIA WANG 3

1 Department of Law, National Chung Cheng University, 621301 Chiayi, Taiwan; t545316@gmail.com

2 Department of Pharmacy, Chia Nan University of Pharmacy and Science, 717301 Tainan, Taiwan; t545316@gmail.com

3 Department of Law, National Chung Cheng University, 621301 Chiayi, Taiwan; ooseika@ccu.edu.tw

* Corresponding author (Chou-Yi Hsu, No. 21, Wufu Rd., Guanmiao Dist., Tainan City, Taiwan, t545316@gmail.com)

Abstract: The rapid development of artificial intelligence (AI) in the twenty-first century not only makes our lives more convenient but also brings some invisible concerns. Through the study of several typical AI applications, it is found that today's AI cannot realize its full functionality without its users. Therefore, users still bear most of the legal liability for damage to legally-protected interests caused by AI. However, despite the continuous development of AI, remedies for damage to legally-protected interests that may occur as a result of crimes of omission arising in the writing of program logic remain largely inconclusive. In addition to distinguishing the main framework among AI, machine learning, and deep learning in criminal law, this research argues, on the basis of philosophical jurisprudence, that although AI has yet to meet the personality characteristics in criminal law due to its lack of independence, the AI scientists or legal persons representing it must hold the guarantor position in an improper omission. This renders the scientists and legal persons punishable and relief petitions possible, supplying a considerable legal basis for the future formulation of legal norms on AI based on the principle of nulla poena sine lege (no crime without law, or the principle of legality).

Keywords: artificial intelligence, damage to legally-protected interests, improper omission, guarantor position, no crime without law, legislation

1. Introduction

The main areas of scientific and technological development in the twenty-first century include big data, neural networks, distributed registries, and AI. These new technologies have created the need for new criminal law theories. In recent years, although many articles have addressed the relationship between AI and the law, there is still little discussion of what AI is and of its practical application when it comes to the relationship between criminal law and the guarantor position in improper omission. A key issue among these is how to determine AI's criminal personality. Even though regulating the use of AI is for the benefit of all mankind, we must first identify the issues of AI's personality in criminal law from abstract concepts.

The guarantor position is the premise of what is provided in Article 15, Paragraph 1, of the Criminal Code of Taiwan: a person who has a legal obligation to prevent the occurrence of a criminal result, and is able to but fails to do so, shall be treated in the same way as one who causes the result by active conduct. A similar logic exists in other countries where criminal law is based on the nulla poena sine lege principle (Priambada & Pratiwi, 2022). In other words, where the legislator establishes one's obligation to prevent the occurrence of a crime, but one fails to take appropriate measures, i.e., fails to act, it is tantamount to committing the crime by means of action. Seen in another light, if there is no obligation to prevent crime, then there are no constituent elements of the crime even where there is a failure to act.

Modern AI is still unable to think and operate independently. Therefore, the existence of its personality in criminal law is difficult to confirm, and its punishability remains inconclusive. However, for AI scientists or the legal persons representing it to be punishable, they must have personality characteristics in the eyes of the law. Studying the laws and statutes, we find that if an AI developer deliberately directs a violation of criminal law, he or she should be considered as committing an active crime (Blount, 2021). However, if the damage to legally-protected interests is caused by hidden errors or the continuous inaction of the program itself, then whether it constitutes a crime must be judged by whether the group representing the AI holds the status of guarantor. This question is therefore worth exploring.

The sources of data for this study are mainly legal books, journals, publications, and doctrines. Philosophical arguments are based on legal knowledge. The method of study is mainly the review of materials with a jurisprudential approach, covering the definition of AI; the classification of prescribed crimes under the nulla poena sine lege principle; whether AI has personality in criminal law and the omission arising from its applications; whether AI scientists or the legal persons representing it hold a guarantor position; and future legislation and the issues that warrant attention. In addition, unlike a case law-based approach, this study takes as its direction the framework of the principle of legality in the criminal law of Taiwan and other countries.

2. Literature Review

2.1 What is AI?

The concept of AI extends back to ancient times, when Hephaestus, the Greek god of craftsmanship, built metal automata to help him in his workshop on the island of Crete more than 2,500 years ago (McCorduck et al., 1977). Aside from some basic man-made robots, AI remained a regular feature of mythological and fictional themes for thousands of years, but humans were not content with inanimate objects endowed with god-given intelligence and wished to reverse the process, or to forge the gods, as Pamela McCorduck puts it in Machines Who Think (McCorduck & Cfe, 2004; Piel & Seising, 2022).

This seemed to become possible in the twentieth century, especially as the Dartmouth Summer Research Project on Artificial Intelligence in 1956 provided a significant impetus (McCarthy et al., 2006). This program is generally regarded as the founding of AI as a discipline, so 1956 is widely treated as the first year of AI. At that summer's conference, the pioneers of the field dreamed of using the emerging computers to build complex machines with characteristics equivalent to human intelligence. This is the concept of what is called General AI: a magical machine that has all human senses (and possibly even more) and reasoning, and thinks like a human.

Before that, however, A. M. Turing, the British mathematician, cryptographer, and logician widely regarded as the father of computing and AI, put forward a thought experiment, the imitation game, in his 1950 paper Computing Machinery and Intelligence, to test whether computers can display intelligence equal to or indistinguishable from a human's (Turing, 2009; Turing & Haugeland, 1950). If a tester posed the same series of questions to a person and a machine and the answers received left the tester unable to distinguish between the two, the machine was deemed to have passed the test (Copeland, 2000; Powell, 2019). Turing's wartime cryptanalysis, rather than the test itself, helped the Allies break the encrypted communications of the German army and thereby win World War II (Alexander, 2019); the imitation game, for its part, is the precursor concept of AI in the twenty-first century.

However, in the late twentieth century, owing to computers' limited calculating speed and graphic computing ability, AI remained confined to simple applications, such as the detection of defective products and the defeat of world chess champion Garry Kasparov by IBM's Deep Blue in 1997. In the twenty-first century, by contrast, as the progress of Moore's Law continued to yield breakthroughs in computing power (Schaller, 1997), many AI capabilities have surpassed human imagination. AlphaGo for the game of Go (Granter et al., 2017) and Language Models for Dialog Applications (LaMDA), which recently sparked a debate on whether AI is conscious (Thoppilan et al., 2022), mainly consist of automatically building a response system from a large amount of data through algorithmic learning. This has also made machine learning and deep learning the trend of the time.

To sum up, AI refers to the intelligence expressed by man-made machines. It usually alludes to the ability of computers to simulate human thinking processes and to imitate human capabilities or behaviors (Baduge et al., 2022). It consists of systematizing human experience and integrating it into information systems, so that the systems or the machines can change their operation methods according to operating environmental changes, just like humans.

There are diverse views on the scope of AI, and an increasing number of applications and variations have been generated as time progresses. The structure of AI, machine learning, and deep learning is shown in Figure 1. Among them, machine learning is the most basic application method. It analyzes data through algorithms, learns from the data, and judges or predicts things in the real world, in place of manually written software programs with specific instructions for a special task. In short, it uses a large amount of data and algorithms to train a machine or program to perform tasks. This is where algorithms such as decision tree learning, inductive logic programming, clustering, and reinforcement learning were developed, but these fall short of the ultimate goal of General AI, achieving only the narrower aims of Narrow AI.
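To make the training process just described concrete, the following minimal sketch fits a decision tree, one of the algorithms named above, to toy data. It assumes the scikit-learn library, and the sensor readings and labels are invented purely for illustration; it is not drawn from any system discussed in this article.

```python
# A minimal sketch of machine learning as described above: an algorithm
# analyzes labeled data, learns a rule, and applies it to a narrow task
# (here, a hypothetical defective-product judgment).
from sklearn.tree import DecisionTreeClassifier

# Invented toy data: each sample holds two sensor readings;
# label 1 = defective part, label 0 = acceptable part.
X = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]]
y = [1, 1, 0, 0]

model = DecisionTreeClassifier(max_depth=2)
model.fit(X, y)  # "training": inferring decision rules from the data

print(model.predict([[0.85, 0.15]]))  # applies the learned rule -> [1]
```

The point relevant to this article is that the rule applied at prediction time is fixed entirely by the data and logic the developers supplied during training.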

To address the above limitation, artificial neural networks were derived from early machine learning, a development spanning several decades. With the growth of brain neuroscience, the interconnection of neurons became the inspiration for the development of neural networks. Artificial neural networks have discrete layers of connections and fixed directions of data propagation, unlike neurons in the biological brain, which can connect to any other neurons within a certain physical distance. Each artificial neuron assigns a weight to its input, evaluates whether the task is being performed correctly, and the final result is determined by the total value of the weights.
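The weighting mechanism can be illustrated with a minimal sketch of a single artificial neuron. The inputs, weights, and bias below are invented for illustration; in deep learning they would be adjusted automatically during training.

```python
import numpy as np

# One artificial "neuron": it assigns a weight to each input and decides
# the final result from the total value of the weighted inputs.
def neuron(inputs, weights, bias):
    total = np.dot(inputs, weights) + bias  # weighted sum of the inputs
    return 1 if total > 0 else 0            # simple threshold activation

x = np.array([0.7, 0.2, 0.1])   # hypothetical input signals
w = np.array([0.9, -0.4, 0.3])  # hypothetical learned weights
print(neuron(x, w, bias=-0.2))  # -> 1, since 0.58 - 0.2 = 0.38 > 0
```

A deep network stacks many such neurons in discrete layers, which is why responsibility for the resulting behavior traces back to how the weights were trained rather than to any single hand-written instruction.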

From the above-mentioned literature, it can be seen that machine learning and deep learning are both methods of realizing AI and its related extensions. As there is no special law governing the field of AI in Taiwan or other countries, the broad category of AI is taken as the main framework when exploring the subject in criminal law, rather than machine learning or deep learning alone.

Figure 1. Relationship between AI, machine learning, and deep learning

2.2 Types of criminal acts in criminal law

Now that the subject is set, the discussion proceeds on the basis of a country's criminal law governed by the principle of legality. Criminal acts can be divided into negligence, action, and omission; the latter can be further divided into proper omission and improper omission according to its extent (see Figure 2 for the relevant structure). This study of the punishability of the creators of AI or the legal persons representing it, grounded in their guarantor position, rests on the necessary premise of improper omission. The classification of criminal acts is presented below.


Figure 2. Schematic diagram of criminal behavior classification

2.2.1 Criminal negligence

In criminal law, criminal negligence refers to a crime committed as a result of fault by one who lacks the element of intent. Strictly speaking, it does not refer to criminal consciousness, as it alludes to the objective standard of the defendant's conduct rather than his mental state. In other words, although a negligent offender is also criminally liable, punishment is mitigated to a certain extent owing to the lack of subjective consciousness. As far as the AI discussed in this study is concerned, even if scientists violate the duty of care, the framework of the negligence test still requires a judgment as to whether the result was avoidable. However, how machines make decisions also depends on how humans train them, and it would be unreasonable to exempt scientists from responsibility merely because of the intervention of machines. If negligence alone is used to pursue the responsibility of AI's creators or of the legal persons representing it, it may, in some cases, lead to situations in which the unfortunate consequences of an infringement of a legally-protected interest are borne by no one but the victim.

2.2.2 Crime of voluntary act

A crime of voluntary act refers to an actor who actively creates a new risk and causes actual harm to a legally-protected interest in violation of a prohibitive norm. This is also the most common form of ordinary crime. In other words, if the offender satisfies the subjective elements of knowledge and desire, the subjective and objective elements constituting the crime are present, and if he does not refrain from the violation, the conditions for a crime of voluntary act are met. If scientists or legal persons deliberately use AI to commit crimes, they can accordingly be subjects of punishment. However, this study does not take the crime of voluntary act as its main theme, given that, where AI is deliberately used to commit a crime, channels and grounds already exist to remedy those whose legal interests are infringed.

2.2.3 Criminal omission

A criminal omission offense is one in which the actor fails to perform certain obligations and, consequently, fails to eliminate an existing risk, thereby violating a mandatory norm. One is an offender by omission when, without committing any positive act, one offends against the expectations of the criminal law. It is therefore necessary to discuss whether punishment is warranted where there is no action. Jurisprudence classifies the crime of omission as proper or improper, depending on whether the crime can only be completed by omission and on whether the actor holds the guarantor position.

2.2.3.1 Proper omission

Proper omission refers to a crime that the perpetrator "can only" commit by inaction (Pochapska, 2021). It covers a crime whose content is simply the negative failure to act in violation of a law demanding that a certain action be taken; the crime is established by deliberately not doing what the law expects or orders. For example, the crime of gathering against a dispersal order, established in Article 149 of Taiwan's Criminal Code, punishes "not dispersing", so the perpetrator "can only" commit the crime by not doing so (that is, by inaction). Conversely, if the perpetrator obeys the order and "disperses", i.e., acts, the crime stipulated in this article cannot be constituted. As AI itself does not have an active or passive personality whose inaction could be considered negative, it lacks the element required for proper omission.

2.2.3.2 Improper omission

Improper omission means that the perpetrator can commit the crime not only by an act but also by "continuous" omission. As most provisions of criminal law are prohibitive norms forbidding people from engaging in certain acts, most crimes of omission in violation of criminal law are improper crimes of omission. As for the AI discussed in this study, its creator or the legal person representing it can cause damage to a legally-protected interest whether acting (actually writing the code) or not acting (allowing the AI to do something wrong on the basis of that code). However, criminal law is not explicit about the conduct required for the improper crime of omission, leaving the substantive provisions to those governing crimes of act, which causes problems in identifying the relevant conduct. It should also be noted that the establishment of an improper crime of omission must be premised on the guarantor position (Gómez, 2020), which is the focus of this research.

2.3 Guarantor position

The guarantor position refers to the obligation to prevent the occurrence of a crime. That is, when the actor is described as "having the position of a guarantor", the phrase means that he "has the obligation to prevent the occurrence of the crime." Conversely, if there is no obligation to prevent a crime from occurring, then, even where there is inaction, there is no room to establish a crime. Theoretically, this is called the guarantor position of improper omission: the actor's omission must have a social significance equivalent to action and lead to the criminal result. For an improper crime of omission to be committed, the criminal law requires that the actor hold the status of guarantor in order to bear the duty of care. This is the case of the group of people who hold the guarantor position for AI as referred to in this study.

In addition to provisions of the special law, in the doctrine, there are two categories regarding the source of guarantor position:

I. Those who have the obligation to protect

(i) The status of guarantor who has the obligation to protect a specific legally protected interest.

(ii) Specific close relationships, such as parents and spouses.

(iii) The relationship of a common group at risk for a specific purpose, such as between members of a mountain climbing team.

(iv) Those who voluntarily assume the duty of protection, such as pool lifeguards.

(v) Members of special public or corporate bodies with combined protection obligations, such as police officers.

II. Persons with supervisory obligations

(i) The status of a guarantor who has an obligation to supervise a specific source of danger, such as a manufacturer of goods or scientists and legal persons referred to by this research.

(ii) Those who are in charge of others, such as prison officers, or those in charge of offensive patients in hospitals.

(iii) A person who acts in a dangerous manner, such as a person who lights a cigarette at a gas station and causes a fire.

The above-mentioned guarantor position with the obligation of protection refers to the guarantor's obligation to protect a particular legal interest from infringement, regardless of who caused the danger. The obligation of supervision refers to the obligation to oversee a particular source of danger so that it does not harm others. The dangers and their sources mentioned previously correspond to the infringement of legally-protected interests caused by AI, i.e., faulty programs. It is therefore worth identifying the group that holds the status of guarantor of AI, from the standpoint of the obligations of protection or supervision.

3. Discussion

3.1 Does AI itself have a criminal personality?

Personality, according to the representative definition of psychologists, refers to the consistent and stable behavior patterns and ways of thinking displayed by a person when adapting to others, him/herself, things, and even the environment as a whole. But personality is by no means simple and unchanging; it has uniqueness, complexity, integrity, and persistence (Hofstee, 1994; Krishnamurthy et al., 2022; Mischel, 1977). In criminal law, as in civil and constitutional law, rights and interests are determined by first establishing whether personality exists. Therefore, it must first be confirmed whether AI possesses personality before any subsequent judgment on its punishability can be made.

There is a basic consensus that AI can participate in criminal activities through humans (Lagioia & Sartor, 2020). Some studies have pointed out a trend at the international level toward recognizing AI as an international legal person, with relevant recommendations (Talimonchik, 2021). When AI conducts an illogical behavior, its punishability and any legally justifiable decision depend on whether it has personality within the scope of criminal law. The same holds in commercial law: the corporation was created so that enterprises may independently engage in activities in their own name and enjoy corresponding rights and obligations, and, where financial and commercial crimes are committed, corresponding decisions can be made against them and bind them (Berle, 1947; Hamilton, 1970).

AI can be broadly divided into weak and strong, according to whether it can independently reason or solve problems (Chen et al., 2022). Most existing AI is classified as weak, or so-called narrow, AI, as it only focuses on solving specific problems. Yet whatever its degree, AI's impact on all levels of society has gradually deepened with the progress of science and technology. Although the scientific community believes that it may take nearly fifty years for weak AI to become strong (Brown, 2021), at a time when humans are not yet fully able to produce a clear definition of our own intelligence, what AI can do is limited to allowing machines or software to make somewhat more interesting and quicker responses. Moreover, as can be seen from the above, both the "training logic" of machine learning and the "weight distribution" of deep learning are means of achieving AI; in other words, they are responses devised by scientists for specific purposes. Existing AI systems are still only programs that tokenize everything and focus on executing an "action." They have not yet moved beyond the realm of Autonomous Machine Intelligence (AMI) and therefore do not possess the autonomous personality characteristic of ordinary people. However, although AI itself is not punishable under criminal law owing to its lack of personality, during this period of development we cannot ignore the adverse events that AI's mistakes might cause, nor the possible damage to legal interests in life and property that it might bring. There is still a need to find a logic that conforms to the principle of legality in criminal law to safeguard the rights and interests that it protects.

3.2 Exploration of AI's omission

Language Models for Dialog Applications (LaMDA) refers to conversational software developed on transformer-based neural language models: a system with millions of nodes, pre-trained on hundreds of millions of words of dialogue data and texts from the network (Le et al., 2021). We first discuss whether it has a personality. This study considers the reason LaMDA has no perceptual ability to be simple: the robot does not have the physiological functions of sensing and perceiving. It is AI software that generates complete sentences from word or sentence prompts. If a user communicates with it, harm may become inevitable where the word database collected by LaMDA's scientists is incomplete or the judgment logic is insufficiently sophisticated. In addition, empathy is an important factor in AI's ability to mislead users (Pelau et al., 2021). Empathy is understood as the ability to solve others' problems or meet their emotional needs based on understanding their feelings. In other words, it is relatively easy for users to obtain from the AI the response they want, as fed to it by scientists; it is not mere imitation through the collocation of word strings. It is precisely because of this ability that current conversational AI technology has developed to the point that many believe it to be genuine. Imagine a user with a tendency toward depression or other mental-health problems interacting with such an AI system: if the response program is insufficiently sophisticated, or is deliberately designed to mislead, the effect could be the opposite of what is intended and lead to an unpredictable event. Similar discussions exist in the medical field, where even professional psychologists may take the wrong course of action (Ferguson, 2012; Reynolds et al., 2021), not to mention that most conversational AI systems, written by scientists solely on the basis of collected words, have not undergone professional psychological evaluation. As a crucial precaution, the groups holding the guarantor position must have not only the obligation to protect but also the obligation to supervise.
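The risk of an incomplete word database can be illustrated with a deliberately crude sketch of prompt-driven sentence generation. This toy is in no way LaMDA's transformer architecture; the hand-built follow-up table below merely stands in for the dialogue data collected by developers, and its gaps propagate directly into every response.

```python
import random

# Toy illustration: extend a prompt word by word from a developer-curated
# table of likely follow-ups (hypothetical data, for illustration only).
FOLLOW_UPS = {
    "i": ["feel", "am"],
    "feel": ["sad", "fine"],
    "sad": ["today"],
    "am": ["listening"],
}

def respond(prompt_word, length=3):
    words, current = [prompt_word], prompt_word
    for _ in range(length):
        options = FOLLOW_UPS.get(current)
        if not options:  # a gap in the database silently ends the reply
            break
        current = random.choice(options)
        words.append(current)
    return " ".join(words)

print(respond("i"))  # e.g. "i feel sad today"
```

Whatever the developers left out of, or biased within, such a table cannot be corrected by the system itself, which is precisely why this article locates the obligations of protection and supervision with the people behind it.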

On the other hand, this study evaluates the advanced application of today's automated driving. According to the classification of the Society of Automotive Engineers (SAE), automated driving is divided into six levels, from zero to five, depending on the degree of human intervention in control. The driver's intervention will no longer be necessary once vehicles reach high automation (level four) or full automation (level five) in the future (Kaye et al., 2021). If an accident then occurs, responsibility cannot be attributed to the driver on the basis of the foreseeability of the danger, since, under this classification, there is no driver intervention; even the driver's duty of care must be reduced, leaving less room to hold the driver responsible for negligence. However, because AI has no personality, it is not a subject of criminal culpability, and at present little law regulates how sufficient and complete the machine-learning process should be. To this end, the legal person that manufactures and sells self-driving vehicles should be the main subject of accountability, as the program is judged according to the conditions given by the scientists who developed it. The system, however, may lack the common sense unique to personality, making the relevant functions vulnerable and leaving it unable to respond to or handle many extreme situations. In this light, the group in the position of the guarantor of AI should bear the obligation of supervision, so that a party whose legal interests are damaged can take this as the basis for a relief petition.
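The SAE classification just described can be encoded in a short sketch. The level names follow SAE J3016; the function mapping levels to the need for driver intervention is our illustrative assumption reflecting this article's argument, not part of the SAE standard itself.

```python
from enum import IntEnum

# SAE J3016 driving-automation levels (names per the standard).
class SAELevel(IntEnum):
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

def driver_intervention_required(level: SAELevel) -> bool:
    # Illustrative assumption: at levels 4 and 5 the system performs the
    # whole driving task, so the human driver need not intervene.
    return level < SAELevel.HIGH_AUTOMATION

for lvl in SAELevel:
    print(lvl.name, driver_intervention_required(lvl))
```

On this reading, once intervention is no longer required, the driver's duty of care recedes and accountability shifts toward the manufacturer or its scientists, as argued above.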

Another area for discussion is surveillance systems. A paper on deep neural network models using image processing to predict criminal behavior was planned for publication in 2020 (Hashemi & Hall, 2020). More than 1,700 researchers from statistics, machine learning, AI, law, sociology, history, and anthropology came together to oppose its publication, arguing that its claims rested on unsound scientific premises, research, and methods. In particular, no effective balance has yet been achieved between data collection and the infringement of portrait rights and liberty interests it entails. Moreover, in recent years, government officials have excessively embraced machine learning and AI as tools to legitimize state violence and strengthen political regimes, especially where a country is likely to be in turmoil. This is a computational bias regarding race, class, and gender, revealing that machine learning systems within the framework of AI actually magnify historical discrimination. If, in the future, an improper arrest or even imprisonment were to result from such a criminal identification system, the supervision obligation borne by the group holding the guarantor position (which could even be traced back to the importing state's representative legal entity) becomes very important.

Furthermore, another point to note is the application of AI in medical treatment, even though it remains at an auxiliary stage and the real diagnostic results must still be judged by doctors in the end. Consultation with professional doctors shows that smart medicine is already a trend in which basic judgments, such as X-ray image recognition, are passed on to AI. The scientists who write the judgment logic may inevitably err owing to a lack of medical knowledge, and the unique cases that may arise in medical treatment must also be considered. For infringements of personal or life interests caused by medical treatment, the duty of AI scientists or the legal entities representing it to supervise from their guarantor position is extremely important where relief is sought on account of AI's inaccurate judgment or incorrect auxiliary information.

The discussions above show that when legal interests are violated through the omission of today's artificial intelligence, relief easily fails and the victim suffers unilaterally, causing the law to lose its function of protecting justice. Therefore, establishing the group holding the guarantor position for AI, from the standpoint of the obligations of protection or supervision, is essential.

3.3 Is there a guarantor position in criminal law for AI scientists and legal persons?

It was found at the outset that both the logical judgment of machine learning and the weight distribution of deep learning fall within the scope of AI, through whose application omission may occur. It is also observable that most scientists are affected by their physiological and psychological state when writing code; in other words, the code is extremely "personalized." Its derivative applications are often set on a course before the code is typed, so that when using machine learning or deep learning products, users can only passively accept, or second-guess, the results the system produces according to the logic completed by the scientist. The same problem exists for the legal persons representing the system. This is dangerous in law, science, and ethics: if a judgment made by the AI under the scientists' variant logic is wrong, it is often difficult for users to find out; worse, they may blindly believe in the AI's judgment, which leads them further off course.

Based on the above, an improper crime of omission arises where the scientists or the legal persons behind AI, by virtue of their position as guarantors, are legally obligated to prevent a crime from occurring and fail to do so when able, or commit, through passive omission, a crime that can usually be accomplished only by positive conduct. It can therefore be stated that the group representing AI must bear the guarantor position's obligations of protection and supervision. Since scientists and legal persons have a certain degree of understanding of, and control over, AI, there must be a duty to protect specific legal interests and a duty to supervise in order to guard the system against foreseeable risks. This provides both the ground of punishability and the basis for claims of relief for damage to legal interests resulting from omission.

In addition, under the legal principle of nulla poena sine lege, there can be no crime without a law; however, AI itself does not fit this principle in criminal law. Culpability means that the actor can decide to carry out the illegal act in a reproachable manner, and reproachability is premised on the actor (here, the AI) having the possibility to act, that is, having freedom of will. As AI does not have freedom of will, it does not possess the punishability of personality. Under the concept of functional culpability, culpability is a product of social construction: criminal law attributes culpability to a person in order to maintain the effectiveness of norms, and only those who can understand the meaning of the norms and violate them are culpable. As AI lacks the ability to understand or violate norms, it is not a subject of punishability. Therefore, revising the elements regarding the guarantor position of the natural or legal persons representing AI is the more feasible legislative model for the future. Its advantage is that it can overcome the limitations of AI applications, resolve the difficulty of determining whether a result caused by an algorithm was avoidable, and allow the development of AI in specific areas; it is a model that can strike a balance between the development of technology and the protection of legal interests. It should also be noted, however, that guarantor status is not limited to those expressly enumerated in the law, but can be determined through contract, custom, or the spirit of the law. It is therefore necessary to be stringent when determining whether the constituent elements are met; otherwise, the state might arbitrarily impose penalties.

3.4 Development direction of future laws

When robots or intelligent agents with autonomous learning and decision-making capabilities gradually emerge in the future, whether the status of machines should be enhanced, or even endowed with legal personality, will be a major research direction for AI in law. The development of AI may also allow science and technology to replace law as the method of regulating human behavior, the so-called "code is law." The function of legal norms may still be to require or prohibit certain specific human conduct, and humans who violate them may be punished to a certain degree. After the full development of AI, it may become impossible for humans to engage in illegal activities at all; whether such restrictions on the freedom of human activity can be justified at the constitutional level is, however, a problem that still needs to be solved.

Such discussion is extremely valuable, as it can explore, preventively, AI crimes that result from omission in the future. Take AI performing data collection according to scientists' logic: if a concern about violation of privacy arises due to omission, corresponding legislation can be enacted, or scientists can be urged to greater caution in the process of writing code. Besides the need to supervise the scientists or legal persons behind AI, we cannot ignore the protection of human rights provided in the European Convention on Human Rights adopted in 1950 by the Council of Europe, in particular Article 6 (right to a fair trial) and Article 7 (no punishment without law) (Robertson, 1950). That is, when determining the punishability of scientists or legal persons holding a guarantor position in the future, there must be a guarantee of the principle that everyone is entitled to a fair trial, and that no one is held criminally guilty for conduct that did not constitute a crime under the law in force, which is also the spirit of the principle of nulla poena sine lege in criminal law.

In addition to human rights, attention should be paid to the psychological state of AI scientists and users, for instance, whether a specific ethical basis should be incorporated into the design of AI (Moser et al., 2022). The design of AI is often influenced by bias in the mind of the designer, which may lead to biased decisions. As in the case of the conversational AI mentioned earlier, scientists should, to some extent, cooperate with psychologists or counselors on a standardized evaluation procedure to avoid misguidance. Users, too, should undergo some degree of psychological evaluation, to avoid abuse of the punishability attached to guarantor status through misuse of the AI.

Through the above discussion, countries that take guarantor status as the premise of the crime of improper omission may, from a more reasonable regulatory angle, assess the violations of legal interests brought about by AI systems in the future and regulate appropriately the conduct of scientists, legal persons, and users. As AI may change the objects to which the law applies, this study can serve as a reference when formulating future legal provisions on whether existing laws can be applied to new technologies and whether new laws need to be enacted to cope with them.

4. Conclusion

At its current stage of development, AI, having no personality, remains unpunishable in criminal law. Nevertheless, AI scientists or the legal persons representing it should, to some extent, hold the status of guarantor with obligations of supervision or protection, and thus be punishable for improper omission, so as to assure relief claims under criminal law upon infringement of legally-protected interests.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

References

[1] Alexander, V. (2019). Siri Fails the Turing Test: Computation, Biosemiosis, and Artificial Life. Recherches Sémiotiques/Semiotic Inquiry, 39(1-2), 231-249. https://doi.org/10.7202/1076234ar

[2] Baduge, S. K., Thilakarathna, S., Perera, J. S., Arashpour, M., Sharafi, P., Teodosio, B., Shringi, A., & Mendis, P. (2022). Artificial intelligence and smart vision for building and construction 4.0: Machine and deep learning methods and applications. Automation in Construction, 141, 104440. https://doi.org/10.1016/j.autcon.2022.104440

[3] Blount, K. (2021). Seeking Compatibility in Preventing Crime with Artificial Intelligence and Ensuring a Fair Trial. Masaryk University Journal of Law and Technology, 15, 25. https://doi.org/10.5817/MUJLT2021-1-2

[4] Brown, R. D. (2021). Property ownership and the legal personhood of artificial intelligence. Information & Communications Technology Law, 30(2), 208-234. https://doi.org/10.1080/13600834.2020.1861714

[5] Chen, J., Sun, J., & Wang, G. (2022). From unmanned systems to autonomous intelligent systems. Engineering, 12, 16-19. https://doi.org/10.1016/j.eng.2021.10.007

[6] Copeland, B. J. (2000). The Turing test. Minds and Machines, 10(4), 519-539. https://doi.org/10.1023/A:1011285919106

[7] Ferguson, C. J. (2012). The wrong cure. New Scientist, 216(2888), 24-25. https://doi.org/10.1016/S0262-4079(12)62753-5

[8] Gómez, V. (2020). The Criminal Liability of the Compliance Officer: An Approach Through Several Hard Cases. Journal of Penal Law and Criminology, 8(1), 59-71. https://doi.org/10.26650/JPLC2020-0010

[9] Granter, S. R., Beck, A. H., & Papke Jr, D. J. (2017). AlphaGo, deep learning, and the future of the human microscopist. Archives of Pathology & Laboratory Medicine, 141(5), 619-621. https://doi.org/10.5858/arpa.2016-0471-ED

[10] Hashemi, M., & Hall, M. (2020). RETRACTED ARTICLE: Criminal tendency detection from facial images and the gender bias effect. Journal of Big Data, 7(1), 1-16. https://doi.org/10.1186/s40537-019-0282-4

[11] Hofstee, W. K. (1994). Who should own the definition of personality? European Journal of Personality, 8(3), 149-162. https://doi.org/10.1002/per.2410080302

[12] Kaye, S.-A., Somoray, K., Rodwell, D., & Lewis, I. (2021). Users' acceptance of private automated vehicles: A systematic review and meta-analysis. Journal of Safety Research, 79, 352-367. https://doi.org/10.1016/j.jsr.2021.10.002

[13] Krishnamurthy, R., Hass, G. A., Natoli, A. P., Smith, B. L., Arbisi, P. A., & Gottfried, E. D. (2022). Professional practice guidelines for personality assessment. Journal of Personality Assessment, 104(1), 1-16. https://doi.org/10.1080/00223891.2021.1942020

[14] Lagioia, F., & Sartor, G. (2020). AI systems under criminal law: a legal analysis and a regulatory perspective. Philosophy & Technology, 33(3), 433-465. https://doi.org/10.1007/s13347-019-00362-x

[15] Le, T., Nguyen, T., Ho, N., Bui, H., & Phung, D. (2021). LAMDA: Label matching deep domain adaptation. International Conference on Machine Learning.

[16] McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Magazine, 27(4), 12-12. https://doi.org/10.1609/aimag.v27i4.1904

[17] McCorduck, P., & Cfe, C. (2004). Machines who think: A personal inquiry into the history and prospects of artificial intelligence. CRC Press.

[18] McCorduck, P., Minsky, M., Selfridge, O. G., & Simon, H. A. (1977). History of artificial intelligence. IJCAI.

[19] Mischel, W. (1977). On the future of personality measurement. The Psychology of Social Situations, 32(4), 249-263. https://doi.org/10.1016/B978-0-08-023719-0.50029-2

[20] Moser, C., den Hond, F., & Lindebaum, D. (2022). Morality in the age of artificially intelligent algorithms. Academy of Management Learning & Education, 21(1), 139-155. https://doi.org/10.5465/amle.2020.0287

[21] Pelau, C., Dabija, D.-C., & Ene, I. (2021). What makes an AI device human-like? The role of interaction quality, empathy and perceived psychological anthropomorphic characteristics in the acceptance of artificial intelligence in the service industry. Computers in Human Behavior, 122, 106855. https://doi.org/10.1016/j.chb.2021.106855

[22] Piel, H., & Seising, R. (2022). In Memoriam Pamela McCorduck. Springer.

[23] Pochapska, L. (2021). Committing a crime by inaction. Topical Issues of Humanities, Technical and Natural Sciences, 331-333.

[24] Powell, J. (2019). Trust Me, I'm a chatbot: how artificial intelligence in health care fails the Turing test. Journal of Medical Internet Research, 21(10), e16222. https://doi.org/10.2196/16222

[25] Priambada, B. S., & Pratiwi, D. R. (2022). Victimology Review of the Legal Protection of Victims of the Crime of Human Trafficking. Budapest International Research and Critics Institute (BIRCI-Journal): Humanities and Social Sciences, 5(2), 13310-13318. https://doi.org/10.26737/ij-mds.v1i1.419

[26] Reynolds, C. R., Altmann, R. A., & Allen, D. N. (2021). The problem of bias in psychological assessment. In Mastering Modern Psychological Testing (pp. 573-613). Springer.

[27] Robertson, A. H. (1950). The European Convention for the Protection of Human Rights. British Yearbook of International Law, 27, 145.

[28] Schaller, R. R. (1997). Moore's law: past, present and future. IEEE Spectrum, 34(6), 52-59. https://doi.org/10.1109/6.591665

[29] Talimonchik, V. P. (2021). The Prospects for the Recognition of the International Legal Personality of Artificial Intelligence. Laws, 10(4), 85. https://doi.org/10.3390/laws10040085

[30] Thoppilan, R., De Freitas, D., Hall, J., Shazeer, N., Kulshreshtha, A., Cheng, H.-T., Jin, A., Bos, T., Baker, L., & Du, Y. (2022). LaMDA: Language models for dialog applications. arXiv preprint arXiv:2201.08239. https://doi.org/10.48550/arXiv.2201.08239

[31] Turing, A. M. (2009). Computing Machinery and Intelligence. In R. Epstein, G. Roberts, & G. Beber (Eds.), Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer (pp. 23-65). Springer Netherlands. https://doi.org/10.1007/978-1-4020-6710-5_3

[32] Turing, A. M., & Haugeland, J. (1950). Computing machinery and intelligence. The Turing Test: Verbal Behavior as the Hallmark of Intelligence, 29-56.
