

UDC 543.123

MILITARY AI LEGAL DEFINITION'S CONTROVERSIES: A GAME-THEORETICAL APPROACH

© Brovko Vladislav Igorevich 1, © Kulichihina Asya Leonidovna 2, © Osipchenko Anton Olegovich 3, © Dzis Yulia Ivanovna 4

1 Student, Siberian Federal University
2 Student, Siberian Federal University
3 Student, Siberian Federal University
4 Candidate of Science (Political Science), Assistant Professor, Siberian Federal University

Abstract: The article addresses the specific features of the legal regulation of the Military Artificial Intelligence (MAI) concept within the framework of the UN Security Council. Based on game-theoretical modelling and analysis, the authors identify the likely reasons and premises for the inefficiency of the negotiations on a universally accepted MAI definition and forecast the actors' behaviour by extrapolating the current conditions.

Keywords: military AI (MAI), game-theoretical models, lethal autonomous weapons systems (LAWS), MAI legal regulation, international humanitarian law.

In the last few years, the subject of Artificial Intelligence (hereafter AI) has begun to attract special attention from the international community. A steady and significant increase in the rate at which AI is adapted to military systems, notably by multiple superpowers, has heightened awareness of the issue. Nearly seven years after the open letter on AI was published [1], the once hypothetical menace of Military Artificial Intelligence (hereafter MAI) has drastically shifted into the realm of practical measures and arrangements. Namely, the recent Sixth Review Conference of the CCW, which took place on 13-17 December 2021, was devoted to the known lacunae in the definition of Lethal Autonomous Weapons Systems (hereafter LAWS) and concluded that "it is essential first to identify the key attributes that would characterize a given weapon system as LAWS" [2].

However, the intention to define is inherently problematic inasmuch as the actors frequently demonstrate an unwillingness to restrict their own military capabilities, whilst the decision to publicly proclaim themselves "wrongdoing" or "rogue" states that keep "upsetting the balance" is hardly popular due to the collateral reputational impact.

In order to achieve the purpose of the study, which is the analysis of the dynamics of the Military AI definition and the identification of the factors influencing the development and adoption of a unified concept, the following objectives are pursued: to accumulate and dissect the empirical data, to form a hypothesis and make a prediction, to test the hypothesis by the game-theoretical method, and to outline the prospects for further discourse.

The methodology of the study includes both quantitative and qualitative methods. Inter alia, it is considered worthwhile to formalize, using game-theoretical methods, the behaviourally relevant contradiction between the states possessing military AI technology through the prism of two significant indicators: image (the reputation, renown or position on the world stage, as influenced by the stance taken in the discussion) and power (the prospects for military growth and particularly the opportunities for enhancing MAI technologies). In particular, the research involves two major models, representing two games in normal and extensive forms respectively, followed by the formalization principles and a synopsis.

The empirical database, which specifies the models' indicators and payoffs, includes a content analysis of the observed states' national strategies [3], UN-led conferences, especially those regarding the Convention on Certain Conventional Weapons [4], and the UN SC sessions' agenda items and final documents [5]. In addition, expert field studies [6], academic papers [7] and research reports [8] are reviewed.

The working hypothesis is that the UN SC member states that possess or are currently developing and implementing AI-based weapons systems would prefer to abstain from a decision rather than adopt or veto the resolutions in favour of forming the concept of military artificial intelligence and including it in the national legislation of the UN member states, as well as in international law. The hypothesis is tested with the game-theoretical models provided below, starting with the first game presented in Figure 1.

The US, Russia, the UK \ (China), (France)  | Strict definition | Weak definition | No definition
Strict definition                           | -3; (-2), (2)     | 1; (1), (1)     | 1; (1), (1)
Weak definition                             | 1; (1), (1)       | -1; (1), (1)    | 1; (1), (1)
No definition (weakly dominant strategy)    | 1; (1), (1)       | 1; (1), (0)     | 3; (2), (2)

Figure 1. The first normal-form game. The second coalition's payoffs are given in brackets separately for China and France respectively. The cyan payoffs objectify the agreed conclusions, the green ones represent the Pareto optimum of the game.

The current game is designed to predict the possible negotiation behaviour of the UN Security Council members in their debates over the subject matter of the MAI. The main goal is to determine the ultimate strategies these countries would choose while discussing the issue of AI. The model may be presented as a cooperative simultaneous normal-form game involving two agents, represented by two coalitions.

Although the game is stated to primarily deal with the UN SC member states, those are divided into two coalitions. The first coalition includes the US, the United Kingdom, and the Russian Federation. The explanation is that these actors have shown their desire to develop the MAI, and even though the stages of AI military systems development in these countries differ, their common desire can be valued by the same payoff. So it is supposed that these sides gain the same values, as their national interests happen to be aligned on this question. The payoffs of the second coalition are different, since China publicly declares its intention to limit the MAI while launching numerous military programmes in the sphere. Nonetheless, China calls for limitations and, in this sense, the Chinese position might be considered similar to the French one, because France strongly supports the ban on MAI.

In order to make the decision-making process tractable and predictable, each side's options are limited to three strategies. These strategies reflect the main approaches to the MAI concept that the sides tend to support.

The S1 "strict definition" strategy implies the potential inclusion of the terms such as "fully autonomous systems", "semi-autonomous systems" and "human supervised autonomous systems" in the definition of the MAI.

The S2 "weak definition" strategy connotes the potential inclusion of only the "fully autonomous systems" term in the definition of the MAI.

The S3 "no definition" strategy involves the measures undertaken by the agents to prevent the definition from the possible adoption and to delay the negotiations.

To reflect the whole process formally and then make predictions based on the model presented in Figure 1, the following assumptions are made:

• There are two players (coalitions) I = {Player 1, Player 2}

• The first player (p1) has one payoff value; the second player (p2) has two payoff values, introduced to show the partial non-alignment between France and China.

• Strategies for both sides are the same and the set of strategies contains three elements: Sp1 = Sp2 = {S1, S2, S3}.

• The sides can come to an agreement only if both coalitions choose the same strategy. Otherwise the payoff profile will depict the status quo.

• The main factors that influence the payoffs are the image (an aggregate of reputational gains or losses) and the power (the opportunities to develop and implement the MAI technologies).

• The strategies {S1, S2} help the agents improve their reputation, as they demonstrate a peace-supporting policy.

• The strategy {S3} helps them keep developing the MAI technologies.

As a result, the payoff matrix indicates that, for both sides, all the strategies are weakly dominated by the strategy S3, whose value is always higher or equal: US3 ≥ US1 and US3 ≥ US2. The green colour marks the resulting payoff profile. Yet such a seemingly beneficial case holds for China only if its coalition with France breaks up, so the disparities between the positions of France and China in the further negotiations are expected to increase.
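
The dominance pattern can be checked mechanically. The Python sketch below is an illustration rather than the authors' original computation: it encodes the Figure 1 payoffs and tests weak dominance separately against coalition 1's payoff and against China's and France's individual payoffs, reproducing the weak dominance of S3 for coalition 1 and for China and showing that the same test fails for France taken alone, which is precisely the China-France disparity noted above.

```python
# A minimal sketch (not the authors' original model): encode the Figure 1
# payoff matrix and list the weakly dominant strategies for each actor.

STRATS = ["strict definition", "weak definition", "no definition"]  # S1, S2, S3

# PAYOFFS[i][j] = (coalition 1, China, France) when coalition 1 (rows) plays
# STRATS[i] and coalition 2 (columns) plays STRATS[j]; values copied from Figure 1.
PAYOFFS = [
    [(-3, -2, 2), (1, 1, 1), (1, 1, 1)],   # coalition 1 plays S1
    [(1, 1, 1), (-1, 1, 1), (1, 1, 1)],    # coalition 1 plays S2
    [(1, 1, 1), (1, 1, 0), (3, 2, 2)],     # coalition 1 plays S3
]

def weakly_dominates(u_a, u_b):
    """True if payoff vector u_a is never worse than u_b and better at least once."""
    return all(a >= b for a, b in zip(u_a, u_b)) and any(a > b for a, b in zip(u_a, u_b))

def dominant_row_strategies(player):
    """Row strategies weakly dominating all others, judged by the given payoff index."""
    rows = [[PAYOFFS[i][j][player] for j in range(3)] for i in range(3)]
    return [STRATS[i] for i in range(3)
            if all(weakly_dominates(rows[i], rows[k]) for k in range(3) if k != i)]

def dominant_col_strategies(player):
    """Column strategies weakly dominating all others, judged by the given payoff index."""
    cols = [[PAYOFFS[i][j][player] for i in range(3)] for j in range(3)]
    return [STRATS[j] for j in range(3)
            if all(weakly_dominates(cols[j], cols[k]) for k in range(3) if k != j)]

print("coalition 1:", dominant_row_strategies(0))  # ['no definition']
print("China:      ", dominant_col_strategies(1))  # ['no definition']
print("France:     ", dominant_col_strategies(2))  # [] -- no weakly dominant strategy
```

The test follows the textbook definition: a strategy weakly dominates another if it is never worse and strictly better against at least one opposing strategy.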

The matrix presupposes that China is determined to develop its military sphere and that the highest outcome may be reached when none of the sides comes to an agreement, thus allowing the military use of AI to keep expanding, at least over the medium term. Thereby, the main conclusion is that the sides will keep postponing the final decision due to their broadly shared desire to reinforce their defence systems in order to achieve the military balance.

Once the agents' positions are thus synchronized, the procedure based on the UN SC voting system demands that the parties seal the fate of the definition they had potentially agreed on earlier. The process is simulated in the second game presented below in Figure 2.

Figure 2. The second extensive-form game. The green coloured component represents the value and the Pareto optimum of the game. The shorthands "C_1", "C_2" and "C_3" stand for the "Coalition 1", "Coalition 2" and "Coalition 3" terms respectively.

The game constitutes a pattern of the UN Security Council member states' high-level debate on the MAI concept. The goal apparently remains the same: to outline the strategies that, having been chosen, lead to higher reputational and geopolitical gains. It can be described as a cooperative sequential extensive-form game with complete information involving three agents, represented by three coalitions.

The coalitions are composed of two groups of entities. Each coalition is led by at least one of the five permanent UN Security Council members, which are empowered to implement the eponymous "veto" strategy. The other part is represented by the ten non-permanent UN Security Council member states, which are considered like-minded only in terms of convergence on the two criteria ("image" and "power"), with no regard to their foreign policy features, and which choose the coalitions based on their preferences.

The strategies are also shaped by the UN Security Council working methods, especially its voting system described in Article 27 of the UN Charter, so as to depict the options available to the parties in an environment that closely resembles reality.

The S1 "adopt" strategy implies the potential adoption of the MAI legal concept.

The S2 "veto" strategy connotes the potential exercise of the veto, which leads to the no-action motion of any resolution.

The S3 "abstain" strategy involves the agents' abstinence in the vote or decision-making process.

The game rests on the following premises:

• There are three players (coalitions) I = {Player 1, Player 2, Player 3};

• The order of the players' moves is not strictly defined; the right of the first move is assigned by the agents themselves and does not drastically influence the course of the game;

• Each player (p1, p2, p3) has one payoff value, which reflects the equality of the sovereign actors and the cohesion inside the coalitions;

• Strategies for all players are the same and the set of strategies contains three elements: Sp1 = Sp2 = Sp3 = {S1, S2, S3}.

• The sides can come to an agreement only if all the coalitions choose the same strategy. Otherwise the payoff profile will depict the status quo.

• The main factors that influence the payoffs are the image (an aggregate of reputational gains or losses) and the power (the opportunities to develop and implement the MAI technologies).

• The strategy {S1} helps the agents improve their reputation, as it demonstrates a peace-supporting policy.

• The strategy {S2} negatively affects the agent's reputation, as the refusal to elaborate diminishes the negotiator's image.

• The strategy {S3} helps the agents keep developing the MAI while incurring only insignificant reputational losses.

Eventually, the Pareto-optimal payoff profile coloured green demonstrates the strict dominance of the "abstain" strategy S3 for all of the agents, as it earns a payoff strictly higher than any other strategy does: US3 > US1 and US3 > US2.

Since the information the players possess is complete, any coalition moving earlier than the other(s) is considered aware of the rivals' possible gains. Hence, the S2 "veto" strategy would highly likely be discarded due to its severe insufficiency in minimax calculations. The highest payoff, US3, is then gained by abstaining from the definition's adoption.
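
Since the numeric payoffs of Figure 2 are not reproduced in the text, the sketch below uses illustrative values that are assumptions of ours, chosen only to respect the qualitative premises listed above: an "adopt" vote improves image but is assumed to carry a self-binding cost for the actor's own MAI programme, a veto carries a heavy reputational cost, and abstention preserves power at an insignificant image loss. Under these assumed numbers, backward induction over the three-move sequential game reproduces the all-abstain subgame-perfect outcome described above.

```python
# Backward induction over a three-coalition sequential game (move order C_1, C_2, C_3).
# The payoff numbers are illustrative assumptions, not the values from Figure 2.

ACTIONS = ["adopt", "veto", "abstain"]

IMAGE = {"adopt": 1, "veto": -3, "abstain": -1}       # reputational effect of the own vote
SELF_BINDING = {"adopt": 3, "veto": 0, "abstain": 0}  # assumed cost of publicly committing to the definition

def payoffs(profile):
    """Payoff of each coalition for a complete action profile (one action per coalition)."""
    adopted = all(a == "adopt" for a in profile)
    power = -1 if adopted else 2  # an adopted definition limits everyone's MAI development
    return tuple(IMAGE[a] - SELF_BINDING[a] + power for a in profile)

def backward_induction(history=()):
    """Return the subgame-perfect continuation (path, payoffs) after the given history."""
    mover = len(history)          # index of the coalition moving at this node
    if mover == 3:                # terminal node: all three coalitions have voted
        return history, payoffs(history)
    best = None
    for action in ACTIONS:
        path, pay = backward_induction(history + (action,))
        if best is None or pay[mover] > best[1][mover]:
            best = (path, pay)
    return best

path, pay = backward_induction()
print(path, pay)  # ('abstain', 'abstain', 'abstain') with every coalition's highest payoff
```

With these payoffs the same conclusion also follows without sequencing: abstaining is strictly better for every coalition whatever the others do, matching the strict dominance US3 > US1 and US3 > US2 stated above.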

The game reveals the parties' tendency to keep developing and enhancing AI military systems of their own, which nevertheless does not outweigh the undesirability, or even fear, of the attendant reputational losses: these might damage the military balance far worse than limitations on the planning and deployment of defence systems. In this way, the parties are anticipated to abstain from a self-regulating decision, letting technological prevalence form a balance alternative to the provisions of international law.

Summing up the analysis of the dynamics of the legal regulation of military artificial intelligence on the world stage, namely in the UN Security Council, the following conclusions can be drawn.

Firstly, image policy (the unpopularity of acquiring the "aggressor state" status) and the preservation of the military balance (the desire to establish a leading position in the field of MAI) evidently act as opposing factors and constitute the main reasons for the ineffectiveness of the process of adopting a legal definition of the MAI.

Secondly, the game-theoretical models confirm the hypothesis that the members of the UN Security Council that possess technologies in the field of military AI will likely choose a strategy of deliberately refraining from a decision on the international definition of the MAI rather than adopting or vetoing the resolutions. Nonetheless, in real conditions the sovereigns' behaviour may be influenced and altered by various other factors besides the two assessed indicators, "power" and "image". Those variables lie outside our area of research and might be worth addressing by scholars and analysts in further research.

Finally, the anticipated abstention from a self-regulating solution means that technological predominance will come to constitute an alternative balance to the international security system based on the provisions of international law.

References:

1. University of California, Berkeley, 2015. Research Priorities for Robust and Beneficial Artificial Intelligence. [pdf] Association for the Advancement of Artificial Intelligence. Available at: <https://people.eecs.berkeley.edu/~russell/papers/aimag15-research-agenda.pdf> [Accessed 02 February 2022].

2. UNODA (United Nations Office for Disarmament Affairs), 2022. Sixth Review Conference of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects. Geneva, Switzerland, 13-17 December 2021. New York: United Nations.

3. UNODA (United Nations Office for Disarmament Affairs), 2020. The Militarization of Artificial Intelligence. New York, The USA, August 2019. New York: United Nations.

4. UN OICT (United Nations Office of Information and Communications Technology), 2018. Emerging technologies whitepaper series: Artificial Intelligence. [pdf] OICT Emerging Tech Team. Available at: <https://unite.un.org/sites/unite.un.org/files/emerging-tech-series-ai.pdf> [Accessed 02 February 2022].

5. UNODA (United Nations Office for Disarmament Affairs), 2022. Agenda Item 10: Submission of the Report of the Group of Governmental Experts. [pdf] Available at: <https://documents.unoda.org/wp-content/uploads/2022/01/Statement-NAM-CCW-Sixth-Review-Conference-Agenda-Item-10-Submission-Repo....pdf> [Accessed 02 February 2022].

6. Nicholas D. Wright, 2019. Artificial Intelligence, China, Russia, and the Global Order. [pdf] New York: Air University Press. Available at: <https://www.jstor.org/stable/resrep19585> [Accessed 02 February 2022].

7. Elsa B. Kania, 2019. Chinese Military Innovation in Artificial Intelligence. [pdf] Washington: Center for a New American Security. Available at: <https://www.jstor.org/stable/resrep28742> [Accessed 02 February 2022].

8. Simona R. Soare, 2020. DIGITAL DIVIDE? Transatlantic defence cooperation on Artificial Intelligence. [pdf] Paris: European Union Institute for Security Studies. Available at: <https://www.jstor.org/stable/resrep25027> [Accessed 02 February 2022].


Submitted: 1 April 2022
