
Legal Issues in the Digital Age. 2022. Vol. 3. No. 3. Вопросы права в цифровую эпоху. 2022. Т. 3. № 3.

Research article. UDC 347

DOI:10.17323/2713-2749.2022.3.101.119

Regulatory Principles of Development, Introduction and Use of Artificial Intelligence in Asian countries

Roman Igorevich Dremliuga

Digital Transformation Academy, Far Eastern Federal University, 10 Ayaks Str., Russky Island, Vladivostok 690922, Primorskyi Krai, Russia, dremliuga.ri@dvfu.ru

The Eastern Asia region is emerging as a new centre of innovative development of information technologies and the global digital economy. The digital transformation of the socioeconomic and political existence of countries is inextricably linked to the development and adoption of new regulatory systems. The overall success of the digital transformation of economy and society hinges on the introduction of specific groups of technologies. Identifying specific groups of technologies as the reference points of the digital transformation is equally sensible from a regulatory perspective. Artificial intelligence is a key technology for the digital transformation of any country at large. This study aims to identify the main regulatory features of the development, introduction and use of artificial intelligence in Asian countries such as the People's Republic of China, Singapore, the Republic of Korea and Japan, which are global digital leaders and which were chosen for this study on the basis of an analysis of independent ratings. A comparative study of the core regulatory provisions aimed at harmonizing social relationships arising from the development, introduction and use of artificial intelligence in the countries in question makes it possible to propose ways of developing national regulation in respect of the ethics and law applicable to AI. Based on the methodology of formal logical analysis and comparative law, the study identifies the essential regulatory principles of the development, introduction and use of AI in the selected countries. The findings point to a considerable similarity both at the level of strategic documents and codified regulatory principles, with precedence given to the welfare of society and the state. While some of the documents under study make references to human rights and individual liberties, the key idea is the achievement of prosperity and sustainable development of society. This approach is better suited to be replicated in the context of Russia. While all of the reviewed instruments perpetuate a humanistic approach involving an assessment of AI's impact on users, society and the environment, its interpretation in Asian countries differs from the one adopted in the Western world.

© Dremliuga R.I., 2022

This work is licensed under a Creative Commons Attribution 4.0 International License

Keywords

law and ethics of artificial intelligence, comparative law, cyberethics, cyberlaw, law of the People's Republic of China, law of Singapore, law of the Republic of Korea, law of Japan.

Acknowledgments: the study was supported by Far Eastern Federal University Program Priority 2030: Digital Sciences.

For citation: Dremliuga R.I. (2022) Regulatory Principles of Development, Introduction and Use of Artificial Intelligence in Asian Countries. Legal Issues in the Digital Age, vol. 3, no. 3, pp. 100-119. DOI:10.17323/2713-2749.2022.3.100.119

Background

One of many approaches developed in international practice to establish a regulatory system for the digital economy is to treat as a complex subject of regulation the social relationships and the behavior of the entities to be regulated in the course of the development, introduction and use of specific technologies. In this regard, it is the technology of artificial intelligence that currently stirs up interest.

The development of artificial intelligence (hereinafter AI) is a national priority in many countries, with dozens adopting and implementing strategies and programmes to encourage studies and developments in this area. The introduction of intelligent technologies into the economy, welfare and governance has become a key point of public policies in many countries. Regulation of the emerging social relationships involved in the development, introduction and use of AI is a key issue in this area.

With the adoption of a national AI code of ethics, Russia is taking its first steps in this direction. A study of the relevant international experience is needed to develop a regulatory system for the development, introduction and use of AI. Meanwhile, currently available research papers are focused on Western Europe and North America, whose regulatory approaches and principles applicable to AI are often ill-suited to the Russian context. Hence it is of major interest to review the existing regulatory principles of the development, introduction and use of AI in those Asian countries which are global leaders of the digital economy.

1. Digital economy and AI

Some researchers argue that the development of the digital economy hinges on the introduction of specific cross-cutting technologies. For instance, V.A. Vaipan considers the following technologies to be crucial for successful development of the digital economy: big data; neurotechnologies and artificial intelligence; shared register systems (blockchain); quantum technologies; new production technologies; the internet of things; robotic and sensor components; wireless technologies (including 5G networks crucial for driverless vehicles); technologies of virtual and augmented reality [Vaipan V.A., 2019]. The author reasonably argues that successful digital transformation of economy and society is inextricably linked to the introduction of specific groups of technologies. These technologies are vital to realize a transition to a new socioeconomic order within the given time period.

Identifying specific groups of technologies as reference points of the digital transformation is also sensible from a regulatory perspective. The digital economy and the stages of its development can be represented as a set of technologies applied to economic activities and various aspects of social life. This process of introduction and use gives rise to specific social relationships that can conventionally be divided into macro-groups which are easier to address through regulation.

Groups of technologies have common features and normally exhibit similar regulatory problems as regards social relationships emerging in the process of use. While cross-cutting technologies are not tantamount to the digital economy, the perception of the digital transformation through the lens of specific technological development will greatly simplify the understanding of ongoing changes. To have an idea of the digital transformation and how it splits into specific objectives, a simple model of technological change is required.

Using specific technologies as a backbone of the regulatory system's design will considerably simplify the task by reducing it to the development of systems or sets of provisions regulating the given groups of technologies. This approach will make legal collisions and contradictions much less likely to occur. In such a model, the areas of regulatory intervention are separated by being linked to specific cross-cutting technologies.

Many countries have adopted this particular model to drive and regulate the digital economy. They opt for a legal policy applicable to specific technologies rather than to the digital economy as a whole. Thus, many countries, including global technological leaders, have strategies for the development and introduction of artificial intelligence which often envisage a special legal regime to encourage R&D and investments into a specific cross-cutting technology.

Thus, in 2017 China adopted the New Generation Artificial Intelligence Development Plan1 expecting to become a global leader in AI innovations by 2030 at the last stage of its implementation. By that time, the core AI sector is expected to more than double up to CNY 1 trillion (nearly USD 147 billion). The strategy also provides for improvement and review of the national regulatory system to address problems involved in the development and use of AI technologies [Roberts H., 2020]. Decomposing the digital economy into extended groups of social relationships involved in the application of specific technologies is thus one of the promising models for the development of regulatory policies.

Artificial intelligence is now believed to be a major breakthrough, well ahead of other cross-cutting technologies. It is a unique computing technology that already has a major impact on social relationships and is likely in the near future to radically transform social order across the board. It is logical to expect that a technology with so much social impact will change the regulatory sphere as well.

The widespread introduction of artificial intelligence will give rise to new social relationships. This is true not just for AI. The most important overall feature of the information society and the digital economy is the emergence of a new system of social relationships. In other words, the digital economy and the information society make up a new system of social relationships arising from the use of computer data and ICT technologies. Thus, T. Ya. Khabrieva and N.N. Chernogor have identified 9 new types of relationships related to digitization [Khabrieva T. Ya., Chernogor N. N., 2018: 94].

We will start off by analyzing what an artificial intelligence/intelligent system is from a technical point of view. What makes this technology stand out from others in a way that affects the nature of regulation applicable to its use? The answer to this question will help to identify the limits of applicability of the instruments considered further. Definitions of artificial intelligence currently abound. Thus, some frequently cited studies define AI as the capability of a machine/device to imitate intelligent behavior [Padhy N., 2005: 23]. It means behavior previously associated only with humans, ranging from the perception of complex images to creativity.

1 Available at: https://flia.org/notice-state-council-issuing-new-generation-artificial-intelligence-development-plan/ (accessed: 12.07.2021)

Some Russian researchers underline that "an intelligent system is the one which can intentionally, depending on the state of data inputs, change not just operating parameters but the way of behavior as such, the latter depending not only on the current state of data inputs but also on the previous states of the system itself" [Yakushev D.I., 2016: 67]. This definition identifies one major feature that sets AI technologies and systems apart from other computer-driven technologies and systems: the former are more self-determined and less dependent on the unpredictability of input parameters than other computer systems.
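To illustrate the property captured by this definition, consider a minimal sketch (a hypothetical illustration, not drawn from the cited work) in Python contrasting a fixed-rule system with a system whose responses also depend on its own previous states:

    # Hypothetical illustration: a fixed-rule system vs. an adaptive one whose
    # behavior depends on its previous states, not only on the current input.
    class FixedRuleSystem:
        def respond(self, x: float) -> str:
            # Behavior fully determined by the current input.
            return "alert" if x > 10.0 else "ok"

    class AdaptiveSystem:
        def __init__(self):
            self.threshold = 10.0  # internal state carried between calls

        def respond(self, x: float) -> str:
            decision = "alert" if x > self.threshold else "ok"
            # The threshold drifts towards recent inputs, so identical inputs
            # can yield different decisions at different points in time.
            self.threshold = 0.9 * self.threshold + 0.1 * x
            return decision

    adaptive = AdaptiveSystem()
    print([adaptive.respond(v) for v in (12, 20, 20, 20, 12)])
    # ['alert', 'alert', 'alert', 'alert', 'ok']: the first and last inputs are
    # equal, yet the decisions differ because the system's state has changed.

In the first class the same input always produces the same output; in the second, the history of the system matters, which is precisely what makes the behavior of intelligent systems harder to anticipate and, consequently, to regulate.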

I.R. Begishev and Z.I. Khisamova define AI as an adaptable, autonomous, cognitive intelligent system capable of conscious volitional behavior that imitates the neural and neural-network activity of the human brain by processing environmental information [Begishev I.R., Khisamova Z.I., 2021: 25]. This definition identifies many of the features characteristic of intelligent computer systems. At the same time, it narrows the concept of "intelligent systems" down to their specific implementation based on neural networks. While AI systems based on neural networks currently dominate, there are other ways of building intelligent systems, such as knowledge-based systems [Aslamova E.A. et al., 2018] or evolutionary algorithms [Zaginaylo M.V., Fatkhi V.A., 2020]. The definition proposed by these authors thus fails to cover all implementation approaches to modern AI systems.

Meanwhile, the approach equating "neural networks" with AI is not off the mark. Since deep neural networks have been the most widespread approach to developing AI systems, they are indeed what is meant in most cases when reference is made to artificial intelligence. Technologies for the digital imitation of the neural structure of the human brain make it possible to successfully solve a variety of tasks, from imitating live human contact to driving a vehicle [Nikolenko S., Kadurin A., Arkhangelskaya E., 2020: 7-10]. The breakthrough in AI over the last decade is owed precisely to neural networks.

Artificial intelligence based on neural networks is capable of solving many tasks more efficiently than man. There had long been a firm belief that artificial intelligence could never beat masters of Go, since the moves in board games of this type cannot be anticipated, with possible combinations outnumbering the atoms in the Universe. Meanwhile, a trained intelligent machine was able to beat several world champions. Rather than being programmed to play in the ordinary way, the AI system learned to master Go by repeatedly playing against itself, 29 million games in all, to achieve complete superiority over human champions [Silver D., 2017].

In doctrine the process of AI development is most often understood as programming, which is wrong. Intelligent systems based on neural networks are not programmed but learn, either from the data they generate or from interactions with similar systems. Programmers only design their architecture, run tests and verify the results. The behavior of such a system is predictable only with some probability. A wrong understanding of the development process and operating parameters of AI systems makes it difficult to draft adequate regulatory provisions.
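A minimal sketch (illustrative only, not taken from any of the documents discussed) of the difference between writing a decision rule by hand and learning it from data; the developer fixes only the architecture and the training procedure, while the rule itself emerges from the learned parameters:

    # Illustrative sketch: the developer specifies the architecture (a single
    # logistic unit) and the training loop; the decision rule is not written
    # out as program logic but learned from data.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))              # toy inputs
    y = (X[:, 0] + X[:, 1] > 0).astype(float)  # toy "ground truth" labels

    w, b = np.zeros(2), 0.0                    # parameters to be learned
    for _ in range(500):                       # gradient-descent training
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= 0.5 * (X.T @ (p - y) / len(y))
        b -= 0.5 * np.mean(p - y)

    # The learned weights, not hand-written rules, now determine behavior.
    pred = 1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5
    print("learned weights:", w, "bias:", b, "accuracy:", np.mean(pred == y))

On new, unseen inputs the learned rule behaves only approximately as the developer intended, which is why such behavior is predictable only with some probability.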

For instance, some Russian researchers believe that a robotic algorithm is developed by man even in the case of AI and self-learning neural networks [Vasiliev A.A., Ibragimov Zh. I., 2019: 51]. Others [Vasiliev A.A., Pechatnova Yu.V., 2020: 17] argue that the regulation of "programming errors and their implications" is the crucial issue in the use of intelligent computer systems.

For the purpose of this study, an AI system is defined as a computer system or software which imitates one or more aspects of intelligent behavior and which is more self-determined and independent from the developer's (user's) will than other computer systems. Some intelligent systems are capable of (self-)learning and are to some extent unpredictable and non-transparent to their developers and other users.

The specific features of AI technology determine the unique nature of the social relationships arising from its use and thus require a special approach to regulation in this area.

2. Asian countries as global AI leaders

It is only recently that Russia put forward claims for leadership in this area, with the first attempts to develop the relevant regulatory framework embodied in the national AI code of ethics, which was developed through cooperation between major IT companies, public agencies and the academic community2. This document has not only defined the core principles of AI regulation but also established the requirements for the development and use of intelligent systems to be complied with. Though Russia has achieved considerable success in digitization, the country is not among the global leaders yet.

2 AI Code of Ethics Signed in Russia. Available at: https://rg.ru/2021/10/26/v-rossii-podpisan-kodeks-etiki-iskusstvennogo-intellekta.html (accessed: 16.02.2021)

A review of the relevant international experience may considerably help to evaluate the potential of national regulatory policies and the options for their development. For a study of regulatory mechanisms applicable to AI, it is of major interest to examine the experience of countries at the top of independent digitization ratings, since such an analysis makes it possible to identify promising ways to develop the national regulatory framework for AI.

The experience of Western Europe and North America has been extensively described in Russian and international scholarly literature. For this reason, it is the countries of Asia taking a lead in one or more indicators relevant for the digital economy that were selected for the study. The selection was made on the basis of their ranking in global competitiveness ratings published under the auspices of the World Economic Forum in its Global Competitiveness Reports of 20193 and 20204.

Singapore ranks third in the said ratings in terms of regulatory development related to the digital economy while being among the top ten countries in terms of many digitization-related criteria. China, with the largest digital services market, is also a leader in digitization, along with South Korea (top ranking in ICT adoption and top ten in digital infrastructure, innovation capability and macroeconomic indicators of digital transformation) and Japan (top ranking in human capital development, top ten in GCI 4.0, digital services market, digital infrastructure and also innovation capability). Thus, the analysis will focus on Asian digitization leaders that dominate at the global level.

Moreover, regulation of the digital sector follows a different philosophy in Asia. In Eastern Asia, digitization is regulated on the basis of altogether different cultural principles and paradigms. The West is trying to strike the right balance between the commercial use of data and the common good arising from the protection of privacy and personal dignity. According to the Western ideology, machines cannot be completely independent, as this is a human prerogative. Eastern Asia will often put the common good first due to Confucian, Buddhist and animist traditions. Far from being in opposition, man coexists with nature, surrounding things and other people in a harmonious way [Kokuryo J., 2022]. The state or nation is often perceived as a meta-family to share personal data with, a family from which there can be no secrets. This is probably why these countries are successful in terms of ICT development in general and AI in particular.

3 World Economic Forum. The Global Competitiveness Report Insight Report 2019. Available at: http://www3.weforum.org/docs/WEF_TheGlobalCompetitivenessReport2019.pdf (accessed: 12.01.2021)

4 Ibid. 2020. Available at: http://www3.weforum.org/docs/WEF_TheGlobalCompetitivenessReport2020.pdf (accessed: 24.05.2022)

3. Strategic planning standards for AI development

The countries under study have adopted strategic documents defining AI development for decades ahead. They reflect the national political and economic context in one way or another. Thus, in 2017 China adopted the New Generation Artificial Intelligence Development Plan in support of its claim to global technological leadership5. Under the plan, AI is a technology to transform the life of each human being and the world as a whole, the main objective being to secure a national leading edge in the area of AI development, introduction and use.

The Chinese strategic plan refers to AI technology as a driver of economic development and a new catalyst of industrial transformation to be focused on by the government. It is explicitly stated that major changes to AI-related policies and regulations are required to achieve success.

The AI development strategy puts forward four basic principles reflecting the peculiarities of China (para II B), one of which is absolute technological leadership, securing the country's dominance elsewhere thanks to success in AI. Moreover, under the AI development principles, any achievements in civil use are to be made available to the government for military use.

Under the strategic plan, China is expected to achieve global leadership in both theoretical and practical studies of artificial intelligence by 2030. By this time the Celestial Empire should become a global leader in AI applications and a driver of AI innovations. The said achievements are necessary to secure China's leading edge in economic and innovative development. By 2030, China expects to develop a system of regulations, an ethical basis and comprehensive policies applicable to AI (para II C).

5 Available at: https://flia.org/wp-content/uploads/2017/07/A-New-Generation-of-Artificial-Intelligence-Development-Plan-1.pdf (accessed: 16.07.2022)

While not claiming global technological leadership for itself, Singapore has a strategic plan focused on five specific AI applications. AI development at the national level is regulated by the Singapore National AI Strategy6, whereby the country should become a global center for the development, testing, introduction and scaling of AI solutions. The document's focus is on economic transformation and higher living standards through the introduction of AI systems rather than on global domination in intelligent technologies.

Under this strategy, the transformation will be driven by five national AI projects, each addressing Singapore's key integrated socioeconomic objectives. The first project, called Intelligent Cargo Planning, purports to streamline air, sea and road cargo traffic across the country, its performance indicators being higher productivity of businesses and higher efficiency of the national economy. This focus is crucial since Singapore is a major transportation hub in Asia.

Singapore has been at the top of international ratings of smart city solutions for several years in a row. As the country boasts of being the smartest city nation7, the second nationwide AI project is focused on "uninterruptible and efficient municipal services" to be made more accessible, reliable and modern.

The third nationwide AI project is for "prevention and treatment of chronic diseases", with intelligent systems, according to the strategy's text, to increase the efficacy of prevention and diagnostics of chronic diseases. It is also expected to use AI for reducing the cost of treatment. The project assumes that AI could be widely used for analysis of clinical data, medical images, genome data and health-related behavioral aspects. As applied to health, AI should result in increased life expectancy, lower costs and higher quality services.

The fourth nationwide project is focused on "individual education through adaptive learning and skills assessment". Singapore is a recognized regional leader in education, its two main universities consistently ranking among the top three in the Asian university rankings8. The fourth initiative purports to help teachers increase the learning efficiency of each student individually through the use of AI solutions. Since the country seeks to secure a better position in the international market for education services, this objective is also a priority.

6 Available at: https://www.smartnation.gov.sg/files/publications/national-ai-strategy.pdf (accessed: 16.07.2022)

7 Available at: https://www.smartnation.gov.sg/about-smart-nation/our-journey/achievements (accessed: 16.07.2022)

8 Available at: https://www.qschina.cn/en/university-rankings/asian-university-rankings/ (accessed: 16.07.2022)

Although Singapore is a global center open to international travel, the authorities pay much attention to the security of its borders, with border control as the fifth key project of the national AI strategy. Its implementation is expected to result in more secure borders and better quality services offered to tourists. One of the project's objectives is to make border control fully automatic and monitored by intelligent systems.

Singapore has not adopted a specific strategy for AI regulation, its national strategy containing only one relevant provision, para 4.2, which states that "the intellectual property regulation will be reviewed to make sure Singapore's laws support the development and marketing of new AI technologies"9. Transparent and clear legislation is expected to attract investment and reassure the country's tech entrepreneurs.

The Korean Government announced the adoption of the National Strategy for Artificial Intelligence in 201910 to define the development of AI in Korea until 2030. By this time, the country is expected to rank third in terms of digitization and to successfully compete with global IT leaders; China, Germany and Japan are repeatedly mentioned in the strategy for comparison. In stressing the global importance of AI technologies, the document emphasizes the peculiarities of the Korean digital economy. Practical steps for achieving the strategic objectives include the development of ethical standards, promoting and building confidence in intelligent technologies in society, creating an AI learning support center for data protection, encouraging R&D, and creating new jobs in skills required for the effective development and use of AI. The strategy has 100 nationwide objectives divided into 9 strategies and 3 areas (AI ecosystem, AI use, human-centric AI), with the following three main objectives to be achieved by 2030:

making South Korea more competitive internationally in the area of digital technologies;

achieving full-fledged use of AI in various sectors (e-government, industry, health, etc.);

improving the living standards through the use of AI.

9 Available at: https://www.smartnation.gov.sg/files/publications/national-ai-strategy.pdf (accessed: 16.07.2022)

10 Available at: https://www.msit.go.kr/eng/bbs/view.do?sCode=eng&mId=10&mPid=9&pageIndex=&bbsSeqNo=46&nttSeqNo=9&searchOpt=ALL&searchTxt (accessed: 25.07.2022)

The Korean strategy follows an approach similar to that of Singapore. It is planned to make AI hardware and software more competitive by "identifying and focusing" on the areas where the country can achieve success and a leading edge. Moreover, it is expected to support both fundamental and applied AI studies, that is, to actively develop education and research relevant for intelligent technologies.

In Japan the main strategic document is the Social Principles of Human-Centric AI11. Adopted in 2019, this instrument not only provides a strategy but also contains ethical principles and standards to govern the introduction of AI. The strategy is based on the following principles:

AI-ready society — social changes needed to realize Society 5.0;

Human-centric AI.

To make society AI-ready, Japan should move in this direction jointly with the national government and related industries and businesses. Under the strategic document, its principles should become part of public policies. Moreover, Japan should promote these principles internationally and take leadership in international discussions to create AI-ready societies worldwide.

The strategy's provisions, while not considered as regulations, determine the development path of the country's regulatory framework. Strategic documents also identify social and political priorities to affect nationwide regulatory development. The said documents define the structure and content of future codes of ethics and often provide a basis for regulatory formulas and definitions.

4. Regulatory principles and framework of AI development, introduction and use

Some countries have recently taken steps at the national level to formulate the general principles and provisions for AI regulation in various forms.

11 Available at: https://www8.cao.go.jp/cstp/english/humancentricai.pdf (accessed: 24.07.2022)

Russia is also among such countries, with the AI Code of Ethics mentioned above. The jurisdictions under study do not have universal regulations governing AI. However, Singapore, China, Republic of Korea and Japan have adopted soft regulation in the form of so-called AI codes of ethics.

In 2021, China's Ministry of Science and Technology adopted the New Generation Artificial Intelligence Ethics Specifications12. It was stated as part of its General Principles that the purpose was to introduce ethics into the life cycle of AI development and use, with its normative rules serving to promote fairness, justice, harmony, safety and security, and to prevent problems such as prejudice, discrimination, invasion of privacy and data leakage13.

These rules apply to natural and legal persons, as well as non-profit entities and government agencies involved in AI-related activities, including governance, R&D, procurement and application. The document details each type of AI-related activity. Governance refers to strategic planning, the drafting and implementation of policies, regulations, rules and technical standards, as well as resource allocation, supervision and inspection. R&D mainly means research and development of AI-related technologies and products. Procurement covers the production, operation and sale of AI products and services, while use basically means the purchase, consumption and marketing of intelligent products and services.

The Chinese AI code of ethics also enshrines the following ethical standards and principles:

Enhancing the well-being of humankind.

Promoting fairness and justice.

Protecting privacy and security.

Ensuring controllability and trustworthiness.

Strengthening accountability.

Improving ethical literacy.

The first principle means that AI-related innovations and applications should be human-centric, with the code and its underlying provisions being focused on the needs, values and rights shared by all people. The text makes a special point of the need to observe national and regional ethical standards. In line with the Confucian tradition, it requires adherence to the priority of public interests. Other elements of East Asian culture are visible in the duty to promote harmony between man and machine and to strengthen the feeling of happiness.

12 Available at: https://opengovasia.com/china-develops-code-of-ethics-to-regulate-artificial-intelligence/ (accessed: 16.07.2022)

13 Available at: https://ai-ethics-and-governance.institute/2021/09/27/the-ethical-norms-for-the-new-generation-artificial-intelligence-china/ (accessed: 16.07.2022)

The provision on improving ethical literacy is a principle rarely found in national codes of ethics. The code requires actively studying and mainstreaming knowledge related to AI ethics, gaining an objective insight into ethical problems, and refraining from under- or overestimating ethical risks. It is stated that there is a need to hold or participate in discussions of AI-related ethical problems, as well as to raise awareness of issues of AI ethics and governance.

In Singapore, the main document addressing AI law and ethics is the Model AI Governance Framework14. Published by the PDPC (Personal Data Protection Commission), it contains the guidelines followed by a majority of Singapore's AI developers. The document's second edition was presented at the annual meeting of the World Economic Forum in Davos, in January 202015.

The standards and principles stated in the Model AI Governance Framework are discretionary. The document provides advice on issues to be discussed when assessing specific applications of AI technology and on possible confidence-building steps. The Model AI Governance Framework also recommends reasonable steps to bring the in-house policies, structures and processes of private companies and public agencies in line with existing data governance and protection practices. Despite the Framework's non-binding nature, many companies in Singapore have undertaken to adhere to its standards and principles. Many tech companies have also incorporated the document's standards into corporate bylaws, making the discretionary guidelines binding on their staff.

As stated in the Model AI Governance Framework, all regulations applicable to AI relationships should rely on the following two principles: AI should be explainable, transparent and fair; AI should be human-centric. To describe the first principle, the document makes use of three attributes at once: explainable, transparent and fair. Many guidelines and regulations refer to the said attributes as specific principles underlying the use of AI [Floridi L., Cowls J., 2021]; [Engstrom D., Ho D., 2020].

14 Available at: https://ai.bsa.org/wp-content/uploads/2019/09/Model-AI-Framework-First-Edition.pdf (accessed: 25.07.2022)

15 Available at: https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/SGModelAIGovFramework2.pdf (accessed: 31.07.2022)

The human-centric attribute enshrined in the code has to be clarified. Under the text, the AI governance rules should primarily take into account human nature, rights and liberties, human needs and creative potential. It means that the rules to be enshrined in a regulation should be for the benefit of people in the first place. An emphasis on this principle is questionable: human-centricity, as observed by many researchers in Russia and elsewhere [Chesterman S., 2020], is an a priori attribute of any social rules, whether ethical or legal.

The explainable attribute reflects to what extent AI is understandable to an outside observer. As applied to regulation of social relationships arising from AI use, it primarily means understanding of AI decision-making processes by society.

Transparency is an AI attribute close to some extent to explainability, since it also means that society should be able to exercise control over the functioning of an intelligent system. As an AI attribute, transparency can be understood in two ways: legal transparency (accessibility of program code despite the intellectual property or trade secret regimes enshrined in national legislation) and algorithmic transparency (understanding how the algorithm works).

Fairness as an AI attribute often means that decisions made by intelligent systems will be free from discriminatory human prejudice of various kinds, which in scholarly literature and regulatory documents is equated with discrimination based on race, culture or gender [Gentzel M., 2021].

The Framework explains that human-centricity means AI should be used to amplify human capabilities, protect human interests and ensure the well-being and safety of man. These considerations are of primary concern in the design, development and deployment of AI in Singapore. This list of attributes is reminiscent of the human-centric or humanistic approach also mentioned in the Chinese AI code of ethics.

Singapore does not just declare the principles of AI ethics but also creates the tools to make them real. On 25 May 2022, the Infocomm Media Development Agency (IMDA) and the Personal Data Protection Commission (PDPC) announced the creation of AI Verify, the world's first AI governance testing system intended for companies willing to demonstrate compliance with AI ethical principles in an objective and verifiable way. This development, designed to make AI-based IT products more transparent, is now at the minimum viable product (MVP) stage16.

Developers and owners can test the declared performance of AI systems on standardized tests in accordance with a set of principles. AI Verify brings together a mix of open-source testing solutions, including process audits, into a convenient self-assessment toolbox. This toolbox generates reports for developers, managers and business partners covering the main aspects affecting AI performance.

The approach boils down to testing products for compliance with the Model AI Governance Framework. Testing applies to AI attributes such as transparency (compliance with stated outcomes, understanding of decision-making processes, and absence of unintended bias), safety, system sustainability and performance tracking capability. This system is actually an intelligent technology for autonomous compliance checking.
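To give a concrete sense of what an automated self-assessment check of this kind might look like, the following sketch (purely hypothetical and in no way based on the actual AI Verify toolkit or its API) computes a simple demographic-parity gap over a model's decisions and flags it against a tolerance chosen by the assessor:

    # Hypothetical fairness self-check, not the real AI Verify toolkit: it measures
    # whether positive decision rates differ across groups by more than a tolerance.
    from typing import Sequence

    def demographic_parity_gap(decisions: Sequence[int], groups: Sequence[str]) -> float:
        rates = {}
        for g in set(groups):
            outcomes = [d for d, gg in zip(decisions, groups) if gg == g]
            rates[g] = sum(outcomes) / len(outcomes)  # share of positive decisions per group
        return max(rates.values()) - min(rates.values())

    # Example: decisions (1 = approved) for applicants from two groups, "A" and "B".
    decisions = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    gap = demographic_parity_gap(decisions, groups)
    print(f"demographic parity gap: {gap:.2f}", "PASS" if gap <= 0.2 else "REVIEW")

A real governance toolbox would combine many such checks (robustness, explainability, performance drift) and compile them into a report, but the underlying logic is the same: a measurable attribute, a threshold, and a verifiable verdict.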

In December 2020 the Ministry of Science and ICT, jointly with the Korea Information Society Development Institute, presented the AI Standards of Ethics, a summary of the key principles and requirements for AI technologies, at the meeting of the Presidential Committee on the Fourth Industrial Revolution17. The document contains 3 core principles and 10 requirements for AI systems to be developed and introduced.

The core principles enshrined in the document are: human dignity (human life has the highest value; AI should be designed and used in a way that is not harmful to the physical and mental health of man); public utility (AI should be used to achieve the maximum well-being for everyone and to ensure the protection of vulnerable groups which may be isolated from the information society because of their status); and viability (the use of AI should correspond to the purposes and intentions of the field of activity for which it was designed and comply with ethical standards).

In Japan, AI ethics is regulated by the Social Principles of Human-Centric AI18. While the document assumes that the introduction of new ethical principles will lead to the realization of Society 5.0, the regulatory principles to be introduced should rely on a new philosophy.

16 Developing the MVP for AI Governance Testing Framework. Available at: https://www.pdpc.gov.sg/news-and-events/announcements/2021/07/developing-the-mvp-for-ai-governance-testing-framework (accessed: 24.07.2022)

17 Available at: https://www.korea.kr/news/pressReleaseView.do?newsId=156428773 (accessed: 24.07.2022)

18 Available at: https://www8.cao.go.jp/cstp/english/humancentricai.pdf (accessed: 04.07.2022)

The philosophy of Society 5.0 is underpinned by three core values. Dignity: under the Japanese code of ethics, the new society will have respect for human dignity. People should not be overly dependent on AI, nor should the technology be used to control human behavior through the excessive pursuit of convenience and efficiency. Using AI as a tool, it is proposed to construct a society where people can better demonstrate various human abilities: show greater creativity, engage in challenging work, and live richer lives both physically and mentally. This principle to a large extent echoes the statements of other digitization leaders in Asia (China, Singapore, Korea).

Diversity and inclusion is another principle, which assumes that people with diverse capacities, characteristics and backgrounds can pursue their own well-being. While the principle is rather an ideal, the document puts it forward as an objective for the realization of Society 5.0. People of diverse backgrounds, values and ways of thinking should be able to pursue their goals. The first two principles of Society 5.0 echo the principles of human-centricity stated in the AI codes of ethics of China and Singapore. The same principle is enshrined in the majority of Western AI regulations.

The third principle of the Society 5.0 philosophy is sustainability. AI should be used to create a range of new businesses and solutions that resolve social disparities and develop a sustainable society. There is a need to address global environmental problems and climate change. The sustainable development concept is widespread and now forms part of many strategic documents at the international level, one of the best known being the Sustainable Development Goals published by the United Nations19. Judging by its text and the explanations of this principle, the Japanese code echoes the UN SDGs. It also has obvious links to the Confucian concept of social harmony and animistic ideas of universal connection.

Conclusion

The analysis shows a considerable similarity of AI regulatory principles in the states studied, both at the level of documented strategies and codified regulatory principles, with the well-being of society and state as the predominant vector. Although certain documents under study refer to human rights and individual liberties, the key idea is the pursuit of a prosperous and sustainable society. This approach is better suited to be replicated in the context of Russia.

19 Available at: https://www.un.org/sustainabledevelopment/ru/sustainable-development-goals/ (accessed: 24.07.2022).

In Asia man is conceptually regarded as an object rather than a subject (which is less true for Singapore). All the documents under study are based on a humanistic approach providing for an assessment of AI impact on users, society and the environment, something that should not deceive unsophisticated readers. First, this humanism towards man is passive: while developers have an obligation to make technology humane, the authorities have the right to control this process. Second, the priority is a prosperous society, not man. This Asian humanism is considerably different from what is enshrined in codes of ethics in Western Europe and North America.

At the same time the humanistic approach stated in Asian countries marks a step towards people and their needs. It welcomes solutions that do not harm but improve the life of people and society [Xu L., 2020]. Moreover, as some authors rightly note, the introduction of any technology is a step towards dehumanization by default [Oviatt S., 2021: 278-287]. Technologies replace and oust man from decision-making by reducing human understanding and control of events. Hence, it is necessary to enshrine this principle, since any technology is by default anti-human unless its developers and operators are required to apply it with regard to man and to human values, liberties and needs.

The potential connecting link between the Western and Eastern approaches is the protection/safeguarding of human dignity. Despite different priorities and objectives, all the national codes of ethics make a point of safeguarding human dignity in one way or another, with human needs, abilities and characteristics to be taken into account in developing and using AI.

In addition, responsibility for damage caused is also a point in common. While the concept of individual responsibility before society and the state undoubtedly exists in the West, Eastern societies focus on loyalty and responsibility to one's family or even to strangers. Despite significant cultural differences, developers, owners and other persons involved in AI operation should be responsible for their actions.

References

1. Aslamova E.A. et al. (2018) Information system for assessing industrial safety level using the knowledge-based system technology. Reshetnevskiye chteniya=Reshetnikov's Readings, vol. 2, pp. 221-223 (in Russ.)

2. Begishev I.R., Khisamova Z.I. (2021) Artificial intellect and criminal law: a study. Moscow: Prospekt, 192 p. (in Russ.)

3. Chesterman S. (2020) Artificial Intelligence and the Problem of Autonomy. Notre Dame Journal of Emerging Technologies, issue 2, pp. 211-250.

4. Engstrom D., Ho D. (2020) Algorithmic accountability in the administrative state. Yale Journal of Regulation, vol. 37, no. 3, pp. 800-854.

5. Floridi L., Cowls J. (2021) A Unified Framework of Five Principles for AI in Society. Philosophical Studies Series, vol. 144, pp. 5-17.

6. Gentzel M. (2021) Biased Face Recognition Technology used by Government: Problem for Liberal Democracy. Philosophy and Technology, vol. 34, no. 4, pp. 1639-1663.

7. Khabrieva T. Ya., Chernogor N.N. (2018) Law in the era of digital reality. Zhurnal rossiyskogo prava=Journal of Russian Law, no. 1, pp. 85-102 (in Russ.)

8. Kokuryo J. (2022) An Asian perspective on the governance of cyber civilization. Available at: https://doi.org/10.1007/s12525-022-00523-5

9. Nikolaenko S., Kadurin A., Archangelskaya E. (2020) Deep Learning. Saint Petersburg: Piter, 480 p. (in Russ.)

10. Oviatt S. (2021) Technology as infrastructure for dehumanization: three hundred million people with the same face. ICMI 2021: Proceedings of the 2021 International Conference on Multimodal Interaction, pp. 278-287.

11. Padhy N.P. (2005) Artificial intelligence and intelligent systems. Oxford: University Press, 231 p.

12. Roberts H., Cowls J. et al. (2020) The Chinese approach to artificial intelligence: an analysis of policy, ethics, and regulation. Available at: https://link.springer.com/content/pdf/10.1007/s00146-020-00992-2.pdf

13. Silver D., Schrittwieser J. et al. (2017) Mastering the game of Go without human knowledge. Nature, no. 550, pp. 354-359.

14. Vaipan V.A. et al. (2019) Regulating Economic Relationships in the Modern Context of Digital Economic Development: A Study. Moscow: Yustitsinform, 376 p. (in Russ.)

15. Vasiliev A.A., Ibraghimov Zh.I. (2019) The regulation of robotics and artificial intelligence in the EU. Rossiysko-aziatskiy pravovoy zhurnal=Russian-Asian Law Journal, no. 1, pp. 50-54 (in Russ.)

16. Vasiliev A.A., Pechatnova Yu.V. (2020) Artificial intelligence and law: issues, prospects. Rossiysko-aziatskiy pravovoy zhurnal=Russian-Asian Law Journal, no. 2, pp. 14-18 (in Russ.)

17. Xu L. (2020) The Dilemma and Countermeasures of AI in Educational Application. ACM International Conference Proceedings, pp. 289-294.

18. Yakushev D.I. (2016) The Definition of Artificial Intelligence. In: Selected Works: Regional Information Technology and Cyber Security. Saint Petersburg: Society of Information Technology, Computing Equipment, Communication and Management Systems, pp. 67-69 (in Russ.)

19. Zaginailo M.V., Fathi V.A. (2020) The genetic algorithm as an efficient tool of evolutionary algorithms. Innovatsii. Nauka. Obrazovaniye=Innovations, Science, Education, no. 22, pp. 513-518 (in Russ.)

Information about the author:

R.I. Dremliuga, Candidate of Sciences (Law), Professor.

The article was submitted 15.06.2022; approved after reviewing 02.08.2022; accepted for publication 19.08.2022.
