


Legal Issues in the Digital Age. 2023. Vol. 4. No. 3. Вопросы права в цифровую эпоху. Том 4. № 3.

Research paper. UDC: 342

DOI:10.17323/2713-2749.2023.3.81.96

Artificial Intelligence Governance and China's Experience under the Community of Common Destiny for Mankind Concept

Jia Shaoxue

Center for International Legal Training and Cooperation for the SCO, Shanghai University of Political Science and Law, 7989 Weiqingsong Ave., Qingpu District, Shanghai 201701, China

Abstract

In recent years artificial intelligence (AI), backed by big data and the Internet, has been developing rapidly and shaping the future direction of the world's scientific and technological development. Although AI benefits the scientific and technological revolution and the industrial modernization of mankind, it has also brought new risks. Growing attention is therefore being paid to the potential risks of AI, which need to be managed effectively. AI risks are characterized by the diversity of technological threats, the similarity of the risks faced by different countries and the high complexity of governance, all of which call for concerted efforts by all countries. It is necessary to develop artificial intelligence from the perspective of the common interests of mankind, to ensure that AI remains safe and manageable, and to strengthen international cooperation. At present Western countries advocate technological hegemony and technological monopoly, while developing countries have little opportunity to express their opinions on AI governance; China's Community of Common Destiny for Mankind concept is therefore relevant to AI governance. Based on that concept, the paper explores China's new practices and proposals for domestic and international AI governance. In response to the problem of overuse and misuse of new technologies, China proposes to establish an AI governance system that includes joint management by various actors, open and transparent regulation, comprehensive consultations, and the development of effective evidence-based laws, so as to promote the beneficial development of artificial intelligence in the future and to contribute to the deepening of AI governance on the basis of the Chinese proposal.

© Shaoxue J., 2023

This work is licensed under a Creative Commons Attribution 4.0 International License

Keywords

global governance; artificial intelligence; risks; community of common destiny for mankind; Chinese experience.

Acknowledgements: The paper was drafted under the general project of the National Social Science Foundation of the People's Republic of China "Research on the Legal Governance Mechanism of Data Security in the SCO" (project No. 22BFX160).

For citation: Jia Shaoxue (2023) Artificial Intelligence Governance and China's Experience under the Community of Common Destiny for Mankind Concept. Legal Issues in the Digital Age, vol. 4, no. 3, pp. 81-96. DOI:10.17323/2713-2749.2023.3.81.96

Background

Humanity has embraced the age of artificial intelligence. A major driving force of the fourth industrial revolution, AI technology is giving a new lease of life to such important sectors as military science, finance, education, science and technology, and culture, while providing enormous capabilities for the historical evolution of humankind and creating a new model of global development [Shen X., Shi B., 2018: 15]. AI is shaping the future of human society in an unprecedented way. While countries take the inherent challenges of AI technologies seriously, the uncertainty of the risks remains a major social concern. In the context of threats that already exist or are likely to emerge as AI evolves, countries need to manage and regulate these risks as a matter of priority. At present, both domestic and international academic circles lack an analysis of the Chinese concept of and approach to AI governance. For this reason, this paper draws on China's Community of Common Destiny for Mankind concept to discuss the peculiarities of AI governance and the Chinese proposal for managing AI with global development prospects in view.

1. Specifics of AI Threats

An enormous commercial and social value of AI technologies is now propagating them across different spheres of life. As a new generation of information technologies, AI normally exists in the form of software and hardware and includes a host of applications responding to vision, hearing and other sensory stimuli, such as imitation of human games, language translation, automated driving, face recognition etc. Depending on the use, the following three AI categories can be distinguished: weak artificial intelligence, artificial general intelligence and artificial superintelligence.

Weak artificial intelligence covers AI technologies endowed with some cognitive capability and widely used in everyday life, such as voice recognition, translation, face recognition etc. This type of AI has seen the largest-scale development and the greatest marketing success.

Artificial general intelligence has cognitive ability matching that of man, with a single AI system able to perform a multitude of cognitive activities and behave intelligently, such as managing unmanned combat aircraft for an autonomous analysis of terrain and assessment of threats, functioning as generative AI, etc. [Zhang L., 2023: 126-128].

Artificial superintelligence has a cognitive ability beyond that of man, surpassing man in such spheres as scientific innovation and the autonomous production of knowledge.

Thanks to a breakthrough in data science, computing capabilities and algorithms, AI has entered a new age of explosive development. Some researchers believe that AI will evolve exponentially once the singularity limit is overcome [Han Y., Zhang F., Peng J., 2023: 122]. The greatest peculiarity of artificial intelligence is that it may become self-conscious in the future [Yu N., 2017: 95-96]. If AI is not guided by human standards and its growth is not restrained, the risk will become unmanageable. AI technologies carry inherent risks and threats, which for practical governance translate into the following aspects:

First, AI-related threats are diverse. Technological threats are largely concentrated in the following spheres. In the military sphere, AI is able to make independent decisions, while its capacity to collect and analyze huge amounts of data can undermine traditional methods of warfare, for instance through the use of unmanned aircraft and other types of arms, thereby widening the gap in military power between countries. As regards the economy, AI will replace man and change the future of work, inevitably generating deeply rooted conflicts in the global social structure that result in segregation and inequality [Ma C., 2018: 48-55]. AI also affects industrial development at the national level, which can create financial risks, sectoral monopolies and other negative implications. In the social sector, AI technologies are subject to algorithmic discrimination and biases that give rise to legal and moral dilemmas fraught with considerably more violations of privacy and ethical risks. In short, AI technologies gradually affect human behavior and result in risks not predicted by system developers, engendering multiple threats for human society with regard to employment, law, privacy, ethics and security [Wu S., Luo J., 2018: 112-114].

Second, countries face similar AI risks. In the age of globalization, many AI-related issues of political governance are of global importance. As AI technologies spread, the underlying risks grow and progress across the world, spreading across borders from the national to the international level. No country, organization or person can handle AI technological threats independently. The reliance of artificial intelligence on big data for algorithmic operation results in security risks such as leakages of personal data and state secrets, for example widespread theft of personal data, intrusion into public networks and loss of control over national data, something that no sovereign state with a traditional closed governance system can be safe from. Moreover, the global use of AI technologies faces general problems. For example, self-driving vehicles are being developed across the world, giving rise to numerous legal problems. Who will be held responsible in the event of an accident between a self-driving vehicle and a human driver, if neither party is at fault? The relevant legal obligations need to be assigned and accepted.

Third, AI-related risk management is complex. AI governance has moved beyond the scope of relationships between individuals up to the intergovernmental level and, not confined to the protection of privacy, data leakage etc., extends to the level of human consciousness and the operation of state and society. Countries are faced with the choice of methods to govern AI. Disputes between countries over who has the right to formulate and interpret AI governance rules have made global cooperation in this area problematic. Although the importance of AI governance has been recognized worldwide over the last few years, the tendency of countries to go their own way has become relatively obvious in practice. Governing AI requires not only a generally accepted concept but also technical implementation in the form of rules. Since AI technologies span different countries, cultures and spheres, success will depend on the cognitive understanding of each country and, at the same time, on the extent of concerted action by the international community.

2. Pressing Problems of AI Governance

To explore AI governance, one first needs a basic theoretical understanding. The core elements of AI governance are several and include subjects, objects and methods of governance. The first largely include governments, international organizations, public institutions etc. Objects of governance include AI technologies themselves and related problems. Methods, that is, the specific means and policies used to govern AI, largely cover ethical limitations, technological innovations, and regulatory and legal provisions reflecting the rules, concepts and underlying values to be observed in using AI technologies. The joint efforts of all subjects result in control over the object of governance and provide a basis for addressing the global challenges and transnational threats that emerge in the process of technological change. The current controversy over who should direct AI governance, what counts as efficient regulation, what values should be upheld and what methods adopted prevents collective action; AI governance has thus become a global social problem affecting the interests of the population at large, a problem of competition and a difficulty of conceptualizing values.

2.1. Diverging Interests of AI Governance Subjects

AI governance subjects include public authorities, non-governmental organizations (NGOs), enterprises, research centers, private individuals etc. Governments assume the leading role in AI application, research and development; high-tech companies act as developers and suppliers while NGOs, research institutions and individuals are important parties in terms of relevant assessment and opinion. Depending on the governance scope, the activities of numerous parties involved in this process normally take place on two different levels: national and international.

AI governance at the national level is a very complex task involving conflicts of interest between different subjects. While new AI should be regulated, overly rigid regulation will obstruct technological progress: businesses are interested in minimal provisions that leave more profits and room for independent decision-making, while the public sector will opt for stability and security, including to avoid unethical and illegitimate use of the technology. NGOs, research centers and individuals act as watchdogs guarding against moral prejudice, discrimination, racism, human rights violations and other AI-related problems, and perform a monitoring function with regard to public opinion by producing societal moral judgments in the process of governance. In the age of AI, a regime that is too strict or too relaxed will affect the interests of each subject, resulting in a chaotic AI governance pattern from the perspective of law, ethics and the economy.

AI governance at the international level is not only an intrinsically technical problem but also a problem of international development standards. The economic foundations and technological resources of countries are imbalanced on the international scale, with evident disproportions and deviations observed in technological development. The international community has realized that common standards contribute to the global development of AI, but the harmonization process is not simple. Moreover, technological AI standards vary across countries and regions. Policy development normally falls behind the speed of technological change: decision-makers cannot fully understand AI for lack of adequate experience and may therefore make wrong decisions, while cooperation mechanisms between civil servants and technology researchers are often absent. Moreover, despite the adoption of certain national technological standards by the international community, international organizations at the sectoral level cannot engage in a technical dialogue for lack of relevant practical experience.

2.2. Aggravating Competition for AI Governance

Thanks to an overall breakthrough in its three core components (data, algorithms and computing power), AI has demonstrated a capacity matching or even surpassing that of man in spheres such as education and technologies, traffic management, financial investment and legal proceedings, and has become a field of competition between countries [Li C., 2021: 127-128]. Progress in AI technologies is closely tied to national competitiveness. The international technological rules and coordination mechanisms applicable to AI are currently dominated by the governments of developed countries such as the United States. Over the last few years the United States, the European Union, the OECD and other large countries and organizations worldwide have one after another launched AI policy plans to resolve pressing issues.

The European AI Strategy builds on trust as a prerequisite for a human-centered approach to AI. In April 2019, the European Commission published Building Trust in Human-Centric Artificial Intelligence1, a document describing the key requirements and the concept of trustworthy AI presented by the High-Level Expert Group on AI in the Ethics Guidelines for Trustworthy AI. According to the Guidelines, trustworthy AI should be: lawful, respecting all applicable laws and regulations; ethical, respecting ethical principles and values; and robust, both from a technical perspective and taking into account its social environment2.

1 Available at: https://ec.europa.eu/digital-single-market/en/news/communication-building-trust-human-centric-artificial-intelligence (accessed: 01.09.2023)

On 22 May 2019 the OECD countries officially approved the first package of intergovernmental AI principles, endorsing international standards for the robustness, security, sustainability, fairness and safety of AI systems3.

In May 2023 the US Administration published a new National AI R&D Strategic Plan defining the key priorities and purposes of the Federal Government's investments in AI research and development4. As part of international efforts to ensure responsible use of AI, in the same month the G7 initiated the Hiroshima AI Process, which promotes an open and constructive dialogue on the implications of AI tools such as ChatGPT, an AI model developed by the Microsoft-backed OpenAI. Moreover, at the Hiroshima summit the G7 leaders stressed the need to develop and adopt relevant technical standards to support AI "robustness". They also noted the importance of ensuring that AI advances comply with common democratic values5.

On 8 June 2023 the United States and the United Kingdom approved the Atlantic Declaration on economic partnership, which underlined the need to further strengthen cooperation in fields such as artificial intelligence in order to secure American and British leadership in key and emerging technologies6. The Declaration reaffirms that Western countries intend to be fully involved in the global governance of emerging technologies, making it a major field for further discussion and global leadership.

2 Available at: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai (accessed: 31.08.2023)

3 Available at: https://globalcentre.hse.ru/news/276245330.html?ysclid=lmbc9egy9e516596400 (accessed: 02.09.2023)

4 Available at: https://d-russia.ru/administracija-ssha-opublikovala-novyj-strategiche-skij-plan-issledovanij-i-razrabotok-v-oblasti-iskusstvennogo-intellekta.html (accessed: 03.09.2023)

5 Available at: https://www.fullrio.com/economy-70466 (accessed: 03.09.2023)

6 Available at: https://baijiahao.baidu.com/s?id=1771350683397528978&wfr=spider&for=pc (accessed: 03.09.2023)

Governments currently regard AI technologies as a key to the future of their countries, thus manifesting a clearly national interest. In the absence of a major coordinating body vested with absolute powers, many countries have sought a dominant position in setting AI governance rules, invoking the technological gap and technological inequality. Striving to secure maximum domination, Western countries headed by the United States are taking steps to hold back developing countries and break away from them in technological development, which further undermines the cooperative nature of global AI governance. In the age of artificial intelligence, the workforce from developing countries is involved in the international division of labour on much looser terms, with the governing power of sovereign countries in decline [Han Y., Zhang F., Peng J., 2023: 138-139]. The problem of technological inequality is obstructing technological progress in developing countries, while the enormous potential of the leading nations may eventually result in technological hegemony [Mei L., 2023: 53]. Today developing countries do not have much say in AI governance, as the projects they are involved in are relatively few. For this reason, developing countries need to constantly improve their technological potential in this field and promote a reform of the existing global system of AI governance in the interest of their own development.

2.3. Lack of Value-based Consensus in AI Governance

The process of technological development of AI is closely related to the world's global development path, civilizational concepts and ideologies. Due to the specifics of political systems, national contexts and cultural traditions, AI-related technological policies worldwide differ considerably and pursue different values. Many countries attempt to impose AI values that promote their own development needs and interests while sticking to a technological model underpinned by their core values. Although on 25 November 2021 the United Nations Educational, Scientific and Cultural Organization (UNESCO) adopted the Recommendation on the Ethics of Artificial Intelligence7, the first ever global standard on AI ethics, such globally recognized standards are few and not binding.

7 Available at: https://d-russia.ru/junesko-prinjala-rekomendaciju-ob-jeticheskih-aspe-ktah-iskusstvennogo-intellekta.html?ysclid=lmbgz5ysye848477271 (accessed: 02.09.2023)

AI should be underpinned by the right values, with fairness and equity as the main value-based principles; otherwise it can depart from its original purpose and become a tool for those in power to keep their privileges [Sun W., 2017: 120-126]. There is currently no universally applicable regulatory system with international AI governance rules [Zhu M., Xu C., 2023: 1037-1049]. From the global perspective, the diverging governance concepts of different countries are a major problem for AI governance. AI systems worldwide are influenced by different values, which prevents effective cooperation in global AI governance. Western countries headed by the United States are attempting to dominate by combining values with technological monopoly in order to keep their leadership and the established world order. Apart from the technology as such, the United States clearly showcases the values of the Western world, namely freedom, democracy and the rule of law, while insisting that AI technologies should comply with American values and interests. Imposing "unilateral" values on other countries aggravates the existing conflicts in global AI governance.

3. China's AI Governance Proposals

The extent to which AI-related risks can be reduced depends on the evidence-based governance mechanism to be created. Based on the regulatory framework, countries should ensure ethical support for the development of relevant technologies and the interpretation of algorithms, as well as a proper balance between technical responsibility and ethics, so that AI systems can be used in a fair, transparent and safe environment. In a deeper sense, AI challenges the subjective status of "man", posing questions such as how man and machine can co-exist, how legal liability between man and machine can be defined, and how AI's legal status and liability can be determined. The primary purpose of AI governance is to ensure the safety of man, so that machines comply with existing human moral and value-based attitudes. China's concept of the Community of Common Destiny for Mankind not only follows the logic of common human development but also paves a realistic way to address the AI development dilemma.

3.1. Promoting collective governance of multiple subjects

AI development involves multiple stakeholders and requires the collective participation of many subjects in AI governance. Domestically, each state has to establish linkages between companies and government agencies to shape a cooperation model in a competitive context; companies should be made to comply with their social obligations and follow the principles of safety in developing and applying new technologies; civil society should play a monitoring role in achieving social consensus, promoting common goals and improving the efficiency of AI governance. Globally, there is an evidently one-sided trend in AI governance, with developing countries less involved in these efforts and unable to make their voices heard. The AI governance community should involve not only developed economies but also developing nations. From the global perspective, AI governance concerns the common interests of all mankind, which requires fully accounting for a balanced development of global AI technologies and giving more voting power to developing countries so as to promote the favourable impact of AI technologies worldwide by bridging the digital divide. AI governance should promote openness and cooperation, fully mobilize the enthusiasm of multiple stakeholders, shape a truly multi-subject model of governance involving national public authorities, R&D companies, international organizations and civil society, bring together the existing governance platforms and institutions on the global scale, and create and improve a wider platform for international cooperation.

The Community of Common Destiny for Mankind concept assumes joint consultations, cooperation, and respect for the common interests of all mankind. Being part of the overall structure of the Community, global multi-level synergetic cooperation is a major element of China's involvement in and promotion of global AI governance. AI development hinges on a synergetic governance system shaped at the global level through cooperation between all parties. China adheres to the Community of Common Destiny for Mankind, opposes the technological monopoly of a few countries in AI and focuses on joint cooperation among all countries, especially on technological exchanges between developing countries. In 2017, China created the Agency for Promotion of Development Planning of New Generation AI to organize and implement the development planning of new generation AI and major R&D projects8. Universities, research centers and companies have established AI committees to advise on AI-related implementation issues. The government has created a system for promoting AI advances for better social governance and guidance. An important goal is to develop sectoral guidelines for self-regulation of the AI sector and for sharing the best AI development practices with those in need.

3.2. Creating an open and transparent regulatory mechanism

A majority of AI-related innovations are implemented by the global technological powers, notably Western countries; hence the importance for these countries of policies for responsible technological regulation. In the globalized world, the question of how to minimize the negative implications of technologies through regulation is the key to governing AI. Europe and the United States are reinforcing the legal regulation of AI. On 14 June 2023 the European Parliament voted to approve the AI Act, which became the world's first comprehensive AI regulation to pass the parliamentary process. The Act purports to supervise AI systems by classifying them across four risk categories ranging from "minimum risk" to "unacceptable risk"9.

8 Available at: https://www.sohu.com/a/646248011_121106842 (accessed: 02.09.2023)

In terms of normative regulation and innovation, countries largely differ from each other. To address this issue, governments should strike an optimal balance between regulation and innovation without going to extremes such as over-regulation or laissez-faire permissiveness. Moreover, since a common regulatory system serving the interests of a few technologically advanced countries will inevitably fail, it is necessary to take into account the common regulatory interests of countries as part of international cooperation and to set up a multi-party AI regulatory network. With each country being a "regulator" and a "competitor" at the same time, countries will jointly work to establish a common, transparent and interpretable regulatory regime for AI.

While attaching much importance to the development and use of AI technology, China proposed making AI a national development priority back in 2015, promoting the deep integration of AI into political and social life and using national leadership for regulatory guidance to ensure the sound and robust development of the AI industry. Domestically, China strives to improve an open and transparent AI regulatory system and to develop a system of rapid response to technological risks. At the international level, the focus is on the joint regulatory involvement of the global community. China proposes to step up research on global issues of common interest, and supports the creation of international organizations on AI and the joint development of the relevant international standards. Relying on the Community of Common Destiny for Mankind concept, China hopes to promote common and transparent regulatory standards at the global level for the safe and widespread use of AI technologies.

3.3. Towards the Principle of Comprehensive Consultations

The social impact of artificial intelligence is largely about human values. The pursuit of universal human values embraces peace, equity, development, justice, democracy, freedom etc. In view of the complexity and diversity of human societies and the abstract expression of human culture, the global consensus on AI governance should be anchored in the principal question, the common destiny of mankind. Since AI affects all mankind, AI governance should be human-centered and provide for human interests and human agency. Governments and societies should work collectively to ensure human autonomy in governance practices [Gao Q., 2020: 101]. At present, several international organizations have proposed reference frameworks for regulating AI governance, but the global AI governance mechanism is yet to be improved, and no common reference framework is in place.

9 Available at: http://news.sohu.com/a/687493860_121124603 (accessed: 02.09.2023)

In June 2019 China published the Principles of New Generation AI Governance for Responsible AI10. They differ from AI guidance issued by other countries in their greater focus on the importance of jointly building the Community of Common Destiny for Mankind for sustainable economic, social and environmental development, based on a cooperative model rather than one dominated by any single country. In particular, China has put forward eight principles: harmony and friendship, integrity and equity, inclusion and joint use, respect for privacy, safety and control, common responsibility, openness and cooperation, and flexible governance. AI should be developed so as to preserve social stability, with responsible AI to be implemented on the basis of a comprehensive review of risk management initiatives. Thus, the Principles of New Generation AI Governance for Responsible AI encourage coordination and cooperation between global organizations, public authorities, research centers, education institutions, companies, civil society and the population in promoting AI development and governance, and underline the need for a broad consensus on the international AI governance system, standards and norms to be established with the help of international organizations.

In June 2020 China's research centers published the White Book for Sustainable AI Development, putting forward for the first time the principle of sustainable AI development based on "respect for consultations and study of the engagement culture", as well as a solution to the future AI governance problem by "promoting sustainable development of the AI industry and creating the Community of Common Destiny for Mankind"11. In promoting synergetic cooperation between countries, China is striving to set up a cooperative platform on AI, with relevant issues proposed for the agenda of the G20, APEC and BRICS workshops or those held on a bilateral basis. China advocates a global AI governance mechanism based on the Community of Common Destiny for Mankind concept to make sure that AI governance serves to achieve the common good, bridge the digital divide, ensure social equity and justice, observe moral and ethical standards, and contribute to the progress of human civilization.

10 Available at: https://rn.gmw.cn/baijia/2021-06/29/34959031.html (accessed: 05.09.2023)

11 Available at: https://baijiahao.baidu.com/s?id=1670258881368998719&wfr=spider&for=pc (accessed: 05.09.2023)

3.4. Developing and Approving Effective and Evidence-based Laws

The spectacular development of AI technologies since the early twenty-first century has had a considerable impact on the existing legal system and methods of public governance, with the disruption of law and order being a major challenge faced by mankind. Regulatory failure and the disruption of law and order are manifested at the central level as a "governance deficiency" [Zhang W., 2021: 18-23]. Theoretically, the AI challenge means that certain traditional legal concepts or views no longer compatible with AI are to be amended accordingly [Chen J., 2018: 137-138]. The inadequacy of laws and regulations in identifying persons at law and assigning liability for AI products can impact the development of related sectors. China actively advocates "human-centered" AI serving "a good cause". China's AI governance system is now evolving towards comprehensive and refined governance based on exploring the possibility of a governance system combining "soft ethics" and "hard law". Academic lawyers have conducted profound studies of data rights, confidentiality on the Internet, personal data rights, core human rights and other aspects concerning different subjects, with positive results being achieved [Chen P., 2018: 71-72]. Following the idea of security and parallel development, China has adopted a number of underlying laws and policies to regulate and transform the new generation of AI technologies.

The improvement of laws and regulations on AI-related data security comes first. Public and regulatory authorities have adopted the relevant regulations to respond to regulatory needs in their respective domains in a positive way. The Provision for the Development of New Generation AI (published on 20 July 2017) is a policy document to develop AI in China before 2030 with a focus on the goals, key objectives and guarantees of the new generation AI. Based on this document, China has adopted and made effective a number of regulations such as the Law on Personal Data Protection (in force since 1 November 2021) which provides that no organization or individual can illegally gather, use, process or transmit personal data of other individuals, illegally offer for sale, provide or disclose personal data of others; they should not engage in personal data processing operations that pose a threat to national security or public interest. The Law on Data Security (in force since 1 September 2021) expands the importance and scope of data application with a special focus on the data security regime.

Defining the development limits of AI technologies comes second. AI is an emerging technology to be regulated with adequate account for innovative developments and applications, while providing for tighter regulation of the legal liability of developers, suppliers and users and defining their core obligations in the relevant laws. In 2020, the legislative plan of the Standing Committee of the National People's Congress (SC NPC) mentioned AI-related legislation and regulation by explicitly noting the need to focus on legal issues related to new technologies and fields such as artificial intelligence, blockchain and gene editing. To implement the legislative plan, China has adopted the following regulations: the Ethical Code for the New Generation AI (published on 25 September 2021), which guides natural and legal persons involved in AI-related activities on ethical standards; the Provisions on Promoting the AI Industry Development in the Shenzhen Special Economic Zone (in force since 1 November 2022), China's first bylaw to promote the sector's development; the Provision on Managing Algorithmic Recommendations for Web-Based Information Services (in force since 1 March 2022), which imposes the main responsibility for algorithmic security on platform companies and provides users with the right to choose recommendations and delete data labels; it also contains a clear requirement that algorithmic recommendation services observe public morals [Xu K., 2022: 125-130]; and the Provision on Governing the Deep Synthesis of Web-Based Information Services (in force since 10 January 2023), which provides that deep synthesis technologies cannot be used for any activity prohibited by laws and regulations, with suppliers assuming the main responsibility for security. All this reflects the value-based focus on disseminating algorithms for the common good at the level of algorithmic governance in China. Finally, the Time-Bound Policies for Governing the Generative AI Services (in force since 15 August 2023) contribute to managing the relevant risks as a bylaw applicable to generative AI [Zhang X., 2023: 43-48].

Conclusions

The development of laws and regulations in different countries lags behind the explosive growth of AI technologies, and the questions of how to determine the vector of technological progress, set up a platform for cooperation, formulate governance standards and assign risks and responsibilities are yet to be properly resolved. As the world becomes globalized, a major factor is the development of common and coordinated AI governance rules, which requires a consensus among countries on global governance vectors and rules, to be achieved with the help of international mechanisms. To address the common problems faced by mankind, China has put forward the Community of Common Destiny for Mankind concept as a clear reference for promoting global AI governance, and argues for stronger international cooperation based on equality and mutual assistance, with all countries achieving the shared use of information and collectively establishing the AI governance system.

References

1. Chen P. (2018) On the principle of subjectivity in the construction of network legal rights. Zhong guo fa xue=Chinese Law, no. 3, pp. 71-88 (in Chinese)

2. Chen P. (2019) The power of algorithms: application and regulation. Zhe jiang she hui ke xue=Zhejiang Social Science, no. 4, pp. 52-58 (in Chinese)

3. Chen P. (2019) Government in the era of smart governance: risk prevention and capacity enhancement. Ning xia she hui ke xue=Ningxia Social Science, no. 1, pp. 95-104 (in Chinese)

4. Chen J. (2018) Legal challenges of artificial intelligence: Where should we start? Bi jiao fa yan jiu=Comparative Law Studies, no. 5, pp. 136-148 (in Chinese)

5. Gao Q. (2020) A primer on intelligent revolution and modernization of national governance. Zhong guo she hui ke xue=Chinese Social Science, no. 7, pp. 81-102 (in Chinese)

6. Han Y., Zhang F., Peng J. (2023) Order reconstruction: global economic governance under the impact of artificial intelligence. Shi jie jing ji yu zheng zhi=World Economy and Politics, no. 1, pp. 121-149 (in Chinese)

7. Jiang K. (2019) Law as algorithm. Qinghua fa xue=Tsinghua Law, no. 1, pp. 64-75 (in Chinese)

8. Li C. (2021) Legal governance of artificial intelligence discrimination. Zhong guo fa xue=Chinese Law, no. 2, pp. 127-147 (in Chinese)

9. Liu X. (2003) Dilemmas and directions of cognitive science research programs. Zhong guo she hui ke xue=Chinese Social Sciences, no. 1, pp. 99-108 (in Chinese)

10. Ma C. (2018) Social risks of artificial intelligence and its legal governance. Fa lv ke xue (xi bei zheng fa da xue xue bao)=Legal Science. Journal of Northwest University of Politics and Law, no. 6, pp. 47-55 (in Chinese)

11. Mei L. (2023) Technology displacing power: changing power structure of national governance in the age of artificial intelligence. Wu han da xue xue bao (zhe xue she hui ke xue ban)=Journal of Wuhan University. Philosophy and Social Science Edition, no. 1, pp. 44-54 (in Chinese)

12. Sun W. (2017) Rethinking value on artificial intelligence. Zhe xue yan jiu=Philosophical Research, no. 10, pp. 120-126 (in Chinese)

13. Shen X., Shi B. (2018) The future computed: artificial intelligence and its role in society. Beijing: Beijing University Press, 275 p. (in Chinese)

14. Wu S., Luo J. (2018) Legal governance of artificial intelligence safety: a review around system safety. Xin jiang shi fan da xue xue bao (zhe xue she hui ke xue ban)=Journal of Xinjiang Normal University. Philosophy and Social Science Edition, no. 4, pp. 109-117 (in Chinese)

15. Xu K. (2022) China's construction and theoretical reflection on the system of accounting laws. Fa lv ke xue (xi bei zheng fa da xue xue bao)=Legal Science. Journal of Northwestern University of Politics and Law, no. 1, pp. 124-132 (in Chinese)

16. Yu N. (2017) Self-consciousness and object-consciousness: the class nature of artificial intelligence. Xue shu jie=Academia, no. 9, pp. 93-101 (in Chinese)


17. Zhang X. (2023) Data risks and governance paths of generative artificial intelligence. Fa lv ke xue (xi bei zheng fa da xue xue bao)=Legal Science. Journal of Northwest University of Politics and Law, no. 5, pp. 42-54 (in Chinese)

18. Zhang A., Sun Y. (2021) The subjective perspective of algorithmic power and its state capacity shaping. Xue shu yue kan=Academic Monthly, no. 12, pp. 96-105 (in Chinese)

19. Zhang W. (2021) Building legal order of intelligent society. Xin hua wen zhai=Xinhua Digest, no. 3, pp. 18-23 (in Chinese)

20. Zhang L. (2023) Legal positioning and hierarchical governance of generative artificial intelligence. Xian dai fa xue=Modern Law, no. 4, pp. 126-141 (in Chinese)

21. Zhu M., Xu C. (2023) International soft law regulation of artificial intelligence ethics: current situation, challenges and countermeasures. Zhong guo ke xue yuan yuan kan=Bulletin of Chinese Academy of Sciences, no. 7, pp. 1037-1049 (in Chinese)

Information about the author:

Jia Shaoxue — Doctor of Sciences (Law), Associate Professor.

The paper was submitted to editorial office 10.09.2023; approved after reviewing 05.10.2023; accepted for publication 05.10.2023.
