УДК 334 : 004.8
Рольф Клауберг,
доктор естественных наук
Международная ассоциация межкультурного диалога и геостратегических исследований, Швейцария
КОНЦЕПТУАЛЬНЫЙ АНАЛИЗ ВОЗМОЖНОСТЕЙ И РИСКОВ ИСКУССТВЕННОГО ИНТЕЛЛЕКТА
ДЛЯ ВСЕХ СЕКТОРОВ ЭКОНОМИКИ
Rolf Clauberg,
Doctor in Natural Science (Dr.rer.nat.)
International Association of Intercultural Dialog and Geostrategic Studies, Switzerland
E-mail: [email protected]
A CONCEPTUAL ANALYSIS OF THE OPPORTUNITIES AND RISKS OF ARTIFICIAL INTELLIGENCE
FOR ALL PARTS OF THE ECONOMY
Аннотация. Эта статья посвящена одной из самых актуальных тем современности — развитию искусственного интеллекта во всех сферах его применения. Искусственный интеллект повлияет на все сферы нашей экономики, изменяя или ликвидируя существующие рабочие места, а также создавая новые. В статье анализируются различные возможности и риски для систем с искусственным интеллектом для всех сфер экономики. Для данного анализа используется модель, которая определяет возможности и опасности для этих систем, исходя из характеристик технологий и требований потенциальных приложений — реактивных и креативных. Так, реактивные приложения распознают определенные шаблоны и объекты, а затем реагируют на них. В свою очередь, креативные приложения создают новые методы, технологии и направления в искусстве. К подгруппам реактивных приложений относятся мониторинг, автономные системы и системы поддержки экспертов. Для каждой области применения анализируются возможности, риски и побочные эффекты, которые представлены в примерах. Развитие искусственного интеллекта создает большие возможности, однако надо учитывать серьезные потенциальные опасности и негативные эффекты. Так, широкие возможности открываются в области медицинской диагностики, разработки новых технологий и материалов, а также повышения безопасности в общественных местах и на промышленных объектах. Вместе с тем существует риск негативных последствий. Так, наиболее опасными угрозами являются теракты с применением оружия, управляемого искусственным интеллектом. К существенным побочным эффектам относятся исчезновение целых профессиональных групп из-за появления высокоэффективных систем поддержки экспертов, а также растущая зависимость от искусственного интеллекта во всех сферах жизни. Безусловно, развитие искусственного интеллекта ставит человечество перед выбором, от которого зависит наше будущее.
Ключевые слова: искусственный интеллект, экономическое воздействие искусственного интеллекта, рабочие места, экономические отрасли, автономные системы, роботы, экспертные системы поддержки, искусство при помощи искусственного интеллекта.
Abstract. This article is dedicated to one of the most pressing topics of our time — the development of artificial intelligence (AI) in all its fields of application. AI will impact all parts of our economy, changing or eliminating existing jobs as well as creating new ones. The article analyzes the different opportunities and threats of artificial intelligence systems with a model that derives the threats and opportunities from the characteristics of the technology and the requirements of potential applications. The main application classes are reactive and creative applications. The reactive class is based on classification plus corresponding actions, with subclasses such as monitoring, autonomous systems, and expert support systems. The creative class generates new methodologies, technologies, and styles of art. Opportunities, threats, and side effects are analyzed for each application class, with examples for each class. We find huge opportunities, but also serious potential threats and side effects. Important opportunities lie in medical diagnostics, the development of new technologies and materials, and the improvement of safety and security in public as well as industrial areas. The most serious threats are AI-based terrorist attacks. Serious possible side effects are the disappearance of complete classes of jobs due to highly efficient expert support systems, as well as an increasing dependence on AI solutions in all aspects of our society. Of course, the development of artificial intelligence confronts humanity with a choice on which our future depends.
Key words: artificial intelligence, economic impact of artificial intelligence, jobs, economic sectors, autonomous systems, robots, expert support systems, artificial art.
Introduction
Artificial intelligence (AI) is on its way to becoming a dominating technology for nearly all industries, services, and governments. Many articles have already been written on the merits and risks of this technology, and several international organizations, e.g., UNESCO [1], are working on norms for its ethical application. National governments are preparing laws governing the use of AI. The special interest in AI is caused by the fact that AI not only prepares the materials on which humans then build their decisions, but also directly makes decisions and executes them. This is a novel development. The established "Big Data Analytics" still left decisions to human experts; AI goes beyond this.
The first step in conceptually analyzing a new technology should always be based on a "Technology value and risk model" which analyzes its opportunities and risks based on the characteristics of the technology and the requirements of its applications. In the past we have seen the development of many technologies which finally created huge problems all over the world because the threats were not recognized early enough and possible steps to counter them were not taken. Examples are nuclear reactors built in earthquake-sensitive regions, nuclear waste storage in unstable regions, and Zeppelins with hydrogen tanks. Considering the threats caused by a technology, there are several classes of threats. The first class consists of threats caused by applying the technology to a bad cause. In principle, every technology can be used in a positive as well as a negative way. What is considered a bad cause depends on the accepted ethics, which themselves depend on the dominant culture in a specific region. One disputed technology, e.g., is the use of video surveillance. In certain countries many people consider this an invasion of privacy and strongly oppose the technique [2], while others proudly report the high security in their country achieved by this technology [3]. The second class of threats is caused by inherent failures of the technology, i.e., a not properly working technology causing wrong or biased results. The third class of threats consists of side effects. Side effects are unintended, include all kinds of social or environmental aspects, and may appear soon after the introduction of a new technology or only after the technology has been in use for a long time. Famous examples are the destruction of specific job classes by new technologies, air pollution from industries and vehicles, nuclear waste from power stations, or plastic waste from consumer products.
In the following sections we will briefly describe the basic AI technologies as well as the classes of applications and then go into the threats and opportunities of AI.
AI technologies
The most successful approaches for implementing AI solutions in the last decades were those based on algorithms [4], neural networks [5; 6], and combinations thereof [7; 8]. Pure algorithmic solutions may use combinations of multiple algorithms to identify a specific object or event, but they are usually limited to objects or events whose characteristics are already known [4]. They cannot identify new kinds of events or objects. Neural networks, on the other hand, can be trained to identify specific complex patterns of an object or event. Neural networks are built with layers of neurons. Each neuron in a layer connects to each neuron in the following layer, with the weight, or strength, of each connection determined by the training process of the neural network. Each neuron calculates its output value from all its input values and a bias value. In principle, neurons with specific properties can be mapped onto logic gates like AND, OR, and NAND, from which all digital computers are built [9]. Therefore, neural networks can be used for all kinds of computation, but instead of writing application code, the users train the networks with data and feedback about the correctness of the results. There are different kinds of neural networks as well as combinations of them [10; 11]. Neural networks may also use different numbers of layers of neurons. Very complicated aspects can be addressed by using neural networks with many layers — so-called deep neural networks. The first layers analyze very simple and specific aspects of the input data, while the later layers capture a hierarchy of ever more complex and abstract concepts. Deep neural networks perform much better on many problems than shallow neural networks, but the methods that enable learning in such networks were developed only after 2005. Training such networks may be done in supervised or unsupervised mode. Highly successful pattern recognition is reported for the analysis of images, e.g., medical images [12], speech and language patterns [13; 14], as well as behavioral patterns [5; 6]. Neural networks can of course also be considered algorithms, but they are assumed to work like specific parts of the human brain and are changed by the data used to train them. These training aspects cannot be summarized in a simple parametric expression. Also, there are specific hardware units like Tensor Processing Units (TPUs) to accelerate neural network machine learning [15]. Therefore, it is practically impossible to retrace and explain the final decision of a neural network. Assurance of the correctness of a classification by neural networks is therefore an important issue [16; 17].
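The mapping of neurons onto logic gates can be illustrated with a minimal sketch, following the common textbook construction in [9]. The weights and bias below are hand-picked illustrative values; a trained network would arrive at its own parameters:

```python
import math

def sigmoid(z):
    # Squashes any real input into the interval (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    # Each neuron computes its output from all input values plus a bias value
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

# A neuron acting as a NAND gate: output near 1 unless both inputs are 1
w_nand, b_nand = [-10.0, -10.0], 15.0
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, round(neuron(x, w_nand, b_nand)))  # prints the NAND truth table
```

Since NAND is functionally complete, chaining such neurons can in principle express any digital computation, which is the point made in the text above.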
Of course, the normal procedures for secure software development can be applied to the algorithms used in training neural networks, but there always remains the problem that large neural networks contain a huge number of neurons and an even larger number of connections, which cannot itself be described as a simple algorithm.
When we talk about algorithms in this article, we do not mean neural networks. For algorithms in this sense, retracing the final decision is possible. However, any combination of algorithms and neural networks will have the issues listed for neural networks.
Any computer program can generate wrong results due to programming errors, but neural networks have the additional issue that their correct operation also depends on the training data. The selection of the training data may lead to wrong classifications due to the omission of object classes from the training data, or to the inclusion of adversarial examples [18]. Standard datasets for training neural networks to recognize handwritten digits from 0 to 9 contain 60 000 training pictures [9]. In many cases, the only solution is to use very large datasets to test the systems after they have been trained, and it is very important that these test sets are not the same as the training sets. Adversarial examples can lead to false classifications while still reporting a high certainty for the classification. The examples given in [18] are all images, and the authors find that the issues stem from computer vision architectures. Considering the creation of new insight, the selection of training data can lead to severely biased results. If you consider, e.g., the search for new medical treatments based on a database where nearly all, or at least a large part, of the patients belong to the same gender or ethnic group, you may end up with treatments which are suboptimal for patients not belonging to this gender or ethnic group.
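The two pitfalls just named — object classes missing from the training data, and test sets that overlap with the training sets — can be made concrete in a small sketch. The dataset and split are purely illustrative:

```python
import random

def split_disjoint(dataset, test_fraction=0.2, seed=0):
    # Shuffle, then split so the test set shares no examples with training
    rng = random.Random(seed)
    data = list(dataset)
    rng.shuffle(data)
    cut = int(len(data) * (1 - test_fraction))
    return data[:cut], data[cut:]

def omitted_classes(train, all_classes):
    # Classes absent from the training data can never be classified correctly
    seen = {label for _, label in train}
    return sorted(set(all_classes) - seen)

# Hypothetical digit dataset in which the class "9" never occurs at all
dataset = [(i / 90.0, i % 9) for i in range(90)]
train, test = split_disjoint(dataset)
assert not set(train) & set(test)   # test data must differ from training data
print("omitted classes:", omitted_classes(train, range(10)))
```

Testing with a disjoint test set can reveal classification errors, but, as the text notes, a class omitted from the data altogether (here, the digit 9) can only be found by checking the data against the intended set of classes.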
AI applications
With AI there is a huge number of potential applications, but at a very high level all of them can be grouped into two classes — reactive and creative applications:
1. Reactive applications classify a set of input data as a specific object or event and either directly enact a corresponding reaction, give a recommendation on how a human should react, or simply provide the classification result to a human. Direct action is mainly used if a very fast reaction is needed, which does not allow a delay through a human in the decision loop. Examples include automatic brake systems in cars [19], automatic cyber defense systems [20], as well as fully autonomous systems like drones [21] and robots [22; 23]. Recommendations or simply provided classification results are usually found in expert support systems [11; 15; 16; 28].
2. Creative applications generate a new set of output data from a set of input data. Examples include language translation [13; 14], generation of plans for the creation of new materials with specific properties [28; 29], prediction of protein structures from DNA or RNA sequences [7; 30; 31], creation of new technical solutions [32], and creation of new products of art [33-35]. They often use multiple neural networks, including classification systems as used for reactive systems, as well as deep neural networks. Generative adversarial networks (GANs) play a special role. GANs are often used with one neural network trained to identify a specific class of objects, e.g., a specific type of art, and another neural network trained to create such objects. The creative network is trained by the classification network until the classification network can no longer distinguish the artificially created objects from the group of original objects. Here AI creates art which is not distinguishable from existing art. There are approaches going beyond standard GANs by making them capable of generating creative art through maximizing the deviation from established styles while minimizing the deviation from the art distribution used to train the classification network. The authors call their system a creative adversarial network (CAN) [36]. Their goal is to generate novel works, but not so novel that they could create aversion. The novel art should also show increased stylistic ambiguity, so that it cannot easily be assigned to an existing style of art. The authors claim that in their tests human subjects could not distinguish the artificial art from art created by humans and that they even rated the art generated by AI higher on various scales. Creative applications may also use classical computer modeling in combination with AI solutions [37]. The AI solutions then usually control the classical computer modeling as well as the database search for suitable starting points.
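The adversarial training loop behind GANs can be sketched in a deliberately tiny form. In this caricature, both "networks" are single-parameter functions rather than deep networks, the "real data" is a single fixed value, and all learning rates and step counts are illustrative; the point is only the alternation between the two updates:

```python
import math

def sigmoid(z):
    z = max(min(z, 50.0), -50.0)   # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def train_toy_gan(real=4.0, steps=2000, lr=0.1):
    # Discriminator D(x) = sigmoid(w*x + c) learns to rate the real value
    # high and the generated value low; the single-parameter "generator" g
    # is pushed toward wherever D rates inputs as real.
    w, c, g = 0.0, 0.0, 0.0
    for _ in range(steps):
        d_real, d_fake = sigmoid(w * real + c), sigmoid(w * g + c)
        # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
        w += lr * ((1 - d_real) * real - d_fake * g)
        c += lr * ((1 - d_real) - d_fake)
        # Generator: gradient ascent on log D(fake), i.e., try to fool D
        g += lr * (1 - sigmoid(w * g + c)) * w
    return g

print(train_toy_gan())   # the generated value drifts toward the real value 4.0
```

After training, the generated value hovers near the real one, at which point the discriminator can no longer separate them — the equilibrium the text describes for art-generating GANs, where the classifier cannot distinguish created objects from originals.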
The requirements of corresponding AI applications are fulfilled by AI solutions based on the AI technologies described in the previous section. Each solution is usually specific to its application.
Opportunities and threats
As mentioned before, threats and opportunities are determined by the characteristics of the technology used and the requirements defined by the application. If the technology cannot satisfy the requirements of the application, there are no opportunities for this technology in the selected application, but there may be opportunities in another application. Hence, there usually is a group of applications which offer opportunities for a certain technology. This group may change over time due to the evolution of the technology, the evolution of competing technologies, the appearance of new applications, and the disappearance of older applications.
Opportunities for AI solutions can mostly be grouped into the following classes:
• Systems for monitoring/surveillance with fast reaction
— industrial and public safety and security
• Autonomous systems
— Robots, drones, unmanned vehicles (aerial, submarine, in the street)
• Expert support systems
— Medical, business, legal, administrative, and political analyses
• Creative systems to find new solutions
— Medical treatments, new technologies, new materials, new art
Threats can be grouped into:
• Using AI for a "bad purpose". What constitutes a bad purpose depends on the accepted ethics, which themselves may differ between cultures and between groups within a culture. UNESCO [1] tries to define globally accepted ethics rules for AI, but whether all points of the final definition will actually be applied by all its member states is another question.
• Inherent failures of AI applications, i.e., systems not properly working, causing wrong or biased results.
• Unintended, and possibly unexpected, side effects from the application of AI technology.
We will now consider these kinds of opportunities and threats for each of the four classes of opportunities listed above.
Opportunities and threats from systems for monitoring/surveillance with fast reaction
These systems are used to improve safety or security in public as well as industrial areas. Examples are automatic brake systems in cars to avoid accidents or at least reduce their impact [19], and automatic cybersecurity in cyber-physical systems to protect against malfunctions as well as cyber-attacks on critical infrastructure and industrial production [4-6; 20; 38].
The possibility of abusing similar systems to detect weaknesses in protection systems and perform machine-controlled attacks against cyber-physical systems is a real threat. This threat was the main reason to create automatic cybersecurity systems, since machine-driven attacks require very fast counteractions.
Inherent failures of the AI systems due to programming or training errors are of course possible. In the simplest case this means that the systems do not detect the critical event and do not react. The damage, however, may be much larger than if the system did not exist, because the humans monitoring the system may rely on the automatic response. On the other hand, a false alarm in the case of automatic brake systems can cause accidents which otherwise would not happen. In the case of cyber-defense systems, it can cause a shutdown of corresponding business operations and related financial losses. Hence, careful testing of these AI systems against malfunctions is a necessity. Test cases must not be the same as those used to train the systems; otherwise the omissions from the training data sets will not be detected by the testing sets.
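The two failure directions just discussed — missed critical events and false alarms — suggest evaluating such a monitoring system on a held-out test set roughly as follows. The threshold detector and the labeled events are purely illustrative stand-ins for a trained classifier and real test data:

```python
def evaluate_monitor(detector, test_events):
    # test_events: (features, is_critical) pairs held out from training
    misses = false_alarms = 0
    for features, is_critical in test_events:
        alarm = detector(features)
        if is_critical and not alarm:
            misses += 1          # critical event not detected, no reaction
        elif alarm and not is_critical:
            false_alarms += 1    # spurious reaction, e.g., unnecessary braking
    return misses, false_alarms

# Hypothetical threshold detector and labeled held-out events
detector = lambda score: score > 0.7
test_events = [(0.9, True), (0.8, True), (0.6, True), (0.75, False), (0.2, False)]
print(evaluate_monitor(detector, test_events))  # → (1, 1): one miss, one false alarm
```

Which of the two counts matters more depends on the application: for automatic brakes a false alarm can itself cause an accident, while for cyber defense a miss may be the costlier outcome.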
Opportunities and threats from autonomous systems
Autonomous systems exist in different forms, e.g., industrial robots [23], drones [21], and unmanned vehicles of any kind, and are used in many different areas.
Opportunities for drones and unmanned vehicles are mainly new applications in markets previously inaccessible or accessible only at high risk to human life and health. Important areas are the safety inspection of buildings, underground sewerage inspection, inspection of undersea constructions, and the search for survivors after natural disasters. Large business opportunities may also exist in transporting small parcels by drones. Robots are mainly used in industrial production and in the care of the elderly or sick. Here, an important use of AI classification systems is to guarantee the safety of humans in close proximity to a moving robot and, in the case of care for elderly or sick humans, to recognize situations where help must be organized, e.g., by informing medical personnel.
Potential abuse of drones and unmanned vehicles includes terror acts using these systems to find specific persons or objects and destroy them with explosives or poisons [38]. Drones can fully automatically find and follow an object. Fully autonomous systems will also execute their orders without remote human control. These are no longer just potential threats; such cases have already happened. One example was the killing of an Iranian general by a U.S. drone attack on January 3, 2020 [39]. Another was the killing of Iran's top nuclear scientist with an AI-assisted remote-control killing machine on November 27, 2020 [40], with all members of the operation 1000 km away from the action. In these real cases, a camera was used by a remotely operating human to identify the victim and avoid an attack on the wrong person. Other groups may use fully autonomous systems in the future.
Inherent threats for robots, drones, and unmanned vehicles are accidents caused by malfunction of the AI systems due to programming errors and insufficient training of neural-network based units.
Opportunities and threats from expert support systems
Expert support systems are used for medical, business, legal, administrative, and political analyses. They are used in practically every professional area. The final decision is still made by a human expert, but potentially strongly influenced by the results provided by the AI system. The big advantage of using AI systems is that a single expert can handle many more cases in the same time period, with a much larger amount of data analyzed per case, than without such systems.
Direct abuse of expert support systems is difficult to define. The expert support system analyzes data and provides the result of the analysis to the expert. As long as all data used are legally available to the expert, it appears that the expert has an acceptable basis for his decision. However, one heatedly discussed kind of expert support system is the surveillance of public places [2; 3]. Besides identifying criminals on search lists, it also allows identifying persons participating in public demonstrations or monitoring the activities of political opponents. The identification of persons from pictures is highly successful with AI-based systems. In addition to the use of illegally obtained data, there is the possibility of purposely training the system with a restricted data set and thereby generating a biased result to justify the final decision of the human expert.
Inherent threats are the use of limited databases and training errors for the AI system. These may cause a wrong decision by the expert. Considering that expert support systems are used to decide many issues of great importance for individual persons, the possibility of wrong decisions is a severe issue. We think here of medical diagnoses [12], decisions about hiring or promoting a person, decisions about insurance claims, and many other decisions which may significantly impact the present and future situation of a person. In addition, wrong or strongly biased decisions may also have severe consequences for companies and even countries, considering the use of such systems to prepare negotiations between companies or countries [26; 27]. Again, the conclusion is that substantial testing of AI systems is a necessity.
Opportunities and threats from creative systems
Creative systems are already used to create new medical devices and treatments [41], new technologies [3], new materials [28-30; 37], or new pieces of art [33-35]. Thomas B. Ward argued in his articles on creativity and entrepreneurship that new insight cannot be created from nothing, but that combining knowledge from different areas or concepts can create novel solutions [42]. He also found in studies with groups of students that combining concepts at a higher degree of abstraction leads to more novel solutions, while lower degrees of abstraction generate solutions closer to existing ones. The advantage of AI in creating new solutions lies in its potential to analyze databases from multiple areas with the new AI technologies and thereby use deep learning to achieve a higher degree of abstraction [10]. Future applications of CANs may no longer be restricted to the creation of new art but may be used to find highly creative solutions to business questions, thereby replacing human entrepreneurs with machines that can use huge databases and deep learning.
Abuse possibilities of creative systems could be massive creation of art in a specific style to suppress prices for this kind of art, and creation of large numbers of fake news looking similar to corresponding real news. This also includes the creation of fake pictures and even videos to damage the reputation of opponents or competitors [43].
Inherent threats are the use of strongly biased data for developing new medical treatments or devices, e.g., by using data from only one gender or one ethnic group. This could generate suboptimal or even harmful solutions for persons who are not from the selected gender or ethnic group. Again, the conclusion is that substantial testing of AI systems is a necessity.
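A first, very coarse check against this kind of sampling bias is simply to measure how the sensitive attribute is distributed in the training data before training begins. The attribute name and patient records below are hypothetical:

```python
from collections import Counter

def representation(records, attribute):
    # Share of each attribute value in the dataset; a strong skew warns that
    # results may be suboptimal for the under-represented group
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Hypothetical patient records: one group dominates the training data
records = [{"gender": "female"}] * 9 + [{"gender": "male"}] * 1
print(representation(records, "gender"))  # → {'female': 0.9, 'male': 0.1}
```

Such a report only detects skew in attributes that are recorded at all; bias along unrecorded dimensions still requires the substantial outcome testing called for above.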
Side effects
Side effects are unintended and include all kinds of social or environmental aspects. They may appear soon after the introduction of a new technology or only after the technology has been in use for a long time. We can only guess at these effects and must wait to see what actually happens.
One such effect is the possible reduction or complete elimination of available jobs in specific professions. This effect is well known for industrial automation, where it mainly destroyed low-skill jobs. For AI there is a high probability that it will also destroy high-skill jobs. Especially the increase in efficiency and productivity due to expert support systems may enable a single expert to analyze a much larger number of cases than before and thereby reduce the number of experts needed. We expect this risk to be high for back-office jobs like checking insurance claims, where the expert only analyzes cases without having any direct contact with the clients. In medical professions such as doctors and nurses, we expect the increased efficiency to increase the quality of the service instead of reducing the number of jobs, since other factors more strongly limit the number of clients a doctor or nurse can handle. Of course, many new jobs will also be created for producing AI solutions, but they will most likely be very different from the jobs AI will destroy. Hence, we expect that large retraining efforts will be needed. Looking at creative AI solutions, there is also the possibility that many of the newly created AI jobs will disappear again in the far future, when AI reaches the point where it itself generates large numbers of new AI solutions. These kinds of evolution may also have huge social effects, not just by the destruction of jobs, but also by creating a society that depends more and more on the use of AI solutions. Here, the use of AI with deep neural networks and huge databases to generate truly creative solutions in all areas looks promising as well as threatening.
Conclusion
The conceptual analysis of opportunities and threats of AI solutions shows that there are great opportunities for the use of modern AI technologies to support public and industrial safety and security as well as rescue operations after natural disasters, enable complex analyses in many different fields of application, and even create new medical treatments, new materials with complex properties, new technologies, new pieces of art, and new creative business solutions. However, there are also possibilities of serious abuse of the technology, as well as inherent risks and possibly disturbing social side effects. The main source of the inherent threats of AI technologies is the difficulty of retracing their final decisions due to the dependence of these decisions on the training data of the neural networks. Presently the only suitable response to this threat seems to be extensive testing of AI systems with test data which must not be identical to the training data. However, even exhaustive testing of large standard software systems is very difficult to achieve. Often errors are only detected when the software has already been in use for some time. The issue is the complexity of the software system, and the complexity of large neural-network-based systems may easily exceed that of present-day standard software systems.
If we look at the discussions in the UNESCO meetings to define ethics rules for AI [1], we see clear requests for guarantees of proper operation, respect for human dignity, compensation for damage caused by false decisions, and checks in all phases of the development as well as the application of AI systems. In addition, there are also requests for general access to AI technologies and education about these technologies. Some of these requests may contradict each other. For example, the request for general access and education may limit the possibilities to ensure proper operation and other ethics requests. Together with the general availability, small size, and low price of Tensor Processing Units, as well as freely available open-source robot operating systems and machine learning platforms (text and footnotes on page 4 of [38]), this request from UNESCO enables single persons or small groups to develop their own AI solutions and use them without any external control. This includes terrorist groups using drones for attacks, but also ordinary people using AI systems to make all kinds of personal decisions, where badly trained systems may cause disastrous consequences.
Considering the long-term side effects of AI, not only the impact of changes in the job market is important, but also the possible growing dependence on AI applications in all aspects of our society. Especially the evolution of deep learning with truly creative solutions ranging from art to business may change many areas of our society.
In summary, there are great opportunities for AI technologies, but also threats and side effects, and control over the threats and side effects may be very limited due to freely available components, open-source software, and education about AI systems.
References
1. UNESCO. Elaboration of a Recommendation on the ethics of artificial intelligence [Electronic resource]. 2021. URL: https://en.unesco.org/artificial-intelligence/ethics (accessed: 07.08.2021).
2. Bacchi U., Asher-Shapiro A. Debate on surveillance and privacy heats up as U.S. protests rage | Reuters [Electronic resource] // Reuters. 2020. URL: https://www.reuters.com/article/uk-minneapolis-police-privacy-trfn-idUSKBN23902V (accessed: 15.08.2021).
3. Government of the Principality of Monaco. A model public security system / Security / Policy & Practice / Portail du Gouvernement — Monaco [Electronic resource]. URL: https://en.gouv.mc/Policy-Practice/Security/A-model-public-security-system (accessed: 14.08.2021).
4. Varga P. et al. Real-time security services for SDN-based datacenters // 2017 13th International Conference on Network and Service Management (CNSM). IEEE, 2017. P. 1-9.
5. Kathareios G. et al. Catch It If You Can: Real-Time Network Anomaly Detection with Low False Alarm Rates // 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA). IEEE, 2017. P. 924-929.
6. Truong-Huu T. et al. An Empirical Study on Unsupervised Network Anomaly Detection using Generative Adversarial Networks // Proceedings of the 1st ACM Workshop on Security and Privacy on Artificial Intelligence. New York, NY, USA: ACM, 2020. P. 20-29.
7. Senior A.W. et al. Protein structure prediction using multiple deep neural networks in the 13th Critical Assessment of Protein Structure Prediction (CASP13) // Proteins Struct. Funct. Bioinforma. John Wiley and Sons Inc., 2019. Vol. 87, № 12. P. 1141-1148.
8. Wang S. et al. Accurate De Novo Prediction of Protein Contact Map by Ultra-Deep Learning Model // PLOS Comput. Biol. / ed. Schlessinger A. Public Library of Science, 2017. Vol. 13, № 1. P. e1005324.
9. Nielsen M.A. Neural Networks and Deep Learning [Electronic resource]. Determination Press, 2015. P. 1-225. URL: http://neuralnetworksanddeeplearning.com/ (accessed: 13.10.2021).
10. Abiodun O.I. et al. State-of-the-art in artificial neural network applications: A survey // Heliyon. Elsevier, 2018. Vol. 4, № 11. P. e00938.
11. Mukhamediev R.I. et al. From Classical Machine Learning to Deep Neural Networks: A Simplified Scientometric Review // Appl. Sci. 2021. Vol. 11, № 12. P. 5541.
12. Deepa S., Aruna Devi B. A survey on artificial intelligence approaches for medical image classification // Indian J. Sci. Technol. 2011. Vol. 4, № 11. P. 1584-1595.
13. Johnson M. et al. Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation // Trans. Assoc. Comput. Linguist. 2017. Vol. 5. P. 339-351.
14. Barrault L. et al. Findings of the 2019 Conference on Machine Translation (WMT19) // Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1). Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. Vol. 2, № 1. P. 1-61.
15. Weiss T.R. Google Launches TPU v4 AI Chips [Electronic resource]. 2021. URL: https://www.hpcwire.com/2021/05/20/google-launches-tpu-v4-ai-chips/ (accessed: 01.09.2021).
16. Cluzeau J.M. et al. Concepts of Design Assurance for Neural Networks (CoDANN) // Public Report Extract Version 1.0. 2020. P. 1-104.
17. Batarseh F.A., Freeman L., Huang C.-H. A survey on artificial intelligence assurance // J. Big Data. 2021. Vol. 8, № 1. P. 60.
18. Hendrycks D. et al. Natural Adversarial Examples // Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2021. P. 15262-15271.
19. Christy A. Artificial Intelligence based Automatic Decelerating Vehicle Control System to avoid Misfortunes // Int. J. Adv. Trends Comput. Sci. Eng. 2019. Vol. 8, № 6. P. 3129-3134.
20. Kwon D. et al. An Empirical Study on Network Anomaly Detection Using Convolutional Neural Networks // 2018 IEEE 38th International Conference on Distributed Computing Systems (ICDCS). IEEE, 2018. Vol. 2018-July. P. 1595-1598.
21. Palossi D. et al. A 64-mW DNN-Based Visual Navigation Engine for Autonomous Nano-Drones // IEEE Internet Things J. 2019. Vol. 6, № 5. P. 8357-8371.
22. Falotico E. et al. Connecting Artificial Brains to Robots in a Comprehensive Simulation Framework: The Neurorobotics Platform // Front. Neurorobot. Frontiers, 2017. Vol. 11, № JAN. P. 2.
23. Lee S., Bekey G.A. Applications of Neural Networks to Robotics // Control and Dynamic Systems. Academic Press, 1991. Vol. 39, № P1. P. 1-69.
24. Artificial Intelligence and Expert Systems for Engineers / ed. Krishnamoorthy C.S., Rajeev S. CRC Press, 2018.
25. Oren O., Gersh B.J., Bhatt D.L. Artificial intelligence in medical imaging: switching from radiographic pathological data to clinically meaningful endpoints // Lancet Digit. Health. Elsevier, 2020. Vol. 2, № 9. P. e486-e488.
26. Fedorchenko S. Artificial Intelligence in Politics, Media and Public Administration: Reflections on the Thematic Portfolio // J. Polit. Res. Infra-M Academic Publishing House, 2020. Vol. 4, № 2. P. 3-9.
27. Gorodnova N.V. Artificial intelligence in economic diplomacy and international trade // Russ. J. Innov. Econ. 2021. Vol. 11, № 2. P. 565-580.
28. Sitek W., Trzaska J., Dobrzanski L.A. An artificial intelligence approach in designing new materials: Analysis and modelling // J. Achiev. Mater. Manuf. Eng. 2006. Vol. 17, № 1-2. P. 277-280.
29. Kiselyova N.N. et al. Computational materials design using artificial intelligence methods // J. Alloys Compd. 1998. Vol. 279. P. 8-13.
30. Senior A.W. et al. Improved protein structure prediction using potentials from deep learning // Nature. Nature Publishing Group, 2020. Vol. 577, № 7792. P. 706-710.
31. DeepMind.com. AlphaFold: Using AI for scientific discovery [Electronic resource]. 2020. URL: https://deepmind.com/blog/article/AlphaFold-Using-AI-for-scientific-discovery (accessed: 02.08.2021).
32. The Economist. AI is transforming the coding of computer programs [Electronic resource]. 2021. URL: https://www.economist.com/science-and-technology/2021/07/07/ai-is-transforming-the-coding-of-computer-programs (accessed: 08.08.2021).
33. Mazzone M., Elgammal A. Art, Creativity, and the Potential of Artificial Intelligence // Arts. Multidisciplinary Digital Publishing Institute, 2019. Vol. 8, № 1. P. 26.
34. Squarespace. Short film entirely created by AI [Electronic resource] // YouTube. URL: https://www.youtube.com/watch?v=8XO3q6MA668 (accessed: 16.08.2021).
35. Xue A. End-to-End Chinese Landscape Painting Creation Using Generative Adversarial Networks // 2021 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2021. P. 3862-3870.
36. Elgammal A. et al. CAN: Creative Adversarial Networks, Generating "Art" by Learning About Styles and Deviating from Style Norms // Proc. 8th Int. Conf. Comput. Creat. ICCC 2017. Georgia Institute of Technology, 2017.
37. Jumper J. et al. Highly accurate protein structure prediction with AlphaFold // Nature. 2021. Vol. 596, № 7873. P. 583-589.
38. Clauberg R. Cyber-physical systems and artificial intelligence: chances and threats to modern economies // Мировые цивилизации. 2020. Vol. 5, № 3-4. P. 1-9.
39. BBC. Qasem Soleimani: US strike on Iran general was unlawful, UN expert says [Electronic resource] // BBC News. 2020. URL: https://www.bbc.com/news/world-middle-east-53345885 (accessed: 08.10.2021).
40. Bergman R., Fassihi F. The Scientist and the A.I.-Assisted, Remote-Control Killing Machine [Electronic resource] // The New York Times. 2021. URL: https://www.nytimes.com/2021/09/18/world/middleeast/iran-nuclear-fakhrizadeh-assassination-israel.html (accessed: 08.10.2021).
41. Chinzei K. et al. Regulatory Science on AI-based Medical Devices and Systems // Adv. Biomed. Eng. 2018. Vol. 7. P. 118-123.
42. Ward T.B. Cognition, creativity, and entrepreneurship // J. Bus. Ventur. 2004. Vol. 19, № 2. P. 173-188.
43. Maras M.-H., Alexandrou A. Determining authenticity of video evidence in the age of artificial intelligence and in the wake of Deepfake videos // Int. J. Evid. Proof. SAGE Publications: London, England, 2019. Vol. 23, № 3. P. 255-262.