https://doi.org/10.48417/technolang.2024.02.01 Editorial introduction
Chat GPT and the Voices of Reason, Responsibility, and Regulation
Elena Seredkina1 and Yongmou Liu2
1Perm National Research Polytechnic University, Komsomolsky prospekt 29, Perm, Russian Federation
2Renmin University of China, Zhongguancun Street 59, Haidian District, Beijing, China

Abstract
ChatGPT, a large language model (LLM) by OpenAI, is expected to have a transformative impact on many aspects of society. There is much discussion in the media and a rapidly growing academic debate about its benefits and ethical risks. This article explores the profound influence of Socratic dialogue on Western and non-Western thought, emphasizing its role in the pursuit of truth through active thinking and dialectics. Unlike Socratic dialogue, ChatGPT generates plausible-sounding answers based on pre-trained data, lacking the pursuit of objective truth, personal experience, intuition, and empathy. The LLM's responses are limited by its training dataset and algorithms, which can perpetuate biases or misinformation. While a true dialogue is a creative, philosophical exchange filled with ontological, ethical, and existential meanings, interactions with ChatGPT can be characterized as interactive data processing. But is this really true? Perhaps we are underestimating the evolutionary growth potential of large language models? These questions have important implications for theoretical debates in cognitive science, changing our understanding of what cognition means in artificial and natural intelligence. This special issue examines ChatGPT as a subject of philosophical analysis from positions of cautious optimism as well as rather harsh criticism. It includes six articles covering a wide range of topics. The first group of researchers emphasizes that machine understanding and communication match human practice. Others argue that AI cannot reach human levels of intelligence because it lacks conceptual thinking and the ability to create. Such contradictory interpretations only confirm the complexity and ambiguity of the phenomenon.
Keywords: ChatGPT; Artificial Intelligence; Large language model; Dialogue; AI Ethics Code; Responsibility
Citation: Seredkina, E., & Liu, Y. (2024). Chat GPT and the Voices of Reason, Responsibility, and Regulation. Technology and Language, 5(2), 1-10. https://doi.org/10.48417/technolang.2024.02.01
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License
In recent years, the rapid development of AI technology has feverishly swept the world. Those concerned with this development are not limited to AI developers and promoters or to commentators and researchers in the humanities and social sciences; they include ordinary members of the public who worry that their lives will be profoundly affected by AI. The issue of AI development is no longer primarily a technical challenge but has become a matter of public debate. This is very clear in the recent release of Sora by OpenAI and Musk's open-sourcing of Grok. The basic question in these public debates is whether the current general direction of AI development is problematic, and in what direction it should move forward.
Recently, ChatGPT exploded in popularity, sparking community-wide concern and debate about Generative Artificial Intelligence (GAI). Concerned about the potential ethical and safety issues associated with it, a large group of experts, including Elon Musk, jointly signed an open letter calling for a moratorium of at least six months on the training of AI models more powerful than GPT-4.1 The call drew opposition from another group of AI experts, including Andrew Ng (Wu Enda).2 On April 11, 2023, the Cyberspace Administration of China publicly released the Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comment),3 reacting to the governance of GAI at an unprecedented pace (Cole et al., 2023). All of this demonstrates that the social impact of GAI applications represented by ChatGPT, Midjourney, and DALL-E 2 may prove enormous, must be carefully studied, and requires a prudent response.
ChatGPT reconfigures the public sphere. It brings to a head the question: Must we mean what we say? How can we take responsibility for artificially produced text - and how will the technology be regulated in different technopolitical traditions? This special issue seeks to highlight two aspects. 1) Large language models and the culture of dialogue in the context of human-machine interaction: From the perspective of the history of Western thought, the "dialogue" that began in ancient Greece is not an exchange of information but an act of cognizing a certain object through being present together. But what is a dialogue with ChatGPT? Will a new way of asking questions bring us into a new world of thinking? 2) Legal regulation of ChatGPT in various sociocultural contexts, technical and technocratic governance: Different technological paradigms or forms of technical intelligence respond differently to the challenges of the digital age. ChatGPT evokes technocracy and the idea of monitoring or shaping the "voices of reason" (the "public sphere") and the technological "Lebenswelt" - with societies confronting the question of how an intelligence should behave and how it can be bound to the truth. All these aspects call for innovative models of adapting ChatGPT for use.
1 Pause Giant AI Experiments: An Open Letter. https://futureoflife.org/open-letter/pause-giant-ai-experiments/
2 Elon Musk wants to pause 'dangerous' A.I. development. Bill Gates disagrees—and he's not the only one. https://www.cnbc.com/2023/04/06/bill-gates-ai-developers-push-back-against-musk-wozniak-open-letter.html; Wu Enda: AI in the next 10 years, from hardware first to data King. https://www.lwxsd.com/pcen/info_view.php?tab=mynews&VID=22320
3 Notice of the Cyberspace Administration of China on Soliciting Public Opinions on the Draft Measures for the Administration of Generative Artificial Intelligence Services. https://www.cac.gov.cn/2023-04/11/c_1682854275475410.htm (in Chinese).
Let us consider the first aspect in detail. The profound influence of Greek philosophy on Western (and now non-Western) thought cannot be overstated. This way of thinking is rooted in a culture of dialogue exemplified by Socrates' maieutic technique of communication and the communal search for truth. In this context, Socrates' dialogues with various contemporaries, recorded in Plato's Dialogues, are an encyclopedia of ancient Greek knowledge. The highest goal of dialogue is not the exchange of information but the cognizing subject's achievement of true knowledge of things and phenomena through active thinking together with the interlocutor. In other words, dialogue is the discovery of truth (as "aletheia," or "unconcealment," in the terminology of Martin Heidegger). Through involvement in dialogue, Socrates helps his interlocutors discover not only the world around them but also themselves. Later, Plato perfected the form of dialogue as philosophical reflection, thus developing the method of dialectics. In its original sense, "dialectics" is the art of arguing, exploring, and persuading others through conversation. More precisely, dialectics is a universal logical method for examining problems through discussion.
But what is a conversation with ChatGPT like? Will a new way of asking questions lead to a new type of thinking? Can we delegate creative functions to artificial intelligence? Is it possible to teach critical thinking to beginners using large language models (LLMs)?
A large difference between the "human-human" and "human-artificial agent" systems lies in the reasons and purposes for initiating the dialogue (Seredkina & Mezin, 2023). Socrates poses difficult questions to his opponents, but his questions are aimed not only at obtaining an answer from them but also at allowing them to form their own judgment about a certain cognitive situation. From this standpoint, dialogue turns into an exchange of ideas between people. ChatGPT is the exact opposite of that. Driven by machine instructions, LLMs are pure streams of information circulating within internal storage. If some content was not included in the pre-training data, the dialogue will not even start, or the model will give absurd answers. To put it plainly, the main goal of ChatGPT is to generate plausible-sounding answers, not to seek objective truth or engage in genuine dialectical inquiry.
Chatbots' capabilities are still limited by the training dataset and the algorithms being used. They lack such human qualities as personal experience, intuition, and empathy. Additionally, ChatGPT bases its answers on the most common statements, those most popular among ordinary people. But as history shows, only a few people possess the truth, and genuinely creative ideas are often not accepted by contemporaries. In light of the above, ChatGPT, trained on large collections of text data, can inadvertently perpetuate biases or misinformation instead of leading users closer to objective truth. In general, it could be said that behind the impressive appearance of the blossoming flowers of LLMs hide imperfections of the communicative act: lack of transparency, redundant information, blind spots in knowledge, errors of common sense.
To be precise, a dialogue with ChatGPT is not a conversation but interactive data processing. Of course, there is a temptation to metaphorically represent the mechanism of human intelligence as a computer, but this would be a huge simplification of the human spiritual world, since the emotional, intuitive, and associative elements in dialogue are not limited to information processing. Mutual communication is filled with many different meanings and connotations - ontological, ethical, existential. A real philosophical discussion is a creative understanding of a cognitive situation, posing questions based on one's own life situation, self-knowledge, and the various contradictions of the world. This type of creativity cannot simply be replaced by machines and algorithms.
However, how far can we go in creating a digital copy of the human mind? A relatively recent experiment shows that an artificial intelligence based on GPT-3 mimics the American philosopher Daniel Dennett quite well. To achieve this, the language model was first fine-tuned on his texts devoted to a range of philosophical questions about free will. Then, during the experiment, the researchers asked different groups of people (random readers and experts) to read the answers and determine which of them belonged to the real philosopher and which were generated by the model. It turned out that the participants could not always distinguish real quotes from generated comments (Strasser et al., 2023).
As one might expect, we are able to create quasi-philosophical texts using ChatGPT that take into account the personal characteristics of individual philosophers of the past and present, and even to enter into a philosophical dialogue with their digital replicas. But is this relevant to philosophical dialogue and the search for truth? One of the organizers of the above experiment stressed that it was not a Turing test (Schwartz, 2022). If the experts had been given a greater ability to interact with GPT-3, they would soon have realized that they were not communicating with the real Dennett. In this sense, the digital copy of the philosopher looks more like an advanced format of interactive textbook, a simulator for preparing for tests. After all, language and culture are not just a transmission of the ideas of great thinkers and artists but the result of a unique process of generating new meanings, interpreting concepts, taking fresh challenges into account, and, throughout, creating a new language, primarily a philosophical one.
But it must be said that LLMs in general and ChatGPT in particular have come a long way since ChatGPT was first introduced in 2022. With the drastic increase in model size and the huge effort put into honing and polishing the algorithms and datasets, models such as GPT-3.5 and GPT-4 are able to give plausible answers on a wide range of topics, solve problems, and hold free conversation really well. Various AI models are being developed and successfully used for different tasks, ranging from image and speech recognition and real-time translation services in modern smartphones to AI-based generative fill in Adobe Photoshop and AI-based drone control algorithms. In this regard, deeper philosophical reflection is needed, perhaps seeing AI as a new form of rationality or focusing on a hybrid form of intelligence (human and machine).
This special issue presents critical as well as moderately techno-optimistic views on the future of artificial intelligence in its competition with humans. These contradictory interpretations give rise to a certain semantic polyphony and creative polysemy.
As for the second aspect, the issue of regulation in different contexts, it is hardly touched upon in this special issue. But we would like to outline the main contours of the ethical and legal regulation of AI. Today, many countries are developing their own versions of legal and ethical regulation of AI, primarily the USA, Europe, Russia, China, and Japan. This is driven by the need to protect human dignity and personal integrity, ensure the rights of vulnerable social groups, and limit the social inequality that may arise from the use of AI technologies (Stahl & Eke, 2024; Lee, 2023).
Thus, the AI Ethics Code in Russia establishes general ethical principles and standards of behavior that should guide participants in relations in the field of AI.4 It takes into account the requirements of the National Strategy for the Development of Artificial Intelligence for the period until 2030, approved by the President of the Russian Federation. This is an open project that is constantly being supplemented and refined. In 2024, a number of Russian companies signed the Declaration on the responsible development and use of services in the field of generative AI.5 The signatories agreed on principles of security and transparency, the ethical treatment of sensitive topics, measures to prevent abuse and misinformation, and educating users about the possibilities of the new technologies. The Declaration establishes ethical principles and recommendations for a responsible attitude towards AI not only for developers and researchers but also for users of neural network services.
The Chinese experience is also worthy of attention. In October 2023, China's Ministry of Science and Technology published a code of ethics that aims to regulate existing and emerging artificial intelligence models. China is opting for a strong regulatory model in which the state thinks very seriously about the long-term social transformations associated with AI (from social exclusion to existential risks and offensive speech) and actively tries to manage and guide these transformations.
It is important to emphasize that there is a common denominator among all the ethical projects and codes in the USA, Europe, Russia, and Asia. In particular, the ethical specifications for next-generation artificial intelligence begin with the very clear premise that AI technologies must always be under human control and that only humans have full decision-making authority. In this sense, we are not talking about the autonomy of machine intelligence, although in recent years philosophers and lawyers have been actively developing the concept of distributed responsibility that includes people and autonomous intelligent agents (Christen et al., 2023; Tsamados et al., 2024).
These questions have important implications for theoretical debates in cognitive science, changing our understanding of what cognition means in artificial and natural intelligence. This special issue examines ChatGPT as a subject of philosophical analysis from positions of cautious optimism as well as rather harsh criticism. It includes six articles covering a wide range of topics. The first group of researchers emphasizes that machine understanding and communication match human practice. Others argue that AI cannot reach human levels of intelligence because it lacks conceptual thinking and the ability to create. Such contradictory interpretations only confirm the complexity and ambiguity of the issues at stake.
4 AI Ethics Code in Russia. https://ethics.a-ai.ru/
5 AI Alliance participants signed a declaration on the responsible development and use of generative AI as part of AI Day at the Russia International Exhibition. https://ai.gov.ru/mediacenter/uchastniki-alyansa-v-sfere-ii-podpisali-deklaratsiyu-ob-otvetstvennoy-razrabotke-i-ispolzovanii-gene/ (in Russian)
Vladimir Arshinov and Maxim Yanukovich's "Neural Networks as Embodied Observers of Complexity: An Enactive Approach" examines neural networks through the enactivist paradigm, which views cognition as arising from an organism's interaction with its environment. It argues that neural networks, as complex adaptive systems, evolve through continuous feedback and adaptation, resembling biological systems. This perspective sees knowledge as actively constructed, not passively processed, and highlights the concept of "structural coupling," whereby neural networks co-evolve with their information ecosystems. By portraying machine cognition as similar to human cognitive processes, the article suggests an epistemological shift in understanding cognition, with implications for both technical applications and cognitive science debates (Arshinov & Yanukovich, 2024).
Vladimir Shalack's (2024) "Exposing Illusions - The Limits of AI by the Example of ChatGPT" critically discusses developments in artificial intelligence, focusing on OpenAI's ChatGPT. The concept of AI, proposed in 1950 by Turing along with a test to verify its achievement, remains difficult to define. The author argues that true intelligence involves more than pattern recognition, self-learning, and purposeful activity. It includes conceptual thinking, language representation, and reasoning - traits unique to humans. Historically, AI has developed through logical and neural network approaches. Neural networks struggle to explain their reasoning, complicating the verification of their conclusions. Examples show that ChatGPT fails at simple conceptual reasoning due to fundamental limitations of its language model that cannot be fixed with more training. Additionally, ChatGPT is vulnerable to neurohacking, posing risks for decision-making in the field of management.
Rebecca Perez Leon's (2024) "Do Language Models Communicate? Communicative Intent and Reference from a Derridean Perspective" evaluates the arguments made by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell in their article "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" The authors argue that Language Models (LMs) cannot truly communicate or understand because their outputs lack communicative intent and are not based on real-world contexts. This paper contends that such a view is too restrictive and fails to recognize various forms of communication, including those between humans and non-human entities. It argues that communicative intent is not a necessary condition for communication or understanding, as these can occur without real-world grounding, involving hypothetical scenarios instead. Drawing on Derrida's philosophy, the paper presents alternative concepts of communication and understanding, proposing that LMs can indeed be seen as capable of both.
Anna Kartasheva's (2024) "Dialogue as Autocommunication - On Interactions with Large Language Models" focuses on the features of interactive communication with large language models (LLMs). In this format of communication (in the chatbot interface), the recipient and sender of the message coincide, so such a dialogue can be designated as autocommunication. The sender of the message (the LLM) does not formulate the response itself but responds to the user's request based on data provided by society - willingly or not - to train the model. Autocommunication within the framework of dialogue with neural networks is a discursive practice that helps people formulate their own ideas. But that is not all: it is also important to mention the possibility of self-improvement and self-development in communicating with neural networks. Can neural networks make people more creative? Only one thing is indisputable: dialogic relationships benefit all participants in communication.
Alexander Vnutskikh and Sergey Komarov's "Lebenswelt, Digital Phenomenology and the Modification of Human Intelligence" raises the question of in what sense intelligence and communication are human today. The hypothesis of their research is that the digital transformation, leading to the emergence of large language models and talking gadgets, simultaneously leads to a serious modification of human intelligence itself. People communicate as they think. But the modern person, apparently, does not think the same way as the subjects of the "pre-digital" era did. The study of the structures of consciousness of the modern "digital subject" should be the goal of a special "digital phenomenology," as well as of a "digital" anthropology, ontology, axiology, sociology, and psychology based on its understanding of human existence (Vnutskikh & Komarov, 2024).
Andrei Alekseev and Ekaterina Alekseeva's "GPT Assistants and the Challenge of Personological Functionalism" discusses whether it is correct even to speak of "generative artificial intelligence." They argue that it is premature to assert that GPT assistants like ChatGPT can replace humans in sociocultural electronic communication. Personological functionalism, which argues for replacing people with machines, is rooted in Ned Block's psychofunctionalism, advocating the inclusion of "meaning" in order to pass the original Turing test. In addition, personological functionalism requires "creativity" for passing the Turing test. The paper demonstrates that GPT assistants fail the creativity test. To highlight their inability to pass the Turing test for meaningfulness, the authors modify Block's machines of 1978 and 1981 by integrating neurocomputers with their symbolic versions. This expanded Block test reinforces the argument that GPT assistants cannot fulfill the roles proposed by psychological or personological functionalism (Alekseev & Alekseeva, 2024).
When we evaluate the capacity of ChatGPT to match or surpass human capabilities, this is evidently an invitation to look at ourselves. Some of the authors in this collection offer theoretical accounts of human communication, understanding, and thought that allow for machines to do the same (Arshinov & Yanukovich, 2024; Perez Leon, 2024; Vnutskikh & Komarov, 2024). Others cite creativity and conceptual reasoning to highlight an unbridgeable gap between human and machine intelligence (Alekseev & Alekseeva, 2024; Kartasheva, 2024; Shalack, 2024).
All this calls for comprehensive investigation and prudent reflection on the Voices of Reason, Responsibility, and Regulation. The following collection of papers can do no more than make a beginning.
REFERENCES
Alekseev, A. Y., & Alekseeva, E. A. (2024). GPT Assistants and the Challenge of Personological Functionalism. Technology and Language, 5(2), 80-99. https://doi.org/10.48417/technolang.2024.02.07
Arshinov, V. I., & Yanukovich, M. F. (2024). Neural Networks as Embodied Observers of Complexity: An Enactive Approach. Technology and Language, 5(2), 11-25. https://doi.org/10.48417/technolang.2024.02.02
Christen, M., Burri, T., Kandul, S., & Vörös, P. (2023). Who is Controlling Whom? Reframing "Meaningful Human Control" of AI Systems in Security. Ethics and Information Technology, 25, 10. https://doi.org/10.1007/s10676-023-09686-x
Cole, J., Sheng, M., & Leung, H. T. (2023). New Generative AI Measures in China.
Kartasheva, A. (2024). Dialogue as Autocommunication - On Interactions with Large Language Models. Technology and Language, 5(2), 57-66. https://doi.org/10.48417/technolang.2024.02.05
Lee, D. D. (2023). Ethical AI: The Philosophy of ChatGPT (Code of Governance). Independently published.
Perez Leon, R. (2024). Do Language Models Communicate? Communicative Intent and Reference from a Derridean Perspective. Technology and Language, 5(2), 40-56. https://doi.org/10.48417/technolang.2024.02.04
Schwartz, E. H. (2022, August 2). GPT-3 AI Successfully Mimics Philosopher Daniel Dennett. Voicebot.ai. https://voicebot.ai/2022/08/02/gpt-3-ai-successfully-mimics-philosopher-daniel-dennett/
Seredkina, E., & Mezin, E. (2023). Kak mozhet povliyat' ChatGPT na kul'turu dialoga i obrazovaniye? [How can ChatGPT Affect the Culture of Dialogue and Education?]. Naukovedcheskiye issledovaniya, 3, 74-89. https://doi.org/10.31249/scis/2023.03.04
Shalack, V. (2024). Exposing Illusions - The Limits of AI by the Example of ChatGPT. Technology and Language, 5(2), 26-39. https://doi.org/10.48417/technolang.2024.02.03
Stahl, B. C., & Eke, D. (2024). The Ethics of ChatGPT - Exploring the Ethical Issues of an Emerging Technology. International Journal of Information Management, 74, 102700. https://doi.org/10.1016/j.ijinfomgt.2023.102700
Strasser, A., Crosby, M., & Schwitzgebel, E. (2023). How Far Can We Get in Creating a Digital Replica of a Philosopher? Social Robots in Social Institutions, 366, 371-380. https://doi.org/10.3233/FAIA220637
Tsamados, A., Floridi, L., & Taddeo, M. (2024). Human Control of AI Systems: From Supervision to Teaming. AI and Ethics. https://doi.org/10.1007/s43681-024-00489-4
Vnutskikh, A., & Komarov, S. (2024). Lebenswelt, Digital Phenomenology and Modification of Human Intelligence. Technology and Language, 5(2), 67-79. https://doi.org/10.48417/technolang.2024.02.06
THE AUTHORS
Elena Seredkina, [email protected] ORCID 0000-0003-2506-2374
Yongmou Liu, [email protected] ORCID 0000-0002-5785-7553
Received: 29 May 2024
Revised: 17 June 2024
Accepted: 23 June 2024