
https://doi.org/10.48417/technolang.2022.01.06 Research article

Diverse Cultures, Universal Capacities: An Interview with Markus Gabriel

Yue Li 1,2 and Markus Gabriel 3

1 University of Heidelberg, Grabengasse 1, 69117 Heidelberg, Germany
2 Karlsruhe Institute of Technology, Engler Street 4, 76131 Karlsruhe, Germany, yue.li@kit.edu
3 University of Bonn, Poppelsdorfer Allee 28, 53115 Bonn, Germany

Abstract

The documentary Philosophy in the Age of Desire records a short encounter between Markus Gabriel and Hiroshi Ishiguro's Geminoid in 2018. Their exchange on the role of technology in human life, on the conception of human being, and other topics revealed noticeable differences between the German philosopher and the Japanese engineer, but can these be interpreted as "cultural" differences? Four years later, two separate interviews follow up on their conversation. This interview explores their differences by examining Gabriel's own experiences with AI and his definitions of related concepts such as "intelligence," "ethics," and "consciousness." Gabriel emphasizes that, due to our organic precondition, there is a universal capacity for self-understanding; it is only the variability in the expression of self-understandings that results from cultural construction. Focusing on the universal basis of humanity and the influences from Asian philosophy regarding human becoming, Gabriel calls for the further investigation of the cultural presentations of artificial intelligence.

Keywords: Human-Machine Interaction; Intelligence; Ethics; Universalism

Citation: Li, Y. & Gabriel, M. (2022). Diverse Cultures, Universal Capacities: An Interview with Markus Gabriel. Technology and Language, 3(1), 47-56. https://doi.org/10.48417/technolang.2022.01.06

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License


INTRODUCTION

This interview with German philosopher Markus Gabriel took place on February 21, 2022. The idea for this interview originated from the documentary Philosophy in the Age of Desire (Gabriel et al., 2018). It records a short but close encounter in 2018 between Gabriel, Japanese engineer Hiroshi Ishiguro, and Ishiguro's Geminoid. One can observe the different opinions of the European philosopher and the Asian engineer regarding their understanding of humanity and technology. We therefore invited Gabriel to talk more about his experiences in 2018 as well as possible changes of his positions since then regarding robotics and AI (compare Ishiguro et al., 2022).

As it happened, Gabriel had just talked with psychologist and Nobel-prize winning economist Daniel Kahneman about the metaverse. Recently, Professor Gabriel acquired a grant for a research project on the cultural presentation of artificial intelligence: Desirable Digitalisation: Rethinking AI for Just and Sustainable Futures.

Fig. 1. Markus Gabriel in the documentary Philosophy in the Age of Desire (Gabriel et al., 2018).

HUMAN-MACHINE INTERACTION

Yue Li: Please allow me to first go back to 2018. In the documentary, I notice that before you commented on the Geminoid and considered it an "absent-minded neutral guy," you asked a question: "Can I touch it?" (fig. 1). After Ishiguro's approval ("Yes, you can touch"), you first touched the Geminoid's right hand, and then very briefly the face. Although you described the robot as an object in your question with the impersonal pronoun "it," you began the interaction 'politely' with a 'shake of the hand.' Could this be interpreted as a brief confusion or even a moment of the uncanny, caused by the "absent-minded guy"?

Markus Gabriel: There are two layers of an ethical context here. On the one hand, I'm aware of the different ethical stance of Ishiguro as an individual and maybe as a representative of a Japanese culture. He has a different relationship to that object, and that relationship deserves my respect, even though I don't share the same kind of relationship to the object. The Geminoid is an object onto which a lot of emotion is projected, just as children project emotions onto their puppets. Indirectly, I owe that object something from an ethical standpoint. I treat the Geminoid with respect, fully knowing that it is entirely inanimate. But then there is a second layer. Ishiguro's robots are cleverly constructed. Due to its human shape and the texture of its appearance, I - as a human organism - react to it in a certain way. This reaction is universal - there is no difference here between a Chinese, a German or a Japanese person. So, imagine I have a computer or GPT-3 (Generative Pre-trained Transformer 3) and it talks to me: It sounds to me as if someone were talking to me. I think I hear a voice, but I would say that a computer-generated "voice" is not a voice. Just as I don't think that the programmed chess computer plays chess. There's something that looks uncannily as if it were playing chess, but it is not playing chess.

I still defend the following ontology: The object is inanimate, but the context of its use and abuse gives the object cultural value and personal value. This activity of projection deserves my respect. Besides, the objects, in particular Ishiguro's objects, are cleverly constructed in such a way that it's almost impossible for me as a human animal not to take a certain emotional, ethical stance towards them. Thanks for pointing this out in your reading of that scene.

Li: The two layers you mention remind me of Mein Algorithmus und Ich - a book that results from the collaboration between the author and an artificial intelligence (Kehlmann, 2021). This co-writing experience also reveals the two sides: the uncanny moment on the one hand and the refusal of the phenomenological acknowledgment of the AI on the other.

Gabriel: Absolutely. There is the phenomenology and then the cognitive correction mechanism, and the question is which one is right. I have arguments that support my cognitive correction. It is as with Corona: when I was in the infection phase, I was in denial because I am boosted, but I knew that this does not protect me from Corona 100%, so I corrected myself even before I developed symptoms. In this case the correction mechanism trumps the phenomenology. But there are cases where the phenomenology trumps the correction mechanism, which is why AI and robotics raise these important issues. It is not obvious that my ontological claim that these are just a bunch of objects is the correct one. I'm making a defeasible, fallible knowledge-claim.

ETHICS AND INTELLIGENCE

Li: This uncertainty can be observed in contemporary science fiction as well. As opposed to Kehlmann's and your perception, robots are often depicted as morally better than humans - see, for example, the novel Machines Like Me (McEwan, 2019) or the movie I'm Your Man (Schrader, 2021). Back in 2018 you saw the danger of dehumanization and the emergence of a cyber-dictatorship in the Japanese development of humanoids. So why can't a human-like construct, as portrayed in literature or as claimed by Ishiguro, be a moral model or at least a reflective surface for a better understanding of human identity?

Gabriel: I think this is dangerously wrong. Just think about the recent book by Daniel Kahneman (2021), Noise: A Flaw in Human Judgement. It's very clear that human judgment is flawed in comparison to algorithms. Algorithms generally perform much better because their judgments are constructed in a simple way, and therefore they don't vary too much - they behave more coherently. This is empirically tested, so there's no doubt about that.

However, I would again emphasize that these "judgements" do not actually judge, so they don't do ethics. Part of ethics is that it's hard whenever you're facing a real ethical choice. We are facing ethical choices every day in the pandemic: "Should I go out?", "Who should I meet?", "How do we deal with this once we contract the virus?" - what's the best and fairest solution to all these problems? Often one knows the answer, but the question is how we translate the answer into action. It's constitutive of ethics that we are free to decide. The fact that our judgment is flawed is a manifestation of our freedom, which is the condition of responsibility and ethics. Precisely the fact that the algorithm outperforms us, also in what looks like judgment, proves that it is less ethical. Algorithms lack interiority, freedom and ethics because these dimensions of human life have an organic precondition.

I'm a universalist in ethics: Human ethics binds all humans together. Facts about the health effects of the virus just depend on human organisms. There is no difference between a Chinese person vis-à-vis an Australian and a French person as human beings regarding this aspect: The organic preconditions are preconditions for higher-level ethics while robots and other silicon-based information processing systems just didn't evolve in this way.

Li: It's the moral core of every human, the animal part of the human, which really matters.

Gabriel: Yes, I think we are moral as animals. This is what Darwin (1871) said. In The Descent of Man, when it comes to the question of what, if anything, is the difference between the human animal and other animals, his answer is literally that of Kant - the "Categorical Imperative."

Li: What about technology? Back in 2018 Ishiguro defined humans by the formula Human = animal + technology. How would you define the role of technology in our life?

Gabriel: Let's start from my book The Meaning of Thought (Gabriel, 2018/2020), in which I define intelligence. Many people dodge the question, but I think AI researchers should be forced to define it. Here is an attempt: Let intelligence be the capacity of an animal to solve a given problem in a finite amount of time. If some systems solve the same problem faster than others, it means that they are more intelligent. By doing so we can measure intelligence. I think that AI research is in the business of measuring intelligence in that sense of intelligence - but there are other definitions of intelligence - and of producing models of intelligence such as search algorithms. An additional premise is that thought models are not themselves thinkers. One could argue that in the case of AI, the map is the territory. But I think that AI is the model of a target system, which is human and animal intelligence, just like Google Maps provides a model for, say, the Black Forest. It would be a category-mistake to confuse the Black Forest with its representation in Google Maps, though this representation is incredibly helpful and even reveals otherwise hidden features. If I just walk through the Black Forest, I will never figure out how far it is through the Black Forest from Freiburg to Munich. But if I use Google Maps, I will get an incredibly accurate answer.

It is similar in the case of chess. I'm not that bad at playing it, but I'm not a grandmaster. During a game I neglect lots of details while an AI system can even detect patterns in my game that no human will identify. It doesn't mean that the AI system is a better chess player, but it is an incredibly good model of a chess player. I would like to develop a theory which allows me to explain the models of contemporary technology without denying the obvious facts of this incredible technological progress while maintaining the good old 1980s position that it is not a real AI.

Li: It is a refusal of the "Chinese room" or "Turing test": Passing the Turing test does not indicate a real language user.

Gabriel: Absolutely! All the technology specialists always tell me, "You don't know that area. It doesn't have limits." There might not be any technological limits to our models (at least not any time soon, as there is still a lot of space for more processing power), but there is still an ontology which can be modelled but not constructed by technology. AI systems might, as you say, pass the Turing test and they translate German into Chinese better than I can (but worse than you). But all of this is just their performance and they're all performers in so many ways. Imagine I had to screen every web page and remember where some term occurs so as to find that term if needed. I wouldn't get very far searching the internet. Obviously, even on this everyday level, we are dealing with very powerful AI, and this has changed our form of life.

It's not a dualist position that I'm defending. I'm saying that we are becoming more intelligent by using these models. It takes Heidegger more time to walk from Freiburg to Munich without Google Maps than it takes me using my smartphone. I think the velocity of the contemporary era is driven by AI, which makes us solve our problems faster. Superintelligence is happening, but it is us humans who experience it and become more intelligent.

I don't think we need to wait for the Terminator. Let me give you another very optimistic example. How on earth did we manage to deal with this very dangerous pandemic in such an overall efficient way? It seems to be actually worse than the Spanish flu, and yet, compared to 1918, we are doing ok after two years. One of the reasons is that we can go online: it's our digital infrastructure which has allowed us to pay for the lockdown, because we remained economically productive. Besides, thanks to digital technology - which always involves the usage of AI - we have become much more intelligent at solving our problems, including finding vaccines and medication against the virus. The intelligence explosion which we find in all the thought experiments is really happening, but it is not a property of the machine but of the human-machine interface, which is getting more intelligent.

DIVERSE CULTURES, UNIVERSAL CAPACITY

Li: What you say about the relation between humanity and technology reminds me of your dialogue with Ishiguro. He argues that humans are animals who can use technology, which underscores the meaning of technology for the concept of the human. But you deny this and underscore that it is the animal part which really matters and makes our self-concept. Can you explain your view of this "self-understanding"? In the discourse of intercultural robotics (Cheng, 2020), culturally specific robot images have arisen, such as "the Buddha in the robot" vs. "the ghost in the shell." These, in turn, can be traced back to different "images of people" or "self-understandings" in different cultures. Does this contradict a universal ontology or a universal epistemology?

Gabriel: My answer goes back to an ongoing dialogue with the philosophers Thomas Nagel and Paul Boghossian at New York University in 2015. At the time I was deeply puzzled exactly by this point. Chinese, German and English have very different mentalistic vocabularies. The English word "mind," for instance, does not have an exact match in German, nor does the German word "Geist" in English. And in Chinese, "computer" is "diàn nǎo" - electrical brain - you regard the computer as a brain, as opposed to the European linguistic representation.

There are different ways of thinking about ourselves which are encapsulated in language use. It depends, among other things, on culture, art and religion. I call this variability. But your way of formulating it, to which I perfectly subscribe, has an interesting catch. You said that there are different cultures or different self-conceptions. This implies that they have something in common: They are different exercises of the same capacity - the capacity to have an image of oneself. This capacity is the same in China and Bavaria: Markus Söder thinks of himself in a different way than Xi Jinping does. They are in different conditions and have different problems and values. They are very different people, but both think of themselves, have a conception of who and what they are, and this capacity is the same.

Li: And this is consciousness.

Gabriel: I prefer to call it human-mindedness or Geist, because consciousness is too often associated with a subjective inner feeling, the phenomenal quality of our animal existence. The higher order capacity to have a lower order response to what you are is a capacity shared by all humans and it has a biological ground. It is not identical or reducible to the biological ground, but it is also biological because we are all the same kind of animal, the human animal.

NEW INFLUENCES, NEW UNDERSTANDINGS

Li: Reviewing your journey in 2018, with all the developments since then, what has changed? What would change if you revisited Ishiguro's Lab?

Gabriel: Generally, I have been in close contact with various Asian traditions of doing philosophy. This has accompanied me in different ways for almost three decades until now. Over the last four years, this has intensified. Chinese and Japanese philosophers - in particular Zhang Xudong, Zhao Tingyang and Takahiro Nakajima - have profoundly influenced me. Their work as well as our conversations have led me to think of the human being in terms of human becoming.

Zhang Xudong, who is professor of comparative literature at NYU, has just published his work in Chinese on the history of universalism (Zhang, 2021). He argues that human becoming is the right term, and we should think of universalism not as a static thing but as an activity of universalizing: Being human - that capacity - is not a static thing but a historical realization, which can take shape in different ways. That's something that I learned through interaction with different Asian cultures.

All of this comes from India to China and Japan and then takes different forms on the spot with local traditions. There has been an interesting rejection of stable substance ontology ever since the time of Laozi. I read Laozi as a rejection of the idea that there are substances; think of his concepts of "the empty" and "the space between" (Moeller, 2007). The crucial way of thinking in Asian traditions is that relations are not entities: everything is related, but the relations between objects are not more objects. This is an important philosophical fact, and these traditions have a very deep recognition of it. This has been influencing me over the last four years, which is why I now tend to think of all of this as human becoming. My universalism or my humanism has gained another perspective through dialogues with thinkers in Japan and China.

If I revisit that scene in Ishiguro's lab, I would probably be more accommodating to some of his ideas. I wouldn't change my position regarding the ontology, but I would be more aware of the performative dimension. I think I now understand Ishiguro's mind better than at the time. I now see that the way in which you express an ontology can add another layer of cultural difference.

That's one of the reasons why I started my new project on the cultural presentation of artificial intelligence: to fully explore the lower level of the conceptions of the human being and how they may also impact the higher universal human form. Who says that the higher universal human form is stable? Therefore, I stopped at a certain level of thinking about this as universal and stable, and maybe I should regard it as dynamic ...

Li: ... and changeable and constructed by different cultures in different ways?

Gabriel: Definitely. But I think we must never make the mistake of denying the universal basis of humanity. If we say there are "different cultures" and "constructed in different ways," there is always the danger that we stop seeing the humanity in the other - and that must be avoided. If we look at the geopolitical situation right now, what Europe should bring to the global debate is precisely the recognition of humanity in every person, as Kant called it. And that is not a European invention. You find it in Ancient Egypt, China, etc., long before Europe even existed.

Let's just end with a geopolitical statement: the right thing to do in the geopolitical climate right now is the morally good. I think it is a moral mistake that Germany does not hand over vaccine information to Africa. This is what Europe should have learned from its history of colonialism: that other parts of the world, which are incredibly powerful now, like China, demand and deserve respect. One way in which Europe should enter the global debate is by bringing to the table its capacity to attain ethical insight into the universal, but that means thinking globally - decentering. I think we need to decenter much more. Particularly for the new project, this requires bringing these other conceptions into the picture.

China's and Japan's technological success-stories of the last decade cannot merely be a result of financial support of AI research from their governments. It must also be an expression of a different way of thinking of oneself as human - a certain frame of mind. I think that this frame of mind is something which deserves further investigation. What is the cultural difference? I would again emphasize that the construction of cultural difference has the goal of finding something we share rather than something that separates us.

REFERENCES

Cheng, L. (2020). Das Unheimliche der Entfremdung: Humanoide Roboter und ihre Buddha-Natur [The Uncanny of Alienation: Humanoid Robots and Their Buddha-Natures]. Jahrbuch Technikphilosophie, 6, 83-101. https://doi.org/10.5771/9783748904861-83

Darwin, C. (1871). The Descent of Man and Selection in Relation to Sex. John Murray.

Gabriel, M., Ishiguro, H., Kokubun, K., & Chiba, M. (2018, July 15). Yokubo no jidai no tetsugaku: Marukusu gaburieru Nihon o iku [Philosophy in the Age of Desire: Markus Gabriel in Japan] [TV program]. NHK-BS1. https://www.youtube.com/watch?v=H9J19m4ey8g

Gabriel, M. (2020). The Meaning of Thought. Polity Press. (Original work published in 2018)

Jiang, H., Cheng, L., & Ishiguro, H. (2022). The Blurring of the Boundaries between Humans and Robots is a Good Thing and a New Species would be Born: An Interview with Hiroshi Ishiguro. Technology and Language, 3(1), 40-46. https://doi.org/10.48417/technolang.2022.01.05

Kahneman, D. (2021). Noise: A Flaw in Human Judgement. HarperCollins.

Kehlmann, D. (2021). Mein Algorithmus und Ich [My Algorithm and Me]. Klett-Cotta.

McEwan, I. (2019). Machines Like Me. Jonathan Cape.

Moeller, H. G. (Trans.). (2007). Daodejing (Laozi): A Complete Translation and Commentary. Open Court.

Schrader, M. (Director). (2021). I'm Your Man [Film]. Letterbox Filmproduktion; SWR.

Zhang, X. D. (2021). Historical Reflections on Western Discourses of the Universal. Shanghai Renmin Press.

THE AUTHORS

Yue Li, yue.li@kit.edu

Markus Gabriel, gabrielm@uni-bonn.de

Received: 21 November 2021

Revised: 21 February 2022

Accepted: 23 February 2022
