
https://doi.org/10.48417/technolang.2022.01.14

Research article

Response: Language and Robots

Mark Coeckelbergh
University of Vienna, Universitätsring 1, 1010 Vienna, Austria
mark.coeckelbergh@univie.ac.at

Abstract

Six commentaries on the paper "You, robot: on the linguistic construction of artificial others" articulate different points of view on the significance of linguistic interactions with robots. The author of the paper responds to each of these commentaries by highlighting salient differences. One of these regards the dangerously indeterminate notion of "quasi-other" and whether it should be maintained. Accordingly, the critical study of the linguistic aspects of human-robot relations implies a critical study of society and culture. Another salient difference concerns the question of deception and whether there is a distinction between real and perceived affordances. The prospect of AI systems creating language or co-authoring texts raises the question of the hermeneutic responsibility of humans. And regarding the missing dimension of temporality, studies of macro- and micro-level hermeneutic change become more important.

Keywords: Linguistic constructions of robots; Affordances; Hermeneutic responsibility; Post-phenomenology; Narrative

Citation: Coeckelbergh, M. (2022). Response: Language and robots. Technology and Language, 3(1), 147-154. https://doi.org/10.48417/technolang.2022.01.14

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License


It is a great honour for me to receive comments on my work (Coeckelbergh, 2011), and while in this concise response I cannot do justice to all the fine points of each of the pieces, which came to me as unasked-for but pleasantly surprising and extremely welcome gifts, I am grateful for the opportunity to respond to some of the main arguments.

Larissa Ullmann (2022) wonders how best to describe the ontological hybridity of human-robot relations, that is, in a way that does not strictly separate subject and object. Borrowing from Don Ihde, I used the term "quasi-other," but Ullmann rightly argues that this is not always appropriate, since the characteristics of robots are not necessarily comparable to those of humans. She therefore proposes a new, original linguistic term, "sobject," to describe the relationships between human subjects and robots. In this way she proposes to capture similarity without sameness. This intervention is creative and largely successful, although the "subject" aspect of "sobject" still risks evoking the meaning of the human subject, given that, in contrast to some animal ethicists, most of us do not spend much time thinking about subjectivity in animals. We often assume that humans are the only subjects. Generally, this discussion illustrates how difficult it is to move beyond anthropomorphism and anthropocentrism with regard to these kinds of phenomena. Moving beyond humans as the measure of all things (to use a statement from the ancient Greek philosopher Protagoras) turns out to be more difficult than expected. As von Xylander puts it nicely, using shopping metaphors: "When I attribute subjectivity to a robot, it may dawn on me that I am the subject whose traces I've been following with my shopping cart" (von Xylander, 2022). And it is again language that enables that kind of movement. We can only gain critical awareness of what is happening once we consider what we are doing with language (when we are doing things with robots). For example, as both Ullmann and von Xylander show, when we "do robots" (that is, when we use them and talk about them and to them) we also "do gender." Although it is true that robots come in many forms, all this seems more applicable to humanoid and android robots than to others.

In her piece, Cheryce von Xylander (2022) claims that my analysis of the linguistic construction of artificial others sounds like a plea for the rights of robots. But this was neither my purpose nor my approach. Rather, by attending to the role of language in constructing what robots "are," we can take a critical philosophical attitude towards any arguments about the moral status of robots, including those of people who argue for robot rights. Hence my purpose was not a re-evaluation of the ontological status of robots but rather a meta-evaluation, or rather a meta-description: a description and potentially a better understanding of how we evaluate robots. This was also my aim in articles on robot rights and related writings on moral status. For example, I do not favour a suspension of disbelief: I merely describe that this is how people deal with their experience of robots. I try to understand what is happening. However, von Xylander rightly points to "the obscenity of our Anthropocene ecologies" and the predatory tactics of capitalism, which also impacts modern robotics and indeed risks turning us into robots, already from its very conception. The Czech word robota, introduced into English by Čapek (1920) in his famous play, means forced labor, slavery. Here, again, my use of the term "quasi-other" is rightly seen as dangerous, since it is argued that it might justify using robots within a cannibalist-capitalist logic. Seen from this angle, I conclude that a critical study of the linguistic aspects of human-robot relations must be connected to a critical study of society and culture. For example, I sympathize with the author's remark about sexbots and consumerist culture. This angle needs to be included in any robot ethics and robot politics that claims to be critical.

The question regarding personalism is more complicated. It is not clear to what extent a theory focused on virtue and habit needs to be personalist, and whether the conservative version of personalism rightly questioned by von Xylander (and indeed Bourdieu) is the only version one could have. It is good to be reminded of the tension here between Bourdieu's and, say, MacIntyre's theoretical commitments. But deleting the person altogether, as a poststructuralist reading of Bourdieu seems to do, also comes at a cost. More generally, this piece reminds me that for thinking about technology we need a balanced social theory, which enables analysis at all levels and, like Bourdieu, aims at connecting the levels in a meaningful and fruitful way.

Leon Pezzica (2022) rightly highlights that not only language but also the robot's characteristics construct robots. He then claims that my account does not enable a deception objection. His own affordance-based approach, by contrast, succeeds in this since it is based on experience and affordance-gaps. Pezzica is right that I problematize arguments for deception that are based on a distinction between what the robot "really" is (an object, a machine, etc.) and what the robot seems to be (a person, for example). What the robot "is," I argue, depends on the language we use to construct the status of the robot. However, I doubt whether that implies that we can no longer use the term deception at all in this context. We can still say that a person is deceived; all I claim is that it is problematic to say that something unreal is happening. For example, when an elderly person uses the baby seal robot PARO and thinks that it is a real animal, then that person is deceived. But the reason why has nothing to do with the absence of reality: her experience is real, the robot really makes baby seal sounds, and so on. Rather, the language, experience, and design of the robot work together to create a performance that misleads the person into thinking that the robot is an animal. To further conceptualize what happens, Pezzica's proposal is very helpful: we can use the concept of affordance. What happens is a matter of the relation between the properties of the robot and the capabilities of the agent. However, I hesitate again when Norman (2013) and Pezzica (2022) then make a distinction between perceived and real affordances. The PARO robot really affords being talked to in the way one could talk to a baby seal, for example through its embodiment and the sounds it makes. If the designer succeeds in giving the robot the ability to engage in a relevant form of communication, there is no merely "perceived" affordance; the affordance, the experience, and the performance are real. It just is not such an animal. But there is a similarity in both performance and affordance. And that performance and affordance are enabled by human beings, as Pezzica rightly argues on the basis of Sætra. Deception thus only makes sense once we bring in that human role in enabling, creating, and participating in these performances. I thus welcome adding the concept of affordance to that of performance, but question whether it is necessary, desirable, and justifiable to make a distinction between real and perceived affordances. That being said, there can be differences between expected performances and actual performances. Such gaps are indeed problematic.

Daria Bylieva (2022) rightly and helpfully draws attention to the phenomenon that today language is also used by non-humans such as AI technology, and in a way that is not pre-determined by human uses and that may impact both language and social reality. Use of language does not only construct the representation of relationships but also transforms them. For example, children adjust their language when they speak to an avatar. For their part, machines do not necessarily talk like humans or become more human-like. For example, they may be genderless. They might even use words more effectively and consistently, or create an interlingua. Bylieva shows that it is now technically possible for AI to create texts of which it is no longer possible to say whether they were created by humans or machines. This will change both linguistic and social reality. That language is no longer a human monopoly is a very interesting thought. What new uses of language, and ultimately, what new languages will AI create? How will this change social reality? To conceptualize this, we can use the same framework I proposed: just like humans, AI might also construct reality in particular ways, ways that are not neutral with regard to what things "are." Like humans, their linguistic performances also co-construct what they and we talk about and talk to. However, it will remain up to humans to interpret and make sense of what is going on and what will be going on. As I stress in recent papers, humans are only participants in processes that create meaning. But they are a very important part of it: since meaning is only created through them, they carry the hermeneutic responsibility. Exercising this hermeneutic responsibility, however, will require us to become more attentive to the technological developments Bylieva describes. The time when we could afford to see machines as mere tools - in a linguistic sense or otherwise - is definitely over.

In line with my recent process- and narrative-oriented work, Cathrine Hasse (2022) offers some critical remarks and objections that can be summarized as implying that "things have moved on" - both in a superficial and in a deeper sense - since the publication of my paper on the linguistic construction of artificial others. I agree that the empirical examples could be updated (using Hasse's own well-respected recent work and that of others) and that (pre-existing) practical experience with robots shapes people's concept of robots. Hasse's most interesting point here - at least in my view and in the light of my current interest in time, movement, and technology - is that we must be concerned with "the micro-processes of ongoing changes in local practices." Practical experience moves language and thinking. This change does not only happen at the societal and cultural level, but also in material-linguistic practices and concrete situations. For example, people may lose interest in the robot over time. In my response to postphenomenology I emphasized the macro level of hermeneutic change precisely because postphenomenology often tends to forget not only the temporal dimension of technology - Hasse acknowledges this and emphasizes that relations change over time - but also the wider cultural and societal dimension. However, Hasse is right that it also matters what happens at the micro level. Here empirical research can help. Note that my article in question does give examples of what happens in human-robot interaction at the micro level; yet the time dimension is missing, and in line with my current work and research interests that can and must be remedied. For example, one could study how people speak differently about an assistive device with a voice interface such as Alexa after a few months of use; it might be that people lose interest or discover other uses. With Olya Kudina I have done some research on meaning-making in the case of such devices; here more attention could be given to the time dimension (Kudina & Coeckelbergh, 2021). Another point Hasse offers is that people conceptualize the robot first on the basis of stories; the experience comes afterwards. This is congruent with the experiments of, for example, Kate Darling in the mid-2010s: in an experiment on empathy, participants were given a backstory, and the researchers then observed what the participants did to the robots. However, I do not quite see why this implies that Hasse and I would go in opposite directions: here too, language (in the form of narrative) constructs what the robot "is," that is, how we think about the robot, how we experience the robot, and what we then do with or to the robot. So first there is language, then the experience and the practice. Perhaps, however, this discussion is a kind of chicken-and-egg problem: language use must be seen as an inherent part of our experiences and practices, and language without experience and practice is dead. In this case: someone sees a robot (with particular affordances - see Pezzica!) and experiences that robot, uses a particular kind of word, and in turn that language use again shapes the perception and experience. Or there might first be a narrative, then the experience, and then a different narrative may emerge. What is important in all these cases is to conceptualize and show the mutual interrelation of language and meaning, language and experience, and language and how we think about technology, preferably at both the micro and the macro level. With Hasse, and in line with my current work on performance, time, and technology, one could add that in such studies and reflections it is vital to take into account the shifts and changes in language use as well as in our perception, experience, and thinking about robots. Finally, Hasse rightly remarks that there is also cultural variation. In my recent work (but also earlier, when I wrote about robots in Japan) I am sensitive to this; see for example my recent paper on "The Ubuntu Robot" (Coeckelbergh, 2022). Yet here too, there is no static situation: cultures are always in movement, just like the micro-processes and interactions that tie together uses of words and things and in this way make meaning and let meaning emerge. My recent work, which connects technology use with performance, narrative, time, and meaning, contributes to this project, and I am open to collaboration with researchers, including Hasse, who look at these topics from a more empirical angle.

Yue Li (2022) investigates the hybrid character of robots from a literary angle (in particular, work by McEwan and Kehlmann), sees writing and co-writing as linguistic interaction between humans and robots, and argues that studying this reveals the normative dimension of language. Even if the phenomenon I described (a turn towards use of the "you" when interacting with robots) and the tendency towards human-robot hybridity do not always occur in reality, such language use and the corresponding relationships are constructed in literature. What is added to my approach is the thesis that writing and co-writing constitute particular interactions between humans and robots that are not limited to direct speech. While there are, for example, shifts from impersonal to personal pronouns (which construct the machines in particular ways, as presented in my paper), there is more going on. For example, in Machines Like Me (McEwan, 2019) relationships are constituted by means of narration, self-reflection, and explanation. In My Algorithm and Me (Kehlmann, 2021) AI is a co-writer, which also offers a different perspective. The boundary between humans and robots is blurred in various ways. Here too it is language that creates this ambiguity - fiction, in this case. In line with my current interest in narrative and technology, therefore, I welcome Yue Li's investigations as broadening and complementing the initial framework of my paper, next to being interesting in their own right. Fictional narratives are forms of language use which also construct what we think machines "are" and how we (inter)act with them. These narratives could be analysed in various ways and at various levels, for example at the grammatical level but also at the level of the structure of the narrative as a whole - consider here my work with Wessel Reijers inspired by Ricoeur (Reijers et al., 2021). Metaphors could also be interesting to study, as the author proposes. Moreover, Yue Li shows that fiction is a very helpful instrument to explore and research the ambiguities and confusions that might be created by technologies such as AI and robotics - even if they do not frequently occur today. For example, if and when AI gets better at writing (see also Bylieva's piece), what does and will it mean to co-write with AI? Will we be able to handle that hybridity? What does it imply for our self-image, as persons and as humanity? What does it mean for our knowledge about ourselves? For our relationships? What demons will we summon if AI shows us new, darker sides of the human mind? Or is this not possible? And what are the new, creative and indeed poetic possibilities?

REFERENCES

Bylieva, D. (2022). Language of AI. Technology and Language, 3(1), 111-126. https://doi.org/10.48417/technolang.2022.01.11

Čapek, K. (1920). R.U.R.: Rossumovi Univerzální Roboti. Animedia.

Coeckelbergh, M. (2011). You, Robot: On the Linguistic Construction of Artificial Others. AI & Society, 26(1), 61-69. https://doi.org/10.1007/s00146-010-0289-z

Coeckelbergh, M. (2022). The Ubuntu Robot: Towards a Relational Conceptual Framework for Intercultural Robotics. Science and Engineering Ethics, 28(2). https://doi.org/10.1007/s11948-022-00370-9

Hasse, C. (2022). Language and Robots: From Relations to Processes of Relations. Technology and Language, 3(1), 127-135. https://doi.org/10.48417/technolang.2022.01.12

Kehlmann, D. (2021). Mein Algorithmus und Ich [My Algorithm and Me]. Klett-Cotta.

Kudina, O., & Coeckelbergh, M. (2021). "Alexa, define empowerment": Voice Assistants at Home, Appropriation and Technoperformances. Journal of Information, Communication and Ethics in Society, 19(2), 299-312. https://doi.org/10.1108/JICES-06-2020-0072

Li, Y. (2022). Affirming and Denying the Hybrid Character of Robots: Literary Investigations. Technology and Language, 3(1), 136-146. https://doi.org/10.48417/technolang.2022.01.13

McEwan, I. (2019). Machines Like Me. Jonathan Cape.

Norman, D. A. (2013). The Design of Everyday Things. Basic Books.

Pezzica, L. (2022). On Talkwithability: Communicative Affordances and Robotic Deception. Technology and Language, 3(1), 104-110. https://doi.org/10.48417/technolang.2021.04.10

Reijers, W., Romele, A., & Coeckelbergh, M. (2021). Interpreting Technology: Ricoeur on Questions Concerning Ethics and Philosophy of Technology. Rowman & Littlefield Publishers.

Ullmann, L. (2022). The Quasi-Other as a Sobject. Technology and Language, 3(1), 76-81. https://doi.org/10.48417/technolang.2022.01.08

von Xylander, C. (2022). Quipping Equipment. Apropos of Robots and Kantian Chatbots: Commentary on Mark Coeckelbergh, "You, robot: on the linguistic construction of artificial others" (2011). Technology and Language, 3(1), 82-103. https://doi.org/10.48417/technolang.2021.04.09

THE AUTHOR

Mark Coeckelbergh, mark.coeckelbergh@univie.ac.at
ORCID 0000-0001-9576-1002

Received: 10 March 2022
Revised: 21 March 2022
Accepted: 21 March 2022
