https://doi.org/10.48417/technolang.2022.01.10 Research article
On Talkwithability. Communicative Affordances and Robotic Deception
Leon Pezzica, Darmstadt Technical University, Karolinenpl. 5, Darmstadt, 64289, Germany, [email protected]
Abstract
This paper operates within Mark Coeckelbergh's framework of the linguistic construction of robots. Human-robot relations are conceptualised as affordances that are linguistically mediated, shaped both by the linguistic performances surrounding human-robot interaction and by the robot's characteristics. If the robot signifies the affordance of engaging in human-human-like conversation (talkwithability) but lacks the real affordance to do so, the robot is to be thought of as deceptive. Robot deception is therefore a question of robot design. Deception by robot not only has ethically relevant consequences for the communicating individual but also long-term effects on the human culture of trust. Mark Coeckelbergh's account of the linguistic construction of robots as quasi-subjects excludes the possibility of deceptive robots. According to Coeckelbergh, to formulate such a deception objection one needs to make problematic assumptions about the robot being a mere thing as well as about authenticity, which one must assume can be observed from an objective point of view. It is shown that the affordance-based deception objection to personal robots proposed in this paper can be defended against Coeckelbergh's critique: the detection of affordances is purely experience-based, and the occurrence of deception via affordance-gaps is not in principle limited to robots. In addition, no claims about authenticity are made; instead, affordance-gaps are a matter of appropriate robot signals. Possible methods of bridging the affordance-gap are discussed. - This is one of six commentaries on a 2011 paper by Mark Coeckelbergh: "You, robot: on the linguistic construction of artificial others." Coeckelbergh's response also appears in this issue of Technology and Language.
Keywords: Human-robot relations; Robot ethics; Language; Affordances; Deception
Citation: Pezzica, L. (2022). On Talkwithability. Communicative Affordances and Robotic Deception. Technology and Language, 3(1), 104-110. https://doi.org/10.48417/technolang.2022.01.10
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License
'Some robots are revealed as artefacts that are co-constructed in at least the following ways: they are at the same time engineering constructs and social-linguistic constructs. Their appearance creates social relations that are linguistically mediated', Mark Coeckelbergh (2011) writes to conclude his linguistic-hermeneutic approach towards understanding human-robot interaction (HRI) (p. 67). A robot is a social-linguistic construct insofar as the mode in which we refer to the robot (third versus second person) 'interprets and shapes our relations to robots', constructing them as quasi-others (Coeckelbergh, 2011, p. 64). This 'robot talk' is embedded in a form of life that can be understood in a Wittgensteinian sense1, making possible and shaping the language we use to refer to robots while likewise being shaped by these concrete uses (Coeckelbergh, 2011, p. 65). It is important to note that, while in the end the robot can be understood as a linguistically constructed quasi-other, this construction still depends on how the robot appears to us - which is, on the one hand, indeed shaped by a culture but, on the other hand, also based on certain robot characteristics. To get a better understanding of this phenomenon, I will utilize the concept of affordances stemming from design theory and apply it to HRI.
Coeckelbergh (2011) furthermore responds to what he calls 'the deception objection to personal robots', which is brought forth by opponents of such robots and 'concerns the dual charge that these robots, human-robot relations, human-robot "conversations", etc. are not really persons, not really (social) relations or conversations (i.e. are inauthentic), and that giving them to people would be a matter of deception' (p. 66). He responds by pointing out that such objections involve claims about authenticity to which we do not have unmediated access. After explicating my approach, I will respond to the four ways in which Coeckelbergh deems the deception objection to be problematic.
In the following, I aim to make the case for a more nuanced version of the deception objection within Coeckelbergh's framework of linguistic construction. By drawing on Henrik Skaug Sætra's conception of how social robots can influence the 'human culture of trust', I will argue that designing robots with characteristics that facilitate second-person robot talk is to be seen as morally problematic.
The appearance of the robot creates the social relation which is then linguistically mediated. A more precise way of conceptualising this interaction is offered by the concept of affordance, the 'relationship between the properties of an object and the capabilities of the agent that determine just how the object could possibly be used' (Norman, 2013, p. 11). Just as a door affords to be opened, a robot affords to be talked to. Now, one may ask, in what way does the robot's affordance to be talked to differ from that of another thing, say a puppet, a toaster oven or a stone? Since it is still possible to talk to such a thing, does it not therefore also afford this kind of behaviour? Well, yes and no. One needs to distinguish between real and perceived affordances (Norman, 2013, p. 18; Norman, 1999). While technically a toaster oven affords to be talked to - in the sense that it is possible for an entity capable of language to perform that action - the perceived affordances differ drastically. In comparison to the toaster oven, a robot by its appearance is much more likely to signify the affordance of being talked to (an empirical claim that would need to be verified).
1 For a more detailed understanding see Coeckelbergh (2018).
Beyond that, it may signify the affordance that one can have an actual conversation with it, talk with it. Norman (2013) points out that 'perceived affordances may not be real' and may therefore be 'misleading', which is the case when Wile E. Coyote runs against a wall painted like a tunnel, which signifies the affordance of enterability, or when we try to engage in conversation with a robot that signifies the possibility to do so but lacks the corresponding real affordance (p. 17f). The perceived affordance of being able to talk with the robot in a human-human-like manner is constituted by certain signifiers, certain robot characteristics. What these are is an empirical question, but it seems evident that anthropomorphic cues which usually are signifiers for the possibility of human-human conversation, such as having facial features, reacting to speech in certain ways, lifelike movement, etc., are also signifiers for the human-robot equivalent - although there might be others. Carmina Rodríguez-Hidalgo (2020), for example, proposes that embodiment plays an important role in creating communicative affordances in HRI. Furthermore, research on the anthropomorphisation of robots suggests that humans tend to think of robots as being more anthropomorphous than they are and thus to overestimate their capabilities (Sætra, 2021, p. 282), supporting my thesis on how such an affordance is created. With this affordance-based approach we now have a concept of how the social relation, which afterwards is linguistically mediated, arises. It is important to note that an affordance (whether real or perceived) is still a relation between object and agent, so while being based on robot characteristics, the affordance-based approach is still consistent with Coeckelbergh's (2011) claim that how we approach the robot linguistically influences our relationship to it - and thus its perceived affordances (p. 65).
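Purely as an illustration of this conceptual structure, and not as anything proposed in the paper or an empirical claim about which cues actually do the signifying, the distinction between perceived and real affordances and the gap between them can be caricatured in a short sketch. All names and cues in it (Robot, TALKWITHABILITY_CUES, perceived_talkwithability, affordance_gap) are hypothetical stand-ins.

```python
# A minimal, illustrative sketch of the perceived/real affordance distinction.
# All names and the chosen cues are hypothetical; no empirical claim is made
# about which signifiers actually create perceived talkwithability.
from dataclasses import dataclass, field

@dataclass
class Robot:
    signifiers: set = field(default_factory=set)  # characteristics an observer can perceive
    can_converse: bool = False                    # real affordance: can it sustain a conversation?

# Anthropomorphic cues assumed, for the sake of the example, to signify talkwithability.
TALKWITHABILITY_CUES = {"facial_features", "reacts_to_speech", "lifelike_movement"}

def perceived_talkwithability(robot: Robot) -> bool:
    """Perceived affordance: inferred from the robot's signifiers alone."""
    return bool(robot.signifiers & TALKWITHABILITY_CUES)

def affordance_gap(robot: Robot) -> bool:
    """Deceptive design in the sense used here: perceived but not real talkwithability."""
    return perceived_talkwithability(robot) and not robot.can_converse

# A robot that looks talk-with-able but cannot actually converse exhibits the gap.
demo_robot = Robot(signifiers={"facial_features", "reacts_to_speech"}, can_converse=False)
print(affordance_gap(demo_robot))  # True
```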
In the following it is assumed that robot deception is neither done by the robot itself, which I do not consider an author of its actions, nor is it a matter of self-deception2. Or as Sætra (2021) puts it: 'Robot deception thus refers to deception by robot, and robots are merely the vessel of deception' (p. 279). The question at hand is therefore one regarding deceptive design. It now seems evident that a robot designed in a way that signifies the affordance of talkwithability - understood as having a meaningful human-human-like conversation with it - without affording to actually do so can be called misleading; the question is in what way this is of moral concern. While a discrepancy between perceived and real communicative affordances is obviously a matter of deception on an individual level, Sætra (2021) also emphasises the importance of trust in human-human communication and argues that 'HRI will have spillover effects to HHP', damaging the human culture of trust if 'social robots make institutionalized cooperation unreliable or corrupt' (p. 283). Robot deception, deception by design and deployment of a social robot, therefore arises if the robot's design creates a gap between perceived and real communicative affordances, signifying the former to be more akin to human-human-like conversation than the latter are, and thereby having long-term effects on the human culture of trust.
2 Thus, I will not seek to answer certain questions by Coeckelbergh (2011, p. 68) like 'For instance, is it morally problematic to use the "you" perspective in relation to an artefact?' as - while they may be of interest - they concern a different problem.
Coeckelbergh (2011) shows how a deception objection may be - or, as he claims, is - problematic by identifying four assumptions to which proponents of deception objections seem to be committed (p. 66). I will now respond to those one by one, clarifying how the deception objection introduced in this paper works without making those claims, and refining my argument in the process.
'(1) that talking to things is always and necessarily morally problematic' (Coeckelbergh, 2011, p. 66)
The proposed deception objection is not in principle limited to robots. Instead, deploying things with the same affordance-gap in contexts of human interaction would be a matter of deception as well. It just seems that other things do not signify communicative affordances in this way and therefore lack the moral significance of long-term effects on human trust. An exception might be computers on which we deploy software such as chatbots. While it would need to be researched whether such programs signify the same affordance of talkwithability - perhaps more so if a virtual avatar is included - the same objection would apply on the level of software development.
'(2) that only human relations are real, true, and authentic' (Coeckelbergh, 2011, p. 66)
In order to make the deception objection it is not necessary to make any claims regarding authenticity; perceived and real affordances of the robot are experienced through concrete actions. If the robot does not engage in the conversation, the perceived affordance of talkwithability is discovered not to be real.
'(3) that there is an objective, external point of view that allows us to judge the reality and truth of the human-robot relation' (Coeckelbergh, 2011, p. 66)
Again, the deception objection need not make this claim, but can be entirely experience-based. If the robot is perceived to afford a human-human-like conversation, but fails to do so, it is deceiving. There might be more elaborate robots in the future, which could actually afford talkwithability, in which case my deception objection would not be applicable. But this does not affect the deceptiveness of current-state personal robots. I will also come back to this point in my conclusion.
'(4) that to say that the robot is a thing is completely unproblematic' (Coeckelbergh, 2011, p. 66)
Granting a robot personhood might not be an entirely different matter, as it may go hand in hand with certain communicative affordances, but my argument would still hold even if that were the case. Furthermore, my considerations allow for hybrid forms such as quasi-others; they even presuppose that these hybrid forms exist in the form of a robot that is linguistically approached in the second person due to perceived communicative affordances but does not match those in its real affordances. Not limiting this objection to things raises the question: can a human differ in perceived and real affordances as well, and if so, does the deception objection apply? But because humans are not designed entities and my argument concerns deceptive design, this is not a well-formulated question3.
3 If in the future the design of human beings should be - fully or to some degree - possible, for example through genetic engineering, these considerations would need to be revisited. But I do not see why the deception objection should not apply to human design as well, or why that could be seen as problematic.
It has been shown that human-robot relations can be conceptualised as affordances which are then linguistically mediated, shaped both by the linguistic performances surrounding HRI - which themselves are embedded in a form of life - and by the robot's characteristics. Deception by robot design, then, occurs when the robot signifies the affordance of talkwithability without matching it as a real affordance. One can propose this argument without the problematic assumptions common to deception objections, as it is not based on the thing-status of robots and involves no claims regarding the authenticity of the robot's signals, nor that the human-robot relation can be judged from an objective perspective outside of communicative performances.
So how can the gap between perceived and real communicative affordances be bridged? While I agree with Coeckelbergh (2012), who proposes 'that what we need instead, if anything, are not "authentic" but appropriate emotional responses - appropriate to relevant social contexts' (p. 392), I have argued that as long as those responses are not appropriate, the robot's design can - and must - be thought of as deceptive. Bridging the gap by evoking appropriateness of the robot's responses can be done in two ways: the perceived affordances can be matched to the real ones or the other way around. Perceived affordances are a relation between the robot and the perceiving subject, so they can be altered either by changing the robot's characteristics or by changing how the subject perceives the robot. Implementing signifiers that work against the creation of perceived talkwithability, or deliberately leaving out certain signifiers like a mouth, would be one way to correct perceived affordances. A probably more fruitful approach, however, would be to alter the subject's perception of the robot by 'fine-tuning human expectations about robots' (Coeckelbergh, 2012, p. 393), including a 'bottom-up approach' of educating people about potential, especially long-term, risks of robot deception (Sætra, 2021, p. 284). The other way of aligning perceived and real affordances is to enhance what a robot can do and thereby create a real affordance of talkwithability. But looking at the performance of current state-of-the-art "communicative" robots such as the ones by Hanson Robotics4, one can conclude that we are far from making this possible. Hence, as innovation in robotics progresses, keeping robot deception to a minimum might eventually cease to be a concern, but dealing with current deceptive communicative affordance-gaps is both the developer's task and a broader cultural and educational project.
4 https://www.hansonrobotics.com/hanson-robots/
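The two bridging strategies can likewise be caricatured in code, continuing the hypothetical toy model sketched earlier (again, all names are mine and the snippet only illustrates the conceptual options, not a design proposal).

```python
# Illustrative only, reusing the hypothetical Robot and TALKWITHABILITY_CUES
# from the earlier sketch: two ways of closing the affordance-gap.
def bridge_by_adjusting_signifiers(robot: Robot) -> Robot:
    """Match perceived affordances to real ones, e.g. by leaving out mouth-like cues."""
    robot.signifiers -= TALKWITHABILITY_CUES
    return robot

def bridge_by_extending_capability(robot: Robot) -> Robot:
    """Match real affordances to perceived ones - not achievable with current robots."""
    robot.can_converse = True
    return robot
```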
REFERENCES
Coeckelbergh, M. (2011). You, Robot: On the Linguistic Construction of Artificial Others. AI & Society, 26(1), 61-69. https://doi.org/10.4324/9781315528571
Coeckelbergh, M. (2012). Are Emotional Robots Deceptive? IEEE Transactions on Affective Computing, 3(4), 388-393. https://doi.org/10.1109/T-AFFC.2011.29
Coeckelbergh, M. (2018). Technology Games: Using Wittgenstein for Understanding and Evaluating Technology. Science and Engineering Ethics, 24(2), 1503-1519. https://doi.org/10.1007/s11948-017-9953-8
Norman, D. A. (1999). Affordance, Conventions and Design. Interactions, 6(3), 38-43.
Norman, D. A. (2013). The Design of Everyday Things. Basic Books.
Rodríguez-Hidalgo, C. (2020). Me and My Robot Smiled at One Another: The Process of Socially Enacted Communicative Affordance in Human-Machine Communication. Human-Machine Communication, 1, 55-69. https://doi.org/10.30658/hmc.1.4
Sætra, H. S. (2021). Social Robot Deception and the Culture of Trust. Paladyn, Journal of Behavioral Robotics, 12, 276-286. https://doi.org/10.1515/pjbr-2021-0021
THE AUTHOR
Leon Pezzica, [email protected]
ORCID 0000-0002-7740-6768
Received: 6 January 2022 / Revised: 9 February 2022 / Accepted: 28 February 2022