
https://doi.org/10.48417/technolang.2022.02.03
Research article

Instructing Tacit Knowledge: Epistemologies of Sensory-Based Robotic Systems

Regina Wuzella
University of Siegen, Herrengarten 3, 57072 Siegen, Germany
regina.wuzella@uni-siegen.de

Abstract

The article tries to outline the supposed precarity of the body (or of body-bound knowledge) in the context of AI-based environments by re-negotiating the borders of formalizing material-based, cognitive and tacit knowledge with regard to the (robotic) gesture (of grasping). In the following, it will be a matter of tracing the epistemes underlying this simulation, which relies on specific instructions - an instruction being understood in this context as a specific rule or command and hence by nature an explicit directive to execute a task on a behavioral level. How concepts of embodied knowledge are inscribed in the fabrication of the systems, how they can be recognized, and how human corporeal involvement can be described on different levels of fabrication and use is therefore part of the analysis: for the special case of humanoid-designed robots, the challenges are located in the anthropomimetic fabrication on a computational level as well as in the production of anthropomorphic design, and thus connect to a specific knowledge of human movement and the human sensory apparatus.

Keywords: Robotic Manipulation; Instruction as Translation; Embodied/Distributed Embodiment; Machine Learning; Deep Learning; Human-Machine Interaction; Philosophy of Technology; Philosophy of Media

Acknowledgment: Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 262513311 - SFB 1187

Citation: Wuzella, R. (2022). Instructing Tacit Knowledge: Epistemologies of Sensory-Based Robotic Systems. Technology and Language, 3(2), 14-37. https://doi.org/10.48417/technolang.2022.02.03

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License


INTRODUCTION: INSTRUCTING TACIT KNOWLEDGE

An instruction is a specific rule or command and hence by nature an explicit directive to execute a task on a behavioral level. In the following I want to discuss the notion of instruction in regard to machine learning-based robotic systems, amongst them soft robotic systems: I will outline the limits of explicit instructions in terms of the question of the translatability of tacit/embodied knowledge (cf. Polanyi, 1985; Collins et al., 2001) on a computational and morphological level with regard to specific outputs and controlled behaviour. To do so, I will take a close look at the processes of instruction in the context of robotic grasping devices and furthermore take the pivotal role of (tactile, soft) sensors into account. The (smart) material used within soft robotics/morphological computing will itself be questioned as a quasi-agent of instruction. Since this paper is written from a media-theoretical and STS perspective, I will understand the discussed robotic grasping devices as media compounds that are realized against the background of specific epistemic assumptions and socio-technological conditions. I mainly rely on examples from the fabrication of the (robotic) gesture of grasping, which plays a crucial role in (collaborative) robotics in actively exploring and responding to the environment: e.g. in the situational embedding of robotic systems, design principles, and the question of the "embodiment" of formalized knowledge (embodied knowledge). Which epistemic parameters and presuppositions - for example, the anthropo- and biomimetic models at work in current implementations of the robotic gripper arm - need to be reflected upon in this context? Through the sensor technologies used in mobile robotics, which on the one hand generate multisensory data from the environment and on the other are supposed to facilitate interactional processes by simulating human sensory systems, new perspectives can - so it is assumed - be developed with regard to the possibilities of formalization and the significance of embodied knowledge in the context of AI-based collaborative systems. The interplay of computation and design is becoming increasingly decisive in the field of social/companion robotics and thus evokes general discourses about the role of corporeality and physique in cognitive performance; to a certain extent, this comes to a head with the use of smart materials1, such as those used in the field of soft robotics or humanoid robotics. The article tries to outline the supposed precarity of the body (or of body-bound knowledge) in the context of AI-based environments by re-negotiating the borders of formalizing material-based, cognitive and tacit knowledge with regard to the (robotic) gesture (of grasping).

1 By smart materials, we primarily mean materials that are deformable (see soft robotics), i.e. that differ from classic rigid materials such as metal, which are primarily used in industrial robotics. Deformable materials - such as silicone elastomer, types of hydrogel, even liquids or gases, etc. - make robotic actuators more elastic and allow them to adapt more flexibly to their environment. In addition, it is also possible to embed deformable, elastic sensors in the material itself (see the GelSight sensor below) or to fuse them with the material, allowing a wider span of data generation from the environment. The term smart materials is also used in a very general ecological sense, referring to the biodegradability of sustainable materials.

THE GESTURE OF GRASPING

The robotic hand has become an emblematic image of automated work. The transfer of activities from the human hand to the robotic hand has long since ceased to be limited to industrial manufacturing techniques, most notably those of the automotive industry, and has penetrated fields ranging from medical diagnostics to automated care work in/as part of so-called ambient assisted living environments. In relation to the latter, technologies such as robotic interaction with the environment encounter specific challenges: unpredictability in the interaction with humans, and an environment that is not (yet) adapted qua mechanization to mobile robotic systems. In the field of machine learning/deep learning-based humanoid robotics, multi-modal sensor technologies are therefore becoming increasingly important to ensure smooth and "intuitive" processes of robotic machines. Sensors increasingly represent a kind of key element in terms of (active or passive) interaction with digital technologies in general. Sensors can be understood here as a kind of threshold medium, since they initially provide the "preconditions that precede [the] translation chains" (cf. Thielmann, 2019, p. 2). In this process, new assemblages are always created in combination with technical things/objects: embedded in a technical, material structure, sensors are always part of it, acting as contact zones that invite users - depending on the design of the objects or infrastructures in which they are embedded - to participate in specific ways, or even guide them to do so. Sensors are openings and closings to infrastructures of processing, which in turn follow their own temporalities and frequencies of data transmission: linked back to servers and in exchange with other sensors, they collect data from the environment that is processed in computational processes. In the field of robotic interaction, it is increasingly possible to offload computational processes into the robot "body" via sensor technologies, as is the case with soft robotics or morphological computation systems, for example. The goal of humanoid robotic interaction is to simulate human perception and ultimately to replicate it, in interaction with humans, as actions; in the context of machine learning-based robotics, such systems can replace humans themselves in some areas as more efficient, secure and safe agents (care robots, medical assistance systems, etc.).

FIGURES OF THOUGHT OF THE SENSORY AND ALGORITHMIC INSTRUCTION

Taken on their own, sensors are confronted with nothing more than an endless noise of unspecified data streams: they are thus initially nothing more than a kind of perforation between the environment and computer systems, spanning zones of translation and transmission. Sensor-based technologies, which unfold both on a data-processing level and as medial practices, can only be understood in the interplay of the technologies employed (artificial intelligence, engineering, design, etc.), which unfold their own dynamics against this background and can only be partially experienced by the human sensory apparatus. The very fact that data-processing media are also always time-critical would exclude any kind of human experience. At this point, it is no longer a question of how and whether human perception can be replicated in the robotic system, but of how the exchange of experience in the interaction is to be described from the human side. And from this perspective, most processes involving sensory data processing are initially no longer directed at human perception at all, because a "microtemporality that subverts intuition and discernment" (Sprenger, 2014, p. 14) underpins what is happening. Sensor-based technologies generate inherent dynamics that lie below the threshold of human consciousness. Mark Hansen captures processes that occur below or beyond human sensory perception with the concept of worldly sensibility (cf. Hansen, 2018), which refers to a moment of technical-machine processing in which human perception remains outside. At the same time, however, Hansen understands this dimension of machine processing as a "quasi-technical extension of phenomenological reflection" (Krtilova, 2016, p. 102). However, since cognition and reflection in the phenomenological sense require real - i.e. temporally and spatially situated - experientiality (p. 102), this would exclude the possibility of any kind of reflection that includes experientiality as part of it - precisely because it is located outside of human consciousness: neither from the machine nor from the human side. If sensors capture data, the possibility of a description, at least in the sense of a human sensual experience, breaks off at this point. This results in a difficulty in describing sensor-based technologies. In general, with the emergence of digital mobile, wearable and networked technologies, the classical single medium in the sense of a technical a priori no longer seems conceivable. At the same time, the WHAT has increasingly given way to the HOW, which means that the description of media structures is primarily concerned with performative acts, processes, and relations, which, among other things, in the sense of actor-network theory or actor-media theory, are based on the agency of the actors (human or non-human) and understand media as and in chains of operations, translations, and transformations between things and people. If humans are in constant communication with sensory media, even if this communication is located outside of human experience, it would rather be a social a priori that one could speak of here. But how can such a program be implemented if observability - especially in the case of sensor-based technologies - is not possible in places, breaks off or, better, advances into other time-space logics? While the network metaphor was long used to describe operational chains, it seems to have become more and more obsolete. New figurative models2 to capture these processes are emerging: for example, an expanded understanding of the network has developed - away from flat and vertical connecting lines that only transmit and translate from one point to another (cf. Galloway & Thacker, 2007) - as well as figures of thought such as that of the fabric, which in the characteristics of its micro-architectural nature refers, among other things, to the socio-technical (pre-)conditions of the transmission of data in relation to sensor media (cf. Thielmann, 2019, p. 3-4). James Ash uses the term phases to capture the dynamic processes that smart objects generally produce: the spatio-temporal modes of these processes thereby unfold (disclose) the respective specific properties of these objects (cf. Ash, 2018). As inherent characteristics of smart objects, Ash assigns them an anticipatory orientation with respect to the environment in which they operate (protentiality) and further describes a kind of contingent combinability of different components inherent in smart objects in general (intentionality) (Ash, 2018, p. 10-19). How the modes of the respective phases3 can be made operable for an understanding of sensor media remains to be further investigated. It remains to be noted here that sensor-based technologies - Ash refers here to smart objects - not only follow their own sequences, but within these sequences address each other qua high-frequency clocking, send sensory signals, and locate each other: these processes would - loosely following Ash - first unfold the agency of digital objects/technologies and repeatedly update them in interaction, at points independent of human intervention. Sensor technologies can be located here as a kind of key medium, since they provide the pivotal point for feeding in and processing signals (be it gestures to be recognized, facial expressions, but also sensory data such as temperature, pressure, etc.). In order not merely to detect and gather white noise, sensory systems are embedded in a compound of ML-based systems that execute a set of algorithmic techniques to process and translate sensor-based signals (cf. Rieder, 2020). Bernhard Rieder (2020) understands AI-based systems to be, amongst other things, specific combinations of algorithmic techniques: a set of heuristic procedures for producing operations and behaviors in the context of computation. An algorithm can be understood as "a computational method of calculation" (Jaton, 2021, p. 5) and is "a procedure that takes any of the possible input instances and transforms it to the desired output" (Skiena, 2008, p. 3). Jaton considers the practices and actions brought forth by algorithmic commands in an "action-oriented way," and he further specifies: "In view of the inquiry's empirical results, algorithms may be considered, but certainly not reduced to, uncertain products of ground-truthing, programming, and formulating activities" (Jaton, 2021, Glossary). In this sense, algorithms are the explicit instructions to process input signals into behaviour, and at the same time they rely on the software and hardware technologies in which they are embedded and with which they execute instructions. Yet, in the context of soft robotics and morphological computing, researchers are aiming for more intuitive operations on a behavioural level, especially within unpredictable and unstructured environments. Here the process of translating tacit or embodied knowledge into explicit directives as behavioural output is contested. To further elaborate on this point, I will first describe the complexity of translating and synchronising multi-sensory signals from the environment in order to simulate the gesture of human grasping.

2 These kinds of figurative models are to be understood as figures of thought that are part of an epistemological process: figures of thought - understood as operational terms - allow for shifts in perspective. As culturally situated conceptualizations, they induce certain logics of action, expose thought structures, and imply them again. The historical-epistemological character of these concepts, as well as their operational capacity, must be examined elsewhere.

3 Spatial modes: diffusion, partition, envelopment; temporal modes: gradation, dispersion, dilatation (Ash, 2018, p. 5).
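Skiena's definition quoted above can be made concrete in a few lines: an algorithm, as an explicit instruction, deterministically maps every admissible input instance to a desired output, leaving nothing implicit. The following minimal sketch is purely illustrative; the function name and the force threshold are hypothetical and not drawn from any system discussed in this article.

```python
# A minimal sketch of an algorithm as explicit instruction: a rule-based directive
# that maps every possible input instance to a fully determined output.
# All names and values here are illustrative, not taken from any cited system.

def grasp_command(contact_force: float, force_target: float = 2.0) -> str:
    """Explicit instruction: close the gripper until a target contact force is reached."""
    if contact_force < force_target:
        return "close"   # not enough contact yet: keep closing
    return "hold"        # target force reached: hold the grasp

# Every input yields a determined output; no room for ambiguity or tacit judgment.
print(grasp_command(0.5))   # -> "close"
print(grasp_command(2.5))   # -> "hold"
```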

MULTIMODAL REPRESENTATIONS/INSTRUCTING MULTIMODAL OPERATIONS

The simulation and simultaneous optimization of the human hand, in particular of the gesture of grasping, is a central research topos in the field of robotic interaction. In industrial robotics, the gestures to be performed by robots take place in pre-structured environments specifically engineered for this purpose. Each gesture is preprogrammed; the planned steps can be performed precisely and repetitively, and far exceed human capacities in the environment of assembly-line work. The control of the environment on the one hand, and the control of and by the monotonously working robotic system on the other, are important parameters for this work performance, because it is rooted in "monofunctionality, specialization and isolation from the world" (cf. Hauser & Freyberg, in press).

In unstructured, unpredictable, or even dangerous environments where cooperative robotic systems are used, multimodal, sensory gripper control systems (gripper actuation) (Gong et al., 2017, p. 1-3) prove to be instrumental for a better, more "intimate" interaction (Gong et al., 2017, p. 13). In routine tasks, such as unlocking a door, humans typically seamlessly combine multiple senses, most notably sight and touch, to accomplish the task. Our visual feedback lets us semantically interpret geometric objects and their properties in order to accurately reach for them; our haptic feedback provides information about the current surface situation and structure of the object, and also about the conditions between the environment and the object (cf. Bohg in Clark, 2020). Multisensory information meets here and is processed simultaneously (cognitively). At Stanford University's Interactive Perception and Robot Learning Lab (IPRL), under the direction of Jeannette Bohg, deep learning-based algorithmic representation models are coupled with multimodal sensing technologies and humanoid design to perfect the robotic grasping gesture: a combination of on-board sensing and motion-capture technologies is being tested here to extend the sensory system (Lee et al., 2019, p. 10). Bohg and her team specialize in humanoid robotic systems designed to operate in uncertain terrain, as is the case in healthcare, underwater environments, or mines, for example. Sensors that record and interpret as much information as possible from the environment are crucial for the targeted performance of tasks (cf. Bohg in Clark, 2020). Various robotic gestures are tested: based on deep reinforcement learning methods, self-supervised (unsupervised) learning is used here as an obligatory learning model. Thus, in this case, the output y (here, recognizing the object and manipulating it) is not given. The input x (for example, images of objects, geometric data) is available, but is present as unordered data. In the inference phase, the system itself determines ways to solve the problem qua learning algorithms that are fed with the sensory data (cf. Sudmann, 2018). Besides the nature of the actuator types, the main challenge lies in the alignment, synchronization, and interpretation of the visual and haptic signals fed in by the sensors (fusion) (Lee et al., 2019, p. 9).

"We examined the value of learning a joint representation of time-aligned multisensory data for contact-rich manipulation tasks. To enable efficient real robot training, we proposed a novel model to encode heterogeneous sensory inputs into a compact multimodal latent representation. [...] Once trained, the representation remained fixed when being used as input to a shallow neural network policy for reinforcement learning. We trained the representation model with self-supervision, eliminating the need for manual annotation. Our experiments with tight clearance peg insertion tasks indicated that they require the multimodal feedback from both vision (RGB and depth) and touch" (Lee et al., 2019, p. 10).
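The pattern the quotation describes - separate encoders whose outputs are fused into one compact latent vector, which then feeds a shallow policy network - can be sketched in a few lines. The following is a minimal illustration of that general architecture under assumed layer sizes, names, and input shapes; it is not the implementation of Lee et al. (2019).

```python
# A minimal PyTorch sketch of multimodal fusion: camera and force-torque signals are
# encoded separately and fused into a compact latent vector consumed by a shallow
# policy. All sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class MultimodalEncoder(nn.Module):
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        # Vision branch: encode a small RGB image into a feature vector.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(64), nn.ReLU(),
        )
        # Haptic branch: encode a short window of 6-axis force-torque readings.
        self.haptic = nn.Sequential(
            nn.Flatten(),
            nn.Linear(6 * 32, 64), nn.ReLU(),   # 32 F/T samples of 6 values each
        )
        # Fusion: concatenate both feature vectors and compress into one latent code.
        self.fusion = nn.Linear(64 + 64, latent_dim)

    def forward(self, rgb: torch.Tensor, ft: torch.Tensor) -> torch.Tensor:
        return self.fusion(torch.cat([self.vision(rgb), self.haptic(ft)], dim=-1))

# Shallow policy head, as in the quoted passage: the latent code is its only input.
policy = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 3))

encoder = MultimodalEncoder()
z = encoder(torch.randn(1, 3, 64, 64), torch.randn(1, 32, 6))  # fake camera + F/T window
action = policy(z.detach())  # detached: the representation stays fixed during RL
print(action.shape)          # torch.Size([1, 3])
```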

Figure 1. Robotic gripper arm performing the peg-in-hole experiment. The OptoForce 6-axis force-torque sensor is used here; the RGB camera is used for initial alignment (Ding et al., 2019)

The example of peg insertion, in which an object was to be fitted into the correct opening by the robot arm (here a two-limbed model), shows that this task cannot be performed without haptic feedback. The system was able to start the task, but visual sensor signals were not sufficient to let the system know when the task had been completed (cf. Bohg in Clark, 2020). Therefore, as part of the peg-insertion experiment, the force-torque (F/T) sensor was used to detect haptic data such as hardness/softness and vibration (Lee et al., 2019, p. 2). It delivered six measurement signals of the 3D surface every millisecond, whereas the camera sensor delivered 648 pixels every 34 milliseconds (cf. Bohg in Clark, 2020). The tactile sensor thus emits signals at much higher frequencies; the challenge, then, is to synchronize this data. To connect these different modalities so that the data can be recognized, processed, and "fused," specific model architectures (Multimodal Fusion Model) are needed for each, which can make computed model representations usable even in combination (Lee et al., 2019, p. 3). Tactile sensors are also critical in the context of robotic manipulation accomplished with multi-limb actuators, especially when there is uncertainty about the position and/or shape of the object being manipulated. Biomimetic sensors, such as the BioTac sensor from SynTouch used elsewhere by Bohg and team (cf. Bohg in Clark, 2020), attempt to replicate the human fingertip and can provide measurement data on temperature values in addition to surface acquisition of the object.
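The synchronization problem just described - one sensor ticking every millisecond, the other roughly every 34 milliseconds - can be illustrated with a small alignment routine. The latest-sample strategy below is one common option, assumed here purely for illustration; it is not the pipeline of Lee et al.

```python
# A minimal sketch of aligning two sensor streams with different rates onto a common
# clock before fusion: a force-torque sensor at 1 kHz and a camera at ~29 Hz.
# The latest-sample strategy is an illustrative assumption, not the cited system.
import numpy as np

ft_times = np.arange(0, 1000)            # F/T timestamps in ms: one reading per ms
ft_data = np.random.randn(1000, 6)       # 6-axis force-torque readings
cam_times = np.arange(0, 1000, 34)       # camera timestamps in ms: one frame per 34 ms

def align_to_frames(cam_times, ft_times, ft_data):
    """For each camera frame, pick the most recent F/T sample at or before it."""
    idx = np.searchsorted(ft_times, cam_times, side="right") - 1
    return ft_data[idx]

aligned_ft = align_to_frames(cam_times, ft_times, ft_data)
print(aligned_ft.shape)  # (30, 6): one F/T vector per camera frame
```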

disclosed. The question of whether and to what extent human intelligence is embodied, and hence whether it can be formalized, is still the subject of ongoing debate. In What Computers Can't Do, Hubert Dreyfus (1978) already argued that human intelligence and expertise depend primarily on unconscious instincts rather than conscious symbolic manipulation, and that these unconscious skills could never be captured within formal rules. Dreyfus (1978) states that "so-called algorithmic programs follow an exhaustive method to arrive at a solution, but become rapidly unwieldy when dealing with practical problems." With the notion of practical problems he points to the role of the body, or embodied knowledge, in intelligent behaviour. It is by now well known that not all cognitive interplay and processing of information happens in the brain or in the nervous system; some of it happens within, or as, the body (cf. Brock, 2021). In an interview, Oliver Brock, professor at TU Berlin and head of its Robotics and Biology Laboratory, refers to this problem via the example of human vision: visual perception is a very strong and important sense in humans; our retina probably has thousands of task-specific feature detectors (cf. Brock in Clark, 2020). But the information that travels from the optical nerve to the brain is not really an image as we imagine it, pixel by pixel; it has already been filtered to a significant degree for information that allows us to survive. In some sense, our retina is the embodiment that has encoded the learnings of our evolution, allowing our visual system to operate in a much lower-dimensional space (cf. Brock, 2021). Therefore one could say that certain sensory systems process information apart from the brain. Another example of embodied knowing in humans is the performance of locomotion, as I will point out in the case of the so-called Passive Dynamic Walkers (see below).

TACTILE SENSORS - INTELLIGENCE OF THE MATERIAL

"Vision starts with the eyes and touch starts with the skin, but in both cases, the most important work is done in the brain, where the raw signals are transformed into a meaningful model of the scene." (Adelson, 2021) Ted Adelson heads the Perceptual Science Group at the MIT Computer Science and Artificial Intelligence Lab (CSAIL). However, the fact that robotic systems need more than force feedback information (i.e., pressure and resistance) to approximate the haptic sensitivity of the human hand was one of the starting points in the development of the GelSight sensor by the Perceptual Science Group led by Ted Adelson at the MIT Computer Science and Artificial Intelligence Lab (CSAIL). (cf. Yuan et al., 2017). In addition to surface sensitivity and perception of mechanical values such as pressure and vibration, the deformable material is also used to simulate a type of tissue stretching based on which further sensory signals are generated (Yuan et al., 2017, p. 2) "We try to address the question with the answer that geometry sensing is equally important as force sensing for robots. To better measure the geometry, a deformable surface, and high spatial resolution sensing are required. With the measurement of high-resolution geometry, the robot will be able to learn more about the objects' shape and texture. Moreover, the dynamic interaction between the soft sensor and the environment can reveal more physical properties of the object being contacted, such as the slipperiness and compliance." (Yuan et al., 2017, p.

The prototype of the GelSight sensor was developed as early as 2009 at MIT by Wenzhen Yuan, Siyuan Dong and Ted Adelson (2017, p. 4). The sensor is attached to the fingertips of the robot arm: it converts mechanical deformations of the material and of the touched objects during contact into visual data. This image information allows conclusions to be drawn about the surfaces of the objects to be manipulated (shape, hardness, friction) and also provides information about the mechanical interaction to be performed (pressure/force, shear, slip) (Yuan et al., 2017, p. 4). The GelSight sensor can measure surfaces in the micrometer range, and because the signals are geometric in nature, the system can calculate the position of the object to be grasped (Yuan et al., 2017, p. 13).
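The principle of reading geometry out of images of a deforming surface can be illustrated with the textbook photometric-stereo formulation: intensities observed under known illumination directions determine the surface normals. The sketch below uses synthetic data and is offered as an illustration of that general principle, not as the GelSight pipeline itself.

```python
# Photometric-stereo sketch: recover surface normals from shading under known lights.
# This is the classic least-squares formulation with synthetic data, illustrating how
# geometry can be extracted from images; it is not the GelSight implementation.
import numpy as np

L = np.array([[ 0.50,  0.00, 0.87],   # three known, non-coplanar light directions
              [-0.25,  0.43, 0.87],
              [-0.25, -0.43, 0.87]])

h, w = 32, 32
normals = np.zeros((h, w, 3))
normals[..., 2] = 1.0                  # synthetic test surface: flat, facing the camera

# Lambertian shading: one intensity image per light, I_k = max(L_k . n, 0).
I = np.clip(np.einsum("kj,hwj->hwk", L, normals), 0.0, None)

# Per-pixel least squares: n ~ pinv(L) @ I, then renormalize to unit length.
n_est = np.einsum("jk,hwk->hwj", np.linalg.pinv(L), I)
n_est /= np.linalg.norm(n_est, axis=-1, keepdims=True)

print(np.allclose(n_est, normals, atol=1e-6))   # True: geometry recovered from shading
```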

Figure 2. The GelSight sensor attached to the "fingertips" of the robot arm, as used by Ted Adelson et al. Here it is being pressed onto hemispherical silicone samples on an experimental basis to obtain data on the hardness of the material (Yuan et al., 2016)

With this sensor technology, it is possible to measure pressure, displacement, or friction during the interaction, as well as the likelihood of the object slipping out of the grasp; the latter is also crucial for the successful execution of the grasping act (Yuan et al., 2017, p. 18). Cameras are embedded in the deformable elastomer layer attached to the fingertips; they film the deforming material during manipulation and derive haptic information from this same visual data. How fast and in what way the material deforms provides additional information about the object's texture (p. 1). These geometric parameters can thus be calculated from the high-resolution images and are additionally supplemented with data from colored markers. Because primarily visual (vision-based) data (p. 2) is used here to gather haptic information, learning algorithms applied in the field of computer vision are also used (p. 2): "Unlike other optically based approaches, GelSight works independently of the optical properties of the surface being touched. The ability to capture material-independent microgeometry is valuable in manufacturing and inspection [...]" (p. 2). There are now several generations of the GelSight sensor, one of which is the application described above: "The fingertip version of GelSight has been successfully applied on robotic grippers, and the new design makes the sensor fabrication and the data accessibility much more convenient. With the tactile information provided by GelSight sensor, a robot will be able to perform much better in multiple tasks related to both perception and manipulation" (p. 20).
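Since slip likelihood is read off the motion of such markers, a toy version of the computation can be sketched: tracked marker positions before and after contact yield a displacement field whose mean gives a global shear estimate, and whose non-uniformity hints at partial slip. All marker counts and thresholds below are illustrative assumptions, not figures from Yuan et al.

```python
# A minimal sketch of estimating shear and incipient slip from a marker displacement
# field, in the spirit of marker-based tactile sensing. Values are illustrative.
import numpy as np

markers_t0 = np.random.rand(50, 2) * 10.0          # marker positions (mm) at time t0
shear = np.array([0.3, 0.1])                        # a uniform shear displacement
markers_t1 = markers_t0 + shear + 0.02 * np.random.randn(50, 2)

disp = markers_t1 - markers_t0                      # per-marker displacement field
mean_shear = disp.mean(axis=0)                      # global shear estimate
spread = np.linalg.norm(disp - mean_shear, axis=1)  # deviation from uniform motion

print("estimated shear (mm):", mean_shear.round(2))
if spread.mean() > 0.1:                             # non-uniform field: markers diverge
    print("warning: displacement field non-uniform, possible partial slip")
```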

Adelson et al. describe the challenges of tactile sensors as follows: measuring pressure and resistance when touching an object to be manipulated represents only one of the tasks here. Likewise, measurements often involve only one or a few contact points, while multiple contact zones are of course needed to generate haptic information during grasping (Yuan et al., 2017, p. 4). The hardware, or rather the design of the robotic extremities, also proves to be too rigid and inflexible ("too bulky") (p. 4): the material nature of the design and the associated rigidity of the actuators in motion make a more "natural", i.e. more elastic, grip on the object difficult. In addition, it is costly to attach elastic sensors to these same hard surfaces (usually metal) (p. 3). The fabrication of these special tactile sensors is itself equally difficult: they are initially costly and, in the case of robotic manipulation, are produced in collaboration with (product) designers and engineers. Access to these sensor technologies is often not possible outside the laboratories where they are produced, for technical or legal reasons. The TacTip sensor is an exception here: its 3D-printing processes have been released by the laboratory as open-source methods. In the case of the OptoForce sensor (OptoForce Kft., Budapest, Hungary) (p. 4) and the BioTac sensor (SynTouch Inc., Montrose, CA, USA) (p. 4), researchers have founded start-ups to make the technology available to other robotics labs. On a socio-technical level, these developments are thus dependent on an interdisciplinary network of developers, engineers, product designers, etc., which in turn is framed by technological, economic, and legal parameters in both fabrication and production.

The examples of multimodal sensing described above are only one aspect of robotic manipulation, unfolding in use within the parameters of the input/output logics of processing learning algorithms on the one hand and material design on the other. Robotic mobile systems build on technological developments such as pervasive/ubiquitous computing and artificial intelligence in addition to the field of sensors. Sensors thus play a key role in the implementation of mobile and efficient robotic systems in various ways: for example, in conjunction with the actuators of the robotic systems (tactile sensors such as the BioTac sensor) or in the offloading of computational processes as sensors embedded in the material itself (soft robotics, morphological computation). Therefore, robotic sensor-based systems can be understood as an epiphenomenon of ambient intelligence (AmI) located within, or as, Internet of Things (IoT) environments: it is a technology that only ever appears in a media network. Based on algorithms, a vast amount of environment-related signals is extrapolated qua sensor technologies. Like electronic environments, humanoid robotic systems aim to respond anticipatively to the presence of humans and thus interact with the environment, yet cannot "grasp" their environment in the human sense (Flusser, 1991, p. 67). For Flusser, the gesture of be-grasping is both theory and practice, unfolding as a non-dichotomous counterpart oscillating between material and mental space: "[T]o touch an object with the fingertips [...], they follow its outline, weigh its weight on the palms (ponder it), pass it from one hand to the other (consider it). This is the 'gesture of apprehension'. It is not (despite the claims of our scientific tradition) a 'pure' gesture of 'objective' observation. [...] The gesture of apprehension is practical." (p. 67) This concept of gesture does not exclude (bodily) techniques as practice and the theoretical reflection of them from each other. A gestural being-in-the-world of humans would thus also be understood as a producing and reflecting of the technical world, which would be inherent to a concept of the mediality of grasping in Flusser's sense. Within humanoid robotics it would thus be a matter of locating a newly directed bodily situatedness. The description of the simulation of human sensory systems should not end here, however, in the statement that a replication of human perception is not possible. In the following, it will be a matter of tracing the epistemes underlying this simulation. How concepts of embodied knowledge are inscribed in the fabrication of the systems, how they can be recognized, and how human corporeal involvement can be described on different levels of fabrication and use is therefore part of the analysis: for the special case of humanoid-designed robots, the challenges are located in the anthropomimetic fabrication on a computational level as well as in the production of anthropomorphic design, and thus connect to a specific knowledge of human movement and the human sensory apparatus.

EXPLICIT LANGUAGE VS. EMBODIED KNOWLEDGE

In the context of the GelSight sensor technology, images become information carriers of haptic dimensions of the object to be manipulated, which are extrapolated from these images as a computational process. Here, an attempt is made to replicate human sensory systems in the interplay of computation, sensor technology and design, and thus to make them more adaptable to unpredictable environments; the challenge lies, among other things, in the multi-sensory "sensing" of the environment. Another problem area that opens up in the field of humanoid robotics is the recognition of body language and of emotions. Gestures, facial expressions, the voice and, increasingly, physiological measurements are data that the interdisciplinary research field of affective computing attempts to systematically capture, interpret, and simulate via machine learning/deep learning technologies; this field of study is increasingly coming into play in the development of humanoid robotics. Here, too, the language and translation performance of whatever (body) knowledge and understanding must always be an explicit one for the system, meaning that the system does not allow ambiguities at the computational level, and any activity of the robot translated as an action must be explicitly formulated within algorithmic models. In order to be able to guarantee the translation performance during human-machine interaction on a performative-physical level, certain performative processes such as gestures and facial expressions have to be recognized, processed, (re-)produced and applied as sign systems (cf. Bächle et al., 2017). Especially in the field of social robots, which are used on both a functional (socially simulated behavior) and a formal level (anthropomorphic design) in diverse social contexts, the use of DRL methods is meant to make it possible for the system to react to the environment as "spontaneously" and "intuitively" as possible (cf. Bächle et al., 2017, p. 72). Due to these deep reinforcement learning (DRL) methods, which in use select data qua random algorithms, Bächle et al. (2017) also attribute to the system a kind of "functional equivalence" (p. 68) of embodied knowledge: the authors see this embodied knowledge in the learning of social structures and their rule systems, and in the implementation of these as robotic actions. However, it is also stated elsewhere that it is always only a "subsequently simulated representation" (p. 76) of the physical world in the context of which social behavior is adapted by the robotic system. In addition to a variety of explicit learning models from different disciplines (behaviorism, cognitivism, constructivism), phenomenologically influenced models of representation can be identified (cf. Sudmann, 2018), which inscribe themselves as models of reality in/as algorithmic models of the world; and which, conversely, can again be identified as input in the description of and reflection on the interaction with ML/DL-based (robotic) systems. The concept of embodied knowledge, as the idea of a corporeal constitution of experience, takes a central role in the phenomenological lineage of Husserl, Merleau-Ponty, Dreyfus et al. The body as the condition of the possibility of world- and self-perception beyond a Cartesian separation of body and mind may be identified as a central intersection. Merleau-Ponty, for example, sees projection, conscious distancing, and perspective vision as conditions for the successful movement of one's own body through space. The interaction of motor skills and vision is fed by the memory of the body and can be recognized as a temporal structure in the successful movement through space. In the second part of the Phenomenology of Perception, Merleau-Ponty describes the "spatiality of one's own body and motor activity" (Merleau-Ponty, 1966/1974, p. 123-129) on the basis of the pathological case study of the patient Schneider, who had been treated by Kurt Goldstein. The chapter reveals, among other things, that the ability to move in space, or to move towards something, is bound to the ability to distinguish figure from ground. Spatial perception, as a kind of body memory, underlies the process as a temporal structure: the patient Schneider suffers from neurological damage which makes it impossible for him to connect the individual bodily functions that interact in a successful body-space orientation (cf. Merleau-Ponty, 1966/1974). For a successful motor function it is necessary "to hold directions, to draw lines of force, to open perspectives, briefly to organize the world according to an instantaneous principle, to base on the geographical environment a milieu of behavior and a system of meaning that expresses in the outside the inner activity of the subject" (Merleau-Ponty, 1966/1974, p. 138). "It is also this 'projection' or 'conjuring' function (in the sense in which a medium conjures up and makes appear an absent one) that makes abstract movement possible: for in order to possess my body independently of any urgent task, in order to be able to play with it at will, in order to be able to write movements in the air [...], I must likewise be able to reverse the natural relation of body and environment [...]" (p. 138).
Here, then, it is not mobility or thought per se, but the faculty of motor projection, a kind of virtual relation to the world and movement, that is always actualized as such in physical-mental connection and that is central to physical interaction with the environment. Reference should be made at this point above all to the temporal structure that Merleau-Ponty defines as a specifically human condition for sensorimotor activity: experience is local in the sense in which it is (back-)coupled to the environment and to the time-space experience of the body. In contrast to this, sensor-based robotic systems span different, parallel space-time continua in the processing of data: here, the sensors are the interface at which this decoupling from the physical world takes place. Moreover, the reference to the world remains coupled to a formalized transmission of information, i.e., based on a decision-logical calculability (cf. Mersch, 2013). This follows a "mathematized communication" whose "basic category [...] is transfer, transmission, translation or mediation [which] is itself still subject to mathematization" (Mersch, 2013, p. 25) and is located within an "algorithmic rationality" (cf. Mersch, 2013). Any form of machine learning-based action and "perception" of robotic manipulation can thus only be meant as a temporally and quantitatively limited, reactively shaping act (cf. Rautzenberg, 2020). Thus, even the capacity of machine "sensing", which Bächle et al. do not set as a categorical quantity here, can only be read as a functional mode, as a metaphor, since it - in contrast to the concept of embodied knowledge or tacit knowledge (cf. Polanyi, 1985) - must be based on formalized, i.e. explicit, language.

DE-CENTERED CENTRALIZATION

With respect to Artificial Intelligence research, cognitive systems, or the "intelligence" of machine systems, were initially equated primarily with the translation of symbolic orders into algorithms; with respect to problems such as the game of Go, the example of DeepMind's AlphaGo shows that such a system can outperform human cognitive abilities in terms of time-critical pattern recognition and computational capacity (cf. Lyre, 2013). The fabrication of "cognitive" embodied systems that face uncontrollable and unpredictable environmental conditions, and are here expected to interact with humans and the environment, as is increasingly the case with robotic systems, is therefore far more difficult. A bodily situatedness and constitution of cognitive systems therefore becomes especially clear in the example of robotics (Lyre, 2013, p. 186). Passive robotic walking machines that move in a self-stabilizing manner by pure mechanical performance using gravity can serve as an example. In the early 1990s, Rodney Brooks (1991) introduced the concept of a subsumption architecture in "Intelligence without Representation" (p. 146). This architecture was meant to reduce internal representation models (p. 145) in robotic systems as much as possible in order to implement certain behavioral modes directly qua sensory input and thus enable the most flexible navigation possible through unknown environments.

Figure 3. A bipedal Passive Dynamic Walker that runs without propulsion. A follow-on model to Tad McGeer's 1990 prototype (Collins et al., 2001).

A central problem for Brooks at the time was how many layers could be fabricated within the framework of the subsumption architecture without the interaction between these layers becoming too complex, and what level of behavioral complexity would be possible without a central representation (Brooks, 1991, p. 145). Whether this approach of "leaner computation" could also integrate higher-level functions such as "learning" was equally unresolved at the time (p. 155).
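The layering Brooks describes can be sketched in a few lines: each behavior reacts directly to sensor values, and an arbitration rule lets one layer override another without any central world model. The behaviors and priority scheme below are illustrative assumptions, a common textbook simplification of Brooks' suppression wiring rather than his original implementation.

```python
# A minimal sketch of a subsumption-style controller in the spirit of Brooks (1991):
# layered behaviors react directly to sensor input; higher-priority layers override
# lower ones without any central representation. Behavior names are illustrative.

def avoid(sensors):
    """Reflex layer: back away from imminent obstacles."""
    if sensors["distance"] < 0.2:
        return "reverse"
    return None  # no opinion: defer to other layers

def seek_charger(sensors):
    """Subsumes wandering when the battery is low."""
    if sensors["battery"] < 0.1:
        return "head_to_charger"
    return None

def wander(sensors):
    """Default layer: move about when nothing else applies."""
    return "forward"

# Layers are consulted in priority order; the first non-None command wins
# (a simplification of Brooks' suppression/inhibition wiring).
LAYERS = [avoid, seek_charger, wander]

def act(sensors):
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:
            return command

print(act({"distance": 1.0, "battery": 0.5}))   # -> "forward"
print(act({"distance": 0.1, "battery": 0.5}))   # -> "reverse"
```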


Figure 4. The walking robot CASSIE, developed by Agility Robotics in 2017 and intended, among other things, for parcel delivery.

This approach of "leaner computation" is also being (re)pursued in more recent models of walking robots. In the case of CASSIE, designed by Agility Robotics in 2017, locomotion is increasingly detached from the central computational operations of the system (cf. Hurst, 2019). A mass-spring system allowed the central drive motor to be smaller and the model to be more efficient; subsequently, several smaller drive motors distributed in the walking body were used. In addition, the robot no longer has visual sensors, but is only equipped with proprioceptive (depth) sensors (cf. Ackermann, 2022). This makes it possible to reduce the computational processes that would normally control the entire system, since parts of the locomotion feed back directly to the environment by means of these sensors. In an attempt to control CASSIE in uncertain terrain exclusively via DRL algorithms, there were comparatively far more complications than in the application of the multi-modal system described (cf. Ackermann, 2022). Only through the reduction of computational processes and the use of mechanical driving force in the form of a multi-modal system does a more "spontaneous" behavior of the robotic system become possible here in the first place: in combination with ML-based processes, yet parallel to computational automation, the system gains a kind of autonomy. While here it is mechanical principles that implement these non-linear action sequences, in the field of soft robotics and morphological computation it is materials and sensor technologies that merge with the actuators themselves in order to move computational processes out of the system (cf. Hauser & Freyberg, in press).
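How mechanics can stand in for explicit control is easy to demonstrate with the spring-mass abstraction underlying legs like CASSIE's: a purely passive vertical hopper already produces periodic motion with no controller at all. The following sketch, with illustrative parameters, simulates that passive dynamic; it is a conceptual demonstration, not a model of CASSIE.

```python
# A minimal sketch of why a mass-spring leg offloads control work: with no motor and
# no controller, the passive dynamics of a vertical spring-mass hopper already produce
# periodic hopping. All parameters are illustrative.
import numpy as np

m, k, g, L0 = 1.0, 400.0, 9.81, 0.5   # mass (kg), stiffness (N/m), gravity, rest leg length (m)
y, v, dt = 0.7, 0.0, 1e-4             # initial height (m), velocity (m/s), time step (s)

heights = []
for _ in range(int(3.0 / dt)):        # simulate 3 s with semi-implicit Euler
    # Stance: below rest length the spring pushes back; flight: gravity only.
    spring = k * (L0 - y) if y < L0 else 0.0
    a = (spring - m * g) / m
    v += a * dt
    y += v * dt
    heights.append(y)

print(f"min height {min(heights):.3f} m, max height {max(heights):.3f} m")
# The bounce repeats without any explicit instruction: the "computation" sits in the spring.
```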

SOFT ROBOTICS/MORPHOLOGICAL COMPUTATION

In the discussion of multi-modal, sensor-based systems and the accompanying reduction of a centrally organized control system, it can thus be stated that transferring the assumption of bodies as mere centrally controllable tools into/as humanoid robotic systems quickly reaches its limits. With the Gestalt principles of soft robotics and morphological computation, one therefore tries to outsource certain controls into the robotic system, or rather into the material used for the body itself. The morphological nature of the body itself becomes part of the (motion) intelligence of the robotic system and can be understood as a kind of embedded intelligence (cf. Bongard & Pfeifer, 2007). For example, the human hand "knows", through the pressure produced by the touched object and the way this pressure is distributed, how firm the grip has to be so that the object does not escape during grasping. The softness of our fingers and hands, and the information that is transmitted and processed by millions of neurons as feedback between the brain and the hand, are crucial for a successful grip (Bakhy & Al-Waily, 2021, p. 382). Soft movement components, such as soft fingertips, are essential in the field of soft robotics. Materials that combine sensing, actuation, and computation are being developed, for example, at Harvard's Wyss Institute for Biologically Inspired Engineering and the Harvard John A. Paulson School of Engineering and Applied Sciences. For this purpose, a platform for 3D-printing processes has been developed with which motion, pressure, touch and temperature signals can be picked up and processed by sensors embedded in so-called smart materials. This type of "integrated sensing" (cf. Burrows, 2018) is to be used, among other things, for robotic assistance systems in the medical field and can seamlessly embed a wide variety of capabilities and materials within a soft body by means of "embedded 3D printing" (cf. Burrows, 2018).

Figure 5. This soft robotic gripper is the result of a platform technology developed by Harvard researchers to create soft robots with embedded sensors that can sense inputs as diverse as movement, pressure, touch, and temperature (Burrows, 2018)

For this, an organic, ionically conductive liquid ink is produced via 3D printing within the soft elastomer matrices on which many soft robotics technologies are based (see Burrows, 2018). This process allows sensors that are normally too rigid to be embedded directly into soft tissues. The tripartite "soft robotic gripper" seen in the image is capable of detecting pressure relief, contact, temperature, and curvature; in this case, additional light-sensitive and deep-touch sensors are fed in. Freyberg and Hauser discuss how the new design principles of soft robotics and morphological computation, which go beyond conventional robotics, can possibly be relocated away from "functionally isolated working bodies" (cf. Hauser & Freyberg, in press) along the principles of mimesis and poiesis: "Our thesis is that the bionic developments presented here in the field of robotics show important points of contact with principles of morphological thinking, as it has developed especially since Goethe. The terminological equivocation in the term 'morphology' proves adequate when one considers the implications of the starting point of Gestalt and structure and its connections with theories of embodiment as they present themselves in the environment of discussions of cognitive science, artificial intelligence, and philosophy of mind, to which researchers from robotics have made important contributions." Morphological Gestalt principles give not only the material but also the form a specific mode of functioning in movement and in interaction with the environment, and thereby refer to the dynamics intrinsic to a body: these are coupled to the neural system of the body and function only in cooperation (cf. Bongard & Pfeifer, 2007, p. 361-364). Bongard and Pfeifer understand the process of off-loading (p. 361) certain neuronal processes into the morphology of the body and the environment as indispensable and an essential condition for the functioning of a biological system: "[...] for recognizing objects in the real world, agents have to achieve data reduction through sensory-motor coordination, thus inducing correlations; for object manipulation we have to exploit the morphology-the anatomy-of the hand and its material properties, i.e., the deformable fingertips and the elasticity of the muscle-tendon system." (p. 361) Using smart materials, we can thus speak of a kind of "encoded" embodiment, through which processes such as walking or grasping can be completely outsourced to the "body", which itself takes over computational processes that are not centrally controlled. "This makes a task much easier, since part of the 'work' will already have been done by the body, reducing the complexity of the robot's computational problems and the corresponding control and learning tasks. This may even extend to situations where parts of the robot break down, resulting in highly resilient, adaptive and intelligent machines." (Hamacher & Hauser, 2015) Here, not only is a knowledge of the physical body itself invoked; this also breaks with the view that the body must be controlled at all times by a central controller - a view which in turn is based on the notion of a separation of head and hand, and which carried over into early robotics as these same dichotomously organized Gestalt principles. The fact that this proves to be a hindrance for multifunctional tasks in unpredictable environments has been taken up by fields such as soft robotics: the specific material itself becomes - to put it bluntly - the sensor, or the robotic body can be thought of as a sensor in an extended sense, being in feedback with the system and serving as an impulse transmitter. The material is to be understood as a carrier of action without instruction from a centralized command and can be regarded as a dynamic variable in use: it is not always foreseeable how it interacts with unstructured environments and behaves in relation to them. On a functional level, embodied/tacit knowledge within AI-based mobile systems is only made possible in this example because of the (partial) "becoming-analog" of the body: it gains a kind of autonomy within pre-calculated rules of automation and can be thought of as a tactile body (Tastkörper), acting upon a certain performative surplus in the interaction.

GESTURES OF THE DIGITAL - DISTRIBUTED EMBODIMENT

Media practices that unfold against the backdrop of a non-experience of processed data nevertheless help shape social spaces that are to be renegotiated. The approach according to which "tacit knowledge" inscribes itself not "categorially" but nevertheless, in its "functionality," into practices of action as a (novel) form of knowledge can illuminate the understanding of how social spaces are shaped in human-machine interaction: against the background of a lifeworld situatedness, i.e., a language-action and gestural procedure that unfolds with the use of digital technologies, a certain intrinsic sense and unpredictability can be sounded out in and as action practice. Specific gestures emerge from this, and they also find application in the fabrication of the technologies. The concept of gesture, which I will understand below as a formation meant to grasp physical bodily action as well as theoretical reflection on it, can perhaps contribute, as a figure of thought of the gestural, to the question of the describability of digital practices of use. If one wants to understand the development of digital sensor-based technologies as a kind of caesura moment, in which it is no longer possible to describe individual chains of operations linearly, then description and analysis qua gestures possibly offer a model for illuminating processes selectively, to be applied as a kind of open taxonomy, always directed at specific technological arrangements. A seamless description of data and media practices as interlocking causal chains of operations is not problematic only because of the increased emergence of sensor technologies, but the latter do complicate such an endeavor, as discussed above. This also entails a bodily involvement that has to be re-located, one which encompasses the human perceptual apparatus. Digital technologies have each produced specific gestures (cf. Heilmann, 2010) and demand the same from us in their technical production. The figure of distributed embodiment (cf. Engemann & Feigelfeld, 2017) points to the necessity of human gestures - of bodily work and bodies in general - as input for machine learning-based technologies, which, as sensory environments, register the movements of bodies and only thus can complete the machine learning process. The human hand, of course, also contributes to aggregating and labeling the underlying training data for this process. Here Engemann ties in with the discourse of an indexicality of the digital and understands these gestures as traces of the physical world inscribed in and as digital technologies: such working gestures are also made in labeling processes in the field of training data for robotic systems. Furthermore, the gestures in the laboratory that are made during the training of robotic systems - often repetitively - are to be mentioned, as well as gestures of fabrication in the design and creation of robotic bodies, gestures of communication, assistance, division of labor, etc. With recourse to McLuhan, Till Heilmann (2010), for example, disentangles the gesture of pressing a key as the gesture of the digital and grasps digital technologies in general as a "technical implementation of tactility" (p. 132). Tactility in McLuhan's sense is a synaesthetic concept; it is neither congruent with that of the haptic nor is it the mere addition of the human senses, but precisely that which opens up, as a complex mesh, as a disposition to the world. (cf. McLuhan, 1964/2005) According to Heilmann, and with reference to Leroi-Gourhan (1964/1988), the hand as well as language as a structured disposition are the prerequisite for digital thinking and for becoming and being-in-the-world in the first place. (cf. Heilmann, 2010) Benjamin Peters also emphasizes this distinction when he states: "Perhaps the most ancient of the predecessors to digital discourse dates back to the Latin source of the term itself - the original digit, or the index finger. This essay takes that origin point - a digit as an index finger - literally. [...] I will explore how digits do what index fingers do - namely count, point and manipulate. ('Manipulate' of course is a back-formation from Latin for handful - a handful of fingers.)" (Peters, 2016, p. 94). Thus, a respectively directed tactile dimension of human sensory endowment would not only be realized in the fabrication and use of digital technologies; our bodies, and in particular our hands, against the backdrop of a historical language-action, moreover constitute the condition for the historically contingent bringing forth of just such technologies.
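How such a working gesture is inscribed as a trace in training data can be indicated with a schematic sketch; the record layout below is entirely hypothetical, a Python illustration describing no actual dataset or labeling tool:

# A hypothetical sketch of how a human working gesture becomes a training
# datum for a grasping system. All field names are invented for illustration.

from dataclasses import dataclass

@dataclass
class GraspLabel:
    image_id: str           # frame showing the object to be grasped
    annotator_id: str       # the human whose hand and judgment produced the label
    grasp_point: tuple      # (x, y) pixel the annotator pointed at or clicked
    grasp_angle_deg: float  # gripper orientation demonstrated or drawn by hand
    success: bool           # outcome as judged by the human observer

# Each record carries the trace of a bodily gesture - pointing, clicking,
# demonstrating - without which the learning process could not be completed.
sample = GraspLabel("frame_0001", "annotator_07", (212, 148), 35.0, True)
print(sample)

The point of the sketch is not the data structure itself but what it registers: every field presupposes a hand that pointed, a body that demonstrated, an eye that judged.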
In a description of robot-human interaction that understands the interplay of machine and human gestures as a mutual learning process, it must also be taken into account that the technical object - as a media compound of anthropomorphically designed robot, action-related artificial intelligence, and human-inspired sensor technology - encounters the human being, as it were, as a Doppelgänger.

CONCLUSION

The above-mentioned examples of sensor-based robotic interaction attempt to reveal, or to open up for renegotiation, the extent to which body-bound knowledge is indispensable for cognitive processes. The paradigm shift that took place in a Western industrialized tradition of thought, at the latest in the 1980s, with respect to machine learning-based systems and body-bound knowledge (cf. Dreyfus, 1978; Brooks, 1991, among others) centers on the insight that corporeality, or body-bound knowledge, is grasped as (one of) the preconditions of cognitive performance in the first place. This insight is momentous in the context of physical interaction with AI-based systems. Addressing the question of situationally bound learning and the maneuverability of automated systems in unpredictable environments can and should further deepen reflection on the underlying epistemic parameters of applied knowledge models of learning, intelligence, and cognition. In the second part of the article, the question of bodily involvement in the genesis of knowledge concepts was sharpened insofar as the human body - first and foremost the human hand (cf. Peters, 2016) - is understood not only as involved in, but as a precondition for, abstract symbolic thinking (cf. Heilmann, 2010): the digital itself is thus identified as a historically contingent, in each case specific, human achievement. These are the horizons against which the analysis of the examples presented takes place.

With the above-mentioned examples it should be shown that multi-sensory data genesis qua sensor technologies may open new accesses to and perspectives on the connections between learning, cognition, and corporeality by looking at the challenges of translating human language and multi-sensory signals into machine action: already on the level of fabrication it becomes clear - in the interplay of (IT) engineering, biological expertise, and humanistic reflection - that situational and intelligent collaboration is not exhausted in/as formalized computational processes, which are hence not sufficient to instruct machine behavior. The example of passive walking machines shows that especially sensorimotor actions such as the locomotion of the body in unstructured environments are the biggest challenges to be translated into algorithms, because they are embedded within highly intuitive, lower-dimensional human cognitive acts in the first place (cf. Brock, 2021). Furthermore, the example of soft/smart materials lays out that matter itself, and hence the body itself, has its own agenda in encountering its environment; on a media-philosophical account, one could think of this in terms of a performative surplus of the material itself. The concept of some form of centralized general intelligence delegated by the brain alone is hence contested: against the background of the research on ML-based robotic gripping systems, it was outlined that cognitive processes and multi-sensory signals are intertwined throughout the body and cannot be captured on an algorithmic, and therefore instructive, level alone. In this regard I laid out that mobile sensing technologies serve as a way to observe this complexity. On a more general account, these questions about the material conditions of knowledge are also to be situated within the framework of the material turn of science studies: here, the body is not only essentially involved in the feedback and applicability of knowledge, but also makes comprehension in the cognitive and figurative sense (cf. Lakoff & Nunez, 2000) possible in the first place.

REFERENCES

Ackermann, B. (2022, January 20). Legged Robots Learn to Hike Harsh Terrain. IEEE Spectrum. https://spectrum.ieee.org/legged-robots-anymal
Adelson, T. (2021). MIT CSAIL. https://www.csail.mit.edu/person/ted-adelson
Ash, J. (2018). Phase Media: Space, Time and the Politics of Smart Objects. Bloomsbury Publishing.
Bakhy, S., & Al-Waily, M. (2021). Development and Modeling of a Soft Finger in Robotics Based on Force Distribution. Journal of Mechanical Engineering Research and Developments, 44(1), 382-39.
Bächle, T., Regier, P., & Bennewitz, M. (2017). Sensor und Sinnlichkeit: Humanoide Roboter als selbstlernende soziale Interfaces und die Obsoleszenz des Impliziten [Sensor and Sensuality: Humanoid Robots as Self-Learning Social Interfaces and the Obsolescence of the Implicit]. In C. Ernst & J. Schröter (Eds.), Navigationen - Zeitschrift für Medien- und Kulturwissenschaften, 17 (pp. 67-86). Universität Siegen. https://doi.org/10.25969/mediarep/1793
Bongard, J., & Pfeifer, R. (2007). How the Body Shapes the Way We Think: A New View of Intelligence. The MIT Press.
Brock, O. (2021, October 25). Co-Design of Soft Robots [Podcast episode, hosted by Marwa ElDiwiny]. SoundCloud. https://soundcloud.com/ieeeras-softrobotics/round2-oliver-brock-co-design-of-soft-robots?si=cd4134120ff647ebb8055c686cae9227
Brooks, R. A. (1991). Intelligence without Representation. Artificial Intelligence, 47(1-3), 139-159. https://doi.org/10.1016/0004-3702(91)90053-M
Burrows, L. (2018, February 28). Novel 3D Printing Method Embeds Sensing Capabilities within Robotic Actuators. Wyss Institute. https://wyss.harvard.edu/news/novel-3d-printing-method-embeds-sensing-capabilities-within-robotic-actuators/
Clark, L. (2020, May 11). Learning to Grasp with Jeannette Bohg [Podcast]. Robohub. https://robohub.org/learning-to-grasp/
Collins, S., Wisse, M., & Ruina, A. (2001). A Three-Dimensional Passive Dynamic Walking Robot with Two Legs and Knees. International Journal of Robotics Research, 20, 607-615. https://doi.org/10.1177/02783640122067561
Ding, J., Wang, C., & Lu, C. (2019). Transferable Force-Torque Dynamics Model for Peg-in-hole Task. http://arxiv.org/abs/1912.00260
Dreyfus, H. (1978). What Computers Can't Do. Harper and Row Publishers.
Engemann, Ch., & Feigelfeld, P. (2017). Distributed Embodiment. In M. Kries, C. Thun-Hohenstein, & A. Klein (Eds.), Hello Robot: Design zwischen Mensch und Maschine (pp. 252-259). Vitra Design.
Flusser, V. (1991). Gesten: Versuch einer Phänomenologie [Gestures: An Attempt at a Phenomenology]. Düsseldorf und Bensheim.
Galloway, A., & Thacker, E. (2007). The Exploit. University of Minnesota Press.
Gong, D., He, R., Yu, J., & Zuo, G. (2017). A Pneumatic Tactile Sensor for Co-Operative Robots. Sensors, 17(11), 2592. https://doi.org/10.3390/s17112592
Hamacher, A., & Hauser, H. (2015). Morphological Computation: The Hidden Superpower of Soft-Bodied Robots. RoboHub. https://robohub.org/morphological-computation-the-hidden-superpower-of-soft-bodied-robots/
Hansen, M. B. N. (2018). Topology of Sensibility. In U. Ekman, J. D. Bolter, L. Diaz, M. Søndergaard, & M. Engberg (Eds.), Ubiquitous Computing, Complexity, and Culture. Routledge. https://doi.org/10.4324/9781315781129
Hauser, H., & Freyberg, S. (in press). Form und Technik: Das morphologische Paradigma der Robotik [Form and Technique: The Morphological Paradigm of Robotics]. In Morphologie als Paradigma in Philosophie und den Wissenschaften (Beihefte der Allgemeinen Zeitschrift für Philosophie). Frommann-Holzboog.
Heilmann, T. A. (2010). Digitalität als Taktilität: McLuhan, der Computer und die Taste [Digitality as Tactility: McLuhan, the Computer and the Key]. Zeitschrift für Medienwissenschaft, 3(2), 125-134. https://doi.org/10.25969/mediarep/2490
Hurst, J. (2019, February 26). Building Robots that Can Go where We Go. IEEE Spectrum. https://spectrum.ieee.org/building-robots-that-can-go-where-we-go
Jaton, F. (2021). The Constitution of Algorithms: Ground Truthing, Programming, Formulating. MIT Press.
Krtilova, K. (2016). Technisches Begreifen: Von „undinglichen Informationen" zu Tangible Interfaces [Technical Understanding: From "Immaterial Information" to Tangible Interfaces]. In J. Sternagel & F. Goppelsröder (Eds.), Techniken des Leibes (pp. 87-106). Velbrück Wissenschaft. https://doi.org/10.5771/9783845280950-87
Lakoff, G., & Nunez, R. E. (2000). Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being. Basic Books.
Lee, M. A., Zhu, Y., Zachares, P., Tan, M., Srinivasan, K., Savarese, S., Fei-Fei, L., Garg, A., & Bohg, J. (2019). Making Sense of Vision and Touch: Learning Multimodal Representations for Contact-Rich Tasks. arXiv:1907.13098
Leroi-Gourhan, A. (1988). Hand und Wort: Die Evolution von Technik, Sprache und Kunst [Hand and Word: The Evolution of Technology, Language and Art]. Suhrkamp. (Original work published 1964)
Lyre, H. (2013). Verkörperlichung und situative Einbettung [Embodiment and Situational Embedding]. In A. Stephan & S. Walter (Eds.), Handbuch Kognitionswissenschaft (pp. 186-192). Metzler.
McLuhan, M. (2005). Understanding Media: The Extensions of Man. Routledge. (Original work published 1964)
Merleau-Ponty, M. (1976). Phänomenologie der Wahrnehmung [Phenomenology of Perception]. De Gruyter. (Original work published 1966)
Mersch, D. (2013). Ordo ab Chao - Order from Noise. Diaphanes.
Peters, B. (2016). Digital. In B. Peters (Ed.), Digital Keywords: A Vocabulary of Information, Society and Culture (pp. 93-109). Princeton University Press.
Polanyi, M. (1985). Implizites Wissen [Tacit Knowledge]. Suhrkamp Verlag.
Rautzenberg, M. (2020). Matters of Choice? Wahl und Entscheidung in algorithmischen Kulturen [Matters of Choice? Choice and Decision in Algorithmic Cultures]. Internationales Jahrbuch für Medienphilosophie, 6(1), 81-94. https://doi.org/10.1515/jbmp-2020-0004
Rieder, B. (2020). Engines of Order: A Mechanology of Algorithmic Techniques. Amsterdam University Press.
Skiena, S. (2008). The Algorithm Design Manual (2nd ed.). Springer.
Sprenger, F. (2014). Die Kontingenz des Gegebenen - zur Zeit der Datenkritik [The Contingency of the Given - at the Time of Data Criticism]. Mediale Kontrolle unter Beobachtung, 3(1), 1-20. https://nbn-resolving.org/urn:nbn:de:0168-ssoar-400210
Sudmann, A. (2018). Zur Einführung: Medien, Infrastrukturen und Technologien des maschinellen Lernens [An Introduction: Media, Infrastructures and Technologies of Machine Learning]. In C. Engemann & A. Sudmann (Eds.), Machine Learning: Medien, Infrastrukturen und Technologien der Künstlichen Intelligenz. Transcript.
Thielmann, T. (2019). Sensormedien: Eine medien- und praxistheoretische Annäherung [Sensor Media: A Media- and Practice-Theoretical Approach]. Media of Cooperation Working Paper Series, 9, 1-10. http://dx.doi.org/10.25819/ubsi/31
Yuan, W., Dong, S., & Adelson, E. H. (2017). GelSight: High-Resolution Robot Tactile Sensors for Estimating Geometry and Force. Sensors, 17(12), 2762. https://doi.org/10.3390/s17122762
Yuan, W., Srinivasan, M. A., & Adelson, E. H. (2016). Estimating Object Hardness with a GelSight Touch Sensor. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 208-215). IEEE. https://doi.org/10.1109/IROS.2016.7759057

СВЕДЕНИЯ ОБ АВТОРЕ / INFORMATION ABOUT THE AUTHOR

Регина Вузелла, regina.wuzella@uni-siegen.de
Regina Wuzella, regina.wuzella@uni-siegen.de

Статья поступила 22 апреля 2022
Received: 22 April 2022

одобрена после рецензирования 10 июня 2022
Revised: 10 June 2022

принята к публикации 10 июня 2022
Accepted: 10 June 2022
