
EPISTEMOLOGY & PHILOSOPHY OF SCIENCE • 2013 • Vol. XXXVII • No. 3

On the Intimate Relationship between Man and Machine

MATTHIAS DELIANO (GERMANY)

The glorification of the cyborg as a new realm of human pleasure fails to recognize, however, how much patience, compliance, and even willingness to endure pain the use of technical means in therapy and rehabilitation demands already today.

Detlef B. Linke

As tools, machines are functional extensions of our body, augmenting and expanding our interaction with the world. Beyond that, western culture has developed a more intimate, metaphorical relationship between man and machine over the centuries. This development started in the anatomical theaters of the Renaissance in the 16th century, when the human body was detached from the person and turned into an object on the dissecting table (Kathan, 2003). Devoid of empathic relationships and personal interests, the body became physically manipulable, could be separated into parts, and could be ascribed dedicated, non-personal functions. This made it possible to view the body as a machine and, vice versa, to employ the mechanistic body as a blueprint for the development of new machines. With the transfer of the body from a personal domain into the realm of technology, technical innovations have not only refined and created new interventions into the body, which marks the success story of modern western medicine. Even more, technological innovation has since been progressively and radically transforming the way we conceive and ultimately experience our body.

As part of the body, the brain has been steadily re-conceptualized as a machine in the light of current technology as well (Kathan, 2003). With mechanical engineering being the dominant technology of the 17th century, the brain at that time was conceived as a hydraulic/pneumatic machine. With the rise of electromagnetism and the demonstration that the brain is electrically excitable, it became an electrical organ. Later on, the network structure of the brain revealed by the 19th-century neuroanatomists provided an analogy to telegraph and telephone nets, and thus a strong link to communication technology, which led to a mutually stimulating and fruitful parallel development of brain and computer science. Thus, John von Neumann's theories of computation, which are the basis of modern computers, were strongly inspired by brain science (von Neumann, 1958). In turn, computational theory reentered brain science with the cognitive turn in the 1970s, and has had a prevailing influence there ever since.

The strength of computational theories lies in the fact that they provide mechanisms of algorithmic problem solving that can be abstracted from their physical implementation. This makes it possible to describe mental processes in terms of computational functions realized by a physical brain machinery, and thereby to alleviate the long-standing mind-body problem (Rorty, 1979). Serving dedicated computational functions, the cognitive performance of the brain/mind can then be quantified by the amount, speed, and precision of information processing. Even our emotions can then be described in the framework of economic computational principles, namely as error signals minimized by machine learning algorithms to optimize computational performance (Glimcher et al., 2008).
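To make this idea concrete, here is a minimal, purely illustrative sketch of such an error-minimizing scheme: a simple delta-rule update in which a value prediction is corrected by a reward prediction error. The learning rate, the toy reward sequence, and the delta rule itself are illustrative assumptions, not the actual formalism of Glimcher et al. (2008).

```python
# Hypothetical toy model: a delta-rule update driven by a reward
# prediction error, illustrating "emotions as error signals minimized
# by machine learning algorithms" in the loosest possible sense.

alpha = 0.1                          # assumed learning rate
V = 0.0                              # predicted value of a stimulus
rewards = [1, 1, 0, 1, 1, 0, 1, 1]   # invented sequence of outcomes

for r in rewards:
    delta = r - V                    # prediction error ("error signal")
    V += alpha * delta               # minimize the error step by step
    print(f"reward={r}  error={delta:+.3f}  value={V:.3f}")
```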

However, whereas the performance of computers steadily increases, human cognitive performance remains strictly bounded and can hardly be optimized. Thus, computational measures of cognitive performance like intelligence, memory span, and perceptual precision show only little improvement by training, and remain prone to errors. The brain/mind rather seems to be optimized on the time scale of biological evolution, and therefore appears to be outpaced by the development of information technology, yielding the impression that the brain/mind is maladapted to modern environments. Moreover, whereas computation as a disembodied process abstracted from its physical implementation can be everlasting, human cognition declines with age and disease, reaching its ultimate end with death. Against the background of increasingly powerful computers, our brain/mind is thus falling into deficit. Used as tools, machines can only externally compensate for this deficit. But with the brain/mind envisaged as a machine itself, we apparently have the opportunity to cancel this deficit by expanding the brain/mind's machinery with technical devices interfacing its internal processes. Such brain-machine interfaces could then directly augment the functionality of the brain/mind, serving as a prosthesis for our internal cognitive system. By the concerted technical expansion of our body, brain, and mind, as proposed by transhumanist movements, we could enhance our limited human performance to overcome our biological destiny, and finally even reach posthuman immortality by uploading our conscious mind from the brain to a disembodied whole-brain emulation running on a renewable and cosmically distributed computer (Kurzweil, 2012). At this point, the project of conceiving human body and mind as a machinery exposes itself as a transcendental, futuristic project, which not only drives the technological convergence of nanotechnology, biotechnology, information technology, and cognitive science, but also exerts tremendous political and economic power. Thus, with the Human Brain Project (HBP), the European Community provides 1 billion Euro of funding for the development of a whole-brain emulation in a supercomputer, although most experts heavily doubt its feasibility.

In a world more and more dominated by machines, all this highlights the importance of reflecting on and clarifying our increasingly intimate relationship to machines. In the development of brain-machine interfaces, this relationship is brought to an extreme, which makes them an interesting case for exploring the dependencies between human nature and artificial devices (Deliano, 2010).

The state of the art of brain-machine interfaces

Brain-machine interfaces have been developed since the 1950s, mainly in the field of medicine, and some of them are already successfully applied in the clinic today as so-called neuroprostheses. Here, brain-machine interfaces provide solutions to the fundamental neurological problem that in the adult mammalian central nervous system the capacity for the intrinsic repair of damage following destructive inflammation, degeneration, or injury is, compared to other parts of the body, quite limited. Thus, in the central nervous system, neural tissue lost through damage is hardly replaced. Although recent findings indicate that neurogenesis from endogenous stem cells occurs in certain regions of the adult brain, the number of newly generated neurons may not be sufficient to replace lost neuronal tissue (Braun & Jessberger, 2013). Even though the brain is highly plastic and can compensate for some brain damage to an amazing degree, even small damage to certain brain regions can have devastating effects on a subject's perceptual, motor, and cognitive performance. Classical treatment of the resulting symptoms consists of substituting rather than restoring the impaired or lost function by external prosthetic tools outside the nervous system. This way, deaf patients do not acquire new hearing but learn lip reading, blind patients do not acquire new vision but learn Braille reading, and paralyzed patients do not reacquire their movement ability but learn to use a wheelchair instead. The alternative is the internal restoration of neural functions by technical devices interfacing selected parts of the nervous system, so-called neuroprostheses (Ohl & Scheich, 2007).

Commonly, the interface consists of electrodes chronically implanted into the brain (Fig. 1A, B, C), through which electric brain activity can be either recorded or stimulated, allowing for causal interactions with the brain. Thereby, the aim is to establish spatially and temporally specific electrical contacts to as many brain cells as possible. This has led to the nanotechnological development of miniaturized electrode systems with up to 1000 electrode contacts. Integrated with amplifiers and stimulators, these electrode systems yield brain chips, which can be durably implanted into the brain without major damage, and which can be controlled wirelessly from outside the skull (Grill et al., 2009). However, brain-computer interfacing might be further revolutionized by a new technique called optogenetics, by which the gene sequences of light-sensitive proteins derived from certain types of algae and bacteria are introduced into brain cells through well-controlled transgenic modifications (Yizhar et al., 2011). Brain cells expressing these proteins can then be selectively activated or suppressed by light delivered to the brain via ultrafine optic fibers (Fig. 1E). This technique makes it possible to target brain cells with certain functions, and to control their electric activity in a much more specific way than electric stimulation (Fig. 1F).

Independent of these hardware aspects, the design of brain-machine interfaces generally rests upon the assumption that the brain generates, from its sensory input, internal representations of reality encoded in the electrical activity of brain cells. In transforming the encoded information through neural computation, new internal representations are formed, by which the brain can solve problems, mediate decisions, and, as a final result, generate motor output, in order to intentionally change the outside world based upon its neural representations (de Charms & Zador, 2000). In most current approaches, brain-machine interfaces aim at accessing these internal representations through direct interaction with the electric brain activity via an electric or optogenetic interface. Central sensory neuroprostheses, for example, are devised to directly encode sensory information into the brain/mind system by electrically stimulating brain cells (Tehovnik & Slocum, 2013). By bypassing damaged sensory brain parts, lost neural functions can be restored. Properties of external stimuli are thereby often thought to be encoded in the brain in topographically organized map representations, with neurons at a certain location in the map responding best to a specific stimulus parameter. Such map representations are often found in a brain structure called cortex, which builds the folded surface of the brain, plays an important integrative role in most cognitive phenomena, and is often regarded as constituting the highest processing level in the hierarchy of the brain. The primary visual cortex, for example, forms such a map of the visual field. Neurons within this map are optimally recruited by the stimulation of the corresponding site in the visual field. Accordingly, electric stimulation of a site in the cortical map elicits the perception of a dot of light, a so-called phosphene, located at the site of the visual field represented by the stimulated map locus. Already in 1953, Krieg proposed that, based on this map organization, spatial patterns of electric stimulation delivered to visual cortex could yield a single coherent raster image of phosphenes, which could be used to restore vision in the blind (Krieg, 1953). Various interfaces for visual, auditory, and somatosensory cortices have been developed since then in order to restore lost sensory functions. However, none of them has yet reached the level of clinical applicability.

Figure 1: Brain implants: electrode arrays (A, B, C) and optogenetic systems (E) for the recording (see Fig. 2) and stimulation (F) of the electric activity of brain cells, as used in human brain-machine interface technology (D) [(A) to (D) from Fig. 1 in Hochberg L.R. et al. (2012), Nature: 442 (7099); (E) from http://www.stanford.edu/group/dlab/optogenetics/; (F) from Fig. 2 in Deisseroth, K.]

On the other hand, brain-machine interfaces reading out information from the brain to restore lost motor functions are much more successful. Thus, neural activity recorded from multiple electrodes in the cortex can be used to reconstruct three-dimensional arm movements (Hatsopoulos & Donoghue, 2009). These movements can be decoded even if they are only intended, without being actually carried out. It has been demonstrated that via such motor interfaces, paralyzed patients who are no longer able to move their limbs can actually operate external devices like a robotic arm by mere intention, and reach a goal like eating a piece of chocolate (Collinger et al., 2013, Fig. 1D). By combining sensory and motor neuroprostheses (Fig. 2A, B), one might then actually devise whole-body neuroprostheses, which replace large parts of the body by rerouting its sensorimotor feedback via a whole-body exoskeleton or a robot (Lebedev & Nicolelis, 2009). Also, first steps are being taken towards neuroprostheses for replacing central, cognitive brain functions. Though far from being applicable, a brain chip is currently under development which aims at emulating the complex functions of the hippocampus, a brain structure that plays an important role in memory formation (Berger et al., 2011). Decoding hippocampal input, then artificially carrying out the hippocampal computations, and finally feeding back the transformed information to the output structures of the hippocampus, such a brain chip could one day replace lost hippocampal functions and thereby alleviate severe memory deficits occurring, for example, with neurodegenerative diseases like Alzheimer's. Finally, neuroprostheses are also designed for suppressing unwanted, pathological brain states by modulating the activity of target structures deep in the brain. Target structures include motor structures, but also so-called limbic structures involved in emotional processes. Besides largely reducing Parkinsonian tremor as a brain pacemaker interfacing motor structures, deep brain stimulation of limbic structures has been demonstrated to be capable of suppressing unwanted symptoms of depression, obsessive-compulsive disorder, and addiction (Hoy & Fitzgerald, 2010).
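As a rough illustration of the decoding step described above, the following sketch fits a linear decoder mapping multi-electrode firing rates to three-dimensional movement by least squares. The synthetic data, the linear tuning model, and all parameters are assumptions for illustration only, not the actual methods of Hatsopoulos & Donoghue (2009).

```python
# Minimal sketch of linear decoding of 3D arm velocity from simulated
# multi-electrode firing rates. Everything here is invented toy data.
import numpy as np

rng = np.random.default_rng(0)
T, n_units = 500, 40                             # time bins, recorded neurons

velocity = rng.standard_normal((T, 3))           # "true" 3D hand velocity
tuning = rng.standard_normal((3, n_units))       # assumed linear tuning per unit
rates = velocity @ tuning + 0.5 * rng.standard_normal((T, n_units))  # noisy rates

# Fit a linear decoder W by least squares: velocity ~ rates @ W
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)
decoded = rates @ W

corr = [np.corrcoef(velocity[:, i], decoded[:, i])[0, 1] for i in range(3)]
print("decoding correlation per axis:", np.round(corr, 3))
```

In real interfaces, the decoder is trained on movements that are observed or imagined during a calibration phase and then applied to ongoing activity in closed loop.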

Cyborg metaphors

As becomes apparent from the research projects described above, the scope of brain-machine interface technology reaches far beyond the development of neuroprosthetic applications for the treatment of specific neuropathologies and disabilities. Although brain-machine interface technology today still concerns only a very small community of ill or handicapped persons, the borderline between pathological or disabled and healthy states is rather fluid. Likewise, the step from restoring lost functions to augmenting normal functions is quite small. Many of us might accordingly become included in the group of potential users of this technology in the future. But irrespective of whether we will actually be carrying such devices or not, brain-machine interfaces concern us in a deeper way. By directly intervening in our brain, which we see as the seat of our perception, our actions, our cognition, and our emotions, brain-machine interface technology touches our soul. Creating nearly biblical miracles in letting paralyzed people walk, blind people see, or deaf people hear again, current neuroprosthetic technology already nourishes our transcendental, spiritual desires, as described at the beginning.

Together with the promise of technical progress and innovation, this technology strongly connects to our future expectations of what it means to be human. Therefore, brain-machine interface technology, since it appeared on stage during the last century, has inspired science fiction fantasies in numerous novels, movies, and computer games, irrespective of being feasible or actually providing suitable applications. These fantasies have in turn strongly driven technological development, and with the recent advancements seem to reenter our reality. The role model in this fantastic story is the fictitious character of the cyborg, a cybernetic organism, a hybrid of machine and organism. The term cyborg was coined in 1960 by the medical engineer Manfred Clynes and the psychiatrist Nathan Kline to describe their vision of augmenting the human body by technical devices to better adapt to space travel (Clynes & Kline, 1960).

Figure 2: Current conceptions and working principles of brain-machine interfaces: (A) The agent-world circuitry underlying brain-machine interfaces. (B) Decoding of motor intentions. (C) "Ratbots" [(A) and (B) from Fig. 6 in Hatsopoulos N.G. and Suminski A.J., Neuron (2011): Volume 72, Issue 3, Pages 477-487; (C) Illustration Dr. John Chapin/Meritum Media]

Since then, the cyborg has developed into a science fiction protagonist that stands for the utopian and dystopian views, the hopes and fears, related to the transformation of our human nature by artificial, technical devices. In the utopian view, the intimate coupling with machines strengthens our limited self by equipping our body, our brain, and our mind with superhuman abilities. Extending and enhancing the performance of our mind, it is above all brain-machine interfaces that empower us to gain dominion over the world and over our biological destiny. Beyond the medical treatment of pathological states, the development of such neuroenhancement strategies is already today inherent in many research projects on brain-machine interfaces. On the other hand, in the dystopian view, this technology violates our self, our brain, and our body, and makes us suffer. Here, brain-machine interfaces provide ways for others to take over the control of our mind and our actions, perhaps even without our noticing it. Such a scenario does not seem too far-fetched, as suggested by the "ratbot" experiment of Talwar and colleagues (2002), which has provoked a highly controversial debate about the potential dangers of brain-machine interface technology. In this experiment, the navigation of a rat through a three-dimensional maze could be remote-controlled via a brain-machine interface (Fig. 2C). To move the rat forward, the experimenters delivered electrical stimulation to mesolimbic structures deep in the brain, which are known to drive appetitive seeking behavior. Virtual touch sensations at the rat's left or right whiskers, evoked by electric stimulation of the corresponding representations in somatosensory cortex, were used as signals to turn the animal either left or right. Today, research on the remote control of animals is pursued in the field of military research, largely hidden from the civil scientific community. The aim of this research is to create "animal-bots" that can spy out enemies by carrying a camera, remove landmines, or even place such explosive weapons in enemy territory.

However, the fictitious figure of the cyborg is not just a prospect of our technologically determined future. Both as utopian superhero and as non-human monster, the character of the cyborg radically puts into question the location and the boundaries of our mind- and body-self (Haraway, 1991). It questions our western conviction that our mind is enclosed within our physical brain in our head, and that the action and perception by which the mind interacts with the world are bound to our physical body. With the conception of body, brain, and mind as a computational machine, the functions of mind and body can be extended to technical devices via an interface. The boundaries of mind- and body-self are then merely determined by the reach of these devices, capable of transcending all biologically predetermined temporal and spatial limits. However, without boundaries, it also becomes increasingly difficult to determine what actually belongs to this self and what to the external world. The dissolving boundaries finally leave the operations of brain, body, and mind without meaning, as it makes no sense to talk about a human self anymore. Freed from all limitations and constraints, the human agent as an entity ceases to exist. Interestingly, cyborgs in science fiction are never fully transformed into machines, but preserve a remnant of humanity in being irrational, intuitive, empathic, or desperate, in suffering from fear and pain, or in being mortal. This is a precondition for the cyborg to exist. Removing the limited, vulnerable, and mortal residual subject would simply turn the cyborg into a meaningless entity, a trivial and boring machine.

In the figure of the cyborg a dichotomy comes into view: while conceiving body, brain, and mind in terms of a universal, disembodied, rational, objective machine, we still experience ourselves as situated, affective, embodied subjects. In this dichotomy it becomes apparent that the relationship between humans and machines is only metaphoric. Brains and bodies are not actually machines. Rather, machines are designed by humans to serve their purposes. However, both scientific and folk conceptions of mind, brain, and body heavily draw on such metaphors, because it is through metaphors that concepts and explanations become productive and intelligible (Lakoff & Johnson, 1980). So what are brains and bodies, if not machines? The cyborg herein gives us reason to reconsider and to reconfigure the prevailing human-machine metaphors, together with the implicit conceptual presuppositions they carry along.

Reconsidering the brain-machine

Current machine conceptions of brain, body, and mind originate from modern neuroscience. This highly heterogeneous field of research is much less theory-based than, for example, physics. It is an interdisciplinary undertaking that pursues many parallel lines of research on many different levels of observation. Neuroscience herein not only tries to explain the brain's physiology, but to relate it to a psychological description of behavior and cognition. Based on the conviction that the mind is somehow generated by the brain, neuroscientists search for neural correlates of psychological phenomena like perception, learning, memory, attention, decision making, and action, often with the aim of establishing an isomorphic, one-to-one relationship between physiological and psychological phenomena. However, the laws describing physiological and psychological phenomena are generally not comparable. In its effort to integrate different levels of observation and explanatory domains, brain science is therefore prone to category mistakes committed by projecting explanations from one level of observation to another, incommensurable level. The brain, for example, does not perceive, act, or learn anything like the cognitive agent it is part of (Bennett & Hacker, 2008). Still, a link between physiology, perception, action, and cognition can be established by employing the conception of causality. Via causal relations, more genuine bridges between levels of observation and explanatory domains can be built.

In this respect, the notion of a computational brain operating on neural representations of the world, which is at the heart of brain-machine interface technology, is flawed. Computational approaches rely on information-theoretic concepts that describe information in statistical terms devoid of any semantic aspects, in order to quantify and optimize the transfer and the algorithmic transformation of information. As Claude Shannon, one of the founders of information theory, noted: "The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem." For the computer, these semantic aspects can be provided by its users, but in the brain there is no such user who could endow neural representations with meaning. The neural representations and maps targeted by brain-machine interfaces therefore carry information about the world only in the eye of the observer. They are obtained by correlating neural activities with a set of observables in the world, which does not even allow for establishing a causal link between the events in the world and the brain. Although correlation is a necessary prerequisite for causality, it is not sufficient for it. Thus, correlations are highly biased by the experimenter's selection of the observables, and might be simply spurious due to the contribution of non-observed factors. The following example illustrates this: in Europe, the body weight of the human population is negatively correlated with hair length. This, however, is not a causal relationship, but relies on a third factor, namely gender differences in the population: women, who on average have a lower body weight, often also have longer hair.
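The confounding at work in this example is easy to reproduce in a toy simulation: weight and hair length are drawn independently within each gender, yet the pooled sample shows a clear negative correlation. All numbers are invented for illustration.

```python
# Toy simulation of the body-weight / hair-length example: within each
# gender the two variables are independent, but pooling the groups
# produces a spurious negative correlation via the confounder "gender".
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
female = rng.random(n) < 0.5

weight = np.where(female, 65.0, 80.0) + 8.0 * rng.standard_normal(n)  # kg
hair = np.where(female, 40.0, 10.0) + 8.0 * rng.standard_normal(n)    # cm

print("pooled: ", np.corrcoef(weight, hair)[0, 1].round(2))              # clearly negative
print("women:  ", np.corrcoef(weight[female], hair[female])[0, 1].round(2))   # ~0
print("men:    ", np.corrcoef(weight[~female], hair[~female])[0, 1].round(2)) # ~0
```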

But even if a causal link between neural representations and the world can be established, this would still run into the problem that humans do not perceive or act on a representation of the world; rather, they perceive and act on the world itself, without mediation by a kind of internal mirror image or model (Bennett & Hacker, 2008). As Rodney Brooks, a leading expert in robotics, puts it: "The world is its own best model". Still, correlative and causal dependencies between neural activities and events in the external world yield important insights for neuroscientists, as they can provide the experimenter with information about the brain's structure and its dynamical states, even though the brain does not and cannot exploit these dependencies in relation to the external world, as can be done from the standpoint of an external observer.

If brain-machine interface technology rests on a flawed conception, why do state-of-the-art interfaces still work and yield suitable applications? Via the optical fibers or electrodes, these interfaces causally interact with the brain by stimulating or recording electric nerve cell activity. To explain the working principles of brain-machine interfaces, further causal links between the interfaced neural activities and the restored, enhanced, or simply altered cognitive phenomena have to be established. However, this is not a trivial task. Given the brain's massive reciprocal feedback connections, the linear causal chains we are used to employing in our explanations fail to describe its operations. This requires concepts of causality which include an understanding of circular cause-and-effect relationships. Linear systems theory has developed such concepts for linear feedback operations (Freeman, 1975). However, this theory does not exactly hold for the brain's operations, which are highly nonlinear. Nonlinear feedback can be described in terms of nonlinear dynamics and chaos theory, but these theories are only designed for the solution of low-dimensional problems that are stationary in time. Therefore, these theories do not apply well to the brain. With its rapidly changing states, the brain is highly nonstationary, and with its large mass of brain cells connected via abundant distributed feedback and feedforward connections, it operates in a high-dimensional state space (Freeman, 2000a,b). Moreover, as noise and fluctuations play an important role in brain dynamics, stochastic descriptions have to be included in brain theory as well. The brain can therefore be regarded as a nonlinear, nonstationary, high-dimensional, dynamic, and stochastic system. Currently, there is no theory that could fully describe such a system.
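Several of these properties can be given a minimal computational illustration. The sketch below simulates a FitzHugh-Nagumo-type oscillator with a stochastic noise term and a slowly drifting input, making it nonlinear, stochastic, and nonstationary at once. It is a toy illustration of the kind of system meant here, with all parameters invented; it does not model the brain, whose high dimensionality is precisely what such low-dimensional models miss.

```python
# Toy illustration: a nonlinear (FitzHugh-Nagumo-like) oscillator driven
# by noise, with a slowly drifting input to mimic nonstationarity.
import numpy as np

rng = np.random.default_rng(2)
dt, T = 0.01, 60_000
v, w = -1.0, 1.0
trace = np.empty(T)

for t in range(T):
    I = 0.5 + 0.3 * np.sin(2 * np.pi * t / T)    # slow drift: nonstationary drive
    dv = v - v**3 / 3 - w + I                    # nonlinear "membrane" dynamics
    dw = 0.08 * (v + 0.7 - 0.8 * w)              # slow recovery variable
    v += dt * dv + 0.05 * np.sqrt(dt) * rng.standard_normal()  # stochastic term
    w += dt * dw
    trace[t] = v

print("min/max of simulated activity:", trace.min().round(2), trace.max().round(2))
```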

Reconfiguring the brain-machine

Still, the conceptualisation of the brain as a dynamical system has proven to be useful. In the growing field of neurodynamics, first steps towards an understanding of the brain on the basis of dynamic systems theory have been taken through linear approximations and numerical computer simulations (Freeman, 1975, 2000a,b). Neurodynamics investigates the changing spatial and temporal distributions of neural activities based on the causal interactions in the brain. Spatiotemporal patterns of neural activity can be formalized as dynamic states in the brain system's state space. The state space must thereby not be confused with physical spacetime, but describes the possible dynamic behaviors and changes of the system along the dimensions of the causally relevant factors.

Notably, the brain as a system can be described on many different levels of observation. Modern neuroscience investigates proteins and genes in the brain on a molecular level, synapses on a subcellular level, neurons on a cellular level, microcircuits made up of small arrangements of different neurons, larger networks including millions of neurons, whole brain regions, as well as hierarchies of such brain regions forming global networks connected via neural pathways. Behavior and cognition could then be regarded as the ultimate, macroscopic level of brain function. Regarding neurons as the building blocks of the brain, the aim is often to causally explain the macroscopic cognitive operations of the brain on the microscopic level of single neurons. As with the elementary particles in Newtonian physics, it is thereby assumed that all causal influences in the system emanate from single neurons and their interactions, and that explanations on this microscopic level are the most fundamental.

To causally link all these levels, it has proven helpful to create bridges between microscopic and macroscopic levels via an intermediate, mesoscopic level constituting an original domain of explanation free from purely microscopic or macroscopic properties. Statistical thermodynamics, developed in the 19th century, is a good example of such a mesoscopic bridge. By providing a then-revolutionary statistical description of ensembles of particles at a mesoscopic level, it made it possible to create a causal link between the microscopic level of Newtonian particle movements and the macroscopic phenomenon of temperature. Why not create a similar bridge between the activity of neurons and cognitive phenomena?

The first hard problem encountered on this way is to create a bridge between the microscopic actions of single neurons and the macroscopic actions of global brain regions and networks related to cognition. Here, the mesoscopic description of the mass action of large neural ensembles is an important step towards creating more causal links between the brain's activity and perception, behavior, and cognition (Freeman, 1975, 2000a,b).

A mesoscopic description of brain activity can be obtained in animal studies by recording field potentials in the brain, which reflect the mean electrical activity of hundreds of thousands of neurons around the recording electrode. In sensory cortex, field potentials recorded from many electrodes display mesoscopic spatiotemporal activity patterns. These complex patterns repeatedly emerge from the ongoing activity, and cannot be discarded as noise (Lilly, 1954). However, no systematic relationship between these activity patterns and the sensory input could be found. Recording from 400 electrodes, DeMott (1966) suggested that sensory input "is presented to the cortex not as a map, but as a very complex spatial-temporal sequence, in which every part of the cortex participates in displaying information from every part of the [sensory] field" (DeMott, 1966, p. 29). The work of Walter Freeman and our own work has shown that such patterns are induced by external stimuli which have a meaning for the animal (Freeman, 2000a, Deliano et al., 2009b). Emerging from the ongoing eigenactivity of the brain, these patterns are not driven or determined by external stimuli like the patterns that can be evoked as a direct stimulus response. Ongoing patterns do not form map representations of stimulus features like the evoked patterns do (de Charms & Zador, 2000). Whereas evoked patterns are topographically organized and covary with the physical stimulus parameters, ongoing patterns are distributed over a large area and covary with the individual situation of the animal. Whenever the behavioral situation, and hence the meaning of the stimuli, changes, e.g. by learning, the ongoing patterns change as well, even if the presented stimuli remain physically the same (Freeman, 2000a). When animals learn to sort physically different stimuli into the same category, these patterns reflect the learned category, but not the physical features of the stimuli (Ohl et al., 2001). Physiologically, these patterns are carried by the amplitudes of ongoing distributed neural oscillations in the so-called gamma band (~20-80 Hz). They emerge within a few milliseconds and persist for a few hundred milliseconds, until they dissolve and give rise to a new pattern.
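As a sketch of how such amplitude patterns can be extracted in practice, the following band-pass filters multichannel field potentials in the gamma range given in the text and takes the instantaneous amplitude via the Hilbert transform. The surrogate random data, channel count, and filter order are assumptions for illustration; real analyses as in Freeman (2000a) or Ohl et al. (2001) involve far more methodological care.

```python
# Sketch: extract spatial gamma-band amplitude patterns from
# multichannel field potentials (surrogate data stand in for recordings).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs, n_ch, n_s = 1000, 16, 5000                    # Hz, channels, samples
rng = np.random.default_rng(3)
lfp = rng.standard_normal((n_ch, n_s))            # surrogate field potentials

b, a = butter(4, [20, 80], btype="band", fs=fs)   # gamma band as in the text
gamma = filtfilt(b, a, lfp, axis=1)
amplitude = np.abs(hilbert(gamma, axis=1))        # instantaneous envelope

# One spatial amplitude pattern per time point; such patterns can then be
# compared across behavioral conditions, e.g. by classification.
pattern = amplitude[:, 2500]
print("spatial gamma amplitude pattern at t = 2.5 s:", pattern.round(2))
```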

Walter Freeman (2000b) has worked out a comprehensive neurodynamic theory of these mesoscopic patterns, which explains their generation by the self-organized mass action of hundreds of thousands of single neurons. During the existence of the pattern-state, the degrees of freedom of the dynamics momentarily governing the cortex are largely reduced, which locally lowers the brain's entropy. Due to the second law of thermodynamics, this is only possible because the brain is exchanging energy and matter with its surround. As an open, dissipative system, the brain is therefore capable of creating order from chaos and noise. It turns out that this self-organization cannot be simply explained by a bottom-up causality emanating from the microscopic level. As proposed in one of the most successful theories of self-organization, developed by Hermann Haken (1983) and called synergetics, this requires a conception of circular causality operating across levels of observation. The microscopic elements of the system, like the neurons in the brain, thereby causally influence the formation of the mesoscopic pattern through their interactions. However, the mesoscopic pattern in turn constrains and enslaves the behavior of the microscopic elements. Macroscopic brain states arising from the mesoscopic pattern states are therefore not only a result of the microscopic actions of neurons, but vice versa have a strong causal influence on the microscopic activity. Hence, mesoscopic neurodynamics is not only seeking explanations on the level of single neurons, but also on the level of more global brain states and patterns that constitute order parameters governing the dynamics of the brain (Haken, 1983).

Physical theories of self-organization explain how macroscopic patterns are formed. However, in the nonstationary brain such patterns are steadily formed and destroyed, preventing the system from becoming trapped in a certain state. Such an itinerant alternation of order and disorder can be achieved by systems capable of organizing themselves into critical states, from which they are repeatedly kicked into ordered pattern states by internal random fluctuations or external perturbations. The capacity for self-organized criticality relies on the scaling properties of the system. It is typically found in fractal, i.e. self-similar, systems. In the brain, self-similar states can be found over many different spatial and temporal scales, ranging from ten to a few hundred milliseconds, and from millimeters to centimeters. In the alternation of order and disorder resulting from its fractal organization, the brain can generate sequences of dynamic states in a highly flexible manner. The important role of noise and fluctuations thereby requires further extending the conception of causality by allowing causal relationships to exert their effects not only on deterministic variables, but also on the probability distributions of stochastic variables.

At this point, it should be noted that theories of nonlinear dynamics, self-organization, and self-organized criticality have been fully worked out only for comparatively simple physical systems like lasers, but not yet for the brain. Here the descriptions rather provide new metaphors, which is, however, of great importance for the development of new conceptions of the brain. As Walter Freeman once stated: the hurricane, with its spiral patterns and turbulences, serves as a much better metaphor for the brain than the computer. Such dynamic metaphors are also much less prone to the aforementioned category mistakes than many of the still widely used computational metaphors.

The appearance of self-organized states requires constraints and boundary conditions, like the walls of a container in which a pattern-forming chemical reaction is carried out. In physico-chemical systems, these boundaries are imposed by the experimenter. In the brain, such boundaries are constituted by the brain's sensory surfaces connecting it to the sensory organs. The boundary conditions for self-organization in the brain might therefore be imposed on the brain by the external world via its sensory surfaces. However, the brain is capable of actively influencing its sensory surfaces and the sensory organs. Sensory brain regions are not purely afferent structures receiving external input, but send back massive efferent feedback projections all the way down to the sensory organs. For example, the auditory cortex, often viewed as the end point of the auditory pathways ascending from the ear, can exert mechanical influence on the inner ear via cortico-efferent neural projections, which in turn alters sensory transduction. Sensory parts of the nervous system are therefore not passive receivers or transmitters of external information, but actively control their own sensory state. The brain can therefore determine its own boundary conditions. As has been pointed out by the biologists Humberto Maturana and Francisco Varela (1992), this marks the crucial difference between physico-chemical systems and living systems like the brain. The latter are not only capable of self-organizing into pattern states, but also of self-generating their own conditions of existence, i.e. their metabolic, morphologic, and sensory boundary conditions. According to Maturana and Varela, living systems can be defined as autopoietic systems. Being operationally closed, autopoietic systems have an identity defined by their own operations (Rudrauf et al., 2003). As autonomous entities, living autopoietic systems cannot sensibly be ascribed functions from outside. Even so, such functional descriptions can provide valuable means for external observers to deal with living systems. But still, this does not provide an explanation for the operations of living systems, which can only be understood in terms of their internal causal interactions. However, there is a third way to gain an understanding of a living system, which consists in sharing a world with it through coevolution.

As reflected by its autonomous, self-organized eigenactivity, the brain is an autopoietic, living system, which can only be perturbed, but not driven or in any way determined, by external input. As the neurobiologist Amos Arieli nicely describes it: "…the effect of a stimulus might be likened to the additional ripples caused by tossing a stone into a wavy sea" (Arieli, 1996). In an autopoietic brain, mind control and mind reading via a brain-machine interface appear unfeasible. Thus, there is no content that can be read out from the brain, as mind reading would require, because the brain does not harbor an internal world or create intentions that could be accessed via an interface. Also, there is no way of inscribing information into such a system via an interface, as would be required for mind control. The brain can only be causally perturbed via the interface, but the outcome of this perturbation is solely determined by the brain. To unravel the working principles of brain-machine interfaces, this leaves us with the task of studying the causal interactions between the interface and the ongoing brain dynamics more thoroughly (Deliano et al., 2009a).

The autopoietic organization of the brain has nevertheless fostered constructivist conceptions of the brain as creating its own virtual realities. However, these conceptions run into the same problems already described for the representationalist accounts (Bennett & Hacker, 2008). The brain creates an internal world neither as a model of the external world nor as an emulated virtual reality. Otherwise, this would leave us with a mystical brain that creates a ghost in its machinery. Again, this is not to say that the brain's operations are not correlatively and causally linked to cognition. The brain is just not the place of cognition, and it does not define the boundaries and functions of cognition, but merely operates on its own neural states governed by its internal dynamics. At this point we simply have to let go of our conviction that the mind is in the brain in our head. But if it is not in the brain anymore, where has the mind gone?

Extending the mind into the world

Although an operationally closed dynamic system, the brain is not a solipsistic monad hanging in a vacuum. The brain is deeply immersed in the physiology of the body, and not only via its sensory surfaces. It is quite literally bathed in the milieu of the body, and exchanges with it energy, building blocks of its morphology, and regulatory signals. As a dynamical system, the brain can then be viewed as embedded in the body, which is in turn embedded in the external environment (Chiel & Beer, 1997). The dynamics arising from this embedded system is characterized by various distributed feedback loops that not only operate within the brain, but give rise to couplings across the borders of brain, body, and environment (Beer, 2000). Due to their self-referential action, these couplings constitute a higher-order autopoietic system capable of creating an autonomous self, an agent (Rudrauf et al., 2003). In the view of embodied, situated cognition, what we call mind can be understood in terms of the dynamic operations of this agent (Varela et al., 1992). Once more, the agent does not have a mind; it does not have perceptions, memories, intentions, qualia, or mental representations (Bennett & Hacker, 2008). Nor does it serve a function an external observer might be inclined to ascribe to it. Rather, by actively generating its own order-states, and in being situated in the world, such an agent directly perceives, memorizes, thinks, feels, intends, decides, and does other cognitive things alike. Its operations are mediated by the world itself, and not by some mental representations.

It is in this embodied, situated framework that the brain's causal relation to cognitive and experiential phenomena can be fully appreciated. Here, the brain forms a dynamic core of the agent, which strongly shapes its dynamics (Rudrauf et al., 2003). In constraining the behavior of the agent, the brain, as a condensation nucleus of order, serves to maintain its existence, or, in dynamical terms, to maintain the order-states that define the agent's way of life, its being there. Based on pre-afferent recurrent feedback operations, the brain is capable of predicting its own neural states, and therefore can extend the actions of the agent into the future (Freeman, 2000c). In leveling the deviations from the expected neural states arising from fluctuations within the brain, or from perturbations through body and environment, the brain maintains the agent in a state of order, while at the same time allowing for an evolution of order-states that adapts the agent to a rapidly changing environment (Friston, 2010). This is achieved either by accommodating the neural predictions in changing the order-states of the agent, or by initiating actions through which the agent preserves its current order by assimilating the changes in its environment.
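A minimal sketch of this "leveling of deviations", in the spirit of prediction-error accounts such as Friston (2010), might look as follows: an internal expectation is continuously nudged toward noisy sensory samples, so deviations from the expected state are progressively leveled out. The scalar state, the update rate, and the noise level are invented for illustration; the actual free-energy formalism is far richer.

```python
# Hypothetical toy model: an expectation mu is updated to reduce the
# deviation (prediction error) between noisy sensory input and the
# predicted state, leveling out fluctuations over time.
import numpy as np

rng = np.random.default_rng(0)
mu = 0.0            # expected (predicted) state
cause = 2.0         # actual environmental cause, unknown to the agent
lr = 0.05           # assumed update rate

for _ in range(300):
    sense = cause + 0.3 * rng.standard_normal()  # noisy sensory sample
    error = sense - mu                           # deviation from expectation
    mu += lr * error                             # accommodate the prediction

print(f"final expectation mu = {mu:.2f} (true cause = {cause})")
```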

Cognition then arises from an extended ecological cognitive system made up of an agent evolving in a cognitive niche of its environment (Clark, 2010). On the one hand, the agent adapts to the constraints imposed by the cognitive niche; on the other hand, the agent actively constructs the niche through its cognitive actions. For an external observer, the coevolution of brain, body, and environment therefore creates the impression that the agent perfectly matches the world it lives in. It seems as if the agent were designed for living in its niche, with all its functions purposely adapted to the niche.

Through embodied cognition, the agent can therefore make use of the external world and deeply integrate it into its cognitive operations. By an ongoing coupling with the world through actions like eye movements, the agent can obtain information about the world just on demand, without the need to construct a detailed, compound world model. The sense organs would not allow for such a detailed description of the world at any instant of time anyway. The retina, for example, only provides a very narrow area of sharp central color vision, about the size of a euro cent. Still, we have the impression of seeing a full, detailed visual scene. Of course, we might reconstruct the scene by gathering successively foveated parts of it. However, as demonstrated by the striking phenomenon of change blindness, we do not create such a detailed compound representation (Noe, 2005). Indeed, large changes even in the central parts of a visual scene can go unnoticed by a subject visually exploring the scene. The analysis of eye movements during such a task reveals that subjects repeatedly look at those parts of the scene that are meaningful to them, and ignore irrelevant parts that do not grab their attention. Through our eye movements we do not systematically scan the scene, but actively retrieve its meaningful aspects. In the act of vision, we thereby apparently rely on expectations, which arise from our implicit knowledge about sensorimotor dependencies learned from exploring visual scenes. Through these expectations, as proposed by the philosopher Alva Noe (O'Regan & Noe, 2001), parts of a visual scene or an object can still be present for us, even if we are not currently looking at them. The invisible parts of scenes and objects are right before our eyes, just because we know how to bring them into view.

The conception of an embodied mind herein brings to the fore the experiential dimensions of our mental life, which are often neglected (Hurley & Noe, 2003). As the experiments on change blindness exemplify, embodied actions nicely explain many aspects of our lived experience. The framework of embodied cognition therefore allows for drawing more direct causal links between the agent's lived experience and the cooperative dynamics of brain, body, and environment that give rise to the embodied actions of the agent. Creating such links would, however, require assessing lived experience through introspection. Introspection methods, though, are scientifically underdeveloped, since they were discarded as unscientific from cognitive psychology for more than a century. However, in a research project initiated by the late Francisco Varela, called neurophenomenology, more disciplined first-person accounts are under development, grounded in the tradition of philosophical phenomenology originating from Edmund Husserl, Maurice Merleau-Ponty, and Martin Heidegger (Varela & Shear, 1999).

Besides sensorimotor real-world coupling, the framework of embodied and situated cognition can also account for more abstract forms of reasoning. Thus, it can be shown that even the most abstract categories of rational thought, e.g. in the field of mathematics and logic, are ultimately grounded in the embodied actions of the agent (Lakoff, 1987). Furthermore, the arbitrary sign systems used in language, mathematics, and logic might serve embodied agents to exploit their capacity for real-world coupling. As material entities, symbols are manipulable and could be used by an agent as embodied stand-ins for more abstract operations (Clark, 2010). As a dynamic and flexible assembly, the embodied agent might extend its mind not only to the material world, but also to other embodied agents. The embodied framework therefore also offers explanations for social phenomena like empathy, bonding, dance, and teamwork.

Dance with the machines

Through their recurrent, world-based actions, embodied agents can learn to integrate artifacts like tools, technical devices, signs, and symbols into their cognitive acts. At the beginning, as when learning to drive a car, the coupling with these devices creates an opaque problem space. Lacking experience, we then often apply explicit rules to solve the posed problems. Shifting gears whenever the speedometer reaches a certain value is an example of such a rule. Hence, explicit rules are often employed as a simplifying aid for novices to gather experience (Dreyfus & Dreyfus, 1980). However, with growing expertise, these rules no longer always apply well. Then we start to find our own ways to deal with the occurring problems. At the latest when we reach a level of mastery and expertise, the problem space disappears, and the car as an external device becomes transparent in its use. We just drive the car by relying on our intuitions and our affect, without having rules in mind. Emotions play an important role here, both in constituting a problem space in a new and unknown situation, and in dissolving it. When a problem space opens, and if we have no guiding rules at hand, we normally stop our actions and start to reflect upon the situation. However, in the complex world we live in, most problems cannot be solved by rational thought, or at least there is no time for doing so. By calling us back into worldly action, emotions can dissolve the problem space and prevent us from getting trapped in endless, rational, egocentric reflections (Damasio, 2005). Thus, emotions make us decide upon the information we have right at hand. As embodied agents, we can achieve this by creating states of order that largely reduce the complexity of the world we live in. Interestingly, this also dramatically changes our experience. For an expert driver, the car becomes an extension of the body. We can then even feel the boundaries of the car, e.g. when we come too close to another car and are in danger of a collision. Embodied agents are therefore capable of steadily creating whole new agent-world circuits (Clark, 2010). They can learn to deeply incorporate artifacts like machines into their cognitive and experiential realm, and hence dynamically shift the boundaries of their self.

Figure 3: Embodied cognition: brain, body, and environment as embedded dynamic systems [after Fig. 10 in Klein T.J. and Lewis M.A., Journal of Neural …]

In this respect, particularly telling experiments have been carried out in the field of crossmodal "sensory substitution", which tries to replace or augment a lost sensory modality by transforming stimuli characteristic of the lost modality into stimuli of another modality. For example, Bach-y-Rita and colleagues (1969) developed a tactile vision substitution system (TVSS), which converts an image captured by a video camera into a "tactile image" produced by a matrix of 20 x 20 vibrotactile or electrotactile stimulators. If the camera was positioned by the experimenter, blind or blindfolded subjects were immediately able to discriminate different patterns of tactile stimulation derived from the camera images. Simple geometric shapes could be recognized by the subjects after some learning. The subjects reported that they achieved this by different successive patterns of tickling or irritating sensations on their skin at the sites of tactile stimulation, but their psychophysical performance was poor. However, when the subjects were allowed to operate the camera by themselves to actively explore their environment, the mode of perception changed fundamentally. After about 10 hours of exploration, the subjects perceived objects in front of them, neglecting the tactile input most of the time (Bach-y-Rita & Kercel, 2003). Although the stimulation remained tactile, they had shifted their mode of perception from a body-bound tactile sensation located on their back to distal objects in the external space in front of them. This also dramatically increased the psychophysical performance of the subjects: despite the limited spatial resolution of the TVSS, subjects managed to localize objects in three-dimensional space, to characterize the shape of an object, and to recognize objects, even faces. This is a striking illustration of how embodied agents can enactively shift the boundaries of their perception, and thus the boundaries of their selves.
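The core transformation of the TVSS is simple to sketch: reduce a camera frame to a 20 x 20 grid of on/off stimulator commands. The code below does this by block-averaging and thresholding a random stand-in frame; the frame size, the thresholding rule, and the binary drive are illustrative assumptions about a device whose actual signal chain was analog.

```python
# Sketch of the TVSS working principle: a camera frame is reduced to a
# 20 x 20 binary "tactile image" driving vibrotactile stimulators.
import numpy as np

rng = np.random.default_rng(5)
frame = rng.random((240, 320))                    # stand-in grayscale camera frame

rows, cols = 20, 20
h, w = frame.shape[0] // rows, frame.shape[1] // cols
# Average luminance per tactor cell, then threshold to on/off stimulation.
cells = frame[:rows * h, :cols * w].reshape(rows, h, cols, w).mean(axis=(1, 3))
tactile = (cells > cells.mean()).astype(int)      # 20 x 20 stimulator pattern

print(tactile)
```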

Instead of carrying out such elaborate experiments, we can also observe ourselves or others, for example, in using a smartphone. These devices are not ordinary tools just used by us. In the dance of our fingers, smartphones become inseparably linked to the sphere of our body and our mind (Clark, 2010). In this respect, it might not be so important how deeply machines are implanted into our body or brain. Much more important are the ways the machines are coupled to the agent, and how deeply machines can get integrated into the embodied dynamics defining the boundary between agent and world. The sensory cortex prostheses described before are an interesting example in this respect (Tehovnik & Slocum, 2013, Deliano et al., 2009). Through the direct electric stimulation of neurons in visual or auditory cortex, the perception of dots of light or of sounds, respectively, can be elicited right away. However, as we have shown in animal experiments, the perception of these phosphenes or audenes is not just a correlate of the activity of the directly excited neurons, but involves the operation of a recurrent feedback circuitry that engages many sensory, emotional, and motor brain regions (Happel et al., under review; Deliano et al., 2009a). However, clinical trials with human subjects implanted with prototypes of visual cortex prostheses have not yet succeeded in establishing a real-world coupling that allows for seeing objects or visual scenes. One reason for this might be that the elicited phosphenes move together with the eye. By this type of coupling, phosphenes are always perceived as fixed to the eye, and not as objects in the external environment separated from the user. In contrast, such external objects can emerge from the couplings constituted by sensory substitution devices. But here, the type of coupling does not allow for creating the sensation of light and color. For this reason, these devices have so far not been much more attractive to blind subjects than their canes. Also, the intentional control of robotic devices via cortical motor interfaces (Lebedev & Nicolelis, 2011) relies on a specific way of agent-world coupling. This control is not achieved right away by the user, but concurrently requires learning on the part of the agent and adaptation of decoding schemes on the part of the machine (Hatsopoulos & Donoghue, 2009). The coevolution of agent and machine during training then creates a match between the brain activity and the decoding schemes of the machine, which in the end appears as mind reading through the machine.

For a brain- or human-machine interface to work properly, it does not re- 3 quire broad-band interfaces that transmit large amounts of information. What is required is an agent capable of integrating the interface into its em- 4 bodiment. Doing so, the agent can open new communication channels with the world. However, as we have seen, this requires effort and learning on the ^ side ofthe agent, and flexibility and adaptation on the side ofthe machine. ^


In an embodied agent, cognition and lived experience cannot be attributed to its single parts, but rely on a cooperative interaction between these parts. The effects of removing or altering the parts that embody the agent thus depend on the causal roles those parts play in the currently enacted agent-world circuit (Clark, 2010). Bodily damage, dysfunctions or lesions of brain regions, or malfunctions of machines coupled to the agent do not simply lead to a loss or change of the function contributed by the affected part under normal conditions. If the affected parts do not belong to the agent's embodiment, their loss or dysfunction is irrelevant and has no effect on the agent's behavior. Otherwise, the loss or alteration of integrative parts of the agent will profoundly change the dynamic operations of its remaining embodying parts, and consequently its mode of cognition and lived experience. The fact that a smartphone, when taken away or not working properly, might leave its user depressed and feeling disabled reveals the deep integration of such devices into the user's embodied living. The loss or alteration of parts embodying the agent might greatly reduce the degrees of freedom of its behavior. Still, the agent's capacity for embodied action, i.e. the process of setting up new agent-world circuits itself, is quite robust. Even after removal or damage of large parts of their body, brain, and environment, humans can often retain the ability to maintain a lived identity. However, there also exist environmental factors, body parts, and brain regions that are critical for the agent to maintain its embodied activity. If these parts are removed, the agent ceases to exist; it dies.

But it is not only the removal of relevant parts that can restrain the actions of an agent. Coupled devices might also profoundly disturb the embodied dynamics, as becomes clear from the "ratbot" experiment described above, which leaves the rat agent as an object remote-controlled via a brain-machine interface (Talwar et al., 2002). Through the direct stimulation of mesolimbic brain regions, as carried out in this experiment, animals learn to display a vigorous appetitive searching behavior interpretable as a strongly amplified intentional drive. Such behavior is also elicited by the use of addictive drugs that interfere with mesolimbic brain structures. With both mesolimbic stimulation and drug addiction, the behavior of the agent narrows down to the single goal of seeking the brain stimulation or the drug. The subject gets trapped in a feedforward coupling, which greatly reduces the degrees of freedom of the embodied dynamics, leaving the agent with only a small number of selectable order-states. This constrains the agent's behavior so much that it appears to be remote-controlled.

In the embodied framework, not only research on brain-machine interfaces but, more broadly, research on human-machine interfaces thus shifts its focus from quantitative differences in information transfer to qualitative differences in the perceptuomotor, emotional and cognitive modes of embodied action, which arise either from the loss and alteration of the parts embodying the agent, or from the agent's coupling with a machine. From this perspective, the benefit of a human-machine interface cannot be predefined by the researcher, but can only be determined in close cooperation with the users of the interface (Varela, 1999).

Although, as the philosopher Andy Clark frames it, we are "natural-born cyborgs" (Clark, 2010), and even though the conception of dynamic embodiment is still a mechanistic one, the new conceptions and metaphors of the relationship between humans and machines presented in this article allow us to escape a technological determinism that would ultimately make us cease to be human. As the cyberfeminist philosopher Donna J. Haraway points out, we can achieve this by recognizing that "[t]he machine is not an it to be animated, worshipped, and dominated. The machine is us, our processes, an aspect of our embodiment" (Haraway, 1991).

References

Arieli A., Sterkin A., Grinvald A., & Aertsen A. (1996) Dynamics of ongoing activity: explanation of the large variability in evoked cortical responses. Science 273: 1868-1871.

Bach-y-Rita P., Collins C.C., Saunders F.A., White B., & Scadden L. (1969) Vision substitution by tactile image projection. Nature 221: 963-964.

Bach-y-Rita P., & Kercel S.W. (2003) Sensory substitution and the human-machine interface. Trends in Cognitive Sciences 7(12).

Beer R.D. (2000) Dynamical approaches to cognitive science. Trends Cogn Sci. 4: 91-99.

Bennett M.R., & Hacker P. (2008) A History of Cognitive Neuroscience: A Conceptual Investigation. Wiley-Blackwell, Oxford.

Berger T.W., Hampson R.E., Song D., Goonawardena A., Marmarelis V.Z., & Deadwyler S.A. (2011) A cortical neural prosthesis for restoring and enhancing memory. Journal of Neural Engineering 8(4).



Braun S.M.G., & Jessberger S. (2013) Adult neurogenesis in the mammalian brain. Frontiers in Biology 8(3): 295-304.

Chiel H.J., & Beer R.D. (1997) The brain has a body: adaptive behavior emerges from interactions of nervous system, body and environment. Trends Neurosci. 20: 553-557.

Clark A. (2010) Supersizing the Mind: Embodiment, Action, and Cognitive Extension. Oxford University Press.

Clynes M.E., & Kline N.S. (1960) Cyborgs and Space. Astronautics: 26-27.

Collinger J.L., Wodlinger B., Downey J.E., Wang W., Tyler-Kabara E.C., Weber D.J., McMorland A.J.C., Velliste M., Boninger M.L., & Schwartz A.B. (2013) High-performance neuroprosthetic control by an individual with tetraplegia. The Lancet 381(9866): 557-564.

Damasio A. (2005) Descartes' Error: Emotion, Reason, and the Human Brain. Putnam, 1994; revised Penguin edition, 2005.

deCharms R.C., & Zador A. (2000) Neural representation and the cortical code. Annu. Rev. Neurosci. 23: 613-647.

Deliano M., Scheich H., & Ohl F.W. (2009a) Auditory cortical activity after intracortical microstimulation and its role for sensory processing and learning. J. Neurosci. 29(50): 15898-15909.

Deliano M., & Ohl F.W. (2009b) Neurodynamics of category learning: towards understanding the creation of meaning in the brain. New Mathematics and Natural Computation 5(1): 61-81.

Deliano M. (2010) Prothesen für das Gehirn: Blinde sehen, Lahme gehen, Taube hören? In: Böhlemann P., Hattenbach A., & Markus P. (Eds.) Der machbare Mensch? Moderne Hirnforschung, biomedizinisches Enhancement und christliches Menschenbild (Villigst Profile 13). Lit-Verlag, Münster.

DeMott D.W. (1966) Cortical micro-toposcopy. Med. Res. Eng. 5: 23-29.

Dreyfus S.E., & Dreyfus H.L. (1980) A Five-Stage Model of the Mental Activities Involved in Directed Skill Acquisition. Washington, DC: Storming Media.

Freeman W.J. (1975) Mass Action in the Nervous System: Examination of the Neurophysiological Basis of Adaptive Behavior Through the EEG. Academic Press.

Freeman W.J. (2000a) Mesoscopic neurodynamics: from neuron to brain. J Physiol Paris 94: 303-322.

Freeman W.J. (2000b) Neurodynamics: An Exploration in Mesoscopic Brain Dynamics (Perspectives in Neural Computing). Springer-Verlag.

Freeman W.J. (2000c) Emotion is Essential to All Intentional Behaviors. In: Emotion, Development, and Self-Organization: Dynamic Systems Approaches to Emotional Development (eds. M.D. Lewis and I. Granic): 209-235. Cambridge University Press, Cambridge, U.K.

Friston K. (2010) The free-energy principle: a unified brain theory? Nat Rev Neurosci. 11(2): 127-138.

Glimcher P.W., Fehr E., Camerer C., Poldrack R.A. (2008) Neuroeconomics: Decision Making and the Brain. Academic Press.

Grill W.M., Norman S.E., & Bellamkonda R.V. (2009) Implanted neural interfaces: biochallenges and engineered solutions. Annual Review of Biomedical Engineering 11: 1-24.

Haken H. (1983) Synergetics, an Introduction: Nonequilibrium Phase Transitions and Self-Organization in Physics, Chemistry, and Biology, 3rd rev. enl. ed. New York: Springer-Verlag.

Happel M., Deliano M., Hanschuh J., & Ohl F.W. (2013) Enhanced cognitive flexibility in reversal learning induced by removal of the extracellular matrix in auditory cortex. Under review by Journal of Neuroscience.

Haraway D.J. (1991) Simians, Cyborgs and Women. Routledge, New York.

Hatsopoulos N.G., & Donoghue J.P. (2009) The science of neural interface systems. Annual Review of Neuroscience 32: 249-266.

Hoy K.E., & Fitzgerald P.B. (2010) Brain stimulation in psychiatry and its effects on cognition. Nature Reviews Neurology 6(5): 267-275.

Hurley S., & Noe A. (2003) Neural plasticity and consciousness. Biology and Philosophy 18: 131-168.

Kathan B. (2003) Das Elend der ärztlichen Kunst. Eine andere Geschichte der Medizin. Kadmos Kulturverlag, Berlin.

Krieg W. (1953) Functional Neuroanatomy, pp. 207-208. Blakiston, New York.

Kurzweil R. (2012) How to Create a Mind. Viking.


Lakoff G., & Johnson M. (1980) Metaphors We Live By. University of Chicago Press.

Lakoff G. (1987) Women, Fire, and Dangerous Things: What Categories Reveal About the Mind. The University of Chicago Press.

Lebedev M.A., & Nicolelis M.A.L. (2011) Toward a whole-body neuroprosthetic. Progress in Brain Research 194: 47-60.

Lilly J.C. (1954) Instantaneous relations between the activities of closely spaced zones on the cerebral cortex; electrical figures during responses and spontaneous activity. Am. J. Physiol. 176: 493-504.

Maturana H., & Varela F. (1992) The Tree of Knowledge. Shambhala; revised edition.

Noe A. (2005) What does change blindness teach us about consciousness? Trends Cogn Sci. 9(5): 218.

Ohl F.W., Scheich H., & Freeman W.J. (2001) Change in pattern of ongoing cortical activity with auditory category learning. Nature 412: 733-736.

Ohl F.W., Deliano M., Scheich H., & Freeman W.J. (2003a) Early and late patterns of stimulus-related activity in auditory cortex of trained animals. Biol. Cybern. 88: 374-379.

Ohl F.W., Deliano M., Scheich H., & Freeman W.J. (2003b) Analysis of evoked and emergent patterns of stimulus-related auditory cortical activity. Rev. Neurosci. 14: 35-42.

Ohl F.W., Scheich H. (2007) Chips in your head. Scientific American Mind: 64-69.

O'Regan J.K. & Noe A. (2001) A sensorimotor account of vision and visual consciousness. Behav. Brain Sci. 24: 939-973.

Rorty, R. (1979) Philosophy and the Mirror of Nature. Princeton University Press.

Rudrauf D., Lutz A., Cosmelli D., Lachaux J.P., Le Van Quyen M. (2003) From autopoiesis to neurophenomenology: Francisco Varela's exploration of the biophysics of being. Biol Res.; 36(1): 27-65.

Schmidt E.M., Bak M.J., Hambrecht F.T., Kufta C.V., O'Rourke D.K., & Vallabhanath P. (1996) Feasibility of a visual prosthesis for the blind based on intracortical microstimulation of the visual cortex. Brain 119(Pt 2): 507-522.

Talwar S.K., Xu S., Hawley E.S., Weiss S.A., Moxon K.A., & Chapin J.K. (2002) Rat navigation guided by remote control. Nature 417: 37-38.

Tehovnik E.J., & Slocum W.M. (2013) Electrical induction of vision. Neuroscience and Biobehavioral Reviews 37(5): 803-818.

Varela F.J., Thompson E.T., & Rosch E. (1992) The Embodied Mind: Cognitive Science and Human Experience. The MIT Press, Cambridge, Massachusetts.

Varela F.J. & Shear J. (1999) The View from Within: First-person Approaches to the Study of Consciousness. Imprint Academic.

Varela F.J. (1999) Ethical Know-How: Action, Wisdom, and Cognition. Stanford University Press.

von Neumann J. (1958) The Computer and the Brain. Yale University Press (2000 edition).

Yizhar O., Fenno L., Davidson T., Mogri M., & Deisseroth K. (2011) Optogenetics in neural systems. Neuron 71(1): 9-34.

