
Journal of Siberian Federal University. Humanities & Social Sciences 2022 15(6): 752-761

DOI: 10.17516/1997-1370-0890 UDC 316.772

Towards a Purpose and a Direction.

Keynote for European Association for Digital Humanities

(Krasnoyarsk, Siberia, September 2021)

Willard McCarty*

Department of Digital Humanities, King's College London, UK

Received 30.12.2021, received in revised form 27.01.2022, accepted 03.02.2022

Abstract. Practitioners, scholars and practitioner-scholars in digital humanities have much to celebrate. The social, institutional and technical progress made since the field began in the mid 1960s gives abundant cause. It behoves all of us, however, to raise eyes from screen and keyboard to consider more deeply our relation to the other disciplines of the sciences, human, natural and artificial, and to the creative arts. Digital humanities is an adolescent among adults, with little awareness of the intellectual and cultural riches on which it needs to draw in order to take its place among them. It needs to become aware of its own antediluvian past and to consider its possible futures with the help of the arts. No avenue should remain unexplored in a collective effort to imagine what we do not know.

Keywords: progress and peril, interdisciplinary relations, human sciences, imagination, conversation, artificial intelligence.

Research area: culturology.

Citation: McCarty, W. (2022). Towards a Purpose and a Direction. J. Sib. Fed. Univ. Humanit. soc. sci., 15(6), 752-761. DOI: 10.17516/1997-1370-0890.

© Siberian Federal University. All rights reserved

* Corresponding author E-mail address: willard.mccarty@mccarty.org.uk

Towards a Purpose and a Direction. Keynote for the European Association for Digital Humanities (Krasnoyarsk, Siberia, September 2021)

W. McCarty

Department of Digital Humanities, King's College London, UK

Abstract. Practitioners, scholars and scholar-practitioners in digital humanities have much to celebrate. The social, institutional and technical progress made since work in this field began in the mid-1960s gives ample grounds for celebration. It behoves all of us, however, to lift our eyes from screen and keyboard in order to consider more deeply our relation to the other disciplines of scholarship, the human and natural sciences, as well as to the creative arts. Digital humanities is an adolescent among adults, little aware of the intellectual and cultural riches it needs to draw on in order to recognise its own antediluvian past and to consider its possible futures with the help of the arts. No avenue should remain unexplored in the collective effort to imagine what we do not know.

Keywords: progress and peril, interdisciplinary relations, human sciences, imagination, conversation, artificial intelligence.

Research specialty: 24.00.00 - culturology.

A decade ago, in the question period after a lecture, I was asked where I thought we would be with computers in twenty years' time. I've continued to ponder this question: not for what the future would bring (as if it were already determined, and so beyond our influence) but for where I wanted the discipline to be. There were then and are now, of course, many issues on the boil, and so many answers other than my own. When I was asked that question, digital research in the human sciences had already grown beyond my ability to take its measure; in the intervening years this research has continued to spread and diversify. Nevertheless, I think there is still reason for each of us to ponder matters of disciplinary purpose and direction as best we can: not to define digital humanities (a discipline is not for defining) but to question, shape and direct it.

Such pondering has strengthened my conviction that the physical machine must not be taken for granted, that the machine is where our questioning must begin,1 and that all the disciplines of the human sciences,2 and much else, offer crucial help. We need this help - but on our terms. Relative disciplinary ignorance and immaturity make us vulnerable to facile solutions, two especially. One is to succumb to the widely supported fantasy of unqualified progress and so to the belief that the questions of research are for answering rather than deepening. The other is to take the purpose and direction of the discipline from somewhere else, surrendering the struggle to someone else's agenda.3 And then, likely beneath critical notice, lurk some bad but stubborn notions about computing that must be weeded out. I'll come back to one of these in particular.

1 I am thus in great sympathy with the mathematical engineer and computer scientist Richard Hamming's conviction in his Turing Award Lecture that «the computer, the information processing machine, is the foundation of our field» (1968, 5), although what we do in our respective fields with it is very different.

2 I use the term 'science' here to denote disciplined interpretative enquiry of all kinds, for which see Lloyd and Vilaça 2020 and McCarty, Lloyd and Vilaça 2021.

With regard to institutional matters, we've much to be happy about. Conditions were not always so favourable. Thirty years ago the distinguished historian of religion Jaroslav Pelikan made what was then a startling move by proposing that academic status be given to non-academic research 'staff', such as I then was: «not [as] a matter of courtesy, much less as a matter of condescension, but as a matter of justice and of accuracy» (1992, 62). This has happened: we have moved from feeding on the crumbs that fell off the table of the tenured to taking our places among them as colleagues. Such is the opportunity we have struggled for and won, with a great deal of help from current fashions. But make no mistake: ours is not a sinecure but an opportunity to show our colleagues in other disciplines that the investment they have made in our thing at the expense of spending it elsewhere is paying off, not merely in large grants and the like but in ways that will last. So I must ask: what are we doing with this opportunity while we still have it, before it slips away or goes stale?

The theme of this conference, «Interdisciplinary perspectives on data», brings me to that question by way of the great Australian ethnographic historian Greg Dening's metaphor of the disciplines:

Where once we thought a discipline - history, say, or politics, or even economics - was at the centre of things by having a blinkered view of humanity, now we realise that we are all on the edge of things in a great ring of viewers.4

What, then, do we do, within that nexus of relations? What do we have to contribute and to learn?

3 For the dangers of this see the «Polemical Introduction» to Frye 1957, 12-13.

4 Dening 1998, 139. For the very Australian metaphor see McCarty 2006, 9.

Here's my train of thought in response to that.

I think we have a fairly good start on understanding the input end of the machine (that is, digitalisation of data, including how to encode these data) and on developing tools for manipulation, analysis, visualisation and so on. But do we have a good theory of the data, to account not only for that which we successfully translate into computable form but especially for that which does not survive, and perhaps could, after a suitable change of mind and/or machine? Or are we still brushing the losses under the carpet, calling them 'residue'?5 Then there are the middle and final stages of computing. I strongly suspect that we have very little grasp of them, that is, of the events within the black box between input and output (McCarty 2021; Winner 1993), and then of the recursive interactions between enquirer and machine, from output back to input, to more output and so on. On that latter point, Marvin Minsky pointed many years ago to the three-way relation between real-world artefact, the computational model of it and the modeller (Minsky 1965). Have our theories of modelling taken account of that, especially in relation to all that goes on with and within the modeller? And there's more: how about the social context within which the modeller is working? Much to be done - although I suspect - and hope - that much of this is already underway. But let me proceed as if the abyss of my ignorance were not as large and deep as I suspect it is.

Three problems to work on, then: (1) a theory of digitalisation that would fit into A Very Short Introduction-style booklet and be comprehensible to students and colleagues alike; (2) an accessible account of as much as is knowable concerning the structure of and events within the system software and hardware of that black box;6 and (3) a better idea of what happens when we work with the machine, materially, cognitively, psychologically, intellectually, within its social nexus.

5 On 'residue' see McCarty 2012 and 2014; McGann 2004, 201-4.

6 A crucial qualification: even if achievable, an account that lives up to Thomas Sprat's ideal, of «a close, naked, natural way of speaking» (1667, 113) would explain away rather than explain the crucial role of enchantment by an indecipherable technology (Gell 1992).

Although there is more to be done on the first of these, starting inter alia with Jerome McGann's suggestions, I've belaboured it elsewhere and so will leave it alone for want of time. The second is essentially the problem that faces all genuinely interdisciplinary research: in this case, requiring a substantial amount of extraction and translation of highly technical sources from engineering, other areas of computer science and from mathematics. The job is not impossible, only time-consuming and laborious, involving native informants as well as printed and online sources. But the rewards from doing it are substantial: a much clearer and more persuasive idea of our contribution to the machine's further development and a better language in which to conceptualise and communicate it.

The third problem - bringing our manipulatory-cognitive actions with the tool and its responses into view - falls under the concerns of research in human-computer interaction (HCI), the cognitive sciences, especially cognitive psychology, and anthropology, among other fields. To my mind, the most difficult and promising challenges lie with this problem. I'll devote the rest of my time here to it, with some backward glances at the second.

In the language familiar to the human sciences, research with the machine means putting questions to it. At the moment we don't do that by speaking (say, to a really smart Alexa) but by taking other sorts of physical actions that lead to other sorts of physical responses. Nevertheless, it is productive to think of what we do as asking questions. In those terms, two corresponding socio-cultural models present themselves: conversation and divination. For reasons of time, I must give divination short shrift,7 but a few salient things about it are suggestive. The first is that divination may be considered a kind of physically mediated conversation, with the gods (if you like) or with whatever we put in their place. Second, belief is not required to see how divinatory practices work; the highly developed resources of historical and anthropological scholarship help us find this out as best we can, then to tap into millennia of cross-cultural accounts of how others have sought answers to their most difficult questions. Finally, no less than Alan Turing turned to it (in the form of an 'oracle machine') as a model for our intuitive ability to excel the explanatory power of mechanical processes (Appel 2012, 52-3; also 7 and 22-3).

7 I explore the potential of divination in McCarty 2021, §§ 6 and 7.3; on the controversial use of divination in arguments such as mine, see also Gell 1998, 102f.

Conversation can be equally mysterious: not so much the tedious, well-worn verbal routine with unsurprising outcome anticipated by the question, «Are we really going to have that conversation?» Rather I mean conversation that is unplanned, the kind we drift into and are led by rather than lead, the kind that surprises rather than fulfils expectations (Gadamer 2004, 385). In his 1950 paper on machine intelligence, Turing argued that the digital machine could genuinely surprise us, indeed that but for greater memory and speed Babbage's Analytical Engine could have done this as well. Just this year, Kazuo Ishiguro, in Klara and the Sun, has imagined the conversation and interior dialogue of a replicant, or Artificial Friend (AF), and by exploring how far it could go has done much to sharpen the question. (I will return to Klara later.) Unfortunately, human-computer interaction studies seem not to have done much with research in Conversation Analysis, or what the practitioners call 'talk-in-interaction', but their research and that of related areas in the cognitive sciences continue apace (McCarty 2021, §§ 4 and 7.1; Suchman 2007, Chapter 7). While plausible implementation may be a long way off, keeping in touch with that research seems to me part of our job.

But we need more. We need, as I suggested earlier, actual and detailed knowledge of hardware and software. Langdon Winner put the question to his colleagues in the philosophy of technology. He asked, how much do we need to know about the machine? «What kind of knowledge do we need to have... And how much?» (1993). He recommended looking «carefully at the inner workings of real technologies and their histories to see what is actually taking place.» Without that knowledge, it seems to me, we cannot speak with authority about the digital machine and its possibilities for the human sciences and beyond. Without it we cannot understand fully what we are doing.

Historian Timothy Lenoir notes the difficulties and suggests that we attempt «to get inside the technology and use it for our own purposes» but also warns us away from «examining the metaphoric use of technology in science fiction, film, print media, and ad campaigns» (Lenoir 2007, 210). The defining imperative of our field, it seems to me, is to do both: that we bring both together in a field of relations, together with whatever the other human sciences can contribute. Fifty years ago, in Science and Technology in Art Today, anthropologist Jonathan Benthall suggested why this assembly is so important for the conversational problem I have raised. «The essence of most computer jokes», he observed, «is that, wherever we choose to assign the computer in the 'social' hierarchy, as slave or oracle or working-partner, its anomalous nature will assert itself.» (1972, 46, my emph.) Its no-name uncanny manifestations signal a fault of conception or engineering only if we set imitation of the human as our goal. Otherwise, in circumstances of research, these de-familiarising phenomena are the very means of discovery.

If I'm right about this, then we need even more help, this time from those at the coalface, to get to the causes of that anomalous nature. For example, as a starting point I refer you to American engineer Jim Keller's first interview by Lex Fridman on YouTube.8 Among other things, Keller describes how non-deterministic behaviours at the microchip level are exploited for maximum processing speed but controlled to produce the deterministic results on which so much depends. Since the mid 20th Century, when mathematician John von Neumann worried about how digital processes could produce reliable results from unreliable components (as the human brain does far better than our most sophisticated machines),9 extravagant hardware and software engineering have shaped the unruly behaviour of physical electronics to produce the closest possible approximation to the ideal machine, that is, the one we imagine that we have. Much to admire there. But to my mind the important realisation for the future - and here I step out onto thin ice - is that this control of non-deterministic behaviour is tuneable.10

8 https://www.youtube.com/watch?v=Nb2tebYAaOA (3/10/21).

9 von Neumann 1963/1956; cf. Pippenger 1990; Borkar 2005.

I spoke earlier about weeding out bad ideas. Among the most persistent of these is the notion that the computer is a machine in the 'precomputational' sense:11 one that (to quote Ada Lovelace's well-known dictum) «can do [only] whatever we know how to order it to perform» (Menabrea 1843, 722). In 1955 Grace Hopper declared that, «The computer is an extremely fast moron... [that] will, at the speed of light, do exactly what it is told to do - no more, no less».12 From then until now, the term 'fast moron', its variants and the precomputational image it evokes have given debilitating reassurance to the frightened, namely, that the digital machine cannot outsmart us.13 Herbert Simon pointed out in 1960 that the image of the machine as rote follower of instructions is «intuitively obvious [and] indubitably true,» but «supports none of the implications that are commonly drawn from it».14 Thus we wrongly infer that how the computational machine carries out its instructions is predetermined and that the surprise which I talked about earlier is a trivial matter of human error, stupidity or forgetfulness. Two features of the machine nullify this assumption: conditional interactions among components and stimuli external to it (cf. Wegner 1998, 318); two others, mnemonic capacity and speed, take those to a new level.

10 I say nothing about the intriguing promise of quantum computing here; see Bernhardt 2019; Pakin and Coles 2019.

11 Minsky in McCorduck 2004, 86.

12 Hopper 1955, 1. At the time she worked for IBM. Following McCorduck's allegation that IBM promulgated the meme of the moronic computer (note 13), it is tempting to speculate that Hopper was following a corporate directive.

13 In addition to Hopper 1955, quoted above, note Clarke 1999, 195; all editions of Drucker's The Effective Executive from 1967 up to 2017 repeat it; and McCorduck 2004, 151, 187, 202. I have counted more than 30 such statements, almost all of which come from supposedly authoritative sources - including an IBM computer manual (Andree 1958, 2, 106).

14 Simon 1977/1960, 67, quoted and discussed in McCarty 2020, 222f.

Lovelace in fact suggested that limiting surprise to trivial causes is a mistake when (as many who have quoted her dictum overlook) she went on to write that the recombinatorial potential of Babbage's Engine, hence of our own, would throw new light on «the relations and nature of many subjects,» leading to more profound investigation of them (Menabrea 1843, 721, 723). Undervaluing the intellectual potential of combinatorics impoverishes us, as any combinatorial mathematician will tell you.15

It does so because from the mathematics of the combinatorial art at speed and with capacity the machine gains complexity.16 In simplest terms, a complex system is one in which interaction among components predominates over a governing set of rules so that surprising behaviours emerge. A tropical rainforest's ecosystem, with an «almost endless variety of species» in constant development and rapid evolution, is an example.17 Complexity Theory is wild stuff, not for the faint-hearted; asked for a definition, experts will often retreat to an «I know it when I see it» defence.18 But for now we need only this: that as we proceed from the strictly linear and rule-governed (which, in fact, is very hard to find) toward the increasingly unpredictable, we reach a point of potentially innovating randomness. Complexity theorists call this «the edge of chaos».19 Questions and qualifications crowd in, however. First is what is meant by 'random', for which I propose, 'beyond our ability to anticipate' (McCarty 2021, 330-2); second, our relation to the random stimuli, particularly how they affect us and what we do with them; third, how generation of these stimuli is tuned. I will return to these questions at the end.
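As a concrete, if toy, illustration of what 'tuneable' might mean here, consider the logistic map, a standard example in the complexity literature; the sketch below is mine, not drawn from the lecture or the sources cited above, and the parameter names are illustrative only. A single parameter r tunes one and the same deterministic rule from settled, through periodic, to effectively unpredictable behaviour.

```python
# Illustrative sketch only (not from the lecture): the logistic map
# x -> r * x * (1 - x), a standard toy system in complexity studies.
# One tuning parameter r moves the same deterministic rule from
# orderly, to periodic, to effectively unpredictable behaviour.

def logistic_tail(r, x0=0.5, discard=500, keep=6):
    """Iterate the map, discard transients, return the last `keep` values."""
    x = x0
    for _ in range(discard):
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 4))
    return tail

for r in (2.8, 3.2, 3.5, 3.9):
    print(f"r = {r}: {logistic_tail(r)}")

# r = 2.8 settles on a single value; r = 3.2 alternates between two;
# r = 3.5 cycles through four; r = 3.9 never settles: deterministic,
# yet for practical purposes 'beyond our ability to anticipate'.
```

The point of the toy is only that 'tuned' randomness is a matter of degree, a dial rather than a switch, not a binary property of the machine.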

15 See esp. Berge 1971/1968. The English version adds a valuable Foreword by Gian-Carlo Rota.

16 Simulation is the area of computational research in which complexity has been developed to a considerable extent; see esp. Lenhard 2019, also McCarty 2019 and Mago and Dabbaghian 2014 for examples.

17 This is John Holland's opening example (Holland 2014).

18 Jervis 1998, 5f; also, Miller and Page 2007, 3f.

19 Waldrop 1992, 12 and Chapter 6; Langton 1992.

Up to this point I have approached our common ground mostly from the side of the machine. I've done so because, as I suggested, our discipline has downplayed technical knowledge of the physical device and its sciences - the 'digital' in 'digital humanities', the mathematics in 'algorithm'. But then, as I said, the metaphoric has to be an equal player. For that reason, I now turn to three literary ways of thinking about the machine in its field of relations, and, to bring us up to date, take up the unavoidable subject of artificial intelligence. In our terms, as I interpret them, AI's central problem of knowledge became clear quite early. The most succinct statement of it I know is from a book review by the pioneering systems scientist Sir Charles Geoffrey Vickers on the social impact of the computer: in his words, «how playing of a role differs from the application of rules which could and should be made explicit and compatible» (Vickers 1971). Fifty years on, this remains a fair statement of the epistemological problem Ishiguro takes up with his Klara - a 'machine who thinks', as Pamela McCorduck would say.

Among her artificial kind, Klara distinguishes herself by the ability to construct her world from observations. She is not the most up-to-date model of her kind, we are told, but (the shop manager remarks when selling Klara) «her appetite for observing and learning... [and] ability to absorb and blend everything she sees around her is quite amazing. As a result, she now has the most sophisticated understanding of any AF in this store» (Ishiguro 2021, 42). Much later, after the sale, her owner's father, Paul, raises the question of artificial intelligence (though Ishiguro never uses this term): whether a machine of finite states can stretch beyond her finitude. Paul asks Klara whether she thinks there is a «human heart. in the poetic sense. Something that makes each of us special and individual» (218). Klara imagines this 'heart' to be like a house with many rooms. Paul asks, what if there are indefinitely many rooms, one within another within another, and so on. To this conundrum of infinite regress, of «turtles all the way down»,20 Klara responds that the human heart «must be limited. Even. in the poetic sense,» she declares, «there'll be an end to what there is to learn.» (219) And so, at the novel's end, having served her purpose to the letter, Klara is discarded, abandoned in a scrapyard (301-7).

20 This refers to a very old story of unknown origins; see https://en.wikipedia.org/wiki/Turtles_all_the_way_down (2/5/21) and the version in Geertz 1973, 28f. On the implicit point of Ishiguro's regress, see Gell 1998, 147-8.

My second example is Steven Millhauser's late twentieth-century short story «The new automaton theatre» (1999). Millhauser imagines a small German city in which everyone from birth is under the spell of the human-like automata that their master craftsmen make. These automata are, the narrator says, «carried by our masters to a pitch of brilliance unequalled elsewhere and unimagined by the masters of an earlier age. [they are] the source of our richest and most spiritual pleasure.» (76) Suddenly, after a long, unexplained absence, the greatest craftsman of them all, Heinrich Graum, returns with radically new ideas. He devises then stages performances of an utterly new kind of automata, ones that do not imitate the human but who have «grown conscious of themselves. a new race. [with] lives parallel to ours but are not to be confused with ours.» (94) These strange automata are thus not creatures lingering in Masahiro Mori's well-known and much explored «uncanny valley» (Mori 2012/1970); they are profoundly, shockingly, differently, intentionally themselves. «The old art flourishes.» the narrator concludes, «but something new and strange has come into the world. We may try to explain it, but what draws us is the mystery. For our dreams have changed.» (95)

With my third and last example, as promised, I take up Italo Calvino's remarkably insightful lecture from the mid twentieth century, «Cybernetics and Ghosts», on the relation of the digital machine to writing and to literature.21 I skip the «two routes» his argument follows to get to his fundamental question: how is it that the new (or, better, the un-realised) can arise within the constraints of language and literary traditions? Abbreviating as much as I dare, this is his response:

Literature is a combinatorial game that pursues the possibilities implicit in its own material, independent of the personality of the poet, but it is a game that at a certain point is invested with an unexpected meaning... [that] has slipped in from another level. The literature machine can perform all the permutations possible on a given material, but the poetic result will be the particular effect of one of these permutations on a man endowed with a consciousness and an unconscious, that is, an empirical and historical man. It will be the shock that occurs only if the writing machine is surrounded by the hidden ghosts of the individual and of his society. (Calvino 1986/1967, 22)22

21 Note Calvino's explicitly credited sources: Propp, Lévi-Strauss and the Russian Formalists (1986/1967, 5, 6); «Shannon, Weiner, von Neumann, and Turing» (8); see also Duncan 2012; Ricci 2001: 18f.

Like, but differently than, the digital machine Lovelace described a century before him, Calvino makes no claim for the writer's absolute originality, rather that (in Lovelace's words) «in so distributing and combining the truths and the formulae. the relations and the nature of many subjects. are necessarily thrown into new lights, and more profoundly investigated.» (Menabrea 1843, 722) Thus Calvino, with Claude Lévi-Strauss explicitly in mind, has his «storyteller of the tribe. [continue] imperturbably [making] his permutations of jaguars and toucans until the moment comes when one of his innocent little tales explodes into a terrible revelation.»

So then, in conclusion, what is the lesson here? It comes in two flavours, pragmatic and metaphysical.

For the pragmatic, I summon historian Michael Mahoney's advice from the recent history of technology: to get ourselves «into the driver's seat», then to ask, with intention to commit, what we want to do and what we are willing to spend to do it (2003, 122). Driving, or giving effective advice to the driver, I've argued, requires technical knowledge and the insight that comes from it. Such knowledge not only focuses desire but awakens it, brings to the fore such exercises of the imagination as I've briefly presented. These, and much more of them, and not only literary ones but also the artistic, musical and however expressive, are likewise the tools of our craft. Their applicability is possible because the technologies we play with, though many forget or ignore this, are of the human sciences already, waiting for greater realisation under knowledgeable hands.

22 ... la letteratura è sì gioco combinatorio che segue le possibilità implicite nel proprio materiale, indipendentemente dalla personalità del poeta, ma è gioco che a un certo punto si trova investito d'un significato inatteso... ma slittato da un altro piano. La macchina letteraria può effettuare tutte le permutazioni possibili in un dato materiale; ma il risultato poetico sarà l'effetto particolare d'una di queste permutazioni sull'uomo dotato d'una coscienza e d'un inconscio, cioè sull'uomo empirico e storico, sarà lo shock che si verifica solo in quanto attorno alla macchina scrivente esistono i fantasmi nascosti dell'individuo e della società. (Calvino 1980/1966, § 4)

For the metaphysical I turn to the Preface of Terry Winograd and Fernando Flores's Understanding Computers and Cognition (1987). «All new technologies», they write there,

develop within the background of a tacit understanding of human nature and human work. The use of technology in turn leads to fundamental changes in what we do, and ultimately in what it is to be human. We encounter the deep questions of design when we recognize that in designing tools we are designing ways of being. (xi)

There you have it: what ways of being do we want to open for ourselves?

References

Andree, R. V., 1958. Programming the IBM 650 Magnetic Drum Computer and Data-Processing Machine. New York: Henry Holt and Company.

Appel, A.W., 2012. Alan Turing's Systems of Logic: The Princeton Thesis. Princeton: Princeton University Press.

Benthall, J., 1972. Science and Technology in Art Today. London: Thames and Hudson.

Berge, C., 1971/1968. Principles of Combinatorics. New York: Academic Press.

Bernhardt, C., 2019. Quantum Computing for Everyone. Cambridge MA: MIT Press.

Borkar, S., 2005. «Designing reliable systems from unreliable components: The challenges of transistor variability and degradation». IEEE Micro, November-December.

Calvino, I., 1986/1967. «Cybernetics and Ghosts». In Patrick Creagh, trans., The Uses of Literature. San Diego CA: Harcourt Brace & Company. pp. 3-27.

Calvino, I., 1980/1966. «Cibernetica e fantasmi (Appunti sulla narrativa come processo combinatorio)». Una pietra sopra. 164-81. Torino: Einaudi.

Clarke, A. C., 1999. «The Obsolescence of Man». In Profiles of the Future. An Inquiry into the Limits of the Possible. London: Victor Gollancz, 1999. pp. 193-206.

Dening, G., 1998. Readings / Writings. Melbourne: Melbourne University Press.

Drucker, P. F., 1967. The Effective Executive. New York: Harper & Row.

Duncan, D., 2012. «Calvino, Llull, Lucretius: Two Models of Literary Combinatorics». Comparative Literature 64.1: 93-109

Frye, N., 1957. Anatomy of Criticism: Four Essays. Princeton NJ: Princeton University Press.


Gadamer, H.-G., 2004. Truth and Method. 2nd edn. Trans. Joel Weinsheimer and Donald G. Marshall. London: Continuum.

Geertz, C., 1973. «Thick Description: Toward an Interpretive Theory of Culture». In The Interpretation of Cultures: Selected Essays. 3-30. New York: Basic Books.

Gell, A., 1992. «The Technology of Enchantment and the Enchantment of Technology». In J. Coote and A. Shelton, eds., Anthropology, Art and Aesthetics. Oxford: Clarendon Press. pp. 40-67.

Gell, A., 1998. Art and Agency: An Anthropological Theory. Oxford: Clarendon Press.

Hamming, R., 1968. «One Man's View of Computer Science». 1968 Turing Award Lecture. Journal of the Association for Computing Machinery 16.1: 3-12.

Holland, J. H. 2014. Complexity, A Very Short Introduction. Oxford: Oxford University Press.

Hopper, G. M., 1955. Automatic Coding for Digital Computers. A talk presented at the High Speed Computer Conference, Louisiana State University, 16 February 1955. Remington Rand. http://www.bitsavers.org/pdf/univac/HopperAutoCodingPaper_1955.pdf (3/10/21).

Ishiguro, K., 2021. Klara and the Sun: A Novel. New York: Alfred A. Knopf.

Jervis, R., 1997. System Effects: Complexity in Political and Social Life. Princeton: Princeton University Press.

Langton, C. G., 1992. «Life at the Edge of Chaos». In Christopher G. Langton, Charles Taylor, J. Doyne Farmer and Steen Rasmussen, eds., Artificial Life II. Proceedings of the Workshop on Artificial Life held February, 1990, in Santa Fe, New Mexico. Redwood City CA: Addison-Wesley. pp. 41-91.

Lenhard, J., 2019. Calculated Surprises: A Philosophy of Computer Simulation. New York: Oxford University Press.

Lenoir, T., 2007. «Techno-Humanism: Requiem for the Cyborg». In Jessica Riskin, ed., Genesis Redux: Essays in the History and Philosophy of Artificial Life. Chicago: University of Chicago Press. pp. 196-220.

Lloyd, G. E. R. and A. Vilaça, eds., 2020. Science in the Forest, Science in the Past. Chicago: HAU Books.

Mago, V. K. and V. Dabbaghian, eds. 2014. Computational Models of Complex Systems. Cham: Springer International Publishing Switzerland.

Mahoney, M. S., 2003. «The histories of computing(s)». Interdisciplinary Science Reviews 30.2: 119-35.

McCarty, W., 2006. «Tree, Turf, Centre, Archipelago - or Wild Acre? Metaphors and Stories for Humanities Computing». Literary and Linguistic Computing 21.1: 1-13.

McCarty, W., 2012. «The Residue of Uniqueness». Historical Social Research 37.3: 24-45.

McCarty, W., 2014. «Getting there from here. Remembering the future of digital humanities.» Roberto Busa Award lecture 2013. Literary and Linguistic Computing 29.3: 283-306.

McCarty, W., 2019. «Modeling the actual, simulating the possible». In Julia Flanders and Fotis Jannidis, eds., The Shape of Data in the Digital Humanities: Modeling Texts and Text-Based Resources. London: Routledge. pp. 264-84.

McCarty, W., 2020. «Modeling, Ontology, and Wild Thought: Toward an Anthropology of the Artificially Intelligent». In Lloyd and Vilaça, eds. 2020: 209-36.

McCarty, W., G. E. R. Lloyd and A. Vilaça, eds., 2021. Science in the Forest, Science in the Past II. Thematic issue of Interdisciplinary Science Reviews 46.3.

McCarty, W., 2021. «As perceived, not as known: Digital enquiry and the art of intelligence.» In McCarty, Lloyd and Vilaça, eds. 2021: 325-62.

McCorduck, P., 2004. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. Natick, MA: A. K. Peters, Ltd.

McGann, J., 2004. «Marking Texts of Many Dimensions». In S. Schreibman, R. Siemens and J. Unsworth, eds. A Companion to Digital Humanities. Oxford: Blackwell. pp. 198-217.

Menabrea, L. F. 1843. Sketch of the Analytical Engine invented by Charles Babbage Esq. Trans. and comm. Lady Ada Lovelace. In Richard Taylor, ed., Scientific Memoirs selected from the Transactions of Foreign Academies of Science and Learned Academies and from Foreign Journals. London: Richard and John E. Taylor. pp. 666-731.

Miller, J. H. and S. E. Page, 2007. Complex Adaptive Systems: An Introduction to Computational Models of Social Life. Princeton: Princeton University Press.

Millhauser, S., 1999. «The New Automaton Theatre». In The Knife Thrower and Other Stories. London: Phoenix House. pp. 76-96.

Minsky, M., 1965. «Matter, Mind and Models». Artificial Intelligence Memo 77 (MAC-M-230), Project MAC, Massachusetts Institute of Technology, March. http://www.bitsavers.org/pdf/mit/ai/aim/ AIM-077.pdf (3/10/21)

Mori, M., 2012/1970. «The Uncanny Valley». Trans. Karl F. MacDorman and Nomi Kageki. IEEE Robotics & Automation Magazine, June: 98-100.

Pakin, S. and P. Coles, 2019. «The Problem with Quantum Computers». «Observations» for 10 June. Scientific American. https://blogs.scientificamerican.com/observations/the-problem-with-quantum-computers/ (3/10/21).

Pelikan, J., 1992. The Idea of the University: A Reexamination. New Haven: Yale University Press.

Pippenger, N., 1990. «Developments in 'The Synthesis of Reliable Organisms from Unreliable Components'». In James Glimm, John Impagliazzo and Isadore Singer, eds., The Legacy of John von Neumann. Washington DC: American Mathematical Society. pp. 311-24.

Ricci, F., 2001. Painting with Words, Writing with Pictures: Word and Image in the Work of Italo Calvino. Toronto: University of Toronto Press.

Simon, H. A., 1977/1960. The New Science of Management Decision. Rev. edn. Englewood Cliffs NJ: Prentice-Hall.

Sprat, T., 1667. The History of the Royal-Society of London, For the Improving of Natural Knowledge. London: Printed by T.R. for J. Martyn and J. Allestry.

Turing, A. M., 1950. «Computing Machinery and Intelligence». Mind 59.236: 433-60.

Vickers, Sir C. G., 1971. «Keepers of rules versus players of roles». Rev. of Thomas L. Whisler, The Impact of Computers on Organizations, and James Martin and Adrian R. D. Norman, The Computerized Society. Times Literary Supplement, 21 May: 585.

von Neumann, J., 1963/1956. «Probabilistic logics and the synthesis of reliable organisms from unreliable components». In A. H. Taub, ed. Design of Computers, Theory of Automata and Numerical Analysis. John von Neumann Collected Works, Vol. V. Oxford: Pergamon Press.

Waldrop, M. M., 1992. Complexity: The Emerging Science at the Edge of Order and Chaos. New York: Simon & Schuster.

Wegner, P., 1998. «Interactive foundations of computing». Theoretical Computer Science 192: 315-51.

Winner, L., 1993. «Upon Opening the Black Box and Finding It Empty: Social Constructivism and the Philosophy of Technology». Science, Technology, & Human Values 18.3: 362-78.

Winograd, T., 1974. «The Processes of Language Understanding». In Jonathan Benthall, ed. The Limits of Human Nature: Essays based on a course of lectures given at the Institute of Contemporary Arts, London. New York: E. P. Dutton & Co. pp. 208-32.

Winograd, T. and F. Flores. 1987. Understanding Computers and Cognition: A New Foundation for Design. Reading MA: Addison-Wesley Publishing.
