
THISTHEORYDOESNOTEXIST: HISTORICIZING AND UNDERSTANDING ARTIFICIAL-INTELLIGENCE-GENERATED, HYPER-REAL, PHOTOGRAPHIC DATA VISUALIZATIONS (a scholarly article in media and mass communications)

CC BY
Keywords: photography / indexicality / DeepFake / Artificial Intelligence / GAN / Generative Adversarial Networks / Deep Learning / ThisPersonDoesNotExist

Abstract of the scholarly article (media and mass communications), by Kris Belden-Adams

Thispersondoesnotexist.com offers a refreshable, seductively realistic series of images of exactly that: a steady supply of amalgamated images of fictional people. Built from an unknown number of Flickr photographs gleaned from the open archives of the internet, these photographically hyper-realistic images enjoy the appearance of veristic "truth," yet are framed by their status as synthetic products generated by Artificial Intelligence, or A.I. Like other images generated using A.I. algorithms (Generative Adversarial Networks, or GANs), thispersondoesnotexist is known as a "DeepFake" generator. Thispersondoesnotexist, its spinoffs (thisAirBnBdoesnotexist, thesecatsdonotexist, thiswaifudoesnotexist.net, and thisstartupdoesnotexist), FakeApp, DeepFaceLab, DeepDream, Lyrebird, AIMonaLisa, DeepNude, and a proliferation of others use an archive of materials to create images and videos so seemingly realistic that they hardly, if at all, can be distinguished from actual video clips and photographs of real people. This technology, currently in its adolescence, is feared by many for its capacity to create "fake news." It feeds fears of a digitally kindled, "post-truth" era of "fake news," "alternative facts," and widespread information illiteracy. This essay examines the recent phenomenon of A.I.-generated DeepFakes and looks past the anxieties they have raised in order to address their veracity and the pace of their machine learning. It considers these images as extensions of photo/video montage practices that predate the digital era (even if the use of A.I.'s human-generated algorithms to make them is new). The emergence of digital media simply calls us to the task of articulating the complicated nature of ThisPersonDoesNotExist's "data portraits," ones that may be produced independently by computers following human-provided directives.





Kris BELDEN-ADAMS


University of Mississippi, USA
Associate Professor, Department of Art History; PhD in Art History
[email protected]




| 4 (37) 2019 |



Thispersondoesnotexist.com offers a refreshable, seductively realistic, steady supply of digitally amalgamated images of fabricated, fictional people (Fig. 1). Built from an unknown number of Flickr photographs harvested from the open-access archives of the internet, and assembled within a split second by an algorithm, these photographically hyper-realistic images enjoy the appearance of visual exactitude and "truth." Yet the images' host website frames them as synthetic composite portraits, generated at the click of a "refresh" button by Artificial Intelligence (A.I.).

Like other images assembled using A.I. algorithms called Generative Adversarial Networks (GANs), thispersondoesnotexist is known as a "DeepFake" generator. Thispersondoesnotexist and its spinoffs (thisAirBnBdoesnotexist, thesecatsdonotexist, thiswaifudoesnotexist.net, and thisstartupdoesnotexist) are among many applications that create A.I.-made images and videos so seemingly realistic, using an archive of materials, that they sometimes are indistinguishable from actual video clips and photographs of real subjects. Moreover, the DeepFake algorithm also possesses "deep learning" capabilities that enable it to correct its own errors by comparing its images to Flickr archival materials that are presumed to be unaltered. That is to say, the naturalism achieved by DeepFakes keeps getting more seductive and error-free over time.
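The generator-versus-discriminator logic just described can be shown in miniature. The sketch below is emphatically not the network behind ThisPersonDoesNotExist (which reportedly runs NVIDIA's StyleGAN); it is a deliberately tiny, hypothetical stand-in in which a two-parameter "generator" learns to mimic a one-dimensional "real" data distribution by trying to fool a logistic "discriminator":

```python
import numpy as np

# Toy 1-D GAN: the "generator" g(z) = a*z + b tries to imitate "real" data
# drawn from N(3, 1), while the "discriminator" D(x) = sigmoid(w*x + c)
# learns to tell real samples from generated ones.
rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.05

for step in range(2000):
    z = rng.normal(size=64)               # generator's noise input
    fake = a * z + b                      # generated samples
    real = rng.normal(3.0, 1.0, size=64)  # stand-in for the "archive"

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    grad_w = np.mean((1 - d_real) * real) - np.mean(d_fake * fake)
    grad_c = np.mean(1 - d_real) - np.mean(d_fake)
    w += lr * grad_w
    c += lr * grad_c

    # Generator: gradient ascent on log D(fake) (the "non-saturating" loss),
    # i.e. it improves by fooling the freshly updated discriminator.
    d_fake = sigmoid(w * fake + c)
    g = (1 - d_fake) * w                  # d log D(fake) / d fake
    a += lr * np.mean(g * z)
    b += lr * np.mean(g)

# After training, generated samples cluster near the real mean of 3,
# even though the generator never sees the real data directly.
```

This adversarial pressure, in which every generator update is graded against samples presumed to be authentic, is the mechanism behind the claim that DeepFakes grow "more seductive and error-free over time."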



Fig. 1. Portrait, www.ThisPersonDoesNotExist.com, Generated at 9:13 a.m., Oct. 24, 2019. (ThisPersonDoesNotExist.com/Phil Wang)

Some DeepFakes (photographs, videos, and hybrid products of both media) provide seemingly harmless entertainment, such as face-swap applications. They can even make history accessible to new museum audiences (e.g., A.I. Mona Lisa, the Living Portrait of Albert Einstein, and (Salvador) Dalí Lives). For this reason, the American Alliance of Museums has declared DeepFakes and immersive storytelling integral to the future of museums, as exemplified by their use at the Salvador Dalí Museum in St. Petersburg, Florida, the Barnes Foundation in Philadelphia, the Vatican's Secret Archives, the [U.S.] National Soccer Hall of Fame, and the Art Institute of Chicago, and by their discussion at the "Museums and New Intelligences" conference in November 2018. For example, in the DeepFake video Dalí Lives, on view at the Salvador Dalí Museum starting in 2019, the artist looks and sounds just as he did in historic footage, the A.I. technology's archival source material. The DeepFake Dalí is relatable and alive to museum visitors, rather than locked in a distant past.

Other DeepFakes are not as entertaining or well-intended. Some falsified DeepFake celebrity and "revenge" pornographic videos have prompted humiliation and defamation lawsuits, and have heightened concerns about the unrestrained circulation of online imagery and videos.1 Countries have been slow to regulate DeepFakes. The U.S. Congress is actively drafting legislation, introduced by Ben Sasse on 21 December 2018, to curb the use of DeepFakes by the public, especially in the countdown to the 2020 U.S. election. This technology, currently in its infancy and growing more seamlessly realistic, is also feared by many computer-science and journalism scholars for its capacity to defame celebrities and public figures, sow political discord, and/or create "fake news." In the wrong hands, this technology can be used to deceive a public that is easily misled by political


1 Jose Sanchez del Rio, Daniela Moctezuma, Cristina Conde, Isaac Martin de Diego, Enrique Cabello, "Automated Border Control E-Gates and Facial Recognition Systems," Computers and Security, Vol. 62 (2016): pp. 49-72; Karen Hao, "Congress Wants to Protect You from Biased Algorithms, Deepfakes, and Other Bad AI," MIT Technology Review (April 15, 2019). Accessed June 26, 2019: https://www.technologyreview.com/s/613310/congress-wants-to-protect-you-from-biased-algorithms-deepfakes-and-other-bad-ai/; Nicola Serra, "AI, Ethics, and Law on Artificially-Created Human Faces," Medium (March 12, 2019). Accessed June 25, 2019: https://becominghuman.ai/https-medium-com-nicola-del-serra-ai-ethics-and-law-on-artificial-human-faces-adf30a310fd1


gaslighting and lacks the information-connoisseurship skills to discern fact from fiction, and "fake news" from the real thing. The solution, perhaps, is not to fear all news as "fake," but to be critical viewers of online material. Journalism scholars have predicted that DeepFakes could be weaponized to manufacture evidence that incites world wars, manipulates elections, and intentionally deceives the public. This technology feeds preexisting fears that we live in a digitally kindled, "post-truth," "fake news" era of "alternative facts" and widespread information illiteracy.2

If those issues alone were not frightening enough, DeepFakes, the products of computer algorithms, have incited dystopian debates in the digital humanities about the phenomenon of singularity, a feared theoretical condition in which trained machines improve their veracity - as they are programmed by human-constructed "deep learning" algorithms to do - and uncontrollably surpass the intelligence and control of their human programmers and counterparts.3 Stephen Hawking and Elon Musk even suggested that A.I.'s unchecked, accelerating evolution would result in human extinction in the face of a technological superintelligence with which we cannot compete.4 Psychology scholars Bartosz Brozek and Bartosz Janik raised particular concerns about A.I.'s inability (so far) to morally self-regulate, and implicitly call computer scientists to the task of addressing this ethical public responsibility before it is too late.5

2 Charles Jennings, Artificial Intelligence: The Rise of the Lightspeed Learners (New York: Rowman and Littlefield, 2019); Timothy Druckrey, "From Dada to Digital: Montage in the Twentieth Century," in Metamorphoses: Photography in the Electronic Age (New York: Aperture, 1994), pp. 4-7; Fred Ritchin, "Photojournalism in the Age of Computers," in The Critical Image: Essays on Contemporary Photography, ed. Carol Squiers (Seattle: Bay Press, 1990), pp. 28-37.

3 Vernor Vinge, "The Coming Technological Singularity: How to Survive in the Post-Human Era," conference presentation, NASA Lewis Research Center, Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace, Dec. 1, 1993, pp. 11-22. Accessed Nov. 9, 2019: https://ntrs.nasa.gov/search.jsp?R=19940022856; Alex Steffen, "What Happens When Technology Zooms Off the Charts: Singularity and its Meanings," Whole Earth Catalog, Spring 2003. Accessed Nov. 9, 2019: http://www.wholeearth.com/index.php

Despite these grave warnings, many of the anxieties raised by DeepFakes are not new; they are similar to ones raised by photography at previous points in its history. This essay takes a brief look at what photography's recent history might teach us about DeepFakes, surveys the current status of DeepFakes' veracity, and begins to track and account for the pace of the GAN algorithm's "deep learning" process.

Photography's Indexical Discourses

Questions about the truth value of photographs have been raised for far longer than DeepFakes have existed. For more than a century before digital photography would be realized, the earliest practitioners of the medium recognized and capitalized on its expressive range of rhetorical realisms. William Henry Fox Talbot, for example, remarked on the

4 Charles Jennings, Artificial Intelligence: The Rise of the Lightspeed Learners (New York: Rowman and Littlefield, 2019), p. 1.

5 Bartosz Brozek and Bartosz Janik, "Can Artificial Intelligences Be Moral Agents?" New Ideas in Psychology, Vol. 54 (Aug. 2019), pp. 101-106.


fidelity of this early negative's capacity to represent the pattern and weave of the lace, while also being untruthful to its appearance (Fig. 2): the lace itself was black. Early photographs also offered laterally reversed images that failed to represent the world as it appears; many were made on a reflective surface - also a departure from the real - and all of the earliest photographs failed to reproduce subjects in full, life-like color. From its start, the medium trafficked in a flirtatious rhetoric of realisms, rather than delivering faithfully naturalistic views.

Fig. 2. William Henry Fox Talbot, Lace, from Pencil of Nature (1844-1846). Public Domain

But the issue of truth emerged most strongly in more recent discursive debates about the medium's digitization. During an interview in 1998, German photographer Andreas Gursky stated that "since the photographic medium has been digitalised, a fixed definition of the term 'photography' has become impossible."6 His statement echoes the written thoughts of several photographers and scholars writing around the year 2000, who announced that photography was in the midst of an ontological identity crisis (given various names and explanations, including "post-photography," "the post-medium condition," "photography after photography," and "the death of photography"). These discussions often were premised on the claim that the emergence of digital-manipulation software in the 1980s caused the medium to lose touch with its defining characteristic: its relationship to reality.

Photography's official "time of death" was declared in the 1980s. In addition to Fred Ritchin, Timothy Druckrey argues that digital images are "postphotographic, as they no longer rely on the character of the photograph to verify something in the world."7 As such, Druckrey suggests that photographs "are not concerned with verification, classification, or any of the systemic epistemologies of the camera."8 Photography's fidelity to realism, Peter Galassi has added, is rooted in the medium's necessary tie to Renaissance perspectival systems.9 As a result, W. J. T. Mitchell asserted that photography has been "dead - or, more precisely, radically and permanently displaced" since 1989.10 Digital technology has been charged with severing the photograph's "indexicality."

Yet Joan Fontcuberta and Corey Dzenko have suggested that digital images merely assumed the functions and "truth values" of their analog forerunners.11 Dzenko argues:

While digital photographic practices include a new ease of editing and transmission of images, this has not resulted in widespread mistrust of photographic transparency as was once feared. Imaging technologies will continue to provide new possibilities for the format and distribution of images, and these developments will continue to be rooted in previous social uses of photography.12

6 Calvin Tomkins, "The Big Picture," Modern Painters, Vol. 14, No. 1 (2001), p. 32. Author's spelling has not been changed.

7 Timothy Druckrey, "From Dada to Digital: Montage in the Twentieth Century," in Metamorphoses: Photography in the Electronic Age (New York: Aperture, 1994), p. 7.

8 Ibid.

9 Photography's relationship to Renaissance vision is explored in: Peter Galassi, Before Photography: Painting and the Invention of Photography (New York: Museum of Modern Art, 1981). Michel Frizot presents a provocative definition of photography that preserves its tie to Renaissance vision, while also accommodating the medium's digital manifestations. He argues that "a photograph" is essentially a collection of "photons" (light) on a sensitive surface: Michel Frizot, "Who's Afraid of Photons?," in Photography Theory, ed. James Elkins (New York: Routledge, 2007), pp. 269-283. Frizot's definition of the medium's essence is not inclusive of composite photographs.

10 William J. T. Mitchell, The Reconfigured Eye: Visual Truth in the Post-Photographic Era (Cambridge: The MIT Press, 1992), p. 20; Fred Ritchin, "Photojournalism in the Age of Computers," in The Critical Image: Essays on Contemporary Photography, ed. C. Squiers (Seattle: Bay Press, 1990), pp. 28-37. Other works of scholarship arguing that digitally altered photographs have signaled the "death" of the photographic medium include: Fred Ritchin, "The End of Photography as We Have Known It," in Photo Video, ed. Paul Wombell (London: Rivers Oram Press, 1991), pp. 8-15; Timothy Druckrey, "L'amour Faux," in Digital Photography: Captured Images, Volatile Memory, New Montage (San Francisco: San Francisco Camerawork, 1988), pp. 4-9.

11 Joan Fontcuberta, "Revisiting the Histories of Photography," in Joan Fontcuberta, ed., Photography: Crisis of History (Barcelona: Actar, 2002), pp. 10-11; Corey Dzenko, "Analog to Digital: The Indexical Function of Photographic Images," Afterimage, Vol. 37, No. 2 (Sept. 2009), pp. 19-23.

Here, Dzenko's study suggests not only that social use is a factor in a photograph's perceived "indexicality," but also that our perceptions of the medium's status as a document did not universally change with the emergence of digital-manipulation software in the 1980s.

It just so happened that digital photography also emerged amid these significant discursive shifts in the theorization of the medium.13 Druckrey, Mitchell, and Ritchin published books and essays on the digital photograph's divergence from the medium's essential documentary function from 1988 to 1992, just after the Adobe Corporation's release of the mass-marketed digital-manipulation software Photoshop. These authors' writings appeared at a time of much trepidation about Photoshop, which by 1992 was still considered experimental by most graphic-design schools and studios, and by most magazines and newspapers - industries which have since accepted Photoshop as a standard tool. Photoshop's hesitant reception by these industries, and the skepticism expressed by Druckrey, Mitchell, and Ritchin, were shaped by the ethical issues raised by its antecedent technologies: the Scitex, Crosfield, and Hell digital-image-manipulation systems. These pre-press imaging machines were in use in the early 1980s, and still are employed in many pre-press departments and printing shops in America. Scitex, Crosfield, and Hell systems are large, costly, and complicated to operate, and they were not marketed to the mainstream public - as Photoshop would be. The idea of placing the capability to digitally manipulate photographs in the hands of the public likely was a source of anxiety for these three commentators. But such an aversion to digital-manipulation software also was prompted by the ethical questions sparked by the publication of several altered images made with pre-Photoshop technology, including a cover of National Geographic with a relocated Great Pyramid. This altered image provides a form of "photographic realism," but not the indexicality that might be offered by an unaltered photograph - or expected of a "news" publication. According to Ritchin, this incident "killed" photography; more precisely, it marked the moment at which photography's reliable relationship to "truth" died.

12 Corey Dzenko, "Analog to Digital: The Indexical Function of Photographic Images," Afterimage, Vol. 37, No. 2 (Sept. 2009), p. 23.

13 Geoffrey Batchen, "This Haunting," in Photography Theory, ed. Elkins (New York: Routledge, 2006), p. 284.

Artificial Intelligence and the Indexical Debates

While the use of A.I. to generate photographs is new, the evolution of Photoshop's "Content-Aware" tools (which use an algorithm to "fill in" supplementary visual information) provides a precursor to its logic, as do the concerns over the divorce of the human hand from the making process that emerged in the late nineteenth and early twentieth centuries, when the then-new medium of photography threatened to replace painting as a machine-enabled manner of capturing likenesses.

The reception of digitally altered journalistic images from the birth of Photoshop to now, and the ebbs and flows of concern expressed by scholars and mass audiences, also have precedent - a history influenced deeply by anxieties over Y2K and fears of DIY Photoshop "fakes." The emergence of A.I.-driven images simply calls us to articulate the complicated nature of automatically generated data portraits, ones that may be produced independently by computers following human-provided directives.

DeepFakes are so new that they have yet to become the subject of academic study in photographic histories. But as they exist now, DeepFakes are far from flawless data portraits, a fact that may put minds at ease over the rapidity of deep learning, singularity, and digital products' possible deception of viewers. To assess the veracity of these images, from April to October of 2019 I saved 250 portraits from the DeepFake generator ThisPersonDoesNotExist.com. Then I scrutinized each image for demographic information and any apparent flaws.

Demographically, images on ThisPersonDoesNotExist skewed toward subjects that reflect the archive from which these DeepFakes were made. Flickr's most active users are 35-44 years old and female, post family photographs, and are from the United States of America, the United Kingdom, Japan, Germany, and France.14 Accordingly, white/Caucasian individuals appeared in 58% of the A.I. images, and the approximate ages of the subjects skewed toward a demographic under age 50 (Figs. 3, 4, 5). Subjects also tended to be female (52%). Masculine portraits appeared 40% of the time. Gender-fluid subjects were generated by the A.I. algorithm at a rate of 8%, a significantly lower proportion than the 35% of the U.S. population that claimed to be gender-fluid in the 2015 U.S. Transgender Survey.15

14 "Social Media Demographics, 2017 - Who's Using What?" Kahootz Media (2017). Accessed Nov. 9, 2019: https://www.kahootzmedia.com/social-media-demographics-2017-whos-using/
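The gender proportions above reduce to a simple frequency tally over the 250 audited images. The labels below are hypothetical stand-ins for the manual annotations described (the audit data itself is not published here); only the counts, chosen to reproduce the reported 52%/40%/8% split, come from the text:

```python
from collections import Counter

# Hypothetical per-image labels standing in for the 250-image manual audit;
# counts are chosen to match the reported proportions.
labels = ["female"] * 130 + ["masculine"] * 100 + ["gender-fluid"] * 20

# Fraction of the audited set carrying each label.
share = {k: v / len(labels) for k, v in Counter(labels).items()}
# 130/250 = 0.52, 100/250 = 0.40, 20/250 = 0.08
```

The same tally pattern extends directly to the race and age skews reported above (e.g., 145 of 250 images, or 58%, depicting white/Caucasian subjects).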

Fig. 3.

15 S. E. James, J. L. Herman, S. Rankin, M. Keisling, L. Mottet, and M. Anafi, The Report of the 2015 U.S. Transgender Survey (Washington, DC: National Center for Transgender Equality, 2016), p. 45. Accessed Nov. 9, 2019: https://transequality.org/sites/default/files/docs/usts/USTS-Full-Report-Dec17.pdf

Fig. 4.


Fig. 5.




Errors, ThisPersonDoesNotExist Portraits

This audit was performed on 250 images generated by the website from April to October 2019.

Error Chance of appearance Notes/Observations

Erratic forehead texture 55.2% Almost always occurs In subjects wearing hats

In hat-wearing subjects, hair resembles texture of hat 11% In People of Color wearing hats, this texture problem has a 61 % chance of occurrence.

Pupils do not track in the same direction 12%

Mouth misshapen 16%

Misplaced skin texture/wrinkles 72% Almost always occurs for subjects under 50 years old

Misplaced dimples or over-accentuated dimple intensity 29%

Misplaced wrinkle/s between eyebrows 13%

Ear structure inaccurate 57%

Eyebrows uneven or one is reversed 23%

Nose structure is asymmetrical 34%

Facial shape is asymmetrical 6%

Erratic or unnatural hair texture 46.4%

"Blobs" appear in background 50.8%

"Blobs" appear on face 10.8%

"Blobs" or other distortions occur where body and background meet 44%

Glasses are asymmetrical 100% This happened in every Image Involving subjects with glasses, which was just 6.75% of all Images generated. Glasses are avoided.

When glasses appear, part of them (one half of them) is missing 50% In half of the appearances of subjects with glasses, one lens was missing entirely.

Ears do not match 49.6% Only audited for subjects in which both ears were visible.

Chin misshapen (not symmetrical, and/of cleft off-center) 38.8%

Clothing distorted or structurally unnatural 57.6%

Facial hair asymmetrical (in male subjects with goatees, mustaches, etc.) 82% Facial hair occurs in 21 % of all male portraits

Neck wrinkles are inaccurate to age 48% In 82.5% of these occurrences, the subjects' wrinkles made them appear far older than the textures of their faces suggested. 4% of the time, the subject needed wrinkles but did not have them.

Second person appears to side and is wildly distorted 100% Each time a second person was visible, they were wildly distorted. This suggests that the algorithm is directed to only generate one face, and to see the rest of the field as "background."

Missing teeth 6%

Teeth asymmetrical or unnaturally represented 54%

3+ prominent front teeth 41.2%

Subject is too distorted to pass as as "normal" human subject 2.8% When this occurs, 57% of the cases involve a hand within the picture field that is wildly distorted.
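Each rate in the table above is a simple frequency over the 250-image sample, so the underlying counts are recoverable by multiplying the percentage by 250. The short sketch below illustrates that arithmetic; the tally helper and variable names are mine, offered as a minimal reconstruction rather than the author's actual audit instrument.

```python
# Convert raw audit tallies into the percentage rates reported in the table.
# Counts are back-calculated from the published percentages (rate% x 250).
SAMPLE_SIZE = 250

tallies = {
    "Erratic forehead texture": 138,          # 138 / 250 = 55.2%
    "Misplaced skin texture/wrinkles": 180,   # 180 / 250 = 72.0%
    "Ears do not match": 124,                 # 124 / 250 = 49.6%
}

def rate(count, total=SAMPLE_SIZE):
    """Percentage of audited images exhibiting a given error, to one decimal."""
    return round(100.0 * count / total, 1)

for error, count in tallies.items():
    print(f"{error}: {rate(count)}%")
```

Because every rate is a ratio against the same 250-image denominator, fractional percentages such as 55.2% and 49.6% fall out naturally from whole-number counts.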


Fig. 6.



Fig. 7. Portrait, www.ThisPersonDoesNotExist.com, generated at 14:54, Nov. 1, 2019. (ThisPersonDoesNotExist.com/Phil Wang).

Younger subjects' faces are blurrier, with their surface detail concentrated in the eyes, mouth, and hair. The older the subject, the more patterns such as wrinkles, freckles, and other surface textures appear. Not one of the 250 images was error-free, and the more portraits I viewed, the keener my eye became at finding their glitches. (This learning curve was so pronounced that I had to revisit my audits of the earlier images after two weeks of practice.) The most common portrait-generation glitches involved erratic textures on the foreheads of subjects (55.2% occurrence), misplaced skin textures or wrinkles (72%), inaccurate ear structure (57%), "blob"-like forms appearing in the background (50.8%) or on the portrait itself (10.8%), asymmetrical glasses (a 100% occurrence among subjects with eyewear, who appeared in only 6.75% of images, a staggeringly low rate considering that more than half of all U.S. women and 42% of U.S. men wear glasses), distorted clothing (57.6%), the distortion of any secondary subjects or hands (100%), asymmetrical facial hair in male subjects with goatees and mustaches (82% within this sample set), and teeth that are asymmetrical (54%), missing (6%), or which feature more than two prominent front teeth (41.2%) (Fig. 6).16 Also, unless backgrounds were generated as a solid color (which occurred in only 8% of the portraits), the backdrops almost always departed dramatically from the usual appearance of landscapes and interior settings.

After producing the portraits, an algorithm instructs the application to compare its flawed images to the (assumed-to-be-unaltered) source portraits on Flickr, so that "deep learning" can occur and the application can isolate and correct its glitches. In other words, the Artificial Intelligence is trained to locate errors such as the ones I have flagged, and to correct itself. As of November 2019, however, significant learning had not occurred since April 2019: the error statistics from April's images showed the same glitches appearing in similar quantities in November. In deep-learning terms, the images were relatively stagnant over those eight months. Programmers promise that such learning will come. I will continue to audit these DeepFakes at six-month intervals to see whether information about the rate of "deep learning" can be tracked and quantified for the images generated by ThisPersonDoesNotExist.

16 Vision Council of America, "What Percentage of the Population Wears Glasses?" GlassesCrafter. Accessed Nov. 9, 2019: URL: http://www.glassescrafter.com/information/percentage-population-wears-glasses.html
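The self-correcting loop described above is, in GAN terms, a contest between a generator and a discriminator: the discriminator learns to tell real samples from synthetic ones, and the generator learns to fool it. The toy sketch below illustrates only that adversarial structure, using one-dimensional stand-in "data" instead of photographs; every name, learning rate, and distribution in it is an illustrative assumption, and none of it reflects StyleGAN's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: a stand-in for Flickr portraits (here, scalars drawn from N(4, 1)).
def real_batch(n=64):
    return rng.normal(4.0, 1.0, n)

# Generator: maps noise z to a sample via two learnable parameters.
g = {"shift": 0.0, "scale": 1.0}
def generate(n=64):
    return g["shift"] + g["scale"] * rng.normal(0.0, 1.0, n)

# Discriminator: logistic score w*x + b -> probability the sample is "real."
d = {"w": 0.1, "b": 0.0}
def discriminate(x):
    return 1.0 / (1.0 + np.exp(-(d["w"] * x + d["b"])))

lr = 0.05
for step in range(2000):
    # 1) Train the discriminator to label real samples 1 and fakes 0.
    for x, label in ((real_batch(), 1.0), (generate(), 0.0)):
        err = discriminate(x) - label        # gradient of log-loss wrt the logit
        d["w"] -= lr * np.mean(err * x)
        d["b"] -= lr * np.mean(err)
    # 2) Train the generator so the discriminator calls its output "real."
    z = rng.normal(0.0, 1.0, 64)
    xg = g["shift"] + g["scale"] * z
    err = discriminate(xg) - 1.0             # the generator wants label 1
    g["shift"] -= lr * np.mean(err * d["w"])
    g["scale"] -= lr * np.mean(err * d["w"] * z)

# After training, generated samples should drift toward the "real" mean of 4.
print("mean of generated samples:", float(np.mean(generate(10000))))
```

The point of the sketch is structural: neither network ever sees a rule like "foreheads should not be erratic." The generator only receives a gradient signal telling it how to look more like the training data, which is why errors the discriminator cannot detect, such as the background "blobs" catalogued above, can persist across months of training.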

As of November 2019, however, the DeepFakes from ThisPersonDoesNotExist audited here, though the subject of great mass-press trepidation, produce far more visual fascination, and perhaps bad-dream fodder (Fig. 7), than seductive deception for viewers with discerning eyes. (Not everyone has such eyes, of course; but anyone conscious as they type "thispersondoesnotexist" into a search engine has no reason to think that they are about to view images of actual people.) DeepFakes do provide another example of the medium's unique capacity to represent a rhetoric of possible realisms that are real without being actual. Yet they hardly amount to a radical displacement of the medium's identity; rather, they affirm how tricky photographic "truth" always has been. Nevertheless, these DeepFakes introduce entirely new debates about the regulation and ethical obligations of A.I., debates urgently in need of attention from politicians, programmers, journalists, photographers, and interdisciplinary humanities scholars as the technology develops.

References

1. Batchen, Geoffrey. "This Haunting," in Photography Theory, Elkins, ed. New York: Routledge, 2006.

2. Brozek, Bartosz, Janik, Bartosz. "Can Artificial Intelligences Be Moral Agents?" New Ideas in Psychology, Vol. 54 (Aug. 2019), pp. 101-106.

3. del Rio, Jose Sanchez, Moctezuma, Daniela, Conde, Christina, de Diego, Isaac Martin, Cabello, Enrique. "Automated Border Control E-Gates and Facial Recognition Systems," Computers and Security, Vol. 62 (2016), pp. 49-72.

4. Druckrey, Timothy. "From Dada to Digital: Montage in the Twentieth Century," in Metamorphoses: Photography in the Electronic Age. New York, Aperture, 1994, pp. 4-7.

5. Druckrey, Timothy. "L'amour Faux," in Digital Photography: Captured Images, Volatile Memory, New Montage. San Francisco: San Francisco Camerawork, 1988, pp. 4-9.

6. Dzenko, Corey. "Analog to Digital: The Indexical Function of Photographic Images," Afterimage, Vol. 37, No. 2 (Sept. 2009), pp. 19-23.

7. Foncuberta, Joan. "Revisiting the Histories of Photography," in Joan Foncuberta, ed., Photography: Crisis of History. Barcelona: Actar, 2002, pp. 10-11.

8. Frizot, Michel. "A Critical Discussion of the Historiography of Photography," Arken Bulletin, Vol. 1 (2002), pp. 58-65.

9. Frizot, Michel. "Who's Afraid of Photons?," in Photography Theory, ed. James Elkins. New York: Routledge, 2007, pp. 269-283.

10. Galassi, Peter. Before Photography: Painting and the Invention of Photography. New York: Museum of Modern Art, 1981.

11. Hao, Karen. "Congress Wants to Protect You from Biased Algorithms, Deepfakes, and Other Bad AI," MIT Technology Review (April 15, 2019). Accessed June 26, 2019: https://www.technologyreview.com/s/613310/congress-wants-to-protect-you-from-biased-algorithms-deepfakes-and-other-bad-ai/

12. James, S. E., Herman, J. L., Rankin, S., Keisling, M., Mottet, L., & Anafi, M. The Report of the 2015 U.S. Transgender Survey. Washington, D.C.: National Center for Transgender Equality (2016), p. 45. Accessed Nov. 9, 2019: https://transequality.org/sites/default/files/docs/usts/USTS-Full-Report-Dec17.pdf

13. Jennings, Charles. Artificial Intelligence: The Rise of the Lightspeed Learners. New York: Rowman and Littlefield, 2019.


14. Ritchin, Fred. "Photojournalism in the Age of Computers," The Critical Image: Essays on Contemporary Photography, ed. Carol Squiers. Seattle: Bay Press, 1990, pp. 28-37.

15. Ritchin, Fred. "The End of Photography as We Have Known It," in Photo Video, ed. Paul Wombell. London: Rivers Oram Press, 1991, pp. 8-15.

16. Serra, Nicola. "AI, Ethics, and Law on Artificially-Created Human Faces," Medium (March 12, 2019). Accessed June 25, 2019: https://becominghuman.ai/https-medium-com-nicola-del-serra-ai-ethics-and-law-on-artificial-human-faces-adf30a310fd1

17. "Social Media Demographics, 2017 - Who's Using What?" Kahootz Media (2017). Accessed Nov. 9, 2019: https://www.kahootzmedia.com/social-media-demographics-2017-whos-using/

18. Steffen, Alex. "What Happens When Technology Zooms Off the Charts: Singularity and its Meanings," Whole Earth Catalog, Spring 2003. Accessed Nov. 9, 2019: http://www.wholeearth.com/index.php

19. Tomkins, Calvin. "The Big Picture," Modern Painters, Vol. 14, No. 1 (2001), p. 32.

20. Vinge, Vernor. "The Coming Technological Singularity: How to Survive in the Post-Human Era," Conference Presentation, NASA Lewis Research Center, Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace, Dec. 1, 1993, pp. 11-22. Accessed Nov. 9, 2019: https://ntrs.nasa.gov/search.jsp?R=19940022856

21. Vision Council of America, "What Percentage of the Population Wears Glasses?" GlassesCrafter. Accessed Nov. 9, 2019: http://www.glassescrafter.com/information/percentage-population-wears-glasses.html
