

UDC 343.211.3:34.096 DOI: 10.24412/2411-2275-2024-2-119-123


FOKINA, S.I., VYVOLOKINA, A.V., GARKUSHEV, A.YU., OSIPENKO, E.A. DEEPFAKES AS A CYBERSECURITY THREAT

Key words: deepfakes, cybersecurity, cybercrime, artificial intelligence, biometric data, privacy.

The article discusses the problem of deepfakes and their impact on cybersecurity. It examines Generative Adversarial Networks (GANs), the technology that has made the creation of deepfakes widespread. Taking these technological aspects into account, the article considers the legal risks associated with the wide distribution of deepfakes, analyzes the problem of the legal personality of artificial intelligence in the creation of deepfakes, and addresses questions of responsibility for their creation and use. Measures are proposed to protect individuals from the criminal use of deepfakes and from unauthorized access to personal and biometric data.

With the development of artificial intelligence technologies (hereinafter AI), new challenges have emerged in the field of information security, among which a special place is occupied by deepfakes. Deepfakes are a type of multimedia content, created or modified using AI and machine learning, that makes it possible to mimic a person's facial expressions, movements, and voice with a high degree of plausibility.

With the development of Generative Adversarial Networks (hereinafter GANs) and other machine learning methods, the process of creating deepfakes has become more accessible, and its results more realistic.

GANs are a class of machine learning models consisting of two parts: a generator that creates images, and a discriminator that evaluates them. The generator aims to produce images realistic enough that the discriminator cannot distinguish them from real ones. This adversarial training process allows GANs to generate high-quality fakes. At a high level, GANs are neural networks that learn to generate realistic samples of the data they were trained on [1].
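To make this concrete, here is a minimal training sketch, with PyTorch as an assumed framework (the article names none); the network sizes and data shapes are illustrative only.

```python
# A minimal GAN sketch in PyTorch (an assumed framework; the article names
# none). A generator maps random noise to flat 28x28 "images", a
# discriminator scores samples as real or fake, and the two networks are
# trained against each other.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),   # pixel values in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                    # single real/fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch: torch.Tensor) -> None:
    n = real_batch.size(0)
    fake = generator(torch.randn(n, latent_dim))

    # Discriminator: push real images toward label 1, generated toward 0.
    d_loss = bce(discriminator(real_batch), torch.ones(n, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(n, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the updated discriminator output 1 on fakes.
    g_loss = bce(discriminator(fake), torch.ones(n, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Each step sharpens both networks; once the discriminator can no longer tell generated samples from real ones, the generator's output is a convincing fake.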

The development of GAN technologies has significantly increased the capability and accessibility of deepfake creation, providing a powerful tool for producing realistic and convincing multimedia content.

The impact of GANs on the creation of deepfakes can be analyzed across several aspects:

1. Improving the quality of deepfakes. GANs have significantly improved the quality of fake images and videos. Modern GANs can create photos and videos that are difficult to distinguish from the real ones, making falsifications more convincing.

2. Facilitating the creation of deepfakes. Initially, creating deepfakes required significant knowledge of graphics and video processing. With the development of GANs, creating a deepfake has become possible even for people without special skills, since GAN-based software can automate most of the process.

3. Availability. With open access to GAN source code and pretrained models on platforms like GitHub, these technologies have become far more available to the general public. This lowers the entry threshold for creating deepfakes.

It is worth noting that the variety of GAN applications makes it possible not only to replace a face, but also to change a person's attributes, such as age, gender or race, and to imitate their voice. This opens up new opportunities to create even more convincing and diverse deepfakes.

However, these advances in deepfake-creation technology have a negative side: deepfakes pose a danger to the social, political and other spheres of society.

The World Economic Forum (WEF) has reported that the number of deepfakes, or fake videos, is increasing at an annual rate of 900% [2]. According to statistical data, more video and audio deepfakes were detected online in the first months of 2024 than in the whole of 2023. Experts noted a threefold increase in video deepfakes and an eightfold increase in audio deepfakes in 2023 compared with 2022. In 2023, just over half a million deepfakes were recorded worldwide; by 2024, that record had already been surpassed [3].

Table 1. Threats associated with the use of GANs

Political risks: In the context of political campaigns and destabilization, GAN-created deepfakes can serve as a tool for manipulating public opinion or discrediting opponents.

Social risks: Realistic deepfakes can be used to create fake pornographic materials featuring celebrities or private individuals, which violates privacy rights and can lead to social problems.

Legal and ethical challenges: The difficulty of proving the authenticity of content, and of assigning responsibility for its creation and distribution, poses serious new challenges for legislation.

Thus, the development of GANs plays a significant role in expanding both the opportunities and the risks related to the misuse of deepfakes. Ensuring digital security in the new era of artificial intelligence requires active action by legislators, technology developers and society at large.

Deepfakes built on GANs pose a significant threat to cybersecurity and individual privacy. The technology can be used to create fake news, commit fraud and blackmail, and manipulate public opinion.

Fig. 2. Deepfakes as a cybersecurity threat. (Diagram: misrepresentation, undermining trust in information, impact on public opinion, and the spread of false information via social networks lead to threats to personal safety and damage to brands and organizations, creating the necessity of countermeasures.)

To counteract the threat posed by deepfakes, countermeasures must be developed: deepfake detection technologies and a comprehensive study of detection methods (for example, FakeCatcher, Intel's deepfake detection tool [4]), as well as comprehensive legal regulation in this area.
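As a purely illustrative sketch of how such detection tools commonly work (FakeCatcher's internals are proprietary, so this is a generic research-style baseline, not Intel's method), a pretrained image classifier can be fine-tuned to label individual video frames as real or fake:

```python
# A purely illustrative baseline, NOT Intel's FakeCatcher: a pretrained
# ResNet-18 with a two-class head, to be fine-tuned on labelled real/fake
# face crops, then used to score individual video frames.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # classes: 0 = real, 1 = fake
# ... fine-tune `model` on a labelled deepfake dataset before use ...
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

def fake_probability(frame_path: str) -> float:
    """Return the model's estimated probability that a frame is a deepfake."""
    img = preprocess(Image.open(frame_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(img)
    return torch.softmax(logits, dim=1)[0, 1].item()
```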

The flow of GAN-generated deepfakes onto networks requires not only technical measures for detecting inappropriate content, but also ethical safeguards around the artificial intelligence involved in creating it (for example, compliance with a Code of Ethics for Artificial Intelligence [5]). Artificial intelligence itself has no moral values or capacity for value judgment, which makes it a potentially dangerous tool in the hands of unscrupulous users. Developers of deepfake-generation AI must therefore adhere to ethical requirements when configuring their systems, to prevent users from generating content that violates the rights of others.

The study highlights the need for research into the extent to which artificial intelligence can act autonomously when creating deepfakes, and into the steps that can be taken to mitigate its negative impact.

At present, the issue of liability for the creation and use of deepfakes remains unresolved. Most jurisdictions have no clear laws regulating this area. This creates a permissive environment for spreading false information and for damaging the reputation and privacy of individuals and organizations alike.

The fact that the author of an AI-created work cannot be identified exacerbates the problem of liability for the creation and use of deepfakes. It has been suggested that the problem of ensuring information security can be partly solved by regulating the "authorship regime" from both a legal and a technical viewpoint [6].

Although in 2019 the Russian Federation adopted a law establishing liability for disseminating unreliable socially significant information under the guise of reliable messages [7], the problem of violations of personal rights using deepfake technologies is still not fully understood or addressed [8].

At the moment, artificial intelligence has no legal personality; that is, it cannot be a bearer of rights and obligations. Responsibility for the creation and distribution of deepfakes therefore lies with the operators of these systems, or with the persons who directly use the results of AI work to mislead or cause harm.

Deepfake technologies are becoming increasingly accessible to the general public, leading to their increased use in criminal activity. To slow this trend in Russia, it is necessary to introduce legislative regulation of the operation of neural-network models and to toughen penalties for AI developers and users who violate the rules for processing audio and visual information.

Artificial intelligence can use biometric data such as facial features, voice and motion to create realistic deepfakes. This poses a serious threat to biometric authentication and personal data security. Strict security measures and access controls for biometric data are necessary to prevent its misuse in creating deepfakes.

Identifying the fraudsters involved in creating deepfakes is difficult, but possible through a combination of technological and legal methods.

1. Technical analysis of digital traces.

Cybersecurity experts can analyze the digital traces left by deepfake creators, such as IP addresses, file metadata and machine-learning artifacts. Such analysis can help identify the source and distributor of false information; a minimal sketch of trace collection follows.
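The sketch below assumes Python with the Pillow library (tooling of our choosing, not the article's) and gathers a cryptographic file fingerprint, filesystem timestamps, and embedded EXIF tags, which sometimes record the software that produced the file.

```python
# An illustrative sketch (Pillow and the standard library are our assumed
# tools) of collecting digital traces from a suspect image file: a SHA-256
# fingerprint, filesystem timestamps, and any embedded EXIF tags.
import hashlib
import os
from PIL import Image
from PIL.ExifTags import TAGS

def collect_traces(path: str) -> dict:
    """Gather basic forensic traces from a media file for later analysis."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()  # ties a report to one exact file
    stat = os.stat(path)
    exif = {}
    with Image.open(path) as img:
        for tag_id, value in img.getexif().items():
            exif[TAGS.get(tag_id, tag_id)] = value     # e.g. "Software", "DateTime"
    return {
        "sha256": digest,
        "size_bytes": stat.st_size,
        "modified_unix": stat.st_mtime,
        "exif": exif,
    }
```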

2. Digital signatures and certificates.

Digital signatures can be used to verify the authenticity of multimedia content and to identify its creator. Likewise, certificates that authenticate content authors can help establish the identity of the person responsible for creating a deepfake; a sketch of content signing follows.
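As a hedged illustration, one possible scheme (Ed25519 via the `cryptography` library; the article prescribes no particular algorithm) lets a creator sign a media file so that others can verify it:

```python
# A hedged sketch of content signing with Ed25519 via the `cryptography`
# library (our assumed scheme; the article prescribes none). The creator
# signs the file's bytes; anyone with the matching public key can later
# check that the content is unaltered and attributable.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # kept secret by the content author
public_key = private_key.public_key()       # published, e.g. through a certificate

def sign_file(path: str) -> bytes:
    """Produce a detached signature over the file's raw bytes."""
    with open(path, "rb") as f:
        return private_key.sign(f.read())

def is_authentic(path: str, signature: bytes) -> bool:
    """Verify a detached signature; False means tampering or a different author."""
    with open(path, "rb") as f:
        data = f.read()
    try:
        public_key.verify(signature, data)  # raises InvalidSignature on mismatch
        return True
    except InvalidSignature:
        return False
```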

3. Cooperation with Internet service providers.

Collaboration with Internet service providers and social media platforms can help identify the sources of deepfakes and block their further spread.

4. Development of deepfake detection technologies.

Research into deepfake detection technologies can help automate the identification of fake information and of its creators.

5. Storing data about the AI system used in file metadata.

To increase the transparency and traceability of the deepfake creation process, data about the AI system used can be saved in the file's metadata. This may include information about the configuration of machine learning models, training parameters, and usage and modification history. This approach can simplify the identification of violators and increase accountability for creating fake content. It can be implemented with standard file formats that support embedded metadata, such as EXIF (Exchangeable Image File Format), as the sketch below illustrates.
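A minimal sketch of this proposal, assuming a JPEG input and the piexif library; the provenance fields shown are illustrative assumptions, not an established standard:

```python
# A minimal sketch of the proposal above, assuming a JPEG input and the
# piexif library; the JSON payload fields are illustrative assumptions,
# not an established standard.
import json
import piexif
import piexif.helper

def embed_provenance(image_path: str, model_name: str, model_version: str) -> None:
    """Record AI-system provenance in the EXIF UserComment tag of a JPEG."""
    payload = json.dumps({
        "generator": model_name,       # e.g. the network architecture used
        "version": model_version,      # model or checkpoint identifier
        "content": "ai-generated",     # flags the file as synthetic media
    })
    exif_dict = piexif.load(image_path)                 # existing EXIF, if any
    exif_dict["Exif"][piexif.ExifIFD.UserComment] = \
        piexif.helper.UserComment.dump(payload, encoding="unicode")
    piexif.insert(piexif.dump(exif_dict), image_path)   # write back in place

# Hypothetical usage: label a freshly generated image with its origin.
embed_provenance("generated_face.jpg", "StyleGAN2", "checkpoint-2024-04")
```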

However, it should be borne in mind that storing data about the AI system used in file metadata may raise confidentiality and privacy issues, especially where personal data or commercially sensitive information is concerned.

To ensure confidentiality and anonymity, data protection legislation must be strictly observed when such information is collected and stored: the system should retain only the data needed to identify the responsible person, the author of the work, excluding information protected as a trade secret or as other personal data.

Thus, deepfakes pose a serious threat to cybersecurity and personal privacy. Combating this threat and protecting society from its negative consequences requires strict legal measures, ethical standards, and technological innovation.

In order to protect individuals from the criminal use of deepfakes and unauthorized access to personal and biometric data, it is necessary to implement the following measures:

1. Developing and implementing legal rules that prohibit the creation and distribution of deepfakes, except where permitted (for example, for educational purposes).

It is necessary to create a regulatory framework providing for liability for the creation and distribution of deepfake materials that damage business reputation, demean a person's honor and dignity, or are distributed for other criminal purposes [9].

2. Development of recognition technologies: the use of AI to analyze video and audio materials for authenticity, including the tracing of signs of falsification.

It also seems necessary to develop fact-checking services and tools that are free to use, simple, and require no special education or IT skills. In addition to fact-checking, automated deepfake detection tools are needed that can determine the date, time and physical origin of deepfake content and, where signs of potential danger are found, block its publication.

3. Raising public awareness and teaching people to recognize falsified materials through specialized courses and seminars.

4. Cooperation between government authorities and distribution platforms: interaction with social networks and media platforms to improve filtering algorithms and quickly remove deepfakes.

5. International cooperation. There is a need to develop international standards and agreements to combat the spread of deepfakes globally.

Thus, deepfakes pose a serious threat to cybersecurity in the modern world. A comprehensive approach that includes legislative measures, technological solutions, educational initiatives, and international cooperation is necessary to effectively combat this problem. It is important not only to develop new methods of protection, but also to create a culture of critical perception of information among the population.

References and Sources

1. James Loy. Fundamentals of Generative Adversarial Networks // Habr [Electronic resource]. URL: https://habr.com/ru/articles/726254/ (date of request: 14.04.2024).

2. The three most dangerous deepfake threats of 2023 // Time News [Electronic resource]. URL: https://time.news/the-three-most-dangerous-deepfake-threats-of-2023/ (date of request: 14.04.2024).

3. Information security expert Bederov: 2024 has already surpassed the previous year in the number of detected deepfakes // Rambler [Electronic resource]. URL: https://news.rambler.ru/community/52403927-ib-ekspert-bederov-2024-god-uzhe-oboshel-predyduschiy-po-chislu-vyyavlennyh-dipfeykov/ (date of request: 14.04.2024).

4. FakeCatcher - deepfake detection tool from Intel with 96% efficiency // Overclocker [Electronic resource]. URL: https://overclockers.ru/blog/New_Intel_Raptor_ES/show/78948/fakecatcher-instrument-obnaruzheniya-dipfejkov-ot-intel-c-effektivnostju-na-96 (date of request: 14.04.2024).

5. Ethics of Artificial Intelligence and Robotics // Stanford Encyclopedia of Philosophy [Electronic resource]. URL: https://plato.stanford.edu/entries/ethics-ai/ (date of request: 14.04.2024).

6. Fokina S.I., Vyvolokina A.V., Garkushev A.Yu., Osipenko E.A. "Intellectual property objects created using artificial intelligence as a threat to information security" // Sbornik materialov X Mezhdunarodnoy nauchno-prakticheskoy zaochnoy konferentsii "ETAP-2023". P. 404-408.

7. Federal Law of March 18, 2019 No 27-FZ "On Amendments to the Code of the Russian Federation on Administrative Offenses" // Collection of Legislation of the Russian Federation. 2019. № 12. Art. 1217.

8. Dobrobaba M.B. Deepfakes as a threat to human rights // Lex Russica. 2022. № 11 (192). P. 112-119.


9. Klyueva A.A., Belov D.A. Current legal research of deepfake technologies and new challenges for the Russian legal system // Voprosy rossiyskoy yustitsii. 2021. № 14. P. 607.

10. Ignatiev A.G. Deepfakes in the digital space: main international approaches to research and regulation / A.G. Ignatiev, T.A. Kurbatova. - M.: Avtonomnaya nekommercheskaya organizatsiya "Tsentr kompetentsiy po global'noy IT-kooperatsii", 2023. 54 p.


FOKINA SOFIA I. - master's student, specialist at the Institute of Information Technologies, St. Petersburg State Maritime Technical University; junior researcher at the scientific and educational laboratory "Advanced digital technologies in shipbuilding ("Smart Shipbuilding")" (NOL PCTS) (sofiya.fockina@gmail.com).

VYVOLOKINA, ALBINA V. - master's student, specialist, Institute of Information Technologies, St. Petersburg State Maritime Technical University; junior researcher at NOL PCTS (albina.vyvolokina@mail.ru).

GARKUSHEV, ALEXANDER Yu. - Ph.D. in Technical Sciences, Associate Professor, Head of the Department of Digital Security, St. Petersburg State Maritime Technical University; Head of the laboratory of NOL PCTS (sangark@mail.ru).

OSIPENKO, ELENA A. - Ph.D. in Philology, St. Petersburg State Maritime Technical University (elena.osipenko12061976@gmail.com).
