Research article
DOI: https://doi.org/10.21202/jdtl.2023.38
Towards Legal Regulations of Generative AI in the Creative Industry
Natalia I. Shumakova
Law Institute, South Ural State University (National Research University), Chelyabinsk, Russia
Jordan J. Lloyd
Unseen History, Essex, United Kingdom
Elena V. Titova
Law Institute, South Ural State University (National Research University), Chelyabinsk, Russia
Keywords

artificial intelligence, copyright law, creative industry, digital technologies, generative artificial intelligence, intellectual property, international law, neural network, object of copyright law, subject of copyright law

Abstract
Objective: this article aims to answer the following questions: 1. Can generative artificial intelligence be a subject of copyright law? 2. What risks can the unregulated use of generative artificial intelligence systems cause? 3. What legal gaps should be filled in to minimize such risks? Methods: comparative legal analysis, sociological method, concrete sociological method, quantitative data analysis, qualitative data analysis, statistical analysis, case study, induction, deduction.
Results: the authors identified several risks of the unregulated use of generative artificial intelligence in the creative industry, among which are: violation of copyright and labor law, violation of consumers' rights, and the rise of public distrust in government. They suggest that the prompt development of new legal norms can minimize these risks. In conclusion, the article states that states have already begun to realize that the negative impact of generative artificial intelligence on the creative industry must not be ignored, hence the development of similar legal regulations in states with completely different regimes.
Corresponding author © Shumakova N. I., Lloyd J. J., Titova E. V., 2023
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Scientific novelty: the article provides a comprehensive study of the impact of generative artificial intelligence on the creative industry from two perspectives: the perspective of law and the perspective of the industry. Its empirical basis consists of two international surveys and the expert opinion of a representative of the industry. This approach allowed the authors to improve the objectivity of their research and to obtain results that can be used for finding a practical solution to the identified risks. The problem of the ongoing development and popularization of generative artificial intelligence systems goes beyond the question "who is the author?"; therefore, it needs to be solved by introducing mechanisms and regulations other than the already existing ones. This point of view is supported not only by the results of the surveys but also by the analysis of current lawsuits against developers of generative artificial intelligence systems.
Practical significance: the obtained results can be used to accelerate the development of universal legal rules, regulations, instruments and standards, the current lack of which poses a threat not only to human rights, but also to several sectors within the creative industry and beyond.
For citation
Shumakova, N. I., Lloyd, J. J., & Titova, E. V. (2023). Towards Legal Regulations
of Generative AI in the Creative Industry. Journal of Digital Technologies and Law, 7(4),
880-908. https://doi.org/10.21202/jdtl.2023.38
Content
Introduction
1. The voice of law
2. The voice of the industry
2.1. Generative AI as a subject of copyright law, products of generative AI as objects of copyright law
2.2. Plagiarism, violation of copyrights and other risks
2.3. Labeling products of generative AI
2.4. The voice of the industry being heard
Conclusions
References
Introduction
In the year 2023, even those who never showed interest in the development of generative AI systems have encountered the results of its negative impact on the creative industry due to the strikes of the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) and the Writers Guild of America (WGA), which have already delayed the releases of highly-anticipated products1 and, allegedly, can change the entire industry in the foreseeable future2.
It is safe to say that the named strikes have influenced the academic and legal view on the use of generative AI: from attempts to establish whether generative AI can be seen as a creator and how to protect AI-generated outputs (Wan & Lu, 2021), scholars have switched to studying its impact on artists' livelihoods (Sparkes, 2022) and discussing the requirements for responsible generative AI systems (Diaz-Rodriguez et al., 2023).
Taking into account the results of previous research, the authors of this article identified the need to conduct a comprehensive analysis of the possible risks connected to the unregulated use of generative AI. In order to reveal whether it is actually an existential threat3 to the creative industry, they employed a number of multidisciplinary methods, conducted two surveys on the ethics of the use of generative AI in the creative and cultural industries, and invited a representative of the creative industry to provide an opinion on the subject where needed. Hence the title of this article.
The article is divided into two chapters, "The voice of law" and "The voice of the industry", and includes the results of the conducted surveys, statistics, the results of comparative legal analysis, case studies, etc.
In conclusion, the authors state that, despite the current lack of international legal regulation of the usage of generative AI systems in the creative industry, states have already been coming up with fairly similar draft laws, the final goal of which is to increase the accountability of companies that produce and/or own generative AI systems. The key here is to adopt and enforce such regulations promptly in order to reduce the identified risks and prevent possible harm.
1. The voice of law
The attempts to invent a robot that would be able to create something are not new. In fact, the first robots imitating the creative process were introduced over 500 years ago, which immediately raised the question of whether or not they could replace actual human beings4. In the 18th century, they became known as "automatons" and gained enormous popularity - this is when Jaquet Droz produced his famous automatons that drew pictures, played musical instruments and entertained
Kelley, S. (2023, September 19). All the major movies and TV shows delayed by the strikes. Los Angeles Times. https://clck.ru/36n37w
Belloni, M., & Shaw, L. (2023, September 18). The Strike's Permanent Damage: Who Will Suffer the Most? The Ringer. https://clck.ru/36n38d
We're Fighting for the Survival of Our Profession. SAG-AFTRA Strike. https://clck.ru/36n39G
Marvellous machines: early robots. (2018, November 20). Science Museum. https://goo.su/Scuk
the public in other ways by doing what they were programmed to do5. It would be fair to say that generative AI systems function more or less similarly to those early robots - they do what they were programmed to do by employing various techniques to generate a product based on the data used to train them. And yet for years scholars have been asking a question not that different from the one asked three centuries ago - "Can generative AI be a creator?" (Somenkov, 2019). Usually, this question is immediately followed up by another one - "Can products of generative AI be an object of intellectual property rights and copyright law?" (Agibalova & Perekrestova, 2020). Responses to these questions vary. But this alone proves that the current legal status of generative AI systems is uncertain (Stokel-Walker, 2023). Here, we tend to support the opinion that questions about the relationship between humans and machines in the creative process, and those about the shifting character of the network of relevant stakeholders implicated in this process, are more important, because responses to the others can be found in the existing legislation of most countries (Fenwick & Jurcys, 2023). Nevertheless, it is worth mentioning that there are exceptions, such as China and New Zealand.

Should we take a look at Chinese lawsuits and court resolutions, we might notice that this country tends to practice a mixed approach towards the recognition of an object of copyright law - Y. Wan and H. Lu in their research work provide two examples of it: 1) Beijing Film Law Firm vs. Beijing Baidu Netcom Science & Technology Co Ltd, where the Beijing Internet Court concluded that the object of the dispute was completely generated by AI and therefore could not be protected by copyright; 2) Shenzhen Tencent Computer System Co Ltd vs. Shanghai Yingxun Technology Co Ltd, where the Nanshan District Court of Shenzhen analyzed the actions taken by an actual human in the process of generating the object of the dispute and ruled that its output was protectable under the Copyright Law of China (Wan & Lu, 2021). New Zealand, in its turn, has chosen a completely different approach - according to the section "Interpretation" of its Copyright Act (1994), "computer-generated, in relation to a work, means that the work is generated by computer in circumstances such that there is no human author of the work"6, so theoretically, according to the logic of this norm, generative AI can be a subject of copyright and its products - objects of copyright law. However, Article 5 "Meaning of authorship" does not add it to the list of possible authors; moreover, it says that "the author of a work is the person who creates it"7, which again causes the uncertainty of generative AI's legal status.
In Russia, no special legal regulations for the use of generative AI in the creative industry have been developed yet, but for the goal of this research it is important to study the recommendations and commentaries provided by legal advisors and lawyers in regard to the corporate protection of generated products. Some of them insist that it is high time
5 DNA. Jaquet Droz. https://clck.ru/36nqDJ
6 Copyright Act 1994 No. 143. Version as at 31 May 2023. (2023). Parliamentary Council Office. https://clck.ru/36n3Ds
7 Ibid.
the country developed new mechanisms and institutions to put generative AI systems under control8, whereas others consider the current legal norms sufficient to respond to the new challenges associated with the development of the named technologies and their usage9. Recommendations provided in open sources for businesses in regard to employing generative AI systems should also be a matter of our interest. For instance, A. Semyonov (IT Moscow Digital School) suggests that products generated by AI are not objects of copyright law and thus can be freely used for commercial purposes10. Yu. Brisov (Digital & Analogue Partners) represents the opposite point of view and recommends carefully studying the terms and conditions provided by the creators of each generative AI system, because according to them, not users but owners or creators of such a system can be subjects of copyright law, and that applies not exclusively to the use of Russian generative AI systems11. And indeed, YandexArt, for instance, restricts any commercial use of images and texts generated with their system; moreover, according to their terms and conditions, products generated in the application "Shedevrum" can be used for commercial purposes by the company itself12. Oddly enough, in the press release of the mentioned application, no such information is provided; furthermore, it creates quite the opposite impression13.
Lawyers of the United States and the United Kingdom also tend to publicly express their opinion on the matter. The Joseph Saveri Law Firm on their official webpage claims that products generated with the use of Stable Diffusion, DreamStudio, DreamUp, and Midjourney "infringe on the rights of thousands of artists and creators" and cause nothing less than an actual "financial burden"14. This notion corresponds with the comments provided by D. Lee (BDB Pitmans), in which he highlights that even the lack of adequate terminology in the case of the use of generative AI systems can be harmful. The lawyer also highlights that it can be "challenging to demonstrate tangible harm" due to the specifics of the training process of such systems. Additionally, he suggests that the use of generative AI systems can violate the moral rights of the human creators on whose works those systems were trained, because "the AI's unauthorized use of their work might alter its meaning, potentially
8 Reshetnikova, A. (2019, October 29). A creator or a tool in the author's hands? Advokatskaya Gazeta. https://clck.ru/36n3Ge
9 A brain twister: jurists' glance at artificial intelligence. (2023, April 20). Advokatskaya Gazeta. https://clck.ru/36n3HJ
10 Kildyushkin, R. (2022, July 13). It became known who owns copyright to images created by neural networks. Gazeta.ru. https://goo.su/ER4l
11 Brisov, Yu. (2023, May 25). May one use the creative works of neural networks in business? Bisnes Secrety. https://clck.ru/36n3KB
12 Terms of use of Shedevrum. Yandex. https://clck.ru/3663j8
13 YandexArt. Ya.ru. https://clck.ru/36n3L9
14 AI Image Generator - Copyright Litigation. Joseph Saveri Law Firm. https://clck.ru/36n3LZ
damaging their reputation or the work's artistic value" (the right to object to the derogatory treatment of their work). He then adds that under the current laws the use of copyright-protected material for training generative AI can be seen as "fair"15.
The United States Copyright Office has a special say here. According to its decision of February 21, 2023, AI-generated works cannot be protected by copyright; furthermore, it rescinded the first original registration of a work generated with the use of Midjourney (Kristina Kashtanova's comic book) and recognized as objects of copyright law only its text and the "selection, coordination, and arrangement of text created by the author", but not the generated images16. The UK Court of Appeal takes a position similar to that of the US Copyright Office - according to its recent decision, generative AI systems cannot be inventors and therefore their products cannot be considered objects of patent law17.
The position of Australia towards the use of generative AI systems also cannot be ignored - the Albanese government, for example, considers generative AI systems an existential threat due to their ability to produce "deep-fakes", multiply disinformation and influence democratic processes in other ways, hence the recent discussion of either banning them or putting them under control18. Meanwhile, according to a recent survey conducted by BlackBerry Limited, 93 % of Australian companies are currently implementing or considering the implementation of bans on generative AI systems within the workplace because they see them as a threat to both security and reputation19. BlackBerry Limited in their research20 also demonstrates that this trend is global and that 75 % of companies worldwide share the Australian point of view on these digital technologies, despite admitting that they could be a useful instrument.
In order to understand the possible negative impact of the use of generative AI systems, two of the House of Commons' committees conducted comprehensive investigations, the results of which were reported earlier this year21, 22: both of the reports revealed a real possibility of violation of copyright, intellectual property rights and labor rights, and the threat
15 AI authors - what a US lawsuit could mean for UK IP law. (2023, August 10). The Trademark Lawyer. https://clck.ru/36n3PR
16 Re: Zarya of the Dawn (Registration # VAu001480196). (2023, February 21). United States Copyright Office. https://clck.ru/36n3Pk
17 Neutral Citation Number: [2021] EWCA Civ 1374 Case No: A3/2020/1851. British and Irish Legal Information Institute. https://clck.ru/36n3Qb
18 Safe and responsible AI. (2023, June 1). Ministry for Industry and Science. https://goo.su/rs4z
19 Organisations in Australia set to ban ChatGPT and generative AI apps on work devices. APDR - Asia-Pacific Defence Reporter. (2023, August 14). https://clck.ru/36KzWP
20 Why Are So Many Organizations Banning ChatGPT? (2023, August 8). BlackBerry. https://clck.ru/36n3S4
21 UK Parliament. (2023). Connected tech: AI and creative technology: Eleventh Report of Session 2022-23. https://clck.ru/36n3Sf
22 UK Parliament. (2023). The governance of artificial intelligence: interim report: Ninth Report of Session 2022-23. https://clck.ru/36n3TN
of mass-production of disinformation, "deep-fakes" and other illegal content if the current legal gaps, including the abstract terminology, are not filled in the near future. All in all, the recommendations provided in the first report23 correspond with the recommendations of the UK Intellectual Property Office - the UK legislation needs to be changed in order to be able to adequately respond to the challenges caused by the development of digital technologies24. The results of the named investigations were used to formulate a list of social harms that can be caused by the ongoing unregulated use of generative AI systems, among which are: degradation of the information environment; labor market disruption; bias and representational harms25.
Still and all, up to this day, China is the only country that has already regulated the use of generative AI systems, hence the importance of analyzing its approach. Article 7 of the "Interim Measures for Generative Artificial Intelligence Service Management", which came into force earlier this year, requires generative AI systems to be trained only on ethically-sourced data in order to prevent any possible violation of copyright or intellectual property rights, whereas Article 12 obliges providers of generative AI services to label their products as such26. Chinese lawyers clarify that, according to the new rules, providers are also required to label data in the process of research and development27; additionally, they confirm that public commentaries on the draft of these measures were taken into account28. And in order to make the enforced regulations work, the National Information Security Standardization Technical Committee released the "Network Security Standard Practice Guide - Generative Artificial Intelligence Service Content Identification Method", which provides detailed information on how products of generative AI should be labelled and why it needs to be done29. Thus, it is fair to claim that China is the pioneer in the legal regulation of the usage of generative AI systems.
2. The voice of the industry
The analysis of the current attempts to regulate the use of generative AI systems shows that the UK and China try to take into account the voice of the industries (both - the creative and the cyber ones) and consumers of their products. In fact, the voices of human creators
23 UK Parliament. (2023). Connected tech: AI and creative technology: Eleventh Report of Session 2022-23. https://clck.ru/36n3Sf
24 IPO Transformation programme: second consultation. (2023, August 22). GOV.UK. https://clck.ru/36n3zQ
25 AI safety summit. Department For Science, Innovation and Technology. https://clck.ru/36n3zq
26 Interim Measures for Generative Artificial Intelligence Service Management. (2023, July 10). https://goo.su/fbbG
27 Regulatory and legislation: China's Interim measures for the Management of Generative Artificial Intelligence Services officially implemented. (2023, August). https://clck.ru/36n43s
28 Cai, R., & Zhu, W. (2023, July 14). Comparative Analysis of China's New Generative AI Regulations. Zhong Lun. https://clck.ru/36n44e
29 Network Security Standard Practice Guide - Generative Artificial Intelligence Service Content Identification Method, No. TC260-PG-20233A. (2023). https://goo.su/Gl6Shf1
have become so loud recently that even the Senate of the USA had to listen to them30. Lawsuits, congressional hearings and, of course, the strikes - all of these can be considered signs of a growing public, or to be more precise, political distrust. And indeed, when nationals of a country feel uncertain about their future (Küçükkömürler & Özkan, 2022), feel that they have been "left behind" (Stroppe, 2023), or consider their government unable to take appropriate legal actions to reduce the risks that those nationals see as an existential threat, they tend to take actions such as strikes, protests and rallies (Torres & Bellinger, 2014). And certainly, it does not help the situation when media giants like Time release information about corporations like OpenAI lobbying their interests to "water down Europe's AI rules"31 and succeeding in it32. Furthermore, it seems that the usual negotiators, whose entire purpose of existence is to represent the lawful interests of the creative industry, have been doing the exact opposite33. On top of that, opinion leaders such as Alex Winter also publicly express their political distrust, accusing the government of being "captured by BigTech" and calling The People's Summit34 more essential than the AI Safety Summit35, which, in their opinion, will only worsen the situation because for governments "it's impossible to protect their citizens"36. Hence the importance of studying the opinion of the creative industry and the consumers of its products, which in this article are expressed in the results of two international surveys and in the expert commentary provided by the co-author of this article, Jordan J. Lloyd (written in italics).
The surveys were conducted on social media and Telegram from July 11 to October 11, 2023.
Geography of the surveys:
103 of the 117 English-speaking respondents provided information about their residency; according to the responses, they represent 21 countries: the US, the UK, Argentina, Canada, Belgium, Germany, France, Norway, the Netherlands, Turkey, Denmark, South Africa, Chile, the Czech Republic, Serbia, Australia, Austria, Italy, Ireland, New Zealand and Sweden (Fig. 1). The absolute majority of them work in the creative/cultural industry - 85.5 %, and only 14.5 % of the English-speaking respondents are consumers of its products (Fig. 2).
30 Artificial Intelligence and Intellectual Property - Part II: Copyright. Subcommittee on intellectual property. https://clck.ru/36n3aE
31 Big Tech Is Already Lobbying to Water Down Europe's AI Rules. Time. https://clck.ru/36n3ak
32 Exclusive: OpenAI Lobbied the E.U. to Water Down AI Regulation. Time. https://clck.ru/36n3bJ
33 We're Fighting for the Survival of Our Profession. SAG-AFTRA Strike. https://clck.ru/36n39G
34 The People's AI Summit - The citizens. YouTube. https://clck.ru/36n3cb
35 AI Safety Summit: introduction. GOV.UK. https://clck.ru/36n4AB
36 AI's threat to democracy and labour looms large. UK's 'doomsday' AI summit is poised to make things worse. Big Issue. https://clck.ru/36n3dx
Fig. 1. Optional question: What country are you from? (103 responses)

Fig. 2. Do you work in the cultural or creative industry (artist/translator/musician/journalist/actor/writer/designer/content maker/other kind of creator)? (117 responses)
31 of the 36 Russian-speaking respondents also provided such information; according to their answers, they represent 4 countries: Russia - 90.4 %, Moldova - 3.2 %, Poland - 3.2 % and Latvia - 3.2 %. The absolute majority of them are involved in the creative industry too - 72.2 %, while 27.8 % of the Russian-speaking respondents are consumers of its products.
Ethics of the surveys: the surveys were anonymous; all of the respondents were informed about the possible use of their responses for academic purposes.
2.1. Generative AI as a subject of copyright law, products of generative AI as objects of copyright law
In the previous chapter, we established that neither academics nor law-makers have a universal understanding of whether we can consider generative AI a creator. Mr. Lloyd provided his point of view here, according to which generative AI cannot be seen as such:
"Copyright Law as written covers expressions created by human endeavour. As noted, the creation of prompts is based on human imagination, but the resulting process and generated asset is not, therefore cannot be copyrighted if we accept the prevailing mindset. I akin Generative AI to a form of gambling, like a slot machine at a casino. Spinning the reels creates variations, where you can lock in certain variations you like, then spin the reel again to achieve a more desirable result. This is, more or less, how prompters work when utilising Generative AI". The question that always follows the discussion about the legal status of generative AI is whether we can protect products generated with the use of it as objects of copyright law and intellectual property rights. Again, as we established earlier, under the letter of law it is possible in several countries. But the question is - should we do it? "No, or at least, it should have a new form of copyright / intellectual property (IP) protection framework to cover assets generated by AI as a distinctly separate entity from existing copyright law. The existing copyright framework is not perfect but it is well established, benefiting creators and IP businesses alike. The protections and reimbursements offered by the existing system are of course, under threat from the deluge of AI generated assets. I read somewhere that it took just nine months to generate as many 'new' artworks as there have been in the entirety of recorded history. Clearly, copyright and IP legislation will need to act fast in order to protect original creators".
These questions were asked in the surveys and the results clearly indicated a view common among the creative industry and the consumers of its products: 65 % of the English-speaking respondents do not think that products of generative AI should be protected by copyright law (Fig. 3) and the same percentage do not consider that such products should be protected by intellectual property rights (Fig. 4), whereas 11.1 % think that products generated with the use of AI should be protected by copyright law (Fig. 3) and 9.4 % suggest that such products should be protected by intellectual property rights (Fig. 4).
Fig. 3. Should works generated with AI be protected by copyright? (117 responses)

Fig. 4. Should works generated with AI be protected by intellectual property rights? (117 responses)
The Russian-speaking respondents answered the same questions, and 61.1 % of them consider that such products should be protected neither by copyright law (Fig. 5) nor by intellectual property rights (Fig. 6). However, 25 % of the Russian-speaking respondents think that products of generative AI should be protected as objects of copyright law (Fig. 5) and the same percentage of them suggest that products of generative AI should be protected by intellectual property rights (Fig. 6).
Fig. 5. Should works generated with AI be protected by copyright? (36 responses)

Fig. 6. Should works generated with AI be protected by intellectual property rights? (36 responses)
Another question that is yet to be answered both by creators and consumers is whether generated products have artistic and cultural value, and whether they can actually be valued as much as products of creative human expression. "That's a very good question. For me the issue is that the average person will soon not be able to tell the difference between the two. Creative endeavours are subject to personal preference and opinion. For me, I am now far more interested in the process of creation and the addition of context and human imagination when I engage with a piece of work, and the savvy creators will incorporate videos of their process as a form of authenticity marker to their audience. Even the most unscrupulous 'prompt artist' cannot do that. And they have certainly tried".

Art critics also have a say here: some of them compare artworks generated by AI systems to those produced by monkeys, for both lack "intentionalism" (Fadeeva, 2023); others consider a mixture of digital technologies and traditional art a new reality (Stepanov, 2022; Bylieva & Krasnoschekov, 2023); and some claim that the use of such technologies is nothing but another step towards dehumanization and demonstrate that an ordinary person does not always understand which artwork is human-made and which is generated by AI (Panteleev, 2023).
2.2. Plagiarism, violation of copyrights and other risks
Two more claims that we need to discuss are whether generative AI can cause unfair competition and whether or not the industry actually considers that producers and owners of generative AI systems violate copyright37.
"Yes, on both counts. As the numerous lawsuits and litigations filed earlier this year attest to, the developers of these platforms have to a greater or lesser extent, known about the existence of vast numbers of copyrighted material in their AI datasets. This is the big elephant in the room so to speak. Without exaggeration, the use of copyrighted material on this scale is so large and unprecedented it is almost an abstract entity, which makes it in some cases difficult to prove. But the proof is certainly out there.
The other side of the equation too is compensation. Creatives are being replaced, as simple as that. There are too many numerous examples to count, but there is a substantial material impact on the creative industries, which has traditionally been underpaid and relies largely on a patronage model. I always thought creatives were the canaries in the coal mine, so to speak. If left unchecked and unregulated, then there will not be many industries which would not be materially affected in some way by AI.
A couple of things to note here: one is this populist notion that creative people are Luddites who are against technology. I don't believe that rhetoric for a moment. It is not the technology that is the issue, it is the abuse of it as I noted earlier. Automation in factory work is arguably necessary as repetitive tasks in particular environments pose a risk to life. The same cannot be said for automating the culture we collectively view as sacred, and like any medium, can be turned to nefarious ends. So therefore, it's not just a question of copyright, but also of the impact of how the technology affects us in our day to day lives".
All of the above can be supported by the demands of the SAG-AFTRA38 and WGA39 strikes and those of the Authors Guild40, as well as by lawsuits against producers of generative AI systems, such as: 1) Sarah Andersen's, Kelly McKernan's and Karla Ortiz's class action versus STABILITY AI LTD, a Delaware corporation, and DEVIANTART41; 2) Authors Guild v. OpenAI Inc., where the most notorious claim is that OpenAI does not even deny that they train their systems on materials protected by copyright42.
The opinion expressed by the English-speaking respondents correlates with it too: 72.5 % of the English-speaking respondents agree that producers of generative AI systems violate copyright, whereas 11.1 % of them disagree with this notion (Fig. 7). Moreover, 76.9 %
37 Case updates. Stable Diffusion litigation. (2023, October 31). https://clck.ru/36n4fM
38 We're Fighting for the Survival of Our Profession. SAG-AFTRA Strike. https://clck.ru/36n39G
39 WGA Contract 2023. Summary of the 2023 WGA MBA. https://clck.ru/35shcD
40 Artificial Intelligence. The Authors Guild. https://clck.ru/36n4h8
41 United States District Court Northern District of California San Francisco Division. Stable Diffusion litigation. https://clck.ru/36n4hr
42 Authors Guild v. OpenAI Inc. (1:23-cv-08292). Court Listener. https://clck.ru/36n4mC
of the respondents believe that such companies violate intellectual property rights; however, 14.5 % express the opposite opinion (Fig. 8).
Fig. 7. Do you agree that creators of image/text/video/sound generators violate copyrights? (117 responses)

Fig. 8. Do you agree that creators of image/text/video/sound generators violate intellectual property rights? (117 responses)
The Russian-speaking audience demonstrated the opposite trend: 50 % of them do not think that producers of generative AI systems violate copyright (Fig. 9) and 58.3 % disagree with the notion that such companies violate intellectual property rights (Fig. 10). Only 19.4 % of the Russian-speaking respondents share their foreign colleagues' point of view on the violation of copyright by producers of generative AI systems (Fig. 9) and only 16.7 % support the opinion about the violation of intellectual property rights by such companies (Fig. 10). In both cases, a large percentage of respondents are not sure about their position: 30.6 % (Fig. 9) and 25 % (Fig. 10), respectively.
Fig. 9. Do you agree that creators of image/text/video/sound generators violate copyrights? (36 responses)

Fig. 10. Do you agree that creators of image/text/video/sound generators violate intellectual property rights? (36 responses)
The SAG-AFTRA strike made it perfectly clear: they consider AI an existential threat to their profession, thus the slogan "We're fighting for the survival of our profession". What they mean is that generative AI systems allow studios to hire an actor for one working day, pay them a minimum wage, but then reproduce the image and the voice of this actor whenever and however they want43. Hence, another question: will the creative industry survive the impact of such a mass usage of generative AI systems? Or is it a real threat that should not be ignored before it is too late?
"In my line of work, I've seen other practitioners charge good money to effectively run photographs through AI filters and call it the finished result. In order to adapt, I've leaned into the process and the contextualisation of the work as the primary generators of value, because it is an authentic representation of human endeavour.
The threat has already been and gone, and my niche industry trained. However, as the adage goes: you get what you pay for. There will always be a demand for human led curation, restoration and contextualisation in my particular field, and it has led to some interesting developments on how to make revenue by drawing on your strengths, rather than compensate for weaknesses. Generative AI simply cannot replicate many of the processes we've set up. We'll quietly do our own thing, and leave it at that" - comments Mr. Lloyd.
The opinions shared by the English-speaking respondents are a bit less optimistic: 60.7 % suggest that generative AI poses a real threat to the creative industry's jobs, 18.8 % disagree with them, 17.9 % are not sure and 2.6 % claim that they have already been replaced with generative AI (Fig. 11).
Again, the Russian-speaking audience showed the directly opposite trend: 75 % of the respondents do not see generative AI as a threat to the industry, 16.7 % do, 8.3 % are not sure and none of the respondents have been replaced by generative AI yet (Fig. 12).
Fig. 11. Do you think AI will replace cultural/creative industries jobs? (117 responses)

Fig. 12. Do you think AI will replace cultural/creative industries jobs? (36 responses)
43 We're Fighting for the Survival of Our Profession. SAG-AFTRA Strike. https://clck.ru/36n39G
It is worth mentioning that the responses of the Russian-speaking audience correlate with the general view of the Russian creative industry on these technologies - they tend to see them only as an instrument and make the philosophical commentary that instruments do not have a soul and therefore cannot be creators, meaning they will never be able to replace human creators44. But can there be benefits of using generative AI as an instrument in the creative industry? "First and foremost, it's important to make some distinctions which are being conflated in the discussions about AI today. Fundamentally as an aid or tool in specific applications, AI processes make things possible which were not possible before, and they are specific to particular workflows. In my career working with archive visual material - such as photographic scans - upscaling to a larger resolution is only possible with the use of AI. There are other workflows which are highly specialised where the application of AI as a tool or aid is simply part of much longer technical process.
The problem arises when users conflate the idea of an 'aid' or 'tool' with the wholesale creation of a new piece of material; whether or not it's a piece of artwork in the style of a living artist, or a piece of prose generated from a few text prompts. This 'generative' usage of an AI process is different to the usage I described above. It is not an aid for example to create a Derivative or Transformative Work in my opinion, merely an imitation of something created by someone else.
To put it another way: there's a spectrum between *use* and *abuse*. I've had many discussions with creatives about the use of Generative AI. I know one artist who uses Midjourney to simply generate some different compositions around a subject, and then picks one to then use as a visual reference for an entirely original work done by hand. I can imagine that would be a timesaver when faced with commercial deadlines, and to me, an acceptable use of the technology.
Let's compare that with an instance I can think of where a self-published author won a prize based on their cover art, only to discover the artist had charged a considerable sum of money to create a cover featuring entirely Generated art collaged together. It is arguable whether or not the generated art could really be considered a Derivative or Transformative Work as something like that under UK law requires 'itself [to] be an original work of skill, labour and judgement'. Further, 'minor alterations that do not substantially alter the original would not qualify.'
In the case of the book cover artist, it could be argued the only creative act involved was the final arrangement of composition of the generated assets. In the case of my artist acquaintance, the process of creation was entirely by human hand and imagination.
From a generative standpoint: the only possible way I can think of for it to be truly ethical is if the dataset was only based on original works that you have provided, or taken from the Public Domain. Sadly, as we all know that is not the case".
44 At Gorkiy fest, the problem of neural networks participation in cinematography was discussed. Bulleten Kinoprokatchika. https://clck.ru/36n4pf
Again, all of the mentioned points can be supported by the results of the investigations conducted by the House of Commons earlier this year: the experts who participated in them expressed concern about the abuse of generative AI technologies that becomes possible due to the identified legal gaps, including the rise of plagiarism, the replacement of human creators with generative AI and the violation of other rights; however, they also suggested encouraging the use of AI technologies (not just the generative ones) in the industry because of their enormous potential, but only if such technologies are used ethically45, 46.
As another case of abuse of generative AI technologies, we can provide an example of the most recent and quite scandalous Russian lawsuit - Alena Andronova against Tinkoff Bank. A dubbing actress, she recorded her voice for the bank's needs, but it was then synthesized and used by a third party to dub several types of illegal content, which allegedly resulted in her losing contracts47.
And what are other risks the industry has been facing due to the mass-usage of generative AI technologies? "As noted, unscrupulous actors simply wanting to cash in on an industry which is small but perpetually of great interest to the public. Many historians rightfully are alarmed at the decontextualisation of historical material and the lack of attribution. I agree with them in this respect. I'm not entirely sure what the way out is, but I'm confident the industry is small enough to not go into cataclysmic collapse because of the introduction of AI. Practitioners should be aware of their ethical responsibilities in the pursuit of their work".
2.3. Labeling products of generative AI
From the analysis of the Chinese approach towards the legal regulation of generative AI, we conclude that AI-labeling is seen as a measure to protect both artists and users of generative AI systems48. Recently, several companies have begun to offer their services to do exactly the same49, 50 - to create "AI nutrition labels" in order to increase transparency and encourage the responsible usage of generative AI systems; according to their claims, such a simple action as putting a label of "AI ingredients" can prevent the abuse of these technologies.
45 Connected tech: AI and creative technology: Eleventh Report of Session 2022-23. (2023). UK Parliament. https://clck.ru/36n3Sf
46 The governance of artificial intelligence: interim report. Ninth Report of Session 2022-23. (2023). UK Parliament. https://clck.ru/36n3TN
47 Information on the primary document № M-6609/2023. Oficialniy Portal Sudov Moskvy. https://clck.ru/36KzHu
48 Interim Measures for Generative Artificial Intelligence Service Management. (2023). https://goo.su/fbbG
49 AI Nutrition Facts. Twilio. https://clck.ru/36n4xc
50 Open Ethics Label: AI nutrition labels. Open Ethics. https://clck.ru/36n4yq
From monitoring the news, we can also suggest that politicians51 and digital-security experts52 support these claims; furthermore, all of them suggest that such labeling must be obligatory, because otherwise we cannot prevent the ongoing spread of misinformation and "deep-fakes", which is also crucial considering the fact that the British government has already linked it to such a dangerous threat as terrorism53.
But does the industry agree that this measure can be as effective as the providers54, 55 of AI-labeling services claim? "I very much doubt it, though it would be a welcome legal requirement. I akin to any form of advertising as noted earlier. Consumers should be aware if something they see or read is generated by AI, and held to the same regulatory standards as advertisers with their products. 'False Advertising' is a well-established regulatory process. Time and time again when a form of marketing by organisations has been called out for using AI generated assets, the initial denials are usually met with a begrudging acceptance followed by a proclamation to adjust their working practices".
Our respondents almost unanimously said "Yes, products of generative AI should be labeled as such": 88 % of the English speakers support this idea and only 7.7 % find it unnecessary (Fig. 13), and 80.6 % of the Russian speakers consider that labeling AI-products should be obligatory, whereas only 13.9 % dislike this idea (Fig. 14).
Fig. 13. Should the works generated with AI be labeled as such? (117 responses)

Fig. 14. Should the works generated with AI be labeled as such? (36 responses)
51 AI generated content should be labelled, EU Commissioner Jourova says. Reuters. https://clck.ru/36n5B8
52 Ministry of Digital Development was offered to introduce marking of the content created with neural networks. (2023, May 15). TASS. https://clck.ru/34RfkG
53 AI safety summit. Department For Science, Innovation and Technology. https://clck.ru/36n3zq
54 AI Nutrition Facts. Twilio. https://clck.ru/36n4xc
55 Open Ethics Label: AI nutrition labels. Open Ethics. https://clck.ru/36n4yq
It is necessary to add that, technologically, it is possible to effectively label or, as other researchers call it, "watermark" all sorts of data, including digital audio (Patil & Shelke, 2023), and even to do it invisibly if needed (Liu et al., 2022). Furthermore, it is possible to create a screen-shooting resistant watermark (Cao et al., 2023). Various watermarking methods can help with content authentication (Yuan et al., 2024), protection and even recovery of content (M. Swain & D. Swain, 2022). However, other research works demonstrate that a watermark within a neural network, for example, should not be seen as a panacea because it can be removed (Aiken et al., 2021).
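To give a sense of the basic idea in its simplest form, the sketch below hides a short label in an image by rewriting the least significant bit of each pixel value. This is a minimal, hypothetical illustration written for this article only; it is not the method used in any of the works cited above, which rely on far more robust frequency-domain and neural-network schemes precisely because a naive mark like this one is invisible to the eye yet trivial to strip by re-compressing the file.

# Deliberately naive least-significant-bit (LSB) watermark: illustrative only,
# not any of the cited methods, and easily destroyed by re-saving or editing.
import numpy as np

def embed_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide watermark bits in the least significant bit of each 8-bit pixel."""
    flat = image.flatten().astype(np.uint8)        # flatten() returns a copy
    n = min(len(bits), len(flat))
    flat[:n] = (flat[:n] & 0xFE) | (bits[:n] & 1)  # overwrite only the lowest bit
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, length: int) -> np.ndarray:
    """Read the hidden bits back out of the least significant bits."""
    return image.flatten()[:length] & 1

# Example: invisibly mark a random 64x64 "image" with the label "AI".
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
label_bits = np.unpackbits(np.frombuffer(b"AI", dtype=np.uint8))
marked = embed_watermark(image, label_bits)
recovered = np.packbits(extract_watermark(marked, len(label_bits))).tobytes()
assert recovered == b"AI"   # the label survives; pixel values change by at most 1

The robust schemes cited above pursue the same goal, embedding the mark so that it survives cropping, compression and even screenshots, which this toy example deliberately does not attempt.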
2.4. The voice of the industry being heard
"Intellectual Property constitutes a major contributor to the national economy of the United Kingdom; from our scientific research to our cultural output in the arts. As with many countries, arts funding and access has always been challenging, and the advent of Generative AI will certainly accelerate some negative aspects of it. I believe it is in the interests of our legal framework to regulate as quickly as possible".
One of the questions of our survey was whether our respondents believed that the current laws of their country could protect them as professionals against the negative impact of generative AI, and the gathered data supports the opinion that the inability of states to adequately and timely address the concerns of their nationals is a cause of public political distrust in government: 72.6 % of the English-speaking respondents do not trust the current legislation of their countries with it, 3.4 % think that they can be protected by the existing legal norms, 13.7 % are not sure and 10.3 % are consumers of the creative industry's products, so this question was not meant for them (Fig. 15).
The Russian-speaking audience again demonstrates a more optimistic attitude; nevertheless, 50 % of the respondents do not trust the current laws of their countries with the protection against generative AI, 16.7 % believe that they are already protected enough and 33.3 % are not sure (Fig. 16).
Fig. 15. Do you think that the current laws of your country protect you as a professional enough against the negative impacts of generative AI? (117 responses)

Fig. 16. Do you think that the current laws of your country protect you as a professional enough against the negative impacts of generative AI? (36 responses)
But the voice of the industry clearly has not been ignored - numerous draft laws have been appearing all over the globe, the final goal of which is to protect both the creative industry and the consumers of its products, and to increase the transparency and responsibility of the usage of generative AI systems.
The WGA, for example, ended their strike in September - an agreement was reached and is to be ratified; according to it: 1) AI can't write or rewrite literary material, and AI-generated material will not be considered source material under the MBA, meaning that AI-generated material can't be used to undermine a writer's credit or separated rights; 2) a writer can choose to use AI when performing writing services, if the company consents and provided that the writer follows applicable company policies, but the company can't require the writer to use AI software (e.g., ChatGPT) when performing writing services; 3) the company must disclose to the writer if any materials given to the writer have been generated by AI or incorporate AI-generated material; 4) the WGA reserves the right to assert that exploitation of writers' material to train AI is prohibited by the MBA or other law56.
The voice of Alena Andronova has also been heard - even though the court left her case without movement57, after she teamed up with the Union of Narrators and other victims whose voices "have been stolen"58 to prove that a human voice is biometric data and thus should not be collected without consent, the Federation Council has come up with a decision to protect human voices from the negative impact of generative AI and deep-synthesis technologies and to prevent further legal collisions59.
The Senate of the USA has been listening to the voice of the industry too - they have come up with a legal act similar to the Russian one, currently known as the "No fakes law", which is supposed to put under legal protection the "image, voice and visual likeness" of individuals for their entire life and for 70 years after the individual's death60.
The European Parliament has apparently found inspiration in the Chinese approach61 towards the regulation of generative AI, because it now demands the following from producers of generative AI systems: 1) Disclosing that the content was generated by AI; 2)
56 WGA contract 2023. Summary of the 2023 WGA MBA. https://clck.ru/35shcD
57 Information on the primary document № M-6609/2023. Oficialniy Portal Sudov Moskvy. https://clck.ru/36KzHu
58 Andronova, A. (2023, August 30). We beg to protect our voices from theft and fraud! CHANGE ORG. https://clck.ru/36KzMK
59 Federation Council was offered to protect a human voice and its synthesis. PRAVO.RU. https://clck.ru/36j2Sy
60 Senate Legislative Counsel Draft Copy of EHF23968 GFW - To protect the image, voice, and visual likeness of individuals, and for other purposes. Senate GOV. https://clck.ru/36nutL
61 Interim Measures for Generative Artificial Intelligence Service Management. (2023). https://goo.su/fbbG
Designing the model to prevent it from generating illegal content; 3) Publishing summaries of copyrighted data used for training62.
Additionally, corporations like Microsoft63, Adobe64 and Google65 have decided to implement protection for users of their generative AI systems against copyright and IP-related lawsuits, even promising to pay legal damages in such cases. Microsoft explains that the new measures will also help human creators "retain control of their rights under copyright law and earn a healthy return on their creations"66.
Conclusions
The conducted research revealed that currently there is no universal understanding of whether generative AI can be considered a subject of copyright law and its products - objects of copyright law and IP rights, just as there is no international legal framework capable of regulating the mass use of such technologies. Should such regulations not be developed promptly, harm to the creative industry and, through it, to national economies will be inevitable. Among the risks of the unregulated use of generative AI systems, our analysis identified the following: 1) violation of copyright and IP rights; 2) violation of moral rights; 3) violation of labor rights; 4) disruption of the labor market; 5) violation of consumers' rights; 6) mass production of illegal content; 7) a crisis of originality; 8) unfair competition; 9) public distrust in government; 10) public disorder; 11) extremism and terrorism.
To minimize the identified risks, it is important to promptly develop new international and national legal frameworks, which will help increase accountability of producers, owners and users of generative AI systems and will make them liable for abuse of these technologies: "First and foremost, the developers of these AI services should be open to scrutiny and not rely on technical obfuscation and held accountable for their training data. No one would be having a problem with this if the developers simply stuck to Public Domain material and Opt-in participation. Second, a fairer form of compensation for creators whose work has ended up in these training sets. If we have the means to scrape data en masse, then we have the means to fairly acknowledge the role of creatives in this process and pay them accordingly. Third, commercial usage should be formalised and regulated. The stock photography industry is very much thriving and well established with comparatively little abuse of the system which makes commercial
62 EU AI Act: first regulation on artificial intelligence. EU Parliament. https://clck.ru/36n5Lv
63 Microsoft announces new Copilot Copyright Commitment for customers. Microsoft. https://clck.ru/36n5MQ
64 Adobe offers copyright indemnification for Firefly AI-based image app users. Computer World. https://clck.ru/36n5Mx
65 Shared fate: Protecting customers with generative AI indemnification. Google. https://clck.ru/36n5NP
66 Microsoft announces new Copilot Copyright Commitment for customers. Microsoft. https://clck.ru/36n5MQ
sense for the platform holders and the creatives who submit their work to them. I can't see why an opt-in arrangement regarding Generative AI can't be implemented in some form to stop the rampant abuse. Fourth, search engines in particular should be vigilant in how they present AI generated material. How this is achieved on a technical level is not for me to say, but again, possible".
We can also state that countries with different regimes have begun to adopt more or less similar measures close to those enforced in China67, which include: 1) transparency about the data used for training; 2) labeling of generative AI products; 3) liability for violation of copyright and intellectual property rights; 4) protection of the image, voice and likeness of an individual. In our opinion, in the foreseeable future the use of generative AI systems will be regulated by similar measures at the international level as well.
In conclusion, we would like to highlight that the question of the ethical use of generative AI goes far beyond the question "Who's the author?" and affects not only the creative industry but also states' economies and even democratic institutions themselves, as shown by our analysis; hence the necessity of filling in the existing legal gaps, including such a simple thing, at first glance, as the lack of appropriate terminology.
References
Agibalova, E. N., & Perekrestova, E. A. (2020). Copyright for the works created by artificial intelligence. Ehpokha nauki, 24, 124-126. (In Russ.). https://doi.org/10.24411/2409-3203-2020-12424
Aiken, W., Kim, H., Woo, S. S., & Ryoo, J. (2021). Neural network laundering: Removing black-box backdoor watermarks from deep neural networks. Computers & Security, 106, 102277. https://doi.org/10.1016/j.cose.2021.102277
Bylieva, D., & Krasnoschekov, V. (2023). The original and a copy: a technological challenge to art. Bulletin of the Moscow Region State University. Series: Philosophy, 2, 77-91. (In Russ.). https://doi.org/10.18384/2310-7227-2023-2-77-91
Cao, F., Wang, T., Guo, D., Li, J., & Qin, C. (2023). Screen-shooting resistant image watermarking based on lightweight neural network in frequency domain. Journal of Visual Communication and Image Representation, 94, 103837. https://doi.org/10.1016/j.jvcir.2023.103837
Diaz-Rodriguez, N., Del Ser, J., Coeckelbergh, M., De Prado, M. L., Herrera-Viedma, E., & Herrera, F. (2023). Connecting the dots in trustworthy Artificial Intelligence: From AI principles, ethics, and key requirements to responsible AI systems and regulation. Information Fusion, 99, 101896. https://doi.org/10.1016/j.inffus.2023.101896
Fadeeva, T. E. (2023). "Union" of an artist with a non-human agent: utopia or a working model of artistic production? Izvestiya of the Samara Science Centre of the Russian Academy of Sciences. Social, Humanitarian, Biomedical Sciences, 25(88), 108-115. (In Russ.). https://doi.org/10.37313/2413-9645-2023-25-88-108-115
Fenwick, M., & Jurcys, P. (2023). Originality and the future of copyright in an age of generative AI. Computer Law & Security Review, 105892. https://doi.org/10.1016/j.clsr.2023.105892
Küçükkömürler, S., & Özkan, T. (2022). Political interest across cultures: The role of uncertainty avoidance and trust. International Journal of Intercultural Relations, 91, 88-96. https://doi.org/10.1016/j.ijintrel.2022.09.004
Liu, G., Xiang, R., Liu, J., Pan, R., & Zhang, Z. (2022). An invisible and robust watermarking scheme using convolutional neural networks. Expert Systems With Applications, 210, 118529. https://doi.org/10.1016/j.eswa.2022.118529

67 Interim Measures for Generative Artificial Intelligence Service Management. (2023). https://goo.su/fbbG
Panteleev, A. F. (2023). The problem of comparative evaluation of paintings created by an artist and generated by a neural network. Izvestiya of Saratov University. Philosophy. Psychology. Pedagogy, 23(3), 326-330. (In Russ.). https://doi.org/10.18500/1819-7671-2023-23-3-326-330
Patil, A. P., & Shelke, R. (2023). An effective digital audio watermarking using a deep convolutional neural network with a search location optimization algorithm for improvement in Robustness and Imperceptibility. High-Confidence Computing, 100153. https://doi.org/10.1016/j.hcc.2023.100153
Somenkov, S. A. (2019). Artificial intelligence: from object to subject? Courier of the Kutafin Moscow State Law University, 2(54), 75-85. (In Russ.). https://doi.org/10.17803/2311-5998.2019.54.2.075-085
Sparkes, M. (2022). AI copyright. New Scientist, 256(3407), 17. https://doi.org/10.1016/s0262-4079(22)01807-3
Stepanov, M. A. (2022). De-Autonomy of Post-Human Imagination: New Directions in the Theory of Art. Actual Problems of Theory and History of Art (No. 72, pp. 663-673). (In Russ.). http://dx.doi.org/10.18688/aa2212-07-53
Stokel-Walker, C. (2023). ChatGPT's knowledge of copyrighted novels highlights legal uncertainty of AI. New
Scientist, 258(3438), 13. https://doi.org/10.1016/s0262-4079(23)00837-0 Stroppe, A. (2023). Left behind in a public services wasteland? On the accessibility of public services and political
trust. Political Geography, 105, 102905. https://doi.org/10.1016/j.polgeo.2023.102905 Swain, M., & Swain, D. (2022). An effective watermarking technique using BTC and SVD for image authentication
and quality recovery. Integration, 83, 12-23. https://doi.org/10.1016/j.vlsi.2021.11.004 Torres, G. & Bellinger, N. (2014). The Public Trust: The Law's DNA. Cornell Law Faculty Publications. Paper 1213.
http://scholarship.law.cornell.edu/facpub/1213 Wan, Y., & Lu, H. (2021). Copyright protection for AI-generated outputs: The experience from China. Computer
Law & Security Review, 42, 105581. https://doi.org/10.1016/j.clsr.2021.105581 Yuan, Z., Zhang, X., Wang, Z., & Yin, Z. (2024). Semi-fragile neural network watermarking for content authentication and tampering localization. Expert Systems With Applications, 236, 121315. https://doi.org/10.1016/j. eswa.2023.121315
Authors' information
Natalia I. Shumakova - Associate Professor, Department of Constitutional and Administrative Law, Law Institute, South Ural State University (National Research University), Chelyabinsk, Russia
Address: 76 Lenin Str., 454080 Chelyabinsk, Russian Federation
E-mail: [email protected]
ORCID ID: http://orcid.org/0009-0004-6053-0650
RSCI Author ID: https://www.elibrary.ru/author_items.asp?authorid=1211522
Jordan J. Lloyd - Creative Director, Unseen History
Address: Howes Farm, Doddinghurst Road, Brentwood, Essex, CM15 0SG, United Kingdom
E-mail: [email protected]
ORCID ID: https://orcid.org/0009-0007-8733-7261
Elena V. Titova - Dr. Sci. (Law), Associate Professor, Department of Constitutional and Administrative Law, Law Institute, South Ural State University (National Research University), Chelyabinsk, Russia
Address: 76 Lenin Str., 454080 Chelyabinsk, Russian Federation
E-mail: [email protected]
ORCID ID: http://orcid.org/0000-0001-9453-3550
Scopus Author ID: https://www.scopus.com/authid/detail.uri?authorId=57201640405
Google Scholar ID: https://scholar.google.ru/citations?user=Pqj6OiQAAAAJ
RSCI Author ID: https://www.elibrary.ru/author_items.asp?authorid=451302
Authors' contributions
The idea of the article was conceived jointly by Natalia I. Shumakova and Jordan J. Lloyd. Natalia I. Shumakova formulated the idea; drafted the manuscript; developed the methodology; organized the sociological surveys; collected and analyzed literature and legislation; and formulated the key conclusions, proposals and recommendations.
Jordan J. Lloyd drafted, processed and presented his expert opinion on the key provisions of the article; sampled media publications; interpreted the overall results of the study.
Elena V. Titova analyzed legislation; considered the processes occurring in the creative industry from the viewpoint of public/political distrust manifestation; partially collected and analyzed scientific literature.
Conflict of interest
The authors declare no conflict of interest.
Financial disclosure
The research received no sponsorship.
Acknowledgements
The authors are grateful to the Editorial Office of the Journal of Digital Technologies and Law for their assistance in conducting a sociological survey in the Journal's Telegram channel at https://t.me/JournalDTL
Thematic rubrics
OECD: 5.05 / Law
ASJC: 3308 / Law
WoS: OM / Law
Article history
Date of receipt - October 31, 2023
Date of approval - November 20, 2023
Date of acceptance - November 30, 2023
Date of online placement - December 15, 2023