Journal: Pravo – teorija i praksa

Mladenov Marijana

http://orcid.org/0000-0002-4574-5159

UDK: 159.922:004.8(4-672EU)

Original scientific paper
DOI: 10.5937/ptp2300032M
Received: 24.01.2023
Approved on: 28.02.2023
Pages: 32-43

HUMAN VS. ARTIFICIAL INTELLIGENCE - EU'S LEGAL RESPONSE

ABSTRACT: Artificial intelligence (AI) has the capacity to improve not only the individual quality of life, but also economic and social welfare. Although AI systems have many advantages, they also pose significant risks, creating a wide range of moral and legal dilemmas. The European Union has been creating a legal framework for developing, trading, and using AI-driven products, services, and systems to reduce the risks connected with AI systems and to prevent any possible harm they may cause. The main focus of this paper is the analysis of the Proposal for the Artificial Intelligence Act submitted by the European Commission in April 2021. The goal of the article is to move toward a possible resolution of the dilemma of whether the AIA proposal is appropriate for the AI era by addressing the scope of its application, the prohibited AI practices, the rules on high-risk AI systems, the specific transparency obligations, as well as certain regulatory gaps. The article should be viewed as an initial analysis of the AIA proposal, intended to provide a useful framework for future discussion.

Keywords: artificial intelligence, the European Union, regulatory framework, the Proposal for the Artificial Intelligence Act.

* LLD, Associate professor, Faculty of Law for Commerce and Judiciary in Novi Sad, The University of Business Academy in Novi Sad, Republic of Serbia, e-mail: alavuk@pravni-fakultet.info
© 2023 by the authors. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

Artificial intelligence (AI) has transformed many industries in recent years and still attracts global headlines (Perucica & Andjelkovic, 2022, p. 348). AI has the capacity to improve not only the individual quality of life but also economic and social welfare (Kolarevic, 2022, p. 111). However, while AI systems have many advantages, they also pose significant risks, creating a wide range of moral and legal dilemmas (Bjelajac & Filipovic, 2021, p. 11).

The European Union has been creating a legal framework for developing, trading, and using AI-driven products, services, and systems to reduce the risks connected with AI systems and to prevent any possible harm they may cause. The European Parliament passed a "Resolution on Civil Law Rules on Robotics" on February 16, 2017, which specifically called for legislation on the liability of robots and AI (Resolution on Civil Law Rules on Robotics, 2017). Furthermore, the Commission adopted the "Communication on Artificial Intelligence for Europe" on April 25, 2018 (Communication on Artificial Intelligence for Europe, 2018). With the help of an expert panel, the Commission stated in this communication that it would examine whether the national and EU liability frameworks are appropriate in the context of the problems posed by AI. Two years later, the Commission published a package consisting of four documents, including the White Paper "On Artificial Intelligence - A European approach to excellence and trust" (Koch, 2020). In April 2021, the European Commission moved ahead with the Proposal for the Artificial Intelligence Act (hereinafter: AIA proposal), which presents the main subject of the research in this paper (Proposal for the Artificial Intelligence Act, 2021).

The AIA proposal is the first initiative to horizontally regulate AI on a global level (Bogucki, Engler, Perarnaud & Renda, 2022). It establishes fundamental, cross-industry norms for the creation, exchange, and application of AI-driven systems, products, and services within EU territory. The act aims to formalize the high requirements of the "Ethics guidelines for trustworthy AI", which call for AI to be technically proficient, ethical, and lawful while safeguarding democratic principles, human rights, and the rule of law (Hickman & Petrin, 2021). In order to meet this aim, the AIA proposal follows a risk-based approach that differentiates between uses of AI systems creating the following categories of risk: "an unacceptable risk, a high risk, and a low or minimal risk" (Explanatory Memorandum of the AIA proposal, 2021, p. 12). This implies, among other things, that AI applications posing an unacceptable risk are prohibited, while AI systems with low risks can be created and used in compliance with current regulations.
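The risk-based taxonomy described above can be sketched as a small lookup table, purely as a reading aid. The tier names follow the Explanatory Memorandum; the mapping of each tier to its legal consequence is a simplified illustration, not the Act's legal test:

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers named in the Explanatory Memorandum of the AIA proposal."""
    UNACCEPTABLE = "unacceptable risk"   # prohibited practices (Article 5)
    HIGH = "high risk"                   # Title III requirements apply
    LIMITED = "limited risk"             # transparency obligations (Title IV)
    MINIMAL = "low or minimal risk"      # permitted under existing law

# Simplified summary of the consequence attached to each tier, as the
# paper describes it (illustrative only).
CONSEQUENCE = {
    RiskTier.UNACCEPTABLE: "prohibited",
    RiskTier.HIGH: "permitted subject to Title III requirements and conformity assessment",
    RiskTier.LIMITED: "permitted subject to transparency obligations",
    RiskTier.MINIMAL: "permitted in compliance with current regulations",
}

for tier in RiskTier:
    print(f"{tier.value}: {CONSEQUENCE[tier]}")
```

The point of the sketch is simply that the proposal attaches obligations to the tier an AI system falls into, not to the technology as such.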

Considering the abovementioned, the goal of the article is to move toward a possible resolution to the dilemma of whether the AIA proposal is appropriate for the AI era by addressing the scope of this act, the prohibited AI practices, rules on high-risk AI systems, specific transparency obligations as well as certain regulatory gaps.

2. The scope of the AIA proposal

The scope of the AIA proposal is defined by the subject matter of the regulation as well as the scope of its application. Concerning the subject matter, Article 1 states that the AIA proposal establishes:

(a) "harmonised rules for the placing on the market, the putting into service and the use of artificial intelligence systems ('AI systems') in the Union;

(b) prohibitions of certain artificial intelligence practices;

(c) specific requirements for high-risk AI systems and obligations for operators of such systems;

(d) harmonised transparency rules for AI systems intended to interact with natural persons, emotion recognition systems and biometric categorisation systems, and AI systems used to generate or manipulate image, audio or video content;

(e) rules on market monitoring and surveillance" (Proposal for the Artificial Intelligence Act, 2021).

According to Article 1, the AIA proposal regulates "AI systems". Along with the issue of how to distinguish between "AI" and "AI systems", the extremely broad conceptual scope of the AIA proposal also looks unclear. The definition of "AI systems" is provided by Article 3(1) of the AIA proposal, which, together with Annex I, covers almost any computer program. As a result of such a wide approach, the designers, operators, and users of AI systems may experience considerable legal uncertainty (Helberger & Diakopoulos, 2022). Undoubtedly, a broad definition of "AI systems" may be reasonable in the context of the AI practices expressly forbidden by Article 5 of the AIA proposal, in order to balance the risks that various types of software pose to fundamental human rights. By contrast, when it concerns high-risk AI systems, such a broad definition is too general. The conditions proposed within Title III of the AIA proposal for these systems are based on the understanding that many fundamental rights are negatively affected by the unique features of machine learning, including opacity, complexity, reliance on data, and autonomous behaviour (Smuha et al., 2021, p. 11). The wide definition of AI may result in overregulation because these features are either not present or only partially present in simple algorithms (Ebers, Hoch, Rosenkranz, Ruschemeier & Steinrotter, 2021, p. 591).

As regards the territorial scope, the AIA would apply to public and commercial actors both inside and outside the EU, so long as their AI system is sold on the EU market or has an impact on EU citizens. The AIA would apply to three types of companies (or other parties, including public bodies) that use AI systems in different ways: providers, users, and producers of products used in the EU. The first and third categories give the AIA proposal extraterritorial impact outside of the EU (Greenleaf, 2021, p. 3). By restricting the geographic application of the AIA proposal to the "use" of AI systems within the EU, it is possible that some high-risk or even forbidden AI systems are developed, sold, or exported from the EU but used outside it. Therefore, this provision has the potential to create various legal and ethical problems for users of AI systems outside the EU (Ebers, Hoch, Rosenkranz, Ruschemeier & Steinrotter, 2021, p. 591).

3. Prohibited uses of AI

Article 5 of the AIA proposal establishes a list of prohibited AI practices. The list includes all AI systems whose use is not in accordance with fundamental European values, such as respect for fundamental human rights and freedoms. Four different types of AI practices are generally prohibited under the standards outlined in Article 5 of the AIA proposal.

The first one, "subliminal or manipulative AI practices", is defined as a practice that has "a significant potential to manipulate persons through subliminal techniques beyond their consciousness" in order to materially modify someone's behaviour in a way that harms or is likely to negatively affect their physical or psychological well-being or the well-being of another person (Explanatory Memorandum of the AIA proposal, 2021, p. 12). Even though the AIA proposal does not define the term "subliminal", this phrase typically describes a perception that is below the level of awareness (Klein, 1966, p. 726). The activity's potential to harm someone physically or psychologically should be considered the final trigger. The scope of the provision is significantly limited by this requirement (Veale & Borgesius, 2021, p. 99).

The second type of prohibited AI refers to AI practices exploiting the vulnerabilities of particularly vulnerable groups, including children or persons with disabilities, to materially influence a person's behaviour in a way that harms or is likely to harm that person's or another person's physical or psychological well-being. The main aspect of this provision is vulnerability, which is not extensively defined but only demonstrated by the examples of particularly vulnerable groups, such as children or individuals with disabilities (Neuwirth, 2022, p. 7).

The third category of prohibited AI practices, "social scoring systems", includes systems used by "public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics" (Article 5 of the Proposal for the Artificial Intelligence Act, 2021). It seems that, by restricting the ban on social scoring to public authorities, the AIA proposal ignores the use of such systems by private businesses, especially in high-risk sectors where they may have the potential to indirectly impact fundamental rights. Various infrastructures, including delivery, telecommunications, and transportation, are under the authority of so-called AI companies (Rahman, 2017). Therefore, the above exclusion can have serious socioeconomic implications for individuals, which imposes the need to make this provision universally applicable.

The use of "real-time remote biometric identification systems in publicly accessible locations" falls under the fourth category of prohibited AI practices, with the exception of certain law enforcement purposes (Article 5 of the Proposal for the Artificial Intelligence Act, 2021). The Law Enforcement Directive (Directive (EU) 2016/680) regulates the use of biometric identification for law enforcement purposes. Widely accepted criticism in the doctrine refers to the narrow scope of the provision and its limitation to law enforcement, which allows the use of such AI systems for other purposes (Gill-Pedro, 2021). The use of remote biometric identification for non-law-enforcement objectives like crowd control or public health is not covered by the prohibition. The GDPR normally applies to these uses (Regulation (EU) 2016/679). In general, the GDPR imposes a criterion of high-quality, individual consent for each person scanned, which is practically hard to obtain in the absence of a corresponding Member State law authorizing such biometrics (Veale & Borgesius, 2021, p. 101).

In addition, the fact that Article 5 cannot be amended by the European Commission could be quite challenging in the context of the implementation of the AIA, due to the fact that some problematic aspects of AI practices can only be recognized ex post.
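The four Article 5 categories discussed above can be collected into a small summary table. This is an illustrative sketch only: the category labels paraphrase Article 5, and the `exception` field is a simplification of the carve-outs the paper describes, not the Act's wording:

```python
# Illustrative summary of the four categories of prohibited practices
# discussed above (paraphrased, not the legal text).
PROHIBITED_PRACTICES = [
    {"category": "subliminal or manipulative techniques",
     "trigger": "materially modifies behaviour, causing physical or psychological harm",
     "exception": None},
    {"category": "exploitation of vulnerable groups",
     "trigger": "exploits vulnerability (e.g. of children or persons with disabilities) to cause harm",
     "exception": None},
    {"category": "social scoring",
     "trigger": "evaluation of the trustworthiness of natural persons over time",
     "exception": "only covers public authorities or use on their behalf"},
    {"category": "real-time remote biometric identification in publicly accessible spaces",
     "trigger": "use for law enforcement",
     "exception": "certain narrow law enforcement purposes remain permitted"},
]

# The paper's criticism targets exactly the prohibitions that carry a carve-out.
with_exceptions = [p["category"] for p in PROHIBITED_PRACTICES if p["exception"]]
print(with_exceptions)
```

The table makes the paper's argument easy to see at a glance: the two carve-outs (social scoring limited to public authorities, biometric identification limited to law enforcement) are precisely where the doctrine locates the regulatory gaps.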

4. Rules on high-risk AI systems

For AI systems that create a high risk to human health and safety or fundamental rights, or "high-risk AI systems", Title III of the AIA proposal establishes a new regulatory regime with precise standards. The AIA proposal adopts a prescriptive "list-based approach", which outlines which systems are considered high-risk rather than defining the term itself. A system is categorized as high-risk based on its intended use and current product safety regulations. As a result, the classification as high-risk depends not only on the task performed by the AI system but also on the precise objectives and operating procedures of that system.

Two main groups of high-risk AI systems are identified in Title III, along with the classification criteria. Systems intended for use as safety components of products that are covered by "third-party ex-ante conformity assessment" under the EU law listed in Annex II of the proposal are high-risk systems, as are other standalone AI systems used in high-risk domains (Explanatory Memorandum of the AIA proposal, 2021, p. 14). The European Commission has identified eight use categories for high-risk standalone AI systems, listed in Annex III. By using a set of criteria and a risk assessment methodology, the European Commission may expand the list of high-risk AI systems used within specified pre-defined sectors, in order to ensure that the legislation can be adapted to evolving uses and applications of AI. However, it is important to note that the Commission can only do this if the high-risk AI systems are intended to be used in any of the areas stated in Annex III, points 1 through 8. This provision could be quite challenging due to the fact that we cannot be aware of all categories of high-risk systems, since AI is a rapidly evolving field that is progressively influencing other industries (Smuha et al., 2021, p. 11).

In addition, Chapter 2 outlines the legal requirements for high-risk AI systems related to "data and data governance, documentation and record-keeping, transparency and provision of information to users, human oversight, robustness, accuracy and security", which link to the obligations of regulated actors stated within Chapter 3 (Explanatory Memorandum of the AIA proposal, 2021, p. 13). The great majority of these obligations are the responsibility of providers. With respect to data and data governance, Article 10 of the AIA proposal mostly refers to training, validation, and testing data sets. Data quality criteria for sets of data on individuals or groups of people (not necessarily involving personal data in GDPR terms), including "special categories of personal data" (as defined in Article 9 of the GDPR), are set out in considerable detail in the subject requirements (Regulation (EU) 2016/679).

The following requirement refers to technical documentation. Providers must submit technical documentation that includes all information in line with Annex IV. Moreover, according to Article 12 of the AIA proposal and its record-keeping requirements, providers need to facilitate logging in order to enable traceability appropriate to a system's risks. Providers are only required to keep logs for the relevant period while such logs are still under their control; otherwise, users are required to do so. The standards for the transparency of high-risk AI systems are defined in Article 13. A high-risk AI system must be created in accordance with Article 13 in order to be "sufficiently transparent to enable users to interpret the system's output and use it appropriately", and it must also come with instructions and information that are "relevant, accessible, and comprehensible to users" (Article 13 of the Proposal for the Artificial Intelligence Act, 2021). In addition to the standards above, Article 14 stipulates that providers must create systems that can be properly supervised by natural persons, using "human-machine interface tools" (Article 14 of the Proposal for the Artificial Intelligence Act, 2021). To ensure the protection of fundamental rights, oversight is necessary for all actions linked to the creation, implementation, and use of AI systems. Moreover, Article 15 states that high-risk AI systems must be designed and constructed in such a way that, in the context of their intended use, they achieve the required level of accuracy, robustness, and cybersecurity and operate consistently throughout their lifecycle (Article 15 of the Proposal for the Artificial Intelligence Act, 2021).
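The Chapter 2 requirements walked through above can be summarised as a provider-side checklist keyed by article. The function below is a hypothetical sketch of such a checklist, not an implementation of the Act's conformity assessment; the one-line descriptions paraphrase the requirements as discussed in the text:

```python
# Requirements for high-risk AI systems, keyed by the article of the AIA
# proposal discussed above (hypothetical compliance-checklist sketch).
REQUIREMENTS = {
    "Article 10": "data and data governance (training, validation, and testing data sets)",
    "Article 11": "technical documentation in line with Annex IV",
    "Article 12": "record-keeping / logging to enable traceability",
    "Article 13": "transparency and provision of information to users",
    "Article 14": "human oversight via human-machine interface tools",
    "Article 15": "accuracy, robustness, and cybersecurity over the lifecycle",
}

def missing_requirements(satisfied: set) -> list:
    """Return the articles a (hypothetical) provider has not yet addressed."""
    return sorted(article for article in REQUIREMENTS if article not in satisfied)

# Example: a provider that has so far handled only documentation and logging.
print(missing_requirements({"Article 11", "Article 12"}))
```

The checklist form underlines the paper's observation that the great majority of these obligations fall on providers rather than users.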

The framework for the participation of notified bodies in conformity assessment processes as independent third parties is provided in Chapter 4, while the specific conformity assessment procedures to be followed for each type of high-risk AI system are included in Chapter 5. The approach to conformity assessment aims to reduce the burden on both notified bodies and economic operators, whose capacity must be gradually ramped up over time.

5. Specific transparency obligations

Title IV of the AIA proposal outlines specific transparency obligations. The AIA proposal introduces transparency requirements for systems that interact with humans, due to the fact that people have a right to know when they are engaging with a machine's algorithm rather than a human being. Similar transparency requirements apply to the disclosure of deep fakes/synthetic content, biometric categorization, and automated emotion detection systems. Except for biometric categorization systems that are legally allowed to be used for crime prevention, users of emotion recognition or biometric categorization systems are required to notify exposed persons of the system's operation. In comparison with data protection law, it is quite challenging to understand the contribution of this provision. Data protection law already requires that users of emotion recognition or biometric categorization systems that process personal data notify individuals of, among other things, the existence and purposes of such processing. Therefore, it is difficult to determine the real scope of this provision.

In addition, specific transparency obligations are also introduced for limited-risk AI systems such as chatbots. The low-risk AI systems category is the only one that is excluded from transparency obligations (Kop, 2021).

6. Identifying additional regulatory gaps of the AIA proposal

Even though the above analysis of the AIA proposal has already identified certain aspects of the act that need further clarification, the doctrine has concluded that the act has some additional gaps. The most significant one refers to the fact that the AIA proposal does not include any individual right of enforcement. Although the act is designed to protect fundamental rights, it provides no remedies through which individuals can seek redress if the regulation is violated. The AIA proposal does not include any mechanism allowing individuals to challenge AI-driven decision-making (Ebers, 2021, p. 19).

Moreover, a European approach to AI should consider not only human rights but also other priorities such as climate change and sustainability. In this respect, the AIA proposal makes no direct mention of "Green AI" or "Sustainable AI" as a clear objective of a European understanding of AI development according to the standards of the European Green Deal (Gailhofer et al., 2021). The act only recognizes the necessity for relevant action in the high-impact field of climate change and the potential of AI to contribute to socially and environmentally positive outcomes.

7. Conclusion

The AIA proposal intends to establish a uniform legal system for AI in the EU. Through a comprehensive framework, the AIA proposal addresses both the potential benefits of AI and the moral questions raised by the various threats associated with it. Nevertheless, some aspects require further clarification. The main aspect that needs to be improved is the definition of the term "AI". The AIA proposal includes a quite broad definition, which increases the risk of overregulation. Furthermore, the lack of individual enforcement rights in the AIA proposal undermines the protection of fundamental rights as the most important goal of this regulation. The AIA must guarantee a right to remedy that addresses potential violations of the Regulation or infringements of fundamental rights.

This article cannot and has not discussed all aspects of the AIA proposal. The author has demonstrated some of the complexities of this particularly significant instrument. After all, creating a safe and adequate regulatory framework for AI in Europe concerns not only the way we design technology but also the way we shape our society's future.

Mladenov Marijana

Pravni fakultet za privredu i pravosuđe u Novom Sadu, Univerzitet Privredna akademija u Novom Sadu, Srbija

LJUDSKA PROTIV VEŠTAČKE INTELIGENCIJE – PRAVNI ODGOVOR EU

REZIME: Veštačka inteligencija ima kapacitet da poboljša ne samo kvalitet života pojedinca, već i ekonomsko i socijalno blagostanje. Iako sistemi veštačke inteligencije imaju mnoge prednosti, oni takođe predstavljaju značajne rizike, stvarajući širok spektar moralnih i pravnih dilema. Evropska unija kreira pravni okvir za razvoj, trgovinu i upotrebu proizvoda, usluga i sistema vođenih veštačkom inteligencijom kako bi smanjila rizike povezane sa sistemima veštačke inteligencije i sprečila svaku moguću štetu koju oni mogu da izazovu. Glavni fokus ovog rada odnosi se na analizu Predloga Uredbe o veštačkoj inteligenciji koji je Evropska komisija podnela u aprilu 2021. Cilj članka je da pruži doprinos u kontekstu razrešenja dileme da li je predlog navedene uredbe adekvatan zahtevima ere veštačke inteligencije, adresirajući obim primene ovog akta, zabranjene prakse veštačke inteligencije, pravila o visokorizičnim sistemima veštačke inteligencije, specifične obaveze transparentnosti kao i određene pravne praznine. Članak treba posmatrati kao početnu analizu predloga Uredbe o veštačkoj inteligenciji kako bi se obezbedio koristan okvir za buduću diskusiju.

Ključne reči: veštačka inteligencija, Evropska unija, regulatorni okvir, Predlog Uredbe o veštačkoj inteligenciji.

References

1. Artificial Intelligence Act. (2021). Proposal for a regulation of the European Parliament and the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. EUR-Lex - 52021PC0206

2. Bjelajac, Z., & Filipovic, A. M. (2021). Specificnosti digitalnog nasilja i digitalnog kriminala [Specific characteristics of digital violence and digital crime]. Pravo - teorija i praksa, 38(4), pp. 16-32. DOI: 10.5937/ptp2104016B

3. Bogucki, A., Engler, A., Perarnaud, C., & Renda, A. (2022). The AI Act and Emerging EU Digital Acquis: Overlaps, gaps and inconsistencies. CEPS. Downloaded 2022, September 23 from https://www.ceps.eu/wp-content/uploads/2022/09/CEPS-In-depth-analysis-2022-02_The-AI-Act-and-emerging-EU-digital-acquis.pdf

4. Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions: Artificial Intelligence for Europe. (2018). COM(2018) 237 final

5. Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA, OJ L 119/89

6. Ebers, M. (2021). Regulating Explainable AI in the European Union. An Overview of the Current Legal Framework. In: Colonna, L., Greenstein, G. (eds.), Nordic Yearbook of Law and Informatics, (pp. 1-20). Downloaded 2022, October 15 from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3901732

7. Ebers, M., Hoch, V. R., Rosenkranz, F., Ruschemeier, H., & Steinrötter, B. (2021). The European commission's proposal for an artificial intelligence act - a critical assessment by members of the robotics and AI law society (RAILS). J - Multidisciplinary Scientific Journal, 4(4), 589-603. DOI: 10.3390/j4040043

8. European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics. (2017). (2015/2103(INL))

9. Gailhofer, P., Herold, A., Schemmel, J. P., Scherf, C.-S., Urrutia, C., Köhler, A., & Braungardt, S. (2021). The Role of Artificial Intelligence in the European Green Deal. Study requested by the AIDA Committee of the European Parliament. Downloaded 2022, October 15 from https://www.europarl.europa.eu/RegData/etudes/STUD/2021/662906/IPOL_STU(2021)662906_EN.pdf

10. Gill-Pedro, E. (2021). The Most Important Legislation Facing Humanity? The Proposed EU Regulation on Artificial Intelligence. Nordic Journal of European Law, 4(1), pp. 4-10. Downloaded 2022, October 5 from https://journals.lub.lu.se/njel/article/view/23473/20819

11. Greenleaf, G. (2021). The 'Brussels effect' of the EU's 'AI Act' on data privacy outside Europe. Privacy Laws & Business International Report, 1, pp. 1-10. Downloaded 2022, September 25 from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3898904

12. Helberger, N., & Diakopoulos, N. (2022). The European AI act and how it matters for research into AI in media and journalism. Digital Journalism, pp. 1-10. DOI: 10.1080/21670811.2022.2082505

13. Hickman, E., Petrin, M. (2021). Trustworthy AI and Corporate Governance: The EU's Ethics Guidelines for Trustworthy Artificial Intelligence from a Company Law Perspective. Eur Bus Org Law Rev, 22, pp. 593-625. DOI: 10.1007/s40804-021-00224-0

14. Klein, E. (1966). A Comprehensive Etymological Dictionary of the English Language. Amsterdam: Elsevier

15. Koch, B. A. (2020). Liability for Emerging Digital Technologies: An Overview. Journal of European Tort Law, 11 (2), pp. 115-136. DOI: 10.1515/jetl-2020-0137

16. Kolarevic, E. (2022). Uticaj vjestacke inteligencije na uzivanje prava na slobodu izrazavanja. [The influence of Artificial intelligence on the right to freedom of expression] Pravo - teorija i praksa, 39(1), pp. 111-126. DOI: 10.5937/ptp2201111K

17. Kop, M. (2021). EU Artificial Intelligence Act: The European Approach to AI. Transatlantic Antitrust and IPR Developments, 2, pp. 1-11. Downloaded 2022, October 15 from https://law.stanford.edu/publications/eu-artificial-intelligence-act-the-european-approach-to-ai/

18. Neuwirth, R. J. (2022). Prohibited Artificial Intelligence Practices in the Proposed EU Artificial Intelligence Act. DOI: 10.2139/ssrn.4261569

19. Perucica, N., & Andjelkovic, K. (2022). Is the future of AI sustainable? A case study of the European Union. Transforming Government: People, Process and Policy, 16(3), pp. 347-358. DOI: 10.1108/TG-06-2021-0106

20. Rahman, K. S. (2017). The new utilities: Private power, social infrastructure, and the revival of the public utility concept. Cardozo L. Rev., 39, pp. 1621-1692. Downloaded 2022, October 5 from https://brooklynworks.brooklaw.edu/cgi/viewcontent.cgi?article=1987&context=faculty

21. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) OJ L 119/1

22. Smuha, N. A., Ahmed-Rengers, E., Harkens, A., Li, W., MacLaren, J., Piselli, R., & Yeung, K. (2021). How the EU can achieve legally trustworthy AI: a response to the European commission's proposal for an artificial intelligence act. DOI: 10.2139/ssrn.3899991

23. Veale, M., & Borgesius, F. Z. (2021). Demystifying the Draft EU Artificial Intelligence Act—Analysing the good, the bad, and the unclear elements of the proposed approach. Computer Law Review International, 22(4), pp. 97-112. DOI: 10.9785/cri-2021-220402
