
Research article

DOI: https://doi.org/10.21202/jdtl.2023.21


Recommendations on the Ethical Aspects of Artificial Intelligence, with an Outlook on the World of Work

Zsofia Riczu

University of Miskolc, Miskolc, Hungary

Keywords

Artificial intelligence, digital technologies, digitalization, ethics, labor law, labor relations, labor, law, legislation, principles of law

© Riczu Zs., 2023

Abstract

Objective: the spread and wide application of Artificial Intelligence raises ethical questions in addition to data protection concerns. The aim of this paper is therefore to examine the ethical aspects of Artificial Intelligence and to give recommendations for its use in labor law.

Methods: the research is based on the methods of comparative and empirical analysis. Comparative analysis allowed examining the provisions of modern labor law in the context of the use of Artificial Intelligence. Empirical analysis made it possible to highlight the ethical issues related to Artificial Intelligence in the world of work by examining disputable cases of the use of Artificial Intelligence in different areas, such as healthcare, education, transport, etc. Results: the private law aspects of the ethical issues of Artificial Intelligence were examined in the context of ethical and labor law issues that affect the selection process with Artificial Intelligence and the treatment of employees as a set of data from the employers' side. The author outlined the general aspects of ethics and the issues of digital ethics, and described individual international recommendations related to the ethics of Artificial Intelligence. Scientific novelty: this research focuses on the ethical issues of the use of Artificial Intelligence in a specific field of private law - labor law. The author gives recommendations on the ethical aspects of the use of Artificial Intelligence in this field.

Practical significance: the research contributes to the limited literature on the topic. The results of the research could be used in the lawmaking process and also as a basis for future research.

This is an open access article, distributed under the terms of the Creative Commons Attribution licence (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original work is properly cited.

For citation

Riczu, Zs. (2023). Recommendations on the Ethical Aspects of Artificial Intelligence, with an Outlook on the World of Work. Journal of Digital Technologies and Law, 7(2), 498-519. https://doi.org/10.21202/jdtl.2023.21

Content

Introduction

1. Ethics - Digital Ethics

2. Regulations related to the ethical aspects of AI

2.1. UNESCO - Recommendation on the ethics of artificial intelligence

2.2. Ethical guidelines of the high-level expert group (HLEG) established by the European Commission

2.3. European Parliament - Framework for Ethical Aspects of Artificial Intelligence, Robotics and Related Technologies

2.4. Other Recommendations and Guidelines

3. Ethical issues related to AI in the world of work - based on the EPRS paper on AI

Conclusions

References

Introduction

The rise of Artificial Intelligence is indisputable. In many forums we can find studies that approach the topic from a social perspective. Research clearly shows that Artificial Intelligence (AI) has a mixed reception: on the one hand, it is suitable for carrying out useful activities in many fields, while opposing opinions already verge on conspiracy theories.

The legal regulation of AI is also a significant area of research, primarily with regard to regulatory issues (Cyman et al., 2021; Alikhademi et al., 2022). Given that it is difficult to keep up with the dynamism of technological development, the legal regulatory background is currently in follow-up mode: after the appearance of technological achievements, it tries to establish rules for specific issues. On the one hand, this is positive, given that legislation itself is based on empirical knowledge; on the other hand, the development of legal regulation requires caution, as it is necessary to take into account issues that have not yet appeared but can already be predicted in certain cases. Scientific and technological development, the digital revolution, AI and algorithms are putting legal regulations to new tests, and the dominance of legal institutions and the rethinking of individual legal categories can be observed (Harmathy, 2019). The development of regulation is greatly aided by community standards, but it is also necessary to carry out regulatory tasks at the national level in accordance with national specificities.

The application and spread of algorithms and artificial intelligence is a very interesting research area. Their application promises speed and accuracy. Recently, the ever-increasing dangers inherent in intelligent robots have become a central topic in many areas of research on artificial intelligence. This point of view, called «alarmist» by Laszlo Z. Karvalics, is a logical consequence (and strong ally) of the aging paradigm of «strong AI» and the latest versions of this paradigm (Karvalics, 2015).

Technology is constantly changing. Just like the previous industrial revolutions, Industry 4.0 (or even Industry 5.0) can now bring new things day by day, hour by hour, minute by minute. The rate of development is increasing exponentially, and law and regulation cannot keep up with it. The protection of individuals and their rights is essential both in the digital space and in the offline space since, as Ibolya Stefan put it, the lack or insufficiency of legal regulation can be an obstacle to economic development (Stefan, 2020).

The presence of AI is not limited to the research world and the scientific field, although it plays a significant role there in supporting the development of science and managing global challenges. From Apple Inc.'s Siri through space exploration to self-driving cars, AI systems have become a part of our daily lives, appearing as convenience devices that supplement human capabilities with elements of computing. Learning algorithms can perform many tasks, whether driving a car or making decisions. These applications make decisions about our lives on many occasions, which is why certain guarantees are needed. However, the spread and wide application of AI raises ethical questions in addition to data protection measures. The theme indicated in the title of this paper is one of the cornerstones of my research.

Recent debates have highlighted the urgent need for work on the social and ethical aspects of algorithmic and data-driven systems that control our lives. Largely focused on machine learning (ML) and artificial intelligence (AI), conversations among scientists, journalists and advocates have begun to address issues of equity, bias, transparency, access, participation and discrimination, often referred to as «AI and ethics». Underlying these discourses are concerns about how to mitigate discrimination and data bias, as well as open questions about whether algorithmic systems can be fair and whether their use will promote an equitable future or, on the contrary, perpetuate or even reinforce existing inequalities. Despite the apparent 'novelty' of these issues, many of the underlying concerns have a long tradition and intellectual pedigree in the field of library and information science (LIS) (Hoffmann et al., 2019).

The pace of technological innovation and the speed of its global adoption, supported by the digital economy, is clearly outpacing the speed of human consciousness. Jerome Beranger1 has sought to list and define the main ethical criteria that form the basis of a future frame of reference, in order to move towards AI in the service of human intelligence and to anticipate and foresee the drifts and possible consequences of the development of algorithmic systems (Beranger, 2021).

The size and complexity of the data and algorithms used in artificial intelligence (AI)-based systems pose significant challenges in predicting their ethical, legal and policy implications. Drawing on interviews with stakeholders in AI research, law and policy, the article «Locating the work of artificial intelligence ethics» reports that the work of AI ethics is structured by personal values and professional commitments (Fleischmann et al., 2023).

In the first part, I outline the general aspects of ethics, on the basis of which I examine the issues of digital ethics. After that, I describe the individual international recommendations related to the ethics of AI. In the present study, in line with my research area, I examine the private law (including labor law) aspects of the ethical issues of AI in more detail. Within this topic, I examine the ethical and labor law issues that affect the selection process with artificial intelligence and the treatment of employees as a set of data from the employers' side.

1. Ethics - Digital Ethics

The word ethics comes from the Greek word ethos, which covers habit and forms of behavior. The subject of ethics is human action and the person unfolding in action. It is often used as a synonym for the term morality, which refers to forms of behavior and activity related to a person's purpose (Turay, 2000). Ethics covers the moral standards that exist subconsciously, the values that lie behind our actions (Muller & Kerenyi, 2019). Ethics is the doctrine of moral values, which must be separated from etiquette, the science of custom, manners and decorum, i.e. human behavior (Legeza, 2013).

Ethics and applied ethics focus on practical problems and everyday situations. Their area is the lower and higher manifestation of moral phenomena, moral generality. Fobel refers to Kansky's definition and notes that at the XX World Congress of Philosophy the central theme was the problem of applied ethics, within which bioethics, health ethics, environmental and business ethics, and sports ethics came to the fore, as well as issues of technological and legal ethics (Fobel, 2002). Within the scope of technological ethics, digital ethics is the determining factor, the appearance of which can be dated to the appearance of the Internet and the development of technology. Ethical behavior is just as important on the world wide web, accessible to everyone, as it is outside the digital dimension. Nowadays, anyone can produce digital content, post comments on a given topic, and express an opinion on certain issues. We express our values during these activities.

1 The scientific expert on the ethical approach of the digital revolution, the co-founder and CEO of ADELIAA, and an associate researcher in the Inserm 1295 BIOETHICS team at the University of Toulouse.

Digital ethics, or information ethics in a broader sense, deals with the impact of digital information and communication technologies (ICT) on our society and the environment in general. More narrowly, information ethics (or digital media ethics) examines ethical issues related to the Internet and Internet-based information and communication media, such as mobile phones and navigation services (Capurro, 2022).

The emergence of digital ethics has also generated challenges for legal ethics. Digital communication, the emergence of algorithms and the rise of AI also have an impact on people's everyday lives. According to Daniel Eszteri, new ethical problems such as automated decision-making without human intervention and the rights of artificial entities may come into focus (Eszteri, 2021). According to Gabriella Nemeth, the appearance of machine entities can redefine man himself, or the values associated with human existence (Nemeth, 2021). Imre Negyesi examined the ethical issues of AI for military purposes (Negyesi, 2020), while Reka Pusztahelyi explained in her work that the ethical principles of AI can be grouped from several points of view, depending on whom they target; accordingly, the field has separated specific, sectoral and comprehensive ethical principles (Pusztahelyi, 2019). I base this study on the latter finding. After all, the ethical principles of AI applied in medicine do not necessarily cover the ethical conceptions of AI applied in the labor market.

AI systems are not fundamentally about injustice and inequity, yet the question of how AI systems treat an unjust world remains a moot point. As Knowles has pointed out, one mechanism for dealing with the ethics of AI is AI ethics education: it teaches clear moral reasoning, responsible choices and right action to those who build, use and/or submit to AI systems (Knowles, 2021).

The first step to creating ethical AI systems is to explore ethical dilemmas. The community of AI-related researchers and professionals prefers the use of frameworks in AI regulation over ad-hoc standards (Yu et al., 2018). Such regulatory frameworks were created by UNESCO, the European Parliament, and the European Commission, whose recommendations and resolutions I describe in the next chapter.

2. Regulations related to the ethical aspects of AI

The emergence of AI has raised many questions on the regulatory side. These are not only matters of legal conceptualization and categorization; legal responsibility issues related to the ethical challenges of AI form a separate subject area.

The ethical risks of AI are significant. This is supported by Deloitte's 2018 research, according to which 32% of managers rated the ethical risks of AI as so significant that they stopped their AI initiatives where appropriate2. Based on a 2019 survey by the Capgemini Research Institute, nearly half of users have experienced an ethical problem with AI, and 86% of managers stated that they are aware of cases where the use of AI has led to ethical problems (Capgemini Research Institute, 2019)3. These studies also prove that it is not enough to define the legal framework of AI; attention must also be paid to its ethical aspects.

Although the terminology is still changing, the essence of AI ethics has slowly emerged. The principles of the OECD aim at the supervision of reliable AI; similar declarations were formulated in the principles of the Joint Artificial Intelligence Center (where traceability, reliability and manageability are the most important principles). The EU's high-level expert group focuses on the principles of data protection, data management, transparency, non-discrimination and fairness (Tilesch & Hatamleh, 2021).

2.1. UNESCO - Recommendation on the ethics of artificial intelligence

AI is present in the lives of billions of people, even without their knowledge, transforming society unnoticed. Its application has many advantages, as it helps, among other things, in completing studies and finding a job. However, in addition to the benefits, these applications also generate risks and challenges. AI has significant social and cultural implications, raising issues of freedom of speech and expression, the right to privacy, property rights, discrimination, manipulation and distortion of information. Beyond these, AI poses challenges related to human cognitive ability and human interaction. Algorithms are able to support the spread of disinformation and can influence political and ideological attitudes. Deep learning processes can reinforce bias, which runs counter to the requirement of equal treatment, highlights the asymmetry between social strata and groups, and increases the digital divide and thereby the chances of digital disconnection, states UNESCO's preliminary study on artificial intelligence4.

In order to mitigate these risks and overcome the challenges, it is necessary, according to UNESCO's position, to establish both international and national regulatory frameworks. Accordingly, in November 2021 UNESCO adopted its recommendation on AI at the General Conference. As for its antecedents, at the 40th session in November 2019 the General Conference voted to develop a global standard-setting tool. The Ad Hoc Expert Group then held multidisciplinary consultations for the comprehensive implementation of the text of the recommendation5.

2 Deloitte, 2018. https://www2.deloitte.com/us/en/insights/focus/cognitive-technologies/state-of-ai-and-intelligent-automation-in-business-survey-2018.html

3 Why addressing ethical questions in AI will benefit organization. https://www.expertbibliotheek.nl/publicaties/data

4 Recommendation on the Ethics of Artificial Intelligence. https://www.unesco.org/en/legal-affairs/recommendation-ethics-artificial-intelligence

UNESCO's recommendation deals with the ethical issues related to the field of AI, approaching AI and the relevant ethical propositions as a form of normative reflection. It treats ethics as a compass, as the basis for evaluating AI technologies. The purpose of the recommendation is not to provide a definition of AI, but rather to formulate recommendations regarding the issues that are of central importance from an ethical point of view6. The goal is to create a globally accepted normative instrument which, besides formulating values such as human rights, freedoms, dignity, environmental protection, diversity, inclusion and the creation of peaceful and equitable societies, and laying down basic principles, also contains specific policy recommendations7, primarily that member states need to introduce frameworks for ethical impact assessment, within which they carry out risk assessments and supervision measures and create mechanisms for security guarantees8. In addition to ethical considerations, the recommendation also places great emphasis on data protection, international cooperation and development, and also touches on environmental and gender issues.

2.2. Ethical guidelines of the high-level expert group (HLEG) established by the European Commission

Since its inception, the European Union has faced a number of social and environmental challenges, the range of which has expanded exponentially with the increase in the number of member states. Recognizing the importance of sustainability and climate change, the EU has committed itself to several international initiatives, such as the climate convention9 or the UN Sustainable Development Goals10. Efforts have also been made within the EU along the principles of sustainability, within the framework of which the Commission established the High-Level Expert Group (HLEG) investigating the ethical aspects of AI11.

In the comprehensive work of the HLEG, the guideline covers all sectors of the application of artificial intelligence and defines not only the development, but also the moral framework of its use (Pusztahelyi, 2019). The recommendations of the expert group contributed to Commission initiatives such as the Communication on building trust in human-centered artificial intelligence12, the White Paper on Artificial Intelligence: A European Approach to Excellence and Trust13, and the updated Coordinated Plan for Artificial Intelligence14.

5 Recommendation on the Ethics of Artificial Intelligence. https://www.unesco.org/en/legal-affairs/recommendation-ethics-artificial-intelligence

6 Ibid.

7 Ibid.

8 Ibid.

9 The Paris Agreement. https://www.un.org/en/climatechange/paris-agreement

10 Sustainable Development Goals. https://sdgs.un.org/goals

11 High-Level Expert Group (HLEG). https://digital-strategy.ec.europa.eu/en/policies/expert-group-ai

The HLEG already states in the executive summary that three basic conditions must be met in the case of AI: legality, ethics and stability. The publication establishes the framework for the implementation of reliable artificial intelligence and provides guidelines for ethical and stable AI; in terms of legality, as Réka Pusztahelyi points out, it starts from the premise that the application of AI takes place within the framework provided by the existing norms (Pusztahelyi, 2019).

In the first chapter, starting from fundamental rights, it lays down the foundations of artificial intelligence and then sets out the purpose of ethics15; at the same time, it states that a code of ethics for a subfield cannot replace ethical reasoning. Similar to the UNESCO publication, it is based on fundamental rights (values): dignity, freedom, justice, democracy, equality, the rule of law and non-discrimination appear as ethical and legal entitlements. In addition, it lays down four basic principles: respect for human autonomy, prevention of harm, fairness and explainability; these principles go beyond legal requirements, but their fulfillment poses a serious dilemma for programmers (Pusztahelyi, 2019). Beyond the general principles, it lays down seven requirements, which together must be met to ensure the full application of the ethical principles. These are: human agency and oversight (including fundamental rights, human agency and human supervision); technical stability and safety (resistance to attacks and security, accuracy, reliability and reproducibility); data protection and data governance (respect for privacy, quality and integrity of data, access to data); transparency (including traceability and explainability); diversity, non-discrimination and fairness (avoidance of unfair bias, accessibility and universal design, and the participation of interested parties); societal and environmental well-being (emphasizing sustainability and environmental protection); and accountability (emphasizing auditability, minimization of negative effects, trade-offs and legal remedies)16. The publication emphasizes that these requirements are equally important, that their implementation is necessary during the entire cycle of AI application, and that some of these requirements can also be found in existing legal regulations.
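To illustrate how these seven requirements might be operationalized in practice, the following simplified sketch (in Python) records a yes/no self-assessment for each requirement and flags the ones that remain unmet. It is a hypothetical illustration of the logic only; the requirement names follow the guidelines, but the checklist itself is not part of the HLEG publication or of any official assessment tool.

    # Hypothetical self-assessment sketch based on the seven HLEG requirements.
    # The requirement names follow the guidelines; the checklist logic is illustrative only.
    HLEG_REQUIREMENTS = [
        "human agency and oversight",
        "technical stability and safety",
        "data protection and data governance",
        "transparency",
        "diversity, non-discrimination and fairness",
        "societal and environmental well-being",
        "accountability",
    ]

    def unmet_requirements(assessment):
        """Return the requirements that the assessed AI system does not yet satisfy."""
        return [req for req in HLEG_REQUIREMENTS if not assessment.get(req, False)]

    # Example: a recruitment-screening system assessed by its operator.
    assessment = {req: True for req in HLEG_REQUIREMENTS}
    assessment["transparency"] = False                 # e.g. ranking criteria not explainable
    assessment["human agency and oversight"] = False   # e.g. no human review of rejections

    print(unmet_requirements(assessment))
    # -> ['human agency and oversight', 'transparency']

An assessment of this kind would have to be repeated throughout the life cycle of the system, in line with the guidelines' emphasis that the requirements apply during the entire cycle of AI application.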

12 Communication on building trust in human-centered artificial intelligence. https://digital-strategy.ec.europa.eu/en/library/communication-building-trust-human-centric-artificial-intelligence

13 White Paper on Artificial Intelligence: a European approach to excellence and trust. https://commission.europa.eu/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en

14 Coordinated Plan on Artificial Intelligence 2021 Review, 2021. https://digital-strategy.ec.europa.eu/en/library/coordinated-plan-artificial-intelligence-2021-review

15 Ethics Guidelines for Trustworthy AI, 2021. https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines/l.html

16 Ibid.

2.3. European Parliament - Framework for Ethical Aspects of Artificial Intelligence, Robotics and Related Technologies


Following the presentation of UNESCO's recommendation and the guidelines of the HLEG, it is essential to present the framework regulation of the European Parliament (EP). The proposal's starting point is to support the use of AI, robotics and related technologies and to ensure their alignment with ethical principles. Its purpose is to create a regulation on ethical principles for the development, introduction and use of artificial intelligence, robotics and related technologies17, and thereby to establish a comprehensive and durable EU regulatory framework of ethical principles and legal obligations.

Iban Garcia del Blanco, the rapporteur responsible for the report, highlighted the advantages and risks of artificial intelligence: the goal is to achieve a more sustainable and just society, but at the same time great emphasis must be placed on the protection of privacy and the avoidance of discrimination. It is essential to create a regulation that lays down the basic ethical principles. In addition to building trust, creating security is also necessary for human-centric AI. Accordingly, the regulation is built on principles such as the evaluation of high-risk AI, robotics and similar technologies; ensuring safety, transparency and accountability; guarantees against discrimination and the right to legal remedy; social responsibility; sustainable AI technologies; respect for privacy and restrictions on biometric identification; and appropriate control over the data used and generated by the technologies. The regulation stipulates that, in the Union, any artificial intelligence, robotics and related technologies - including the software, algorithms and data used or produced by such technologies - must be developed, implemented and used in accordance with EU legislation and with due respect for human dignity, autonomy and security, as well as the other fundamental rights enshrined in the Charter18. The text of the resolution was adopted on October 20, 2020,19 and the proposal was submitted. The legal basis for submitting the proposal is Article 114 TFEU on the adoption of measures to ensure the establishment and functioning of the single internal market. The proposal is one of the central elements of the digital single market strategy. It builds on already existing legal regulations and is proportionate and necessary to achieve its objectives. It takes a risk-based approach and imposes a regulatory burden only when AI systems are likely to pose a high risk to fundamental rights or security20.

17 Framework of ethical aspects of artificial intelligence, robotics and related technologies (2020/2012(INL), 2020). https://oeil.secure.europarl.europa.eu/oeil/popups/summary.do?id=1636985&t=d&l=en

18 Ibid.

19 Ibid.

It can be seen that the EP's framework regulation is structured along ethical principles similar to those of the UNESCO recommendation and the HLEG guidelines.

2.4. Other recommendations and guidelines

The documents detailed above contain general guidelines and recommendations, with a similar logical structure and highlighting the importance of guarantees of almost identical ethical values. In the following, I would like to briefly describe the publications that are not aimed specifically at standard-setting, and that strive to implement sector-specific ethical principles.

The Institute of Electrical and Electronics Engineers (IEEE), an international organization, also issued its AI-related ethical guidelines, entitled Ethically Aligned Design (EAD), in 2019. The goal of the IEEE Global Initiative is to provide pragmatic and directional insights and recommendations that serve as a key reference for technologists, educators and policy makers (Pusztahelyi, 2019). In 2021, the organization introduced the IEEE 7000 ethical standard with the aim of enabling the development of ethical and fair AI systems; this standard is already being examined by the European Commission. The standard goes beyond the issues of data protection, transparency and reliability, as it also analyzes the social consequences of technology, for example whether technology changes the character of an autonomous person. The standard provides engineers with a clear system design and development framework. It uses various ethical theories to elicit relevant values and then ranks them using corporate or industry value lists. From these it derives a new artifact called an «ethical value requirement» (EVR), which is translated into system requirements; the system requirements are derived using a risk assessment (Spiekermann et al., 2022).
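The standard describes an engineering process rather than code, but the sequence it prescribes (eliciting values, ranking them, deriving ethical value requirements and translating them into system requirements via risk assessment) can be illustrated with a short sketch. The class and field names below are my own simplification for illustration and are not terminology fixed by IEEE 7000.

    from dataclasses import dataclass

    @dataclass
    class EthicalValueRequirement:
        # "EVR": an elicited value translated into a concrete demand on the system
        # (illustrative naming, not the standard's formal notation).
        value: str          # elicited value, e.g. "fairness"
        requirement: str    # what the system must do to honor that value
        risk: str           # qualitative risk if the requirement is not met

    def to_system_requirements(evrs):
        """Translate EVRs into engineering requirements, prioritizing the riskier ones."""
        order = {"high": 0, "medium": 1, "low": 2}
        ranked = sorted(evrs, key=lambda e: order.get(e.risk, 3))
        return [f"[{e.risk} risk] {e.requirement} (derived from value: {e.value})" for e in ranked]

    evrs = [
        EthicalValueRequirement("transparency", "log every automated screening decision", "medium"),
        EthicalValueRequirement("fairness", "test ranking outcomes for disparate impact", "high"),
        EthicalValueRequirement("privacy", "minimize personal data retained about candidates", "high"),
    ]
    for requirement in to_system_requirements(evrs):
        print(requirement)

The point of the exercise is the ordering of steps: values are made explicit first and only then turned into testable requirements, with the risk assessment deciding which of them must be addressed first.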

The United States Department of Defense also created the Joint Artificial Intelligence Center (JAIC), which launched its mission initiatives in 2019. Although the JAIC formulates guidelines for AI primarily in the military field, I still consider it significant from an ethical point of view: in order to limit the endangerment of innocent civilians, the key priority for AI systems is compliance with the legal provisions, from the first moment the requirements are established until the last rigorous testing step21.

20 Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206

21 Joint Artificial Intelligence Center. https://www.defense.gov/News/News-Stories/Article/Article/2418970/joint-artificial-intelligence-center-has-substantially-grown-to-aid-the-warfigh/

Not only organizations established at the international or national level have dealt with the ethical issues of AI: IBM also considered it important to incorporate ethics into its design and development process22, and Adobe has likewise recorded its ethical principles related to AI23.

The Russian Commission for the Implementation of the AI Ethics Code has identified the protection of human interests and rights as a key priority for the development of AI technologies; in addition to addressing liability issues and possible consequences, the Code also stipulates that AI technologies should be used for their intended purpose and integrated only in areas where they benefit people24.

3. Ethical issues related to AI in the world of work - based on the EPRS paper on AI

AI and related technologies are expected to bring numerous economic and social benefits in all sectors, including finance, healthcare and agriculture. The use of AI is particularly useful for improving forecasts and optimizing certain operations. However, we must bear in mind that AI systems have consequences for fundamental rights protected by the Charter of Fundamental Rights and can threaten rights such as equal treatment, non-discrimination, human dignity, and the protection of personal data and privacy25.

AI systems are capable of processing data sets more accurately and faster than humans, but their use carries significant risks in cases where AI makes its decision without human intervention. In the case of systems that avoid human control or operate with minimal human intervention, many ethical questions may arise, so in the absence of a certain level of testing and security guarantees, I believe that these work processes cannot be trusted.

In healthcare, AI can also be used in many cases to perform diagnostic tasks. Machine learning takes place on the basis of samples included in the data set provided to the system; however, the data may be distorted due to external influences, and accordingly the result produced by the AI may also be distorted. In a presentation on the ethics of artificial intelligence in radiology - the joint European and North American multisociety statement - it was emphasized, with reference to the IEEE standard, that the priority of the human factor is necessary, that is, that AI should be subordinated to human judgment, supervision and control26.

22 IBM: everyday ethics for Artificial Intelligence. https://www.ibm.com/design/ai/ethics/everyday-ethics/

23 Adobe AI ethics principles. https://www.adobe.com/about-adobe/aiethics.html

24 AI Alliance in Russia - AI Ethics Code. https://ethics.a-ai.ru/

25 European Parliamentary Research Service: Artificial intelligence act. (2021). https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf

Like healthcare, education, transport and energy, finance and banking, employment is also classified by the EP as a high-risk sector from the point of view of AI. AI systems are increasingly used in decision-making processes in the world of work. The consideration of ethical aspects is also significant during the practical use of AI, especially in the case of vulnerable or disadvantaged groups and persons, or in cases where the positions of the parties are unequal, be it the case of business and consumer or employer and employee (relevant from the point of view of this study). In the latter case, it is the actual subordination-superiority relationship that establishes the need for the enforcement of ethical principles.

Accordingly, the Commission presented a proposal for an AI regulatory framework in 2021. The general goal of the proposal is to create the right conditions for the development of reliable AI systems, and to this end it defines a harmonized legal framework. In addition, it sets out objectives that ensure the compliance of AI systems with EU law and legal certainty, and promote the creation and maintenance of the single market. The new normative framework would also establish a technologically neutral definition of AI systems. The proposal analyzes in detail the system of risks related to use. It follows a risk-based approach and ranks individual AI systems accordingly: from an unacceptable rating through high-risk to limited-risk systems. The proposal lays down the set of requirements for high-risk systems: prior conformity assessment, the provider's registration in the EU-level database, testing, technical stability, transparency, the requirement of human supervision, and cybersecurity. In addition, it addresses the coordination of the proposal with the EU standards currently being developed27.

In Annex III of the proposal, in addition to AI systems used for biometric identification and for student evaluation in education, the group of high-risk AI systems also includes systems related to access to employment and self-employment. Pursuant to this, providers and distributors of AI systems used to advertise and pre-screen job vacancies, recruit people, select employees, and evaluate candidates on the basis of interviews must meet a number of requirements before putting the system into operation, and strict regulation and supervision apply to the entire operation process28.
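The risk-based logic described in the last two paragraphs can be condensed into a small sketch: the tier assigned to a system, based on its intended use, determines which obligations apply before it may be put into operation. The tier names and the employment-related uses echo the proposal, but the matching logic and the list of obligations below are a simplified illustration, not the legal text.

    # Illustrative sketch of the AI Act proposal's risk-based approach (not the legal text).
    ANNEX_III_EMPLOYMENT_USES = {
        "advertising vacancies", "pre-screening applications", "recruitment",
        "selection of employees", "interview-based candidate evaluation",
        "promotion and termination decisions", "performance and behavior monitoring",
    }

    HIGH_RISK_OBLIGATIONS = [
        "prior conformity assessment",
        "registration in the EU-level database",
        "testing and technical stability",
        "transparency",
        "human supervision",
        "cybersecurity",
    ]

    def risk_tier(intended_use, prohibited=False):
        """Assign a simplified risk tier to an AI system based on its intended use."""
        if prohibited:
            return "unacceptable"            # banned practices
        if intended_use in ANNEX_III_EMPLOYMENT_USES:
            return "high"                    # employment-related systems listed in Annex III
        return "limited or minimal"

    use = "selection of employees"
    tier = risk_tier(use)
    print(f"'{use}' -> {tier} risk")
    if tier == "high":
        print("Before putting into operation:", ", ".join(HIGH_RISK_OBLIGATIONS))

In this reading, an employer's CV-screening tool would fall into the high-risk tier and would have to satisfy the listed obligations before use, which is precisely why the ethical questions discussed below cannot be treated as optional.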

26 Ethical Issues of Using AI in radiology. https://www.neuroct.hu/blog/a-mesterseges-intelligencia-alkalmazasanak-etikai-kerdesei-a-radiologiaban

27 Harmonising Artificial Intelligence: The Role of Standards in the EU AI Regulation. https://montrealethics.ai/harmonizing-artificial-intelligence-the-role-of-standards-in-the-eu-ai-regulation/

28 Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206

In connection with the employment relationship, we can already encounter work done by AI in the field of selection and recruitment in the first round: often without our knowledge, AI selects our CV from the multitude of applications received. The use of AI for this purpose significantly increases the efficiency of selection on the employer's side. However, in my opinion, such an application cannot replace the work of an HR specialist; rather, it should be seen as a supplementary and auxiliary activity, since I believe that the human interaction of a personal interview cannot be reproduced by an AI or even a robot.

During the selection process, AI can ensure the conditions of fairness and reduce the risk of bias and prejudice: during recruitment, candidates are selected solely on the basis of their professional experience and skills. At the same time, this advantage can generate ethical risks in terms of the organization's receptiveness. An excellent example of this is Amazon Inc.'s HR robot, which was trained on resumes submitted to the company over a 10-year period. Given that the technical field is characterized by male dominance, the AI, learning from CVs submitted mostly by men, «trained» itself to give preference to applications submitted by men and automatically penalized resumes that included the word «female», thereby violating gender equality as an ethical principle (Dastin, 2018).
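The mechanism behind the Amazon case is easy to reproduce in miniature: a model trained only on historical hiring outcomes learns whatever regularities those outcomes contain, including discriminatory ones. The toy data and the naive scoring below are entirely fabricated for illustration; they demonstrate the failure mode, not the actual Amazon system.

    from collections import Counter

    # Fabricated historical data: past "hired" CVs skew male because of the composition
    # of earlier hiring, not because of candidate quality.
    hired_cvs = ["male software engineer", "male developer", "software engineer",
                 "male engineer", "developer"]
    rejected_cvs = ["female software engineer", "female developer"]

    hired_words = Counter(w for cv in hired_cvs for w in cv.split())
    rejected_words = Counter(w for cv in rejected_cvs for w in cv.split())

    def score(cv):
        """Naive score: how strongly each word was associated with past hires vs. rejections."""
        return sum(hired_words[w] - rejected_words[w] for w in cv.split())

    # Two candidates with identical skills receive different scores solely because
    # the word "female" appeared only in previously rejected applications.
    print(score("software engineer"))          # 3
    print(score("female software engineer"))   # 1: the learned penalty is inherited bias

No debiasing step is shown here; the point is only that a system optimized to imitate past decisions will also imitate their prejudices, which is why the proposal treats such systems as high-risk.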

In connection with the above case, the question arises: how ethical is it for the employer to treat employees as a mere set of data? Data sets can typically be modeled, but in the absence of context the value of the data is reduced or becomes completely meaningless. As an example, Boyd and Crawford cite the graphical modeling of networks of personal relationships mapped on the basis of social media, which may provide data but does not provide complete and accurate information about the relationships under investigation (Boyd & Crawford, 2012). In the case of selecting employees, we face a similar ethical dilemma: can a recruiting AI map a candidate's suitability? The received CVs are practically context-free, impersonal data sets from which conclusions can be drawn, but I believe we cannot fully rely on them.

A similar dilemma can also arise if AI is used to analyze the workforce in order to carry out a collective redundancy process. In Annex III of the proposal, AI software whose purpose is to make decisions on the promotion or termination of work-related contractual relationships is also classified as high-risk, and the same classification applies to systems that monitor and evaluate the performance and behavior of persons in such legal relationships (AI Act Proposal). There is no doubt that AI makes fair decisions based on performance evaluation, but is it ethical to get rid of an employee who performs less well at a given moment, and who is raising his or her children alone, just because the AI ranked him or her in a lower category during the performance evaluation? Although the usefulness of AI in certain processes is indisputable, I do not support treating employees as a set of data. I believe that when making such decisions we must consider other factors, such as the employee's opportunities for further training and retraining and the employee's flexibility, and we cannot forget his or her family background or belonging to other disadvantaged groups. In this regard, when developing the operating regulations of a member state or of a specific organization, in addition to striving for innovation, we must consider the definition of the machine-human relationship and the relationships of supervision and responsibility, and (in accordance with the EU legislative proposal) it is necessary to ensure the transparency of the application of AI and the possibility of human override of its decisions. It can be seen that machines are increasingly involved in ethical decisions that require ethical explanation. Current machine learning algorithms are ethically inscrutable, though not very different from human behaviour in this respect. The article by Marius Dorobantu and Yorick Wilks explores the role of rationality and reasoning in traditional ethical thinking and in artificial intelligence, emphasising the need for some explanation of actions. It explores Neil Lawrence's embodiment factor as an approach to the differences between human and machine intelligence, linking it to a theological understanding of personhood, and proposes the notion of artificial moral orthoses that can provide ethical explanations for both artificial and human agents as a more promising unifying approach to human and machine ethics (Dorobantu & Wilks, 2019).
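A human-override requirement of the kind referred to above can be sketched as a simple gate placed in front of the automated decision: the system may recommend, but an adverse action such as a dismissal only takes effect once a human reviewer, who can see the context the model does not, confirms it. The function and field names below are illustrative assumptions, not a design prescribed by the proposal.

    from dataclasses import dataclass

    @dataclass
    class Recommendation:
        employee_id: str
        action: str         # e.g. "terminate" or "retain"
        model_score: float  # performance score produced by the AI system
        rationale: str      # explanation logged for transparency

    def apply_with_human_override(rec, reviewer):
        """Execute an AI recommendation only if a human reviewer confirms the adverse action."""
        if rec.action == "retain":
            return f"{rec.employee_id}: retained (no adverse action, no review needed)"
        approved = reviewer(rec)  # the reviewer can weigh family situation, retraining options, etc.
        if approved:
            return f"{rec.employee_id}: {rec.action} confirmed by human reviewer"
        return f"{rec.employee_id}: {rec.action} overridden; alternative measures to be considered"

    # Example: the reviewer rejects a termination recommended purely on a low score.
    recommendation = Recommendation("E-104", "terminate", 0.41,
                                    "lowest quartile in the last evaluation cycle")
    print(apply_with_human_override(recommendation, reviewer=lambda r: False))

The design choice worth noting is that the override sits outside the model: transparency is served by the logged rationale, while accountability remains with the human who confirms or rejects the recommendation.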

Conclusions

In the era of Artificial Intelligence (AI), ethics has taken on a whole new level of importance and debate (Sudhi & Huraimel, 2021). To date, no consensus has been reached regarding the risks of AI. Although it would be a shame to block innovation, there is no doubt that, in view of certain ethical issues, human supervision is necessary for the application of AI. The guidelines, recommendations and draft regulations briefly presented in this study aim at this as well, and they show that the goal is to create a legal framework within which the technology can be successfully applied in the future.

Innovation is at the heart of every civilisation. The Belmont Report of 1978 outlined three ethical principles - respect for persons, beneficence and justice - which have formed the basis of research on human subjects. However, independent human research review committees and their regulations are struggling to cope with internet research ethics, big data and artificial intelligence research, as evidenced by the 2014 Facebook* Emotional Contagion study and the controversies surrounding the 2016 «AI gaydar» research (Tang, 2020).

As can be seen from the analyses above, the ethical risk of artificial intelligence capable of autonomous operation poses a significant challenge to regulation. Machine learning has become a popular tool in many criminal justice applications, including sentencing and policing. However, predictive policing systems also have the potential to create uneven effects and exacerbate social injustices. Although previous research has shown that machine learning models can effectively handle certain tasks, they are prone to replicating the systemic bias of previous human decision makers. Nevertheless, little academic research has addressed the importance of fairness in machine learning applications in policing (Alikhademi et al., 2022).

The relevant recommendations provide a framework for the development of regulations; however, it is necessary to harmonize international and national standards and to reorganize the standards systems. Labor regulations can also play a big role in this approach. The binding contracts and resources that set the goal of achieving full employment will be of decisive importance, and the instruments regulating collective redundancies will also play a significant role in this regard (Stefano, 2018). Efforts to regulate AI are visible not only in the European Union: while a risk-based approach to the problem is typical at EU level, more specific guidelines have been proposed in the United States (Sussman, 2021). However, the common feature is that in order to promote research and development and to expand the use of AI technology, it is necessary to create an infrastructure that encourages the cooperation of the actors and ensures operation according to regulatory and, not least, ethical frameworks. The interaction between AI (robots) and humans must be understood, as well as the issues of emotional intelligence - points out Jozsef Hajdu (Hajdu, 2020).

States have recognized the importance of AI and have begun to develop a regulatory environment centered on an autonomous AI strategy. On April 25, 2018, Hungary signed the Declaration on Cooperation on Artificial Intelligence together with 25 European states, which recorded the intention of the signatories to cooperate in the field of European AI developments and the support of AI-supported innovation29.

We are at a turning point in the debate on the ethics of artificial intelligence (AI), because we are witnessing general-purpose AI textual agents, such as GPT-3, that can generate large-scale, highly refined content that appears to be written by humans. On the other side stands natural language processing (Krutilla & Kovari, 2022). At the same time, we can see a lack of discussion in the business community about the ethical issues surrounding the merging of the roles of humans and machines in content production (Illia et al., 2023).

Ethical decision-making is the central issue of our time. In my opinion, autonomous intelligent systems do not consider the needs appearing on the human side; they make purely objective-based decisions. Therefore, it is necessary to create human-centered AI that is in line with the values and ethical principles of society, that is, to put in place the brakes necessary from an ethical point of view in order to protect the institutions of fairness and dignity.

Based on the above, it can be said that the application of AI results in a complex change. For this, the creation of regulatory frameworks and ensuring transparency are essential. From this aspect, I wanted to present the individual regulatory solutions in this study, highlighting the question of the ethics of AI applied in labor relations.

"AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans. We are not smart enough to make AI ethical. We are not smart enough to make AI moral ... In the end, I believe that the

29 Hungary's Artificial Intelligence Strategy 2020-2030. https://ai-watch.ec.europa.eu/countries/hungary/ hungary-ai-strategy-report_en

only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defense against AI." (Connock, 2023).

* The organization is recognized as extremist; its activity is prohibited in the territory of the Russian Federation.

References

Alikhademi, K., Drobina, E., Prioleau, D., Richardson, B., & Gilbert, J. E. (2022). A review of predictive policing from the perspective of fairness. Artificial Intelligence and Law, 4(23), 1-17. https://doi.org/10.1007/s10506-021-09286-4

Beranger, J. (2021). Societal Responsibility of Artificial Intelligence: Towards an Ethical and Eco-responsible AI. UK: Wiley-Iste.

Boyd, D., & Crawford, K. (2012). Az adatrengeteg kínos kérdései. Információs Társadalom, 12(2), 7. https://doi.org/10.22503/inftars.xii.2012.2.1

Candriam Academy. (2022). What is the European Commission's HLEG?

Capgemini Research Institute. (2019). Why addressing ethical questions in AI will benefit organization.

Capurro, R. (2018). Digital Ethics. International Journal of Applied Research on Information Technology and Computing, 9/1, 23-31.

Connock, A. (2023). Media Management and Artificial Intelligence: Understanding Media Business Models in the Digital Age. UK: Routledge.

Cyman, D., Gromova, E., & Juchnevicius, E. (2021). Regulation of Artificial Intelligence in BRICS and the European Union. BRICS Law Journal, 8(1), 86-115. https://doi.org/10.21684/2412-2343-2021-8-1-86-115

Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters.

Dorobantu, M., & Wilks, Y. (2019). Moral orthoses: a new approach to human and machine ethics. Zygon Journal of Religion and Science, 54(4), 12-23. https://doi.org/10.1111/zygo.12560

Eszteri, D. (2015). A mesterséges intelligencia fejlesztésének és üzemeltetésének egyes felelősségi kérdései. Infokommunikáció és Jog, 47-57. ISSN 1786-0776

Fleischmann, S. S., Greenberg, K. R., Verma, N., Cummings, B., Li, L., & Shenefel, C. (2023). Locating the work of artificial intelligence ethics. Journal of the Association for Information Science and Technology, 74(3), 311-322. https://doi.org/10.1002/asi.24638

Fobel, P. (2002). Alkalmazott filozófia és etika. In S. Karikó (Szerk.), Az alkalmazott filozófia esélyei. Budapest: Áron Kiadó.

Hajdú, J. (2020). A mesterséges intelligencia hatása a munkaerőpiacra, avagy elveszik-e a robotok az ember munkáját. Infokommunikáció és Jog, 7.

Harmathy, A. (2019). A polgári jog a változó jogrendszerben. In V. Lamm, & A. Sajó, Studia in honorem Lajos Vékás. Budapest: HVG-ORAC Lap- és Könyvkiadó Kft.

Hoffmann, A. L., Roberts, S. T., Wolf, C. T., & Wood, S. (2019). Beyond fairness, accountability, and transparency in the ethics of algorithms: Contributions and perspectives from LIS. Proceedings of the Association for Information Science and Technology, 55(1), 694-696. https://doi.org/10.1002/pra2.2018.14505501084

Illia, L., Colleoni, E., & Zyglidopoulos, S. (2023). Ethical implications of text generation in the age of artificial intelligence. Business Ethics, the Environment & Responsibility, 32(1), 201-210. https://doi.org/10.1111/beer.12479

Karvalics, Z. L. (2015). Mesterséges intelligencia - a diskurzusok újratervezésének kora. Információs Társadalom.

Knowles, M. A. (2021). Five Motivating Concerns for AI Ethics Instruction. Proceedings of the Association for Information Science and Technology, 58, 472-476. https://doi.org/10.1002/pra2.481

Krutilla, Z., & Kovári, A. (2022). The origin and primary areas of application of natural language processing. In 2022 IEEE 22nd International Symposium on Computational Intelligence and Informatics and 8th IEEE International Conference on Recent Achievements in Mechatronics, Automation, Computer Science and Robotics (pp. 293-298). https://doi.org/10.1109/cinti-macro57952.2022.10029432

Legeza, L. (2013). Mérnöki etika. Budapest.

Müller, J., & Kerényi, Á. (2019). A bizalom és etika igénye a digitális korszakban. Hitelintézeti Szemle, 18/4, 8-19.

Négyesi, I. (2020). A mesterséges intelligencia és az etika. Társadalomtudomány, 104.

Nemeth, G. (2021). Jogászi etikai kihívások a technológiai fejlődés tükrében: az etika és jog innovációjának aktuális kérdései. Tanulmányok. http://real.mtak.hu/108838/1/JAP-2020-01_NG.pdf

Pusztahelyi, R. (2019). Bizalmunkra méltó MI - A mesterséges intelligencia fejlesztésének és alkalmazásának erkölcsi-etikai vonatkozásairól. Publicationes Universitatis Miskolcinensis Sectio Juridica et Politica, XXXVII/2, 99.

Spiekermann, S., Krasnova, H., Hinz, O., Baumann, A., Benlian, A., Grimple, H., & Trenz, M. (2022). Values and Ethics in Information Systems. Business & Information Systems Engineering, 64, 247-264. https://doi.org/10.1007/s12599-021-00734-8

Stefan, I. (2020). A mesterséges intelligencia fogalmának polgári jogi értelmezése. Pro Futuro, 1, 29-39.

Stefano, V. (2018). "Negotiating the algorithm": Automation, artificial intelligence and labour protection. Employment Working Paper, 246.

Sudhi, S., & Huraimel, K. (2021). Dealing with Ethics, Privacy, and Security. In Reimagining Businesses with AI (pp. 193-206).

Sussman, E. H. (2021). U.S. Artificial Intelligence Regulation takes shape.

Tang, B. (2020). Independent AI Ethics Committees and ESG Corporate Reporting on AI as Emerging Corporate and AI Governance Trends. In S. Chishti, I. Bartoletti, A. Leslie, & S. Millie (Eds.), The AI Book: The Artificial Intelligence Handbook for Investors, Entrepreneurs and FinTech Visionaries. https://doi.org/10.1002/9781119551966.ch48

Tilesch, G., & Hatamleh, O. (2021). Mesterség és Intelligencia. Libri Kiadó.

Turay, A. (2000). Az ember és az erkölcs - Alapvető etika Aquinói Tamás nyomán. Szeged: Agape, Ferences Nyomda és Könyvkiadó Kft.

Yu, H., Shen, Zh., Miao, Ch., Leung, C., Lesser, V. R., & Yang, Q. (2018). Building ethics into Artificial Intelligence. http://arxiv.org/pdf/1812.02953.pdf

Author information

Zsofia Riczu - PhD candidate, Faculty of Law, Agricultural and Labour Law Department, University of Miskolc

Address: Miskolc-Egyetemvaros, Miskolc, Hungary

E-mail: jogriczu@uni-miskolc.hu

ORCID ID: https://orcid.org/0000-0002-4024-5833

Conflict of interest

The author declares no conflict of interest.

Financial disclosure

The research had no sponsorship.

Thematic rubrics

OECD: 5.05 / Law
PASJC: 3308 / Law
WoS: OM / Law

Article history

Date of receipt - April 9, 2023
Date of approval - April 22, 2023
Date of acceptance - June 16, 2023
Date of online placement - June 20, 2023

Научная статья

УДК 341.1/8:004.8

EDN: https://elibrary.ru/cpffyw

DOI: https://doi.org/10.21202/jdtl.2023.21

з

Check for updates

Рекомендации по этическим аспектам искусственного интеллекта в приложении к сфере трудовых отношений

София Рицу О

Университет Мишкольца г. Мишкольц, Венгрия

Ключевые слова

iНе можете найти то, что вам нужно? Попробуйте сервис подбора литературы.

Законодательство, искусственный интеллект, право,

принципы права, труд,

трудовое право, трудовые отношения, цифровизация, цифровые технологии, этика

Аннотация

Цель: распространение и широкое использование искусственного интеллекта выдвигает на первый план не только проблемы защиты данных, но и этические вопросы. Цель данной статьи - изучение этических аспектов искусственного интеллекта и предложение рекомендаций для его использования в трудовом праве. Методы: исследование основано на методах сравнительного и эмпирического анализа. Сравнительный анализ позволил изучить положения современного трудового законодательства в контексте искусственного интеллекта. Эмпирический анализ выявил этические проблемы, относящиеся к искусственному интеллекту в сфере труда, путем изучения спорных случаев использования искусственного интеллекта в различных областях, таких как здравоохранение, образование, транспорт и др.

Результаты: частноправовые аспекты этических проблем искусственного интеллекта были изучены в контексте этических и трудовых вопросов права, влияющих на процесс отбора с помощью искусственного интеллекта и на обращение с работниками с точки зрения работодателя. Автор выделяет как общие аспекты этики, так и вопросы цифровой этики. Предложены отдельные международные рекомендации относительно этики искусственного интеллекта.

Научная новизна: исследование посвящено изучению этических аспектов использования искусственного интеллекта в конкретной отрасли частного права - трудовом праве. Автор дает рекомендации относительно этических аспектов использования искусственного интеллекта в данной сфере.

© Рицу С., 2023

Статья находится в открытом доступе и распространяется в соответствии с лицензией Creative Commons «Attribution» («Атрибуция») 4.0 Всемирная (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/deed.ru), позволяющей неограниченно использовать, распространять и воспроизводить материал при условии, что оригинальная работа упомянута с соблюдением правил цитирования.

Практическая значимость: исследование восполняет имеющиеся пробелы в научной литературе по указанному вопросу. Результаты работы могут использоваться в процессе законотворчества и служить базой для дальнейших исследований.

Для цитирования

Рицу, С. (2023). Рекомендации по этическим аспектам искусственного интеллекта в приложении к сфере трудовых отношений. Journal of Digital Technologies and Law, 1(2), 498-519. https://doi.org/10.21202/jdtl.2023.21

Список литературы

Alikhademi, K., Drobina, E., Prioleau, D., Richardson, B., & Gilbert, J. E. (2022). A review of predictive policing from the perspective of fairness. Artificial Intelligence and Law, 4(23), 1-17. https://doi.org/10.1007/s10506-021-09286-4

Beranger, J. (2021). Societal Responsibility of Artificial Intelligence: Towards an Ethical and Eco-responsible AI. UK: Wiley-Iste.

Boyd, D., & Crawford, K. (2012). Az adatrengeteg kínos kérdései. Információs Társadalom, 12(2), 7. https://doi.org/10.22503/inftars.xii.2012.2.1

Candriam Academy. (2022). What is the European Commission's HLEG?

Capgemini Research Institute. (2019). Why addressing ethical questions in AI will benefit organization.

Capurro, R. (2018). Digital Ethics. International Journal of Applied Research on Information Technology and Computing, 9(1), 23-31.

Connock, A. (2023). Media Management and Artificial Intelligence: Understanding Media Business Models in the Digital Age. UK: Routledge.

Cyman, D., Gromova, E., & Juchnevicius, E. (2021). Regulation of Artificial Intelligence in BRICS and the European Union. BRICS Law Journal, 8(1), 86-115. https://doi.org/10.21684/2412-2343-2021-8-1-86-115

Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters.

Dorobantu, M., & Wilks, Y. (2019). Moral orthoses: a new approach to human and machine ethics. Zygon Journal of Religion and Science, 54(4), 12-23. https://doi.org/10.1111/zygo.12560

Eszteri, D. (2015). A mesterséges intelligencia fejlesztésének és üzemeltetésének egyes felelősségi kérdései. Infokommunikáció és Jog, 47-57. ISSN 1786-0776

Fleischmann, S. S., Greenberg, K. R., Verma, N., Cummings, B., Li, L., & Shenefel, C. (2023). Localizing the work of artificial intelligence ethics. Journal of the Association for Information Science and Technology, 74(3), 311-322. https://doi.org/10.1002/asi.24638

Fobel, P. (2002). Alkalmazott filozófia és etika. In S. Karikó (Szerk.), Az alkalmazott filozófia esélyei. Budapest: Áron Kiadó.

Hajdú, J. (2020). A mesterséges intelligencia hatása a munkaerőpiacra, avagy elveszik-e a robotok az ember munkáját. Infokommunikáció és Jog, 7.

Harmathy, A. (2019). A polgári jog a változó jogrendszerben. In V. Lamm, & A. Sajó, Studia in honorem Lajos Vékás. Budapest: HVG-ORAC Lap- és Könyvkiadó Kft.

Hoffmann, A. L., Roberts, S. T., Wolf, C. T., & Wood, S. (2019). Beyond fairness, accountability, and transparency in the ethics of algorithms: Contributions and perspectives from LIS. Proceedings of the Association for Information Science and Technology, 55(1), 694-696. https://doi.org/10.1002/pra2.2018.14505501084

Illia, L., Colleoni, E., & Zyglidopoulos, S. (2023). Ethical implications of text generation in the age of artificial intelligence. Business Ethics, the Environment & Responsibility, 32(1), 201-210. https://doi.org/10.1111/beer.12479

Karvalics, Z. L. (2015). Mesterséges intelligencia - a diskurzusok újratervezésének kora. Információs Társadalom.

Knowles, M. A. (2021). Five Motivating Concerns for AI Ethics Instruction. Proceedings of the Association for Information Science and Technology, 58, 472-476. https://doi.org/10.1002/pra2.481

Krutilla, Z., & Kővári, A. (2022). The origin and primary areas of application of natural language processing. In 2022 IEEE 22nd International Symposium on Computational Intelligence and Informatics and 8th IEEE International Conference on Recent Achievements in Mechatronics, Automation, Computer Science and Robotics (pp. 293-298). https://doi.org/10.1109/cinti-macro57952.2022.10029432

Legeza, L. (2013). Mérnöki etika. Budapest.

Müller, J., & Kerényi, Á. (2019). A bizalom és etika igénye a digitális korszakban. Hitelintézeti Szemle, 18(4), 8-19.

Négyesi, I. (2020). A mesterséges intelligencia és az etika. Társadalomtudomány, 104.

Németh, G. (2021). Jogászi etikai kihívások a technológiai fejlődés tükrében: az etika és jog innovációjának aktuális kérdései. Tanulmányok. http://real.mtak.hu/108838/1/JAP-2020-01_NG.pdf

Pusztahelyi, R. (2019). Bizalmunkra méltó MI - A mesterséges intelligencia fejlesztésének és alkalmazásának erkölcsi-etikai vonatkozásairól. Publicationes Universitatis Miskolcinensis Sectio Juridica et Politica, XXXVII(2), 99.

Spiekermann, S., Krasnova, H., Hinz, O., Baumann, A., Benlian, A., Grimple, H., & Trenz, M. (2022). Values and Ethics in Information Systems. Bus Inf Syst Eng, 64, 247-264. https://doi.org/10.1007/s12599-021-00734-8

Stefan, I. (2020). A mesterséges intelligencia fogalmának polgári jogi értelmezése. Pro Futuro, 1, 29-39.

Stefano, V. (2018). "Negotiating the algorithm": Automation, artificial intelligence and labour protection. Employment Working Paper, 246.

Sudhi, S., & Huraimel, K. (2021). Dealing with Ethics, Privacy, and Security. In Reimagining Businesses with AI (pp. 193-206).

Sussman, E. H. (2021). U.S. Artificial Intelligence Regulation takes shape.

Tang, B. (2020). Independent AI Ethics Committees and ESG Corporate Reporting on AI as Emerging Corporate and AI Governance Trends. In S. Chishti, I. Bartoletti, A. Leslie, & S. Millie, The AI Book: The Artificial Intelligence Handbook for Investors, Entrepreneurs and FinTech Visionaries. https://doi.org/10.1002/9781119551966.ch48

Tilesch, G., & Hatamleh, O. (2021). Mesterség és Intelligencia. Libri Kiadó.

Turay, A. (2000). Az ember és az erkölcs - Alapvető etika Aquinói Tamás nyomán. Szeged: Agape, Ferences Nyomda és Könyvkiadó Kft.

Yu, H., Shen, Zh., Miao, Ch., Leung, C., Lesser, V. R., & Yang, Q. (2018). Building ethics into Artificial Intelligence. http://arxiv.org/pdf/1812.02953.pdf

Сведения об авторе

София Рицу - аспирант кафедры аграрного и трудового права, Университет Мишкольца

Адрес: Университетский городок, г. Мишкольц, Венгрия

E-mail: jogriczu@uni-miskolc.hu

ORCID ID: https://orcid.org/0000-0002-4024-5833

Конфликт интересов

Автор заявляет об отсутствии конфликта интересов.

Финансирование

Исследование не имело спонсорской поддержки.

Тематические рубрики

Рубрика OECD: 5.05 / Law
Рубрика ASJC: 3308 / Law
Рубрика WoS: OM / Law

Рубрика ГРНТИ: 10.87.91 / Международное право в практике отдельных государств
Специальность ВАК: 5.1.5 / Международно-правовые науки

История статьи

Дата поступления - 9 апреля 2023 г.
Дата одобрения после рецензирования - 22 апреля 2023 г.
Дата принятия к опубликованию - 16 июня 2023 г.
Дата онлайн-размещения - 20 июня 2023 г.
