
DOI: 10.24234/wisdom.v2i1.763

Valeriya KONOVALOVA, Elena MITROFANOVA, Alexandra MITROFANOVA, Rita GEVORGYAN

THE IMPACT OF ARTIFICIAL INTELLIGENCE ON HUMAN RESOURCES MANAGEMENT STRATEGY: OPPORTUNITIES FOR THE HUMANISATION AND RISKS

Abstract

The article discusses the growing role of artificial intelligence in human resources management strategy. The results of research and practical experience that confirm the possibility of using artificial intelligence to humanise human resource management (reducing bias in personnel selection, managing employee experience, personalising training, analysing employees' emotional state, and managing their wellbeing) are generalised. The article highlights the risks of dehumanisation of personnel management when introducing artificial intelligence, which can be caused both by new threats and by the strengthening of existing problems in this area.

Keywords: artificial intelligence, digital humanism, experience management, engagement, wellbeing, discrimination, HR management strategy.

Introduction

There is no doubt that the development of intelligent automation (robotics, artificial intelligence) will revolutionise all areas of activity. According to a global Deloitte study, 59% of organisations believe that redesigning workplaces to integrate artificial intelligence (AI) technologies is essential or very important to their success in the coming years (Deloitte, 2020). Digital platforms and tools have already significantly changed models of working with human resources.

Large companies, and even midsize organisations with employees spread across geographic regions, are investing in cloud, mobile and AI technologies to offer integrated human resource management services seamlessly and in real time. At the same time, there is still a lack of comprehensive understanding of the consequences of using these technologies in human resource management at the organisational (firm) and individual (employee) levels.

The perspectives and problems of implementing digital technologies and AI in company management and human resources strategy are, in most cases, considered in the context of increasing operational efficiency, income and productivity, the obsolescence of jobs and replacement of employees, and the need to master new skills in connection with changing professional requirements (Arslan, Ruman, Naughton, & Tarba, 2021; Coupe, 2019; Ivanov & Webster, 2019; Malik, Budhwar, & Srikanth, 2020).

Meanwhile, no less significant are the possibilities of using AI in the context of transhumanism to complement and expand human capabilities in areas such as value and talent management, human resource development, employee experience management, motivation and the expansion of work opportunities, and the creation of a more positive work environment (Diéguez, 2017; Wagner, 2020; Wagner, 2021).

The AI revolution makes HR processes less mechanical and more human-centred. However, AI technologies develop according to their own laws while giving rise to new problems, hidden risks, and threats of both a technical and a socio-ethical nature (Cortina & Serra, 2016; Martin, 2019). In various professional fields, ethical dilemmas can arise in AI applications, as moral values and principles come into play and even human rights may be violated (Coeckelbergh, 2021; Fernández-Fernández, 2021; Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016).

Artificial Intelligence as a Factor in Human-Centered Management

Human resource management is moving to a new level: from information technology-based management (eHRM) to management driven by intelligent automation.

In a number of organisations, first-generation AI (using AI only for specific tasks) is already a common occurrence. It is predicted that, before long, first-generation AI will evolve into so-called artificial general intelligence, which will be able to independently reason, plan and solve problems it was not even designed for (Yampolskiy, 2015).

In human resource management, AI has a wide range of applications. Gartner identifies three common use cases for AI in human resource management: attracting and retaining talent; analysing surveys ("voice of the employee" analytics); and HR virtual assistants (Gartner, 2019). According to a global study by Mercer (2019), the main areas of AI application in human resource management include identifying the best candidates based on publicly available data, providing training recommendations and training for employees, and screening and evaluating candidates for employment, including with the use of chatbots.

Companies planning to invest in AI are targeting the following areas: the use of chatbots for employee self-service (for example, to change benefits or request vacation), identification of employees planning to leave, planning of job offers or career advancement for employees, assistance in the performance management process, benchmarking to create or improve a system of benefits and compensation, etc. (Nica, Miklencicova, & Kicova, 2019; Rodney, Valaskova, & Durana, 2019).

Summarising the research results allows us to conclude that the expanding use of AI can contribute to the humanisation of HR strategy and technologies (through electronic recruitment, e-learning, or e-competency management), as well as of the activities of HR specialists. Among other things, the multidimensional process of humanising modern management manifests itself in its impact on various aspects of the organisation's activities. The main point remains the creation of more comfortable conditions for the person and bringing their needs and requirements to the forefront.

In the talent economy, an organisation's future depends on attracting and retaining outstanding people. Continuously improving through new data and machine learning, AI can identify talents with characteristics similar to existing successful employees, actively invite them to apply, collect and summarise demographic data and work history from candidate interviews and, on this basis, predict how well they will be able to do their job in the company.
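As an illustration of this kind of candidate-fit prediction, the sketch below trains a simple classifier on historical employee records and ranks applicants by their predicted probability of success. It is a minimal, hypothetical example rather than any of the systems discussed above: the feature names, file names, and the choice of a gradient-boosting model are assumptions made purely for illustration.

```python
# A minimal sketch (not the authors' or any vendor's system): scoring
# candidates against profiles of existing successful employees.
# Feature names and CSV files are hypothetical placeholders.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
import pandas as pd

# Historical employee records labelled with whether the person
# became a high performer.
employees = pd.read_csv("employee_history.csv")          # hypothetical file
features = ["years_experience", "skills_score", "num_prior_roles"]
X, y = employees[features], employees["high_performer"]  # 0/1 label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("hold-out accuracy:", model.score(X_test, y_test))

# Rank new applicants by predicted probability of success in the role.
candidates = pd.read_csv("applicants.csv")                # hypothetical file
candidates["fit_score"] = model.predict_proba(candidates[features])[:, 1]
print(candidates.sort_values("fit_score", ascending=False).head())
```

Such a score would typically be one input to a human decision, not a replacement for it, precisely because of the bias risks discussed later in the article.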

AI technology is expected to help organisations identify potential biases in their hiring patterns and avoid discrimination based on gender, age, race, and ethnicity, reducing human bias and providing data-driven, objective representation.

AI can also be used to improve staff adaptation and experience management. An increase in the popularity of HR mentoring through Organization Guidance Systems (OGS) is predicted. Such systems determine the desired investment outcomes, the roadmap for achieving those outcomes, and the requirements for sustainable development. It is worth mentioning that AI technology allows new hires to receive support anytime and anywhere via chatbots and remote support apps and empowers employees to adapt at their own pace.

AI helps optimise motivation, engagement, and participation strategies by creating a transparent culture of collaboration, and it provides personalised on-the-job training to employees throughout their tenure with minimal staff effort. AI can be used to update e-learning with specialised game programs and simulated workplace learning tailored to specific needs, contributing to employee retention.
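One simple way such personalisation can work is content-based recommendation against a role's target skill profile. The sketch below is a hypothetical illustration under assumed skill names, target levels, and a module catalogue; real systems would learn these from activity and competency data rather than hard-code them.

```python
# A minimal sketch of personalised training recommendations: compare an
# employee's current skill levels against the target profile for their role
# and suggest modules for the largest gaps. All names and levels are
# hypothetical assumptions for illustration.
ROLE_TARGETS = {"python": 4, "data_analysis": 4, "communication": 3, "sql": 3}
MODULES = {
    "python": "Intermediate Python workshop",
    "data_analysis": "Applied analytics simulation",
    "communication": "Stakeholder communication game",
    "sql": "SQL fundamentals course",
}

def recommend(skills: dict[str, int], top_n: int = 2) -> list[str]:
    """Return training modules for the employee's largest skill gaps."""
    gaps = {s: target - skills.get(s, 0) for s, target in ROLE_TARGETS.items()}
    ranked = sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)
    return [MODULES[s] for s, gap in ranked[:top_n] if gap > 0]

employee_skills = {"python": 2, "data_analysis": 3, "communication": 3, "sql": 1}
print(recommend(employee_skills))
# -> ['Intermediate Python workshop', 'SQL fundamentals course']
```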

Implementing AI for competency mapping, succession planning, and career development enables data-driven solutions that lead to long-term employee engagement. Some AI programs can measure employee KPIs to determine who should be promoted, thereby stimulating internal staff mobility.

AI-integrated systems can also help train employees in an environment of continuous change and generate individual programs and learning strategies based on activities and competency analysis.

Today, rather than relying on outdated employee engagement methods, companies can use employee-generated data that reflect their emotional state (for example, from internal chat platforms such as Jabber, Yammer, and Chatter). While it is impossible to identify all the reasons why someone might leave a job, it is quite realistic to monitor indicators such as productivity and job satisfaction. By combining them with new analytical approaches such as sentiment analysis, it is possible to obtain a detailed picture of employees' states of mind and predict departures. AI integration also makes it possible to explore the common traits of employees who have left; by highlighting talents at risk, the company can proactively address potential issues.
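A minimal sketch of this idea is shown below: a crude lexicon-based sentiment score over internal messages is combined with productivity and satisfaction indicators in a logistic-regression model that flags attrition risk. The lexicon, the feature set, and the toy training data are illustrative assumptions, not any vendor's actual method.

```python
# A minimal sketch, not a production system: combine a simple sentiment
# score over internal chat messages with productivity and satisfaction
# indicators to flag employees at attrition risk.
import pandas as pd
from sklearn.linear_model import LogisticRegression

POSITIVE = {"great", "happy", "excited", "thanks"}      # toy lexicon
NEGATIVE = {"frustrated", "tired", "overloaded", "quit"}

def sentiment(messages: list[str]) -> float:
    """Crude lexicon score in [-1, 1]: (pos - neg) / matched words."""
    words = " ".join(messages).lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(pos + neg, 1)

# Hypothetical historical data: per-employee features and whether they left.
history = pd.DataFrame({
    "sentiment":    [0.6, -0.8, 0.2, -0.5, 0.7, -0.9],
    "productivity": [0.9,  0.4, 0.7,  0.5, 0.8,  0.3],
    "satisfaction": [0.8,  0.2, 0.6,  0.3, 0.9,  0.1],
    "left_company": [0,    1,   0,    1,   0,    1],
})
model = LogisticRegression().fit(
    history[["sentiment", "productivity", "satisfaction"]],
    history["left_company"],
)

# Score a current employee from their recent messages and KPIs.
current = pd.DataFrame([{
    "sentiment": sentiment(["feeling overloaded and tired this sprint"]),
    "productivity": 0.45,
    "satisfaction": 0.30,
}])
risk = model.predict_proba(current)[0, 1]
print(f"attrition risk: {risk:.2f}")  # flag for proactive follow-up if high
```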

In addition, research shows that AI tools are better at analysing employee surveys than people are. With tools such as Oracle Fusion HCM, HR managers can access personalised information about their employees' concerns and use it to defuse negative moods or challenging situations before they escalate.

AI can help maintain a consistent tone of content by personalising messages sent to each individual recipient, which means it can effectively communicate a message to a range of demographic groups, both inside and outside the company. Chatbots that give real-time answers to frequently asked questions provide additional convenience: any employee can enter a question and quickly receive an automatic response. Smart objects and the Internet of Things (IoT) can facilitate more effective coordination and collaboration.
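The retrieval step of such an FAQ assistant can be as simple as matching the employee's question against stored questions with TF-IDF cosine similarity, as in the sketch below. The FAQ entries, the similarity threshold, and the fallback message are hypothetical placeholders.

```python
# A minimal sketch of an FAQ assistant: match an employee's question to the
# closest stored FAQ entry and return the canned answer. Content is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = {
    "How do I request vacation days?": "Submit a leave request in the HR portal.",
    "How do I change my benefits plan?": "Benefits can be changed during open enrolment.",
    "Who do I contact about payroll errors?": "Email the payroll team via the service desk.",
}
questions = list(faq.keys())

vectorizer = TfidfVectorizer().fit(questions)
faq_matrix = vectorizer.transform(questions)

def answer(user_question: str) -> str:
    similarities = cosine_similarity(
        vectorizer.transform([user_question]), faq_matrix
    )[0]
    best = similarities.argmax()
    # Fall back to a human if nothing matches well enough.
    if similarities[best] < 0.2:
        return "Let me connect you with an HR specialist."
    return faq[questions[best]]

print(answer("how can I take vacation next month?"))
```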

At the core of any AI system there are massive amounts of data that can be applied to any number of practical human resource management benefits, from increasing employee satisfaction to decreasing workloads and increasing revenue (currently, only 29% of employees consider that HR helps them perform better) (Mercer, 2019). By freeing employees from labour-intensive and intellectually unattractive tasks, AI can give them time to learn new skills or develop existing ones, resulting in more experienced and valuable employees.

AI can be used to analyse time off requests and build smarter, personalised work schedules so that employees can better control their work-life balance.

In a crowded job market, AI can be used to relieve pressure on hiring managers by helping to select candidates before a person is even involved. Expanding the practice of communicating with customers using automated systems will allow the staff to focus on more complex issues.

According to a study by LinkedIn, 67% of hiring managers and recruiters said AI saves them time when looking for candidates. AI can make the hiring process more convenient both for the hiring organisation and for job seekers (Konovalova & Mitrofanova, 2021). For example, artificial intelligence technology can streamline application processes by creating more user-friendly forms that a candidate is more likely to complete, effectively reducing the number of abandoned applications.

At the same time, employees' assessments of the impact of the spread of intelligent automation on human resource management are contradictory (Demir, McNeese, & Cooke, 2020; Diéguez, 2021; Gillath et al., 2021). For example, according to a study presented by KPMG, most business leaders believe that AI will create more jobs than it eliminates (KPMG, 2019). However, the downsizing and restructuring in many companies due to these changes mean that the traditional psychological and social contract, which offered job security in exchange for organisational loyalty, has changed (Petriglieri, Ashford, & Wrzesniewski, 2019).

The ethics of AI will be fundamentally different from the ethical aspects of applying non-cognitive technologies, since the specific behaviour of an AI system cannot be predicted, and checking the security of an AI system requires checking what the system is trying to do (instead of testing security based on particular behaviour in specific work contexts) (Baker-Brunnbauer, 2021; Bostrom & Yudkowsky, 2014; Kaplan & Haenlein, 2020). A difficult question arises: how to develop and formalise ethical principles for AI. Some researchers have dealt with the issue of their formalisation (for example, Muehlhauser & Helm, 2012; Yudkowsky, 2011).

In these studies, the critical question was what ways could be found to overcome the contradictions between the clarity of AI computational algorithms and the ambiguous, inconsistent, subjective diversity of human values. For example, some researchers propose basing AI systems on morality, but they do not explain precisely how an AI agent should choose actions consistently grounded in it (Haidt & Kesebir, 2010).

The risk is increasing due to the lack of transparency in AI algorithms used for making life-critical decisions, such as recruiting employees into a company (Diéguez, 2021). Some of the threats are related to the fact that intelligent automation expands the possibilities of using Big Data for decision-making in the field of human resource management, which, in turn, is fraught with the risk of building false relationships, distorting causal relationships, and the emergence of new forms of discrimination on this basis (for example, when making hiring decisions, assessing the potential of employees, etc.) (Andersen, 2017; Bhave, Teo, & Dalal, 2020; Levenson & Fink, 2017; Wenzel & Van Quaquebeke, 2017).

Discussion

There are two interrelated problems with using AI from an ethical point of view. The first is aligning the work of AI with the value attitudes existing in society. The second is formalising these value attitudes. Most AI programs focus on neutral data analysis. However, for many tasks related to assessing human activities this is often impossible, as it would be contrary to legislation or unethical. The management sphere is also characterised by the problem of formalising human decisions. People are not entirely rational agents, and emotional reactions sometimes prevent us from acting rationally. Not all human decisions are flawless when judged ethically.

A significant number of ethical issues are caused by the risks of discriminatory practices of algorithms that reproduce or even intensify hostile moods in society, as existing prejudices can be transferred to AI systems. Primary attention should be paid to the problem of discrimination, both individual and collective, in order to lay the foundation for measuring discriminatory bias and for tools for its identification and possible correction. For example, Regulation (EU) 2016/679 of the European Parliament of April 27, 2016 (Regulation (EU), 2016) strictly regulates the collection of personal data (religious, political, sexual, ethnic, etc.) and prohibits those responsible for algorithmic decisions from taking such data into account in automated processing.
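One widely used starting point for measuring such bias is the disparate (adverse) impact ratio between the selection rates of demographic groups, often checked against the conventional four-fifths rule of thumb used in HR practice. The sketch below computes it on hypothetical hiring decisions; the group labels and data are illustrative only.

```python
# A minimal sketch of one common bias-measurement tool: the disparate
# impact ratio between the least and most favoured groups, checked against
# the conventional "four-fifths" threshold. Data are hypothetical.
import pandas as pd

decisions = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "hired": [1,    1,   0,   1,   0,   1,   0,   0],
})

selection_rates = decisions.groupby("group")["hired"].mean()
impact_ratio = selection_rates.min() / selection_rates.max()

print(selection_rates)
print(f"disparate impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:  # four-fifths rule of thumb
    print("potential adverse impact: review the algorithm's decisions")
```

A low ratio does not prove discrimination by itself, but it flags where the identification and correction tools mentioned above should be applied.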

Special attention should also be paid to vulnerable groups, such as persons with disabilities and others who have historically been disadvantaged, who are at risk of isolation, or who find themselves in situations of asymmetric power or information (particularly between employers and employees).

There is also a whole class of ethical problems associated with the ethics of predictability. Many AI programs are written to solve predictive problems. Based on already known information about a person, AI can model the values and behaviour of people observed over a sufficiently long period of time and predict the results of choosing different options better than a person. Nevertheless, the consequences of this interaction between AI and human beings present an ethical challenge.

Notably, various aptitude testing or professional and career portfolio planning programs face the problem of users' readiness to familiarise themselves with the result. Subsequently, various types of subjective discomfort associated with the "programming" of choice, effects of reduced motivation, etc., may arise. When using AI in hiring models, experts ask whether AI will be able to identify candidates with "unusual talent" who do not fit the standard model but can bring new skills and experience (Konovalova & Mitrofanova, 2021; Tikhonov, 2020).

The proactive approach (assessing a person based on a forecast of what he or she will do) is common to the entire spectrum of Big Data applications. A candidate or employee is evaluated using a large amount of information about people and their behaviour, much of which has nothing in common with the actual work environment.

Data collected from various sources is put to use before the full range of its actual and potential applications is determined, and algorithms and analytics are mobilised to understand past sequences of events and to predict and intervene ahead of actions, events, and processes. Indirect appraisal leads both to erroneous rejections (the employer rejects potentially good candidates) and to erroneous approvals (unsuitable people are hired for the wrong reasons). Some of the data used to make decisions about employees are not objective but result from a certain operationalisation. Meanwhile, social networks and other large-scale digital platforms are critical channels for disseminating false information.

AI-powered data management can reinforce micromanagement and expand the capabilities of behavioural nudging based on big data processing.

Another threat of dehumanisation of human resource management stems from the fact that technological developments have far outstripped the existing legal and ethical frameworks that govern privacy and the inviolability of private life. As databases grow larger and more detailed, it becomes increasingly easy to identify an individual with their help.

Intelligent automation enhances tracking practices. More and more companies use employee performance monitoring systems to control working hours, evaluate and control work efficiency, identify disloyal employees and fraudulent schemes within the company, search for possible information leaks and protect against insiders, investigate information security incidents, and identify risk groups.

Employees (both those already working in the company and potential ones) are often not even aware that some aspects of their lives have been converted into data; they do not fully realise the multiplicity of algorithms that collect and store data, the possibilities of their further use, or the conclusions and forecasts that the data may allow; and procedures for ensuring informed consent to the use of data are not always achievable.

The security and protection of employees' personal data are also considerable concerns, as the misuse of personal information and the posting of information on websites can potentially harm employees' welfare.

There is a risk of a growing lack of direct contact between different stakeholders. On the one hand, employees will become much less dependent on the subjective attitude of management and its unfavourable behaviour. On the other hand, there is a potential risk that leaders will receive less criticism of and feedback on their decisions and actions.

Teamwork between employees and AI increases the risk of "technological anxiety" (the degree to which an individual feels frustrated and anxious when using a particular technology). Given the complexity of AI-based processes compared to relatively old technologies (personal computers or organisational IT systems), it is logical to expect that the level of "technological anxiety" may be higher, which in turn will affect trust towards AI as a team member and acceptance of a new reality in working life. Lack of trust amid fear of job loss is one of the most significant barriers to taking full advantage of AI.

Conclusion

The development of artificial intelligence can change the fundamental nature of work and pose a severe threat to employment. However, it can also create significant opportunities for cooperation and human-machine integration.

As digitalisation expands, AI is trusted with more complex and sophisticated tasks, such as managing employee wellbeing and mental health (BCG, 2020). The younger generation is increasingly embracing open discussion of mental health and is willing to make some changes in the workplace. As AI increasingly becomes the starting point for these kinds of conversations, its discretion about personal matters gives employees more comfort when initiating conversations that they find awkward.

AI can also mobilise additional help from the right people if required. A recent global survey of HR leaders by Oracle and Future Workplace found that 64% of employees trust AI chatbots more than their managers.

Thus, AI can bring new opportunities to humanise HR and the workforce by helping HR professionals identify and retain high-potential employees, improve the talent acquisition process, reduce hiring bias, and increase productivity.

The risks of dehumanisation when implementing AI can be caused both by new threats and by the intensification of problems that already exist in human resource management, in particular a decrease in the level of employee engagement and professional burnout. At the same time, AI is unlikely to pose a fundamental threat to the uniquely human aspects of modern management, such as social interaction and the emotional intelligence of managers and employees.

However, according to KPMG, only 36% of HR leaders have started adopting AI and are sure they have the necessary skills and resources to use it. According to Deloitte (2020), only 12% of respondents said their organisations primarily use AI to replace staff, while 60% say their organisation uses AI to help their employees (primarily to address alignment issues and improve productivity, rather than to generate new ideas). In addition, 17% of respondents reported that they are ready to manage human resources by working side by side with people, robots, and AI.

At the same time, using AI in the field of human resource management carries the threat of numerous cases of abuse and even new cybercrimes, loss of privacy, and unfair use of algorithms in decision-making, which may, less obviously, conceal, legitimise, or perpetuate unfair prejudices and unacceptable discrimination. Nevertheless, while AI can have blind spots and unintended flaws, each glitch brings a new lesson that can be applied in the future. It is necessary to rethink the strategy of introducing artificial intelligence: from the parallel operation of AI and people to the integration of people and AI into "super teams".

References

Andersen, M. (2017). Human capital analytics: The winding road. Journal of Organizational Effectiveness: People and Performance, 4(2), 133-136. https://doi.org/10.1108/JOEPP-03-2017-0024

Arslan, A., Ruman, A., Naughton, S., & Tarba, S. Y. (2021). Human dynamics of automation and digitalisation of economies: Discussion on the challenges and opportunities. In S. H. Park, M. A. Gonzalez-Perez, & D. E. Floriani (Eds.), The Palgrave Handbook of Corporate Sustainability in the Digital Era (pp. 613-629). Palgrave Macmillan: Springer Nature.

Baker-Brunnbauer, J. (2021). Management perspective of ethics in artificial intelligence. AI and Ethics, 1, 173-181. https://doi.org/10.1007/s43681-020-00022-3

BCG (2020, April 2). The rise of the AI-powered company in the postcrisis world. Retrieved March 11, 2022, from https://www.bcg.com/publications/2020/business-applications-artificial-intelligence-post-covid

Bhave, D. P., Teo, L. H., & Dalal, R. S. (2020). Privacy at work: A review and a research agenda for a contested terrain. Journal of Management, 46(1), 127-164. https://doi.org/10.1177/0149206319878254

Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. In K. Frankish, & W. Ramsey (Eds.), The Cambridge handbook of artificial intelligence (pp. 316-334). Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9781139046855.020

Coeckelbergh, M. (2021). Ética de la Inteligencia Artificial (Ethics of artificial intelligence, in Spanish). Madrid: Cátedra.

Cortina, A., & Serra, M.-A. (2016). Humanidad. Desafíos éticos de las tecnologías emergentes (Humanity. Ethical challenges of emerging technologies, in Spanish). Madrid: Ediciones Internacionales Universitarias.

Coupe, T. (2019). Automation, job characteristics and job insecurity. International Journal of Manpower, 40(7), 1288-1304.

Deloitte (2020). Global human capital trends. Retrieved March 15, 2022, from https://www2.deloitte.com/us/en/insights/focus/human-capital-trends/2020

Demir, M., McNeese, N. J., & Cooke, N. J. (2020). Understanding human-robot teams in light of all-human teams: Aspects of team interaction and shared cognition. International Journal of Human-Computer Studies, 140, 102436.

Diéguez, A. (2017). Transhumanismo. La búsqueda tecnológica del mejoramiento humano (Transhumanism. The technological quest for human enhancement, in Spanish). Barcelona: Herder.

Diéguez, A. (2021). En el control de la inteligencia artificial nos jugamos el futuro (In the control of artificial intelligence we play the future, in Spanish). Retrieved March 9, 2022, from https://theconversation.com/en-el-control-de-la-inteligencia-artificial-nos-jugamos-el-futuro-157019

Fernández-Fernández, J. L. (2021). Hacia el Humanismo Digital desde un denominador común para la Cíber Ética y la Ética de la Inteligencia Artificial (Towards digital humanism, from a common denominator for cyber ethics and artificial intelligence (AI) ethics, in Spanish). Disputatio. Philosophical Research Bulletin, 10(17), 107-130.

Gartner (2019). Gartner identifies the three most common AI use cases in HR and recruiting. Retrieved March 5, 2022, from https://www.gartner.com/en/newsroom/press-releases/2019-06-19-gartner-identifies-three-most-common-ai-use-cases-in-

Gillath, O., Ai, T., Branicky, M. S., Keshmiri, S., Davison, R. B., & Spaulding, R. (2021). Attachment and trust in artificial intelligence. Computers in Human Behavior, 115, 106607.

Haidt, J., & Kesebir, S. (2010). Morality. In S. Fiske, D. Gilbert, & G. Lindzey (Eds.), Handbook of social psychology (5th ed., pp. 797-832). Hoboken, NJ: Wiley.

Ivanov, S., & Webster, C. (2019). Robots, artificial intelligence and service automation in travel, tourism and hospitality. Bingley: Emerald Publishing.

Kaplan, A., & Haenlein, M. (2020). Rulers of the world, unite! The challenges and opportunities of artificial intelligence. Business Horizons, 63, 37-50.

Konovalova, V. G., & Mitrofanova, A. E. (2021). Social and ethical problems of digital technologies application in human resource management. Lecture Notes in Networks and Systems, 133, 735-742. https://link.springer.com/chapter/10.1007/978-3-030-47458-4_85

KPMG (2019). Rise of the humans 3: Shaping the workforce of the future. Retrieved February 20, 2022, from https://assets.kpmg/content/dam/kpmg/xx/pdf/2018/11/rise-of-the-humans-2019.pdf

Levenson, A., & Fink, A. (2017). Human capital analytics: Too much data and analysis, not enough models and business insights. Journal of Organizational Effectiveness: People and Performance, 4(2), 145-156. https://doi.org/10.1108/JOEPP-03-2017-0029

Malik, A., Budhwar, P., & Srikanth, N. R. (2020). Gig economy, 4IR and artificial intelligence: Rethinking strategic HRM. In P. Kumar, A. Agrawal, & P. Budhwar (Eds.), Human & technological resource management (HTRM): New insights into revolution 4.0 (pp. 75-88). Bingley: Emerald Publishing Limited. https://doi.org/10.1108/978-1-83867-223-220201005

Martin, K. (2019). Ethical implications and accountability of algorithms. Journal of Business Ethics, 160, 835-850.

Mercer (2019). Global talent trends study. Retrieved February 9, 2022, from https://www.mercer.com/our-thinking/career/global-talent-hr-trends.html


Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 1-21.

Muehlhauser, L., & Helm, L. (2012). The singularity and machine ethics. In Singularity hypotheses (pp. 101-126). Berlin, Heidelberg: Springer.

Nica, E., Miklencicova, R., & Kicova, E. (2019). Artificial intelligence-supported workplace decisions: Big data algorithmic analytics, sensory and tracking technologies, and metabolism monitors. Psychosociological Issues in Human Resource Management, 7(2), 31-36.

Petriglieri, G., Ashford, S. J., & Wrzesniewski, A. (2019). Agony and ecstasy in the gig economy: cultivating holding environments for precarious and personalised work identities. Administrative Science Quarterly, 64(1), 124-170.

Regulation (EU) 2016/679 of the European Parliament and of the Council (2016, April 27) on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance). Retrieved April 6, 2022, from http://data.europa.eu/eli/reg/2016/679/2016-05-04

Rodney, H., Valaskova, K., & Durana, P. (2019). The artificial intelligence recruitment process: How technological advancements have reshaped job application and selection practices. Psychosociological Issues in Human Resource Management, 7(1), 42-47.

Tikhonov, A. I. (2020). Modern approaches to the integrated assessment of personnel risks of an industrial enterprise. Research in World Economy, 11(3), 99-107.

Wagner, D. N. (2020). Augmented human-centered management - Human resource development for highly automated business environments. Journal of Human Resource Management, XXIII(1), 13-27.

Wagner, D. N. (2021). Artificial intelligence and the dark side of management. ROBONOMICS: The Journal of the Automated Economy, 1, 10. Retrieved April 8, 2022, from https://journal.robonomics.science/index.php/rj/article/view/10

Wenzel, R., & Van Quaquebeke, N. (2017). The double-edged sword of big data in organizational and management research: A review of opportunities and risks. Organizational Research Methods. https://doi.org/10.1177/1094428117718627

Yampolskiy, R. V. (2015). Artificial superintelligence: A futuristic approach. Oxon: Routledge.

Yudkowsky, E. (2011). Complex value systems in friendly AI. In J. Schmidhuber, K. R. Thórisson, & M. Looks (Eds.), Artificial general intelligence. AGI 2011. Lecture notes in computer science (Vol. 6830). Berlin, Heidelberg: Springer.
