BUSINESS INFORMATICS Vol. 15 No 2 - 2021 DOI: 10.17323/2587-814X.2021.2.34.46

Improving customer experience with artificial intelligence by adhering to ethical principles

Olga I. Dolganova

E-mail: [email protected]

Financial University under the Government of the Russian Federation Address: 38, Scherbakovskaya Street, Moscow 105187, Russia

Abstract

The intensive development and application of artificial intelligence technologies in organizing interaction with clients is accompanied by difficulties such as the client's unwillingness to communicate with a robot, distrust, fear and negative customer experience. Such problems can be addressed by adhering to ethical principles in the use of artificial intelligence. Scientific and practical research on this topic offers either general recommendations that are difficult to apply in practice or, on the contrary, methods for solving highly specialized technical or managerial problems. The purpose of this article is to determine the ethical principles and methods whose observance and implementation would increase confidence in artificial intelligence systems among the clients of a particular organization. As a result of the analysis and synthesis of scientific and practical investigations, as well as the empirical experience of Russian and foreign companies, the main areas of application of artificial intelligence technologies affecting the customer experience were identified. The ethical principles recommended for business to follow have been formulated and systematized. The main methods have also been identified that enable implementation of these principles in practice and thereby reduce the negative effects of customer interaction with artificial intelligence and increase customer confidence in the company.

Key words: ethics; artificial intelligence (AI); ethical AI; customer experience; ethical principles; machine learning; trust; robot.

Citation: Dolganova O.I. (2021) Improving customer experience with artificial intelligence by adhering to ethical principles. Business Informatics, vol. 15, no 2, pp. 34-46. DOI: 10.17323/2587-814X.2021.2.34.46

Introduction

Using artificial intelligence (AI) technologies when interacting with customers offers significant economic potential, but requires solving problems with data security, transparency of the algorithms governing machine behavior, and trust in such tools on the customer's side. This explains the increasing interest in digital ethics both in the scientific literature and in practice. A search for publications in Web of Science and Scopus with the keyword "AI Ethics" shows that over 57% (483) of the papers found in Web of Science and 61% (587) of the papers found in Scopus were published in 2019 and 2020. Until 2019, on average about 30 papers on this topic were published each year, with more than 40% of the articles written by authors from the United States and Great Britain. Also among the leaders are authors from Australia, Italy, the Netherlands and Canada. Authors from the United States have published 528 papers, while Russian scientists have published only 14. It is also important to note that only half of all published papers are related to computer and social sciences, business and economics.

This trend is also observed in practice-oriented publications, as well as in the materials of state and international committees and expert councils [1-5]. They suggest high-level approaches, principles and methods for addressing the problems of AI technology implementation, which are difficult to apply in practice at the company level. At the same time, more than half of CEOs of companies using AI technologies emphasize the importance of ensuring their ethics and transparency [6]. Gartner also notes that in the coming years the issues of digital ethics will remain at the peak of attention as an element of enterprise architecture [7].

In marketing, sales and after-sales processes, artificial intelligence can be used to solve a variety of problems. Examples include improving speech recognition and analyzing the emotional state of a customer, routing calls, handling requests, providing an individual approach to every customer, and finding new customers.

The term "artificial intelligence" does not have a single, well-established definition. Many researchers believe that the concept of AI refers to the programs, algorithms and systems that demonstrate intelligence [8]. However, such a formulation raises a lot of discussion on the subject of how to determine that a machine is demonstrating intelligence, and what are its actions which prove its intelligence. Therefore, in this work we will adhere to a slightly different point of view, implying that information systems with AI are built on the basis of machine learning technologies and they can use tools for robotic process automation, natural language processing, neural networks and deep learning methods [9—11]. Such software solutions allow interpretation of the available data, learning from it and adapting it to the current needs of the user [12].

The use of such advanced digital technologies is associated with ethical problems [11, 13, 14]. Clients are not inclined to trust artificial intelligence. From the point of view of compliance with ethical standards, they expect more from such information systems than is regulated by current legal norms [15]. It is also important for many people that interaction with them is carried out honestly and transparently; only in this case do they begin to trust the seller [16]. It is trust that acts as the most powerful factor influencing customer loyalty [17-19]. It arises from the company's consistent behavior, demonstrating its integrity and reliability. The importance of these aspects in building the relationship between the seller and the consumer is increasing every year. This trend, in particular, is confirmed by the results of KPMG studies in 2019-2020 [17, 20].

The issues of improving the customer experience are thoroughly considered by researchers from the point of view of marketing, sales automation and after-sales service, including the use of artificial intelligence technologies [21, 22]. However, their use raises ethical problems. Therefore, there is an urgent need to study these issues and identify possible ways to reduce the negative consequences.

The goal of this paper is to identify ethical principles and approaches that improve the customer experience of interacting with AI during the sales and after-sales phases. As a result of the study, we intend to answer the following questions:

♦ What ethical principles need to be followed to improve the AI customer experience?

♦ What steps would reduce the negative attitude of buyers and consumers to the use of AI technologies by the company in management and implementation processes?

As part of the study, we carried out an analysis of scientific and practical publications, frameworks, "white papers" and analytical reports related to the questions raised from the point of view of marketing, business, psychology, ethics and information technology.

Based on the classification of artificial intelligence systems used in customer service [23], two scenarios of their application can be distinguished: 1) a robot as an assistant to a person who serves a client and 2) a robot as a replacement for a person serving the customer. In this paper, the second option is mainly considered. This will allow narrowing the research area and focusing on the problems that arise with this type of customer interaction.

1. Scope of artificial intelligence technologies when implemented for interaction with customers

In the field of managing and implementing customer interactions, artificial intelligence technologies can be used to solve such issues as automating sales processes, processing requests and complaints, finding and attracting new customers, increasing loyalty and retaining existing customers. Among the tasks most frequently implemented with the support of AI, one can single out the management of incoming content, the implementation of simple sales processes, the analysis of information about the client and the formation of personalized offers for him.

Artificial intelligence tools, together with robotic process automation (RPA) technologies, capture customer messages and letters, identify and recognize them, extract the necessary and useful information about the customer and his request, verify the received data against those already available in the company and then pass them on for processing and decision-making on the specific question.
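As an illustration, this intake step can be pictured as a small pipeline. The sketch below is a hypothetical, highly simplified example: the field names, the keyword-based topic classifier (a stand-in for a real machine learning model) and the CRM lookup are assumptions made for the example, not part of any specific product.

```python
import re
from dataclasses import dataclass

@dataclass
class CustomerRequest:
    raw_text: str
    email: str | None = None
    order_id: str | None = None
    topic: str | None = None

def extract_fields(message: str) -> CustomerRequest:
    """Pull useful attributes out of a free-text customer message."""
    email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", message)
    order_id = re.search(r"order\s*#?(\d{6,})", message, re.IGNORECASE)
    return CustomerRequest(
        raw_text=message,
        email=email.group(0) if email else None,
        order_id=order_id.group(1) if order_id else None,
    )

def classify_topic(request: CustomerRequest) -> CustomerRequest:
    """Very rough keyword stand-in for an ML intent classifier."""
    lowered = request.raw_text.lower()
    if "refund" in lowered or "money back" in lowered:
        request.topic = "refund"
    elif "deliver" in lowered or "shipping" in lowered:
        request.topic = "delivery"
    else:
        request.topic = "general"
    return request

def verify_against_crm(request: CustomerRequest, crm: dict) -> bool:
    """Check the extracted data against records already held by the company."""
    if request.email not in crm:
        return False
    return request.order_id is None or request.order_id in crm[request.email]["orders"]

crm = {"jane@example.com": {"orders": {"123456"}}}
req = classify_topic(extract_fields(
    "Hi, the delivery for order #123456 has not arrived. jane@example.com"))
print(req.topic, verify_against_crm(req, crm))  # delivery True
```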

Artificial intelligence is largely used in simple sales processes on marketplaces such as eBay and Amazon, Facebook and WeChat. Yamato Transport, one of Japan's largest courier companies, uses a chatbot to schedule deliveries and answer queries on the parcel's location [24]. Domino's Pizza uses a chatbot that accepts orders for online delivery.

To implement behavioral targeting in real time, personalized offers are generated from the analysis of customer transactions, the buyer's behavior and the company's sales experience. For example, similar solutions are used by Netflix, Amazon, Outbrain and Taboola [22].
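The core of such targeting can be illustrated with a deliberately small sketch: an item-to-item co-purchase heuristic over a toy transaction matrix. The data and the function are invented for the example and do not describe how the companies named above actually build their recommendations.

```python
import numpy as np

# Rows are customers, columns are products; 1 means the customer bought the product.
transactions = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 1, 1],
])

def recommend(customer_idx: int, top_n: int = 2) -> list[int]:
    """Rank products the customer has not bought by how often they co-occur with his purchases."""
    co_occurrence = transactions.T @ transactions   # how often each pair of products is bought together
    np.fill_diagonal(co_occurrence, 0)
    owned = transactions[customer_idx]
    scores = co_occurrence @ owned                  # affinity of every product to this customer's basket
    scores[owned == 1] = -1                         # never re-recommend what is already owned
    return [int(i) for i in np.argsort(scores)[::-1][:top_n]]

print(recommend(0))  # the two products most related to customer 0's purchases
```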

The determination of an individual trajectory of interaction with a client can also be implemented using AI systems, which are based on natural language processing and machine learning technologies. For example, the Stitch Fix company has created an online clothing store, where customers are invited to define their unique style (instead of choosing from the proposed template options), and choose items that suit this style, thus taking into account the individual characteristics of a person [25].

Artificial intelligence helps to improve communication, increase customer loyalty and sell a product by analyzing the customer's emotional state from the voice or text of the message. Such solutions make it possible to predict the client's behavior and desires and to build interaction with him in the best possible way.
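A toy illustration of text-based emotion analysis is given below; a production system would rely on a trained sentiment or emotion model rather than the tiny word lists assumed here.

```python
NEGATIVE_WORDS = {"angry", "disappointed", "terrible", "broken", "late", "refund"}
POSITIVE_WORDS = {"thanks", "great", "love", "perfect", "fast"}

def emotional_state(message: str) -> str:
    """Classify a customer message as negative, positive or neutral by simple word counting."""
    words = set(message.lower().split())
    neg = len(words & NEGATIVE_WORDS)
    pos = len(words & POSITIVE_WORDS)
    if neg > pos:
        return "negative"
    if pos > neg:
        return "positive"
    return "neutral"

# A clearly negative message can then be routed to a human agent instead of the bot.
print(emotional_state("the parcel is late and the box arrived broken"))  # negative
```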

Robotization and the use of artificial intelligence technologies, like many other innovations, make it possible to satisfy customer needs in terms of the quality and speed of processing their requests [26-28]. Sometimes they even succeed in surpassing the expectations of buyers and consumers, reducing the effort customers spend on interacting with the company.

A 2020 KPMG study [22] showed that the number of consumers willing to use digital technologies (social networks, web chats, instant messengers) to interact with sellers has recently tripled. According to McKinsey [29], the use of AI for deeper (in comparison with traditional solutions) data analytics allows a company to increase its value by 30-128%, while retailers increase their sales by only 1-2%. Therefore, to strengthen their competitiveness, increase profits and improve customer experience, companies are actively implementing IT solutions based on machine learning methods.

Companies are beginning to actively use artificial intelligence to interact directly with customers. The study [21] showed that using undisclosed chatbots (when customers think they are talking to a person) is four times more effective than employing an inexperienced salesperson. In addition, in some situations such solutions help to turn the negative experience of a client into a positive one by promptly and transparently resolving emerging problems.

2. Problems of using artificial intelligence technologies to interact with customers

Behavioral economics points to such a feature of a person as the formation of a trusting relationship with those we like. Many companies have been aware of this for a long time and take appropriate steps so that the buyer feels sympathy toward the employees he communicates with. However, the question of how to make the buyer comfortable communicating with the AI that replaces the contact person from the company remains unresolved. Capgemini [6] estimates that roughly two out of five companies that have encountered ethical issues in using AI have opted to abandon its use completely.

Empathy and personalization are other important aspects of a positive customer experience and increased customer loyalty which are difficult to achieve with artificial intelligence technologies. Theoretically, when designing the algorithm governing a robot's behavior, one can try to build the logic of its interaction with a client in such a way that it takes into account the client's circumstances and shows a deep understanding of his problems, doubts and fears. However, a person communicating with, for example, a bot will most likely not feel that the AI is interested in him, nor feel valued and unique, since he understands that he is communicating not with a person but with a machine. Customers are especially reluctant to interact with a robot if they need a subjective assessment or help in choosing a product, or if they expect sympathy and empathy [29-31].

There is a large category of customers who believe that a company using an artificial intelligence system to interact with them is in some way deceiving the buyer or consumer of services [24]. This seriously reduces the customer's confidence in the brand. Thus, after it is disclosed that a bot and not a person is handling the sales process, the frequency of interruption of the current contact increases and the number of purchases decreases by almost 80% [21].

The studies described in [28] show that the introduction of innovative solutions based on AI is more likely to cause a negative reaction than a positive one among potential customers and consumers; this especially affects the perception of the ethical side of a company's reputation.

Customers are very concerned about the confidentiality and security of interaction with the robot [32]. When interacting with a living person, in contrast to artificial intelligence systems, the feeling remains that the conversation may not be recorded and that not all information communicated to the seller or manager will end up in an accounting system and be used for further contacts or for any other purposes. In addition, due to the opacity of the algorithms by which AI functions, the client feels insecure when handing over personal data to the robot.

In the course of applying artificial intelligence, for example in the analysis of customer experience and interaction with the consumer, the question of the ethical use of customer information arises. The principles determining the ethics of the company's behavior in this case can also be an important criterion of the transparency of interaction and the honesty of the organization. In order to provide customers with a personalized service, many firms overuse personal data, which can negatively affect the customer experience. It may seem to a person that the seller violates acceptable boundaries and intrudes into his personal life.

The exchange of personal data between companies of the same ecosystem is prohibited by the legislation of the Russian Federation. However, it is possible to transfer aggregated, anonymized data to each other as a service. This allows using the customer experience of ecosystem participants without violating basic ethical standards. This is what the companies Megafon and Mail.ru are doing [33]. Despite the fact that this does not violate the rights of customers, many of them would like to know what kind of personal data is used, by whom and for what purposes.
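The general idea of sharing only aggregated, anonymized information can be sketched as follows; the fields, grouping and suppression threshold are purely illustrative and do not describe the actual data exchange of the companies mentioned above.

```python
from collections import Counter

# Raw customer records held by one ecosystem participant (hypothetical fields).
records = [
    {"customer_id": 101, "city": "Moscow", "segment": "sports"},
    {"customer_id": 102, "city": "Moscow", "segment": "sports"},
    {"customer_id": 103, "city": "Moscow", "segment": "electronics"},
    {"customer_id": 104, "city": "Kazan",  "segment": "sports"},
]

def aggregate_for_partner(rows: list[dict], min_group_size: int = 2) -> dict:
    """Share only group counts, dropping identifiers and suppressing small groups."""
    counts = Counter((r["city"], r["segment"]) for r in rows)
    # Groups smaller than the threshold are removed so that individuals cannot be singled out.
    return {group: n for group, n in counts.items() if n >= min_group_size}

print(aggregate_for_partner(records))  # {('Moscow', 'sports'): 2}
```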

Thus, solutions based on artificial intelligence are perceived by many people as innovative, but having an incomprehensible functioning algorithm. This has a number of ethical implications that can negatively impact customer experience as well.

3. Ethical principles of using artificial intelligence technologies to interact with customers

The concept of "ethical" is very multifac-eted, it can refer to both a process and a result or a value [18]. When considering the ethics of the process, we will talk about the internal procedures and actions that the company implements. The ethical values dimension refers to a set of parameters of organization interaction with its customers. In this case, we are talking about the transparency of information interaction, fairness of pricing, confidentiality of personal data, etc. The ethics of the results is related to the properties of the output of the AI system, non-discrimination, fairness and objectivity. It is important to consider the ethics of AI from all of the above points of view, as it addresses the ethical issues of design, development, implementation and use of appropriate technologies in practice.

People have begun to pay more attention to how well brands are behaving in relation to their ethical and social obligations. Many researchers [11, 13, 14] note that this contributes to the long-term success of the company in maintaining customer loyalty.

Fear and misunderstanding of how systems with embedded AI technologies function can significantly reduce the potential positive effects of their use. Some companies that are actively testing various AI technologies (for example, Walmart Inc.) are concerned about the attitude of customers towards the robots they encounter during a purchase [34]. This is mainly due to the ethical aspect. In particular, the following risks arise:

♦ the client feels that a decision is made for him, that is, the possibility of self-realization decreases and the person's ability to research and choose a product that will satisfy his needs is devalued;

♦ opacity of the areas of responsibility for decisions or conclusions made by artificial intelligence: for example, when a call center operator advises a caller, the responsibility for the recommendation lies with a specific employee or person; if the recommendations are made by a robot, who will be responsible for them?

♦ loss of control over the actions of artificial intelligence: if the system is self-learning, a situation may arise in which the conclusions drawn from the analyzed information and the behavior and decisions of the AI turn out to be unpredictable not only for the user, but also for the developer.

Leading organizations working in the field of the formation of ethical norms and rules for the use of AI have formulated a large number of different approaches and tools to comply with the above principles [1, 2, 4, 5]. Almost all of them adhere to similar views and do not contradict but complement each other.

The Atomium - European Institute for Science, Media and Democracy white paper on AI ethics outlines five principles of AI ethics: 1) promoting human well-being; 2) harmlessness (confidentiality, security and "attention to opportunities"); 3) autonomy (the right of people to make their own decisions); 4) fairness (respect for the interests of all parties that can be influenced by the actions of the system with AI, the absence of discrimination, the possibility of eliminating errors); 5) explainability (transparency of the logic of artificial intelligence, accountability) [3]. These principles represent the quintessence of those set forth in codes, regulations and other advisory and regulatory documents issued by the expert and regulatory authorities of the European Union countries.

The Japanese Society for Artificial Intelligence (JSAI) identifies the following ethical principles to be followed by developers of artificial intelligence systems [35]: 1) respect for human rights and respect for cultural diversity; 2) compliance with laws and regulations, as well as not harming others; 3) respect for privacy; 4) justice; 5) security; 6) good faith; 7) accountability and social responsibility; 8) self-development and promotion of understanding of AI by society. It is also important to note that, in contrast to the European principles of AI, here special attention is paid to developing AI in such a way that it also observes the above principles in the course of its functioning.

Google has also formulated seven principles of artificial intelligence that the company follows in creating and using such technologies. These include [36]: 1) AI should be socially beneficial; 2) it is necessary to strive to avoid unfair bias against people; 3) application of best security practices; 4) accountability to people for the actions of AI; 5) ensuring guarantees of confidentiality, proper transparency and control over the use of data; 6) maintaining high standards of excellence; 7) limiting the use of potentially harmful or abusive applications. A feature of this list of principles is its emphasis on the importance of the qualifications of the people who create and manage systems with AI.

The Russian code of ethics for the use of data, developed at the initiative of the Big Data Association and the Institute for the Development of the Internet, states that the basic principles for the use of AI are "to be based on the fundamental principles of protecting human rights and freedoms, to prevent discrimination and harm." Companies must also comply with Russian and international legislation in the field of information security and data protection from illegal use [37].

The Capgemini Research Institute has also formulated the core characteristics of ethical AI. These include [6]: 1) ethical actions from design to application; 2) transparency; 3) explainability of the functioning of AI; 4) interpretability of the results; 5) fairness, lack of bias; 6) the ability to audit.

Table 1 shows the results of comparison of the sets of principles that are recommended by the scientific and expert communities [3, 6, 35], principles used by the leading IT company Google [36], as well as those formulated in the Russian code of ethics for the use of data [37].

As noted earlier, the main challenge for a company in this area is to build trust with the customer. At the company level, it is important to adhere to ethical principles related to various aspects of trust. Among them, IBM distinguishes fairness, reliability, transparency, accountability, explainability and value alignment (compliance with rules, business processes, norms, laws, ethics and morality) [38].

The results of an analysis of these recommendations on the formation of ethical principles are summarized in Table 1.

Table 1.

Comparative characteristics of different sets of principles of ethics of artificial intelligence

Principles / sources of the sets of principles: JSAI [35], Atomium-EISMD [3], Capgemini [6], Google [36], ABD*) [37]. A "+" marks a principle covered by the corresponding source.

Equality and fairness + + + + +

Benefit, harmlessness + + + +

Respect for cultural diversity and pluralism + +

Non-discrimination and lack of stigma + + + + +

Individual responsibility and accountability + + + +

Autonomy and consent +

Confidentiality, respect for privacy + + + +

Data protection and control of their use + + + +

Promoting public understanding of AI +

Good faith + +

Social responsibility +

High skill, self-development + +

*) Big Data Association and Internet Development Institute

This comparison allows us to conclude that two categories can be distinguished as the key principles of artificial intelligence ethics that commercial companies need to comply with:

♦ category "trust": fairness, reliability, transparency, accountability, explainability;

♦ compliance category: data protection, data control, confidentiality.

Organizational measures taken by business to ensure adherence to these principles also play a significant role in improving the customer experience. Examples are the development of joint industry regulation and self-regulation in AI, the development and implementation of moral values for the company, and the definition of the degree of transparency of artificial intelligence functioning in advertising, marketing, sales and after-sales management systems.

4. Methods to reduce negative customer experience of interaction with artificial intelligence

The ethical use of AI increases customer loyalty, trust and sales. Unethical behavior can lead to reputational risks, lawsuits and the loss of up to 30% of clients [6].

To improve the customer experience with AI technologies, it is important to ensure that the above categories of principles are adhered to and to enable people to help the company improve the algorithms and methods for using artificial intelligence. In this way, the company demonstrates adherence to the basic principles of AI ethics.

One of the main ways to increase the degree of trust when interacting with artificial intelligence is to inform the client that he or she is communicating with a robot. It is important to note that it is advisable to convey this information at the end of the communication process, when a person has placed an order, or submitted an application or complaint that has been processed, or even after a decision has already been made. In this way the company adheres to ethical standards ensuring transparency in the use of AI in communication, while also allowing the client to have a positive experience of communicating with the machine. If the client is informed about this at the beginning of communication, there is a high probability that the contact will be interrupted at the initiative of the client [21]. Because of this, the company loses customer loyalty, and the client, in turn, cannot form his own opinion (possibly a positive one) about the functioning of the robot.

Another tool is to inform the client about the algorithm used to generate proposals for him. As a rule, this is done on demand rather than being imposed on every person on a regular basis. For example, the special option "Why do I see this?" in Facebook shows the criteria used to select a personalized advertisement [22]. Also, at the request of the user, a newsletter can be sent which provides information about the digital policy, the data used, the purpose of collecting and processing the data, the risks, and the results of verification of compliance with the company's ethical principles. In this way, the principles of explainability and accountability are adhered to, and the user's trust in Facebook is maintained.
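A minimal sketch of such an on-demand explanation is given below; the offer object and the criteria it stores are hypothetical and only illustrate the idea of keeping the targeting criteria alongside the offer so that they can be shown back to the customer.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    product: str
    criteria: dict[str, str]  # targeting criteria stored alongside the offer

def explain(offer: Offer) -> str:
    """Build a human-readable 'Why do I see this?' answer from the stored criteria."""
    reasons = "; ".join(f"{key}: {value}" for key, value in offer.criteria.items())
    return f"You see the offer for '{offer.product}' because of: {reasons}."

offer = Offer(
    product="running shoes",
    criteria={
        "recent purchases": "sportswear in the last 30 days",
        "declared interest": "running",
    },
)
print(explain(offer))
```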

If a company does not disclose the algorithm for forming an individual price for a product or service, or for personal discounts and promotional offers, a feeling of deception or discrimination arises. Today there are already many examples of such unethical use of AI. One example is the insurance company Allstate in Maryland, which set the cost of insurance depending on the client's ability to pay, referring to external sources of information to estimate this ability [18]. Such covert behavior already alarms buyers.

Therefore, similar to the disclosure of financial statements, public companies should make publicly available information about the data they collect and the policy of their use by AI systems. This ensures compliance with the principles of transparency and accountability, while at the same time protecting trade secrets, since details are not disclosed and information of a general nature is provided instead.

It is also advisable to use open source tools that reveal bias and discrimination in artificial intelligence algorithms. An example is AI Fairness 360 (AIF360), a set of tools for identifying and mitigating bias in machine learning models [39]. Such mechanisms allow the company to follow ethical principles while pursuing its tactical and strategic goals. An example is the practice of identifying and reducing age bias in credit decisions on the German Credit dataset using a solution from IBM [40].
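A condensed sketch along the lines of the AIF360 credit scoring tutorial [40] is shown below. It illustrates the typical usage pattern of the toolkit (load a dataset with a protected attribute, measure disparity, apply a mitigation algorithm); the surrounding model training and exact figures are omitted.

```python
from aif360.datasets import GermanDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Load the German Credit data with 'age' as the protected attribute
# (clients aged 25 or older form the privileged group, as in the tutorial).
dataset = GermanDataset(
    protected_attribute_names=['age'],
    privileged_classes=[lambda x: x >= 25],
    features_to_drop=['personal_status', 'sex'],
)
train, test = dataset.split([0.7], shuffle=True)

privileged = [{'age': 1}]
unprivileged = [{'age': 0}]

# Mean difference in favorable outcomes between the groups: 0 means no disparity.
metric = BinaryLabelDatasetMetric(
    train, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("Disparity before mitigation:", metric.mean_difference())

# Reweighing adjusts instance weights so that a downstream model sees a fairer sample.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
train_transf = rw.fit_transform(train)

metric_transf = BinaryLabelDatasetMetric(
    train_transf, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("Disparity after mitigation:", metric_transf.mean_difference())
```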

The General Data Protection Regulation (GDPR) of the European Union provides for reporting on the applied AI algorithms to ensure compliance with individual rights and systemic co-governance [41]. This includes the Data Protection Impact Assessment (DPIA). When this assessment is carried out in accordance with the concept of "multi-layered explanation" of algorithmic systems [42], it makes it possible to implement clients' rights to an explanation of how the AI with which they interact functions.

In addition, it is important to communicate information about AI to customers in such a way that people understand it and can appreciate the importance and safety of collecting information about them. It is also recommended to inform clients about the risks that accompany the operation of systems based on artificial intelligence, and about possible misuse or incorrect interpretation of the received data.

From this follows another method of increasing the client's confidence: transferring to the client some of the functions for controlling and managing the scope within which the artificial intelligence system operates. The participant in the interaction should be able to ask for help from a real employee of the company or to report incorrect actions of the robot; moreover, this option must be known to him and easily accessible. The client can be delegated the authority to determine the categories of personal data available to the AI for processing, as well as the list of online services that he would like to receive in the course of contact with the company. For example, the Beeline company [43] highlights, as a criterion of ethical behavior, the client's ability to set the level of personalization of interaction. This allows a person to feel that he or she is in control of the conditions of interaction and to feel informationally secure.
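To illustrate how such delegated control could be represented, the sketch below models a hypothetical consent profile; the category names, personalization levels and class design are invented for the example and do not correspond to any particular company's system.

```python
from dataclasses import dataclass, field

ALL_DATA_CATEGORIES = {"purchase history", "geolocation", "voice recordings", "browsing behavior"}

@dataclass
class ConsentProfile:
    """Which data the client allows the AI to process and how personalized contact should be."""
    allowed_categories: set[str] = field(default_factory=set)
    personalization_level: str = "none"   # "none", "basic" or "full"
    human_agent_on_request: bool = True   # the client may always escalate to a person

    def allow(self, category: str) -> None:
        if category not in ALL_DATA_CATEGORIES:
            raise ValueError(f"Unknown data category: {category}")
        self.allowed_categories.add(category)

    def may_process(self, category: str) -> bool:
        return category in self.allowed_categories

profile = ConsentProfile(personalization_level="basic")
profile.allow("purchase history")
print(profile.may_process("geolocation"))  # False: the AI must not use location data
```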

A person will become more confident in artificial intelligence if the corresponding system provides up-to-date statistics on successfully completed work and demonstrates the connection of its recommendations with the goals of the person it interacts with [44]. Here, different indicators of productivity and effectiveness can be defined, for example the number of customers satisfied with the service provided by a particular robot, the proportion of successfully solved problems, or the average processing time of an application.
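The indicators mentioned above could, for instance, be computed from an interaction log roughly as follows; the log format and the specific fields are assumed purely for illustration.

```python
from statistics import mean

# Each record: (resolved successfully?, customer satisfied?, handling time in minutes)
interaction_log = [
    (True, True, 4.0),
    (True, False, 7.5),
    (False, False, 12.0),
    (True, True, 3.5),
]

resolved_share = sum(r for r, _, _ in interaction_log) / len(interaction_log)
satisfaction_share = sum(s for _, s, _ in interaction_log) / len(interaction_log)
avg_handling_time = mean(t for _, _, t in interaction_log)

print(f"Resolved: {resolved_share:.0%}, satisfied: {satisfaction_share:.0%}, "
      f"average handling time: {avg_handling_time:.1f} min")
```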

The afore-mentioned methods are aimed at forming trust, almost partnership, relations between the company and its customers. Their implementation may not achieve the desired effects if these actions remain merely formal and do not become an element of the company's corporate culture. In ensuring flexibility and openness in building relationships with customers, especially when it comes to the use of artificial intelligence, an important role is played by the digital culture that should be formed and developed in the company [45]. The organization must create conditions for the development of ethical thinking among its employees, so that they want to act correctly at all stages of creating and managing an artificial intelligence system. Such measures are mainly aimed at creating an atmosphere of effective and ethical use of advanced information technologies. But above all, to strengthen the client's trust, they must be convincingly demonstrated to the external environment. It seems expedient to inform the public about the principles and methods of developing the corporate culture, about the goals that the company wants to achieve in this direction, and about the steps it is taking to this end. It is also important to show the client that this is not only a declaration of intent but also real action: for example, the active use of benchmarking and research in this area, employee training, creating internal centers of excellence for ethics, and participating in various communities and associations dealing with AI ethics.

Despite the fact that the goal of introducing such technologies is to reduce personnel costs, in order to form trusting relationships at the first stage of AI implementation a company employee should, as part of the after-sales service process, contact the client and inquire about his experience. One of the tasks of such a contact is to demonstrate to the client the intention to provide him with comfort and safety when interacting with the AI, taking into account his doubts and wishes.

The application of the proposed list of methods and approaches to improving the interaction with AI allows almost any organization to comply with key ethical principles and thereby preserve the client's trust in the company.

Conclusion

With the active use of AI technologies in the organization and implementation of promotion, sales and after-sales service processes, the problem arises of ensuring a positive customer experience. This is due to the fact that many people fear and distrust robots and artificial intelligence. In order to reduce negative attitudes towards such systems, it is important to demonstrate to the client that these solutions are applied ethically and to ensure the transparency and accountability of their operation.

This article has considered the main areas of application of artificial intelligence technologies in the processes of interaction with the client. The necessity of defining and formulating ethical principles for the development, implementation and application of AI in practice has been demonstrated. The analysis of the recommendations of international and local expert commissions, scientific research and leading IT companies made it possible to formulate two categories of basic principles of artificial intelligence ethics which commercial organizations are recommended to follow.

These principles, scientific and practical experiments and the experience of companies that are actively using AI technologies represent the basis for key methods of improving the customer experience of interacting with IT solutions.

The guidelines presented in this article may be useful for organizations planning or already using AI-powered systems. The results obtained can also serve as a starting point for further research. For example, it is interesting to identify changes in the significance of certain ethical principles depending on the industry affiliation of the company, as well as to study the impact of each of the described methods on changing the trust in AI of clients of Russian companies. ■

References

1. Universite de Montreal (2017) The Montreal declaration for a responsible development of artificial intelligence. Available at: https://www.montrealdeclaration-responsibleai.com/ (accessed 20 August 2020).

2. Partnership on AI (2020) Tenets of the partnership on AI. Available at: https://www.partnershiponai.org/tenets/ (accessed 21 August 2020).

3. Floridi L., Beltrametti M., Burri T., Chatila R., Chazerand P., Dignum V., Madelin R., Luetge C., Pagallo U., Rossi F., Schafer B., Valcke P., Vayena E. (2020) AI4People's ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Available at: https://www.eismd.eu/wp-content/uploads/2019/03/AI4People%E2%80%99s-Ethical-Framework-for-a-Good-AI-Society.pdf (accessed 21 August 2020).

4. OECD (2019) What are the OECD Principles on AI? Available at: https://www.oecd.org/going-digital/ai/principles (accessed 27 August 2020).

5. European Commission (2019) Ethics guidelines for trustworthy AI. Available at: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai (accessed 29 August 2020).

6. Capgemini Research Institute (2019) Why addressing ethical questions in AI will benefit organizations. Available at: https://www.capgemini.com/wp-content/uploads/2019/08/AI-in-Ethics_Web.pdf (accessed 11 November 2020).

7. Allega P. (2020) Hype cycle for enterprise architecture, 2020. Gartner. Available at: https://www.gartner.com/en/documents/3989875 (accessed 25 October 2020).

8. Shankar V. (2018) How artificial intelligence (AI) is reshaping retailing. Journal of Retailing, vol. 94, no 4, pp. vi-xi. DOI: 10.1016/S0022-4359(18)30076-9.

9. Huang M.-H., Rust R.T. (2018) Artificial intelligence in service. Journal of Service Research, vol. 21, no 2, pp. 155-172. DOI: 10.1177/1094670517752459.

10. Syam N., Sharma A. (2018) Waiting for a sales renaissance in the fourth industrial revolution: Machine learning and artificial intelligence in sales research and practice. Industrial Marketing Management, vol. 69, pp. 135-146. DOI: 10.1016/j.indmarman.2017.12.019.

11. Davenport T., Guha A., Grewal D., Bressgott T. (2020) How artificial intelligence will change the future of marketing. Journal of the Academy of Marketing Science, no 48, pp. 24-42. DOI: 10.1007/s11747-019-00696-0.

12. Kaplan A., Haenlein M. (2019) Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, vol. 62, no 1, pp. 15-25. DOI: 10.1016/j.bushor.2018.08.004.

13. Fukukawa K., Balmer J.M., Gray E.R. (2007) Mapping the interface between corporate identity, ethics and corporate social responsibility. Journal of Business Ethics, no 76, pp. 1-5. DOI: 10.1007/s10551-006-9277-0.

14. Stanaland A.J.S., Lwin M.O., Murphy P.E. (2011) Consumer perceptions of the antecedents and consequences of corporate social responsibility. Journal of Business Ethics, no 102, pp. 47-55. DOI: 10.1007/s10551-011-0904-z.

15. Gray K. (2017) AI can be a troublesome teammate. Harvard Business Review. Available at: https://hbr.org/2017/07/ai-can-be-a-troublesome-teammate (accessed 12 October 2020).

16. Lee J.D., See K.A. (2004) Trust in automation: Designing for appropriate reliance. The Journal of the Human Factors and Ergonomics Society, vol. 46, no 1, pp. 50-80. DOI: 10.1518/hfes.46.1.50_30392.

17. KPMG (2020) Customer experience in the new reality. Global Customer Experience Excellence research 2020: The COVID-19 special edition. Available at: https://assets.kpmg/content/dam/kpmg/xx/pdf/2020/07/customer-experience-in-the-new-reality.pdf (accessed 15 October 2020).

18. MAIEI (2020) The state of AI ethics report. Available at: https://montrealethics.ai/the-state-of-ai-ethics-report-june-2020/ (accessed 29 August 2020).

19. Winfield A.F.T., Jirotka M. (2018) Ethical governance is essential to building trust in robotics and artificial intelligence systems. Philosophical Transactions of the Royal Society A. Mathematical, physical and engineering sciences, vol. 376, no 2133, article ID: 20180085. DOI: 10.1098/rsta.2018.0085.

20. KPMG (2019) With the consumer "on thou". 100 brands with the best customer service. KPMG research in Russia. Available at: https://drive.google.com/file/d/1kdLf8yMxPi5N6ddkjUJrgrycIpppuq8o/view (accessed 15 October 2020) (in Russian).

21. Luo X., Tong S., Fang Z., Qu Z. (2019) Frontiers: Machines vs. humans: The impact of artificial intelligence chatbot disclosure on customer purchases. Marketing Science, vol. 38, no 6, pp. 937—947. DOI: 10.1287/mksc.2019.1192.

22. Andre Q., Carmon Z., Wertenbroch K., Crum A., Frank D., Goldstein W., Huber J., van Boven L., Weber B., Yang H. (2018) Consumer choice and autonomy in the age of artificial intelligence and big data. Customer Need and Solution, no 5, pp. 28-37. DOI: 10.1007/s40547-017-0085-8.

23. Lariviere B., Bowen D., Andreassen T.W., Kunz W., Sirianni N.J., Voss C., Wunderlich N.V., De Keyser A. (2017) Service Encounter 2.0: An investigation into the roles of technology, employees and customers. Journal of Business Research, no 79, pp. 238-246. DOI: 10.1016/j.jbusres.2017.03.008.

24. Thompson C. (2018) May A.I. help you? Intelligent chatbots could automate away nearly all of our commercial interactions - for better or for worse. New York Times. Available at: https://www.nytimes.com/interactive/2018/11/14/magazine/tech-design-ai-chatbot.html (accessed 28 September 2020).

25. Wilson J., Daugherty P., Shukla P. (2017) Bold clothes: How to sell things with the help of people and artificial intelligence. Harvard Business Review Russia. Available at: https://hbr-russia.ru/innovatsii/tekhnologii/p18595/#ixzz4WWRr3huw (accessed 04 November 2020) (in Russian).

26. Ostrom A.L., Parasuraman A., Bowen D.E., Patricio L., Voss C.A. (2015) Service research priorities in a rapidly changing context. Journal of Service Research, vol. 18, no 2, pp. 127—159. DOI: 10.1177/1094670515576315.

27. Kim J., Kim K.H., Garrett T.C., Jung H. (2015) The contributions of firm innovativeness to customer value in purchasing behavior. Journal of Product Innovation Management, vol. 32, no 2, pp. 201—213. DOI: 10.1111/jpim.12173.

28. McLeay F., Osburg V.S., Yoganathan V., Patterson A. (2021) Replaced by a robot: Service implications in the age of the machine. Journal of Service Research, vol. 24, no 1, pp. 104—121. DOI: 10.1177/1094670520933354.

29. Chui M., Manyika J., Miremadi M., Henke N., Chung R., Nel P., Malhotra S. (2018) Notes from the AI frontier: Applications and value of deep learning. McKinsey Global Institute discussion paper. Available at: https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-applications-and-value-of-deep-learning (accessed 20 October 2020).

30. Froehlich A. (2018) Pros and cons of chatbots in the IT helpdesk. InformationWeek. Available at: https://www.informationweek.com/strategic-cio/it-strategy/pros-and-cons-of-chatbots-in-the-it-helpdesk/a/d-id/1332942 (accessed 20 October 2020).

31. Kestenbaum R. (2018) Conversational commerce is where online shopping was 15 years ago - Can it also become ubiquitous? Forbes. Available at: https://www.forbes.com/sites/richardkestenbaum/2018/06/27/shopping-by-voice-is-small-now-but-it-has-huge-potential/?sh=3a41ebe337ac (accessed 27 September 2020).

32. Wirtz J., Patterson P.G., Kunz W.H., Gruber T., Lu V.N., Paluch S., Martins A. (2018) Brave new world: Service robots in the frontline. Journal of Service Management, vol. 29, no 5, pp. 907—931. DOI: 10.1108/JOSM-04-2018-0119.

33. Sobolev A. (2020) Revenue from digital services at MegaFon is growing at a double-digit rate. KPMG. Available at: https://mustread.kpmg.ru/interviews/vyruchka-ot-tsifrovykh-servisov-u-megafona-rastet-dvuznachnymi-tempami/ (accessed 28 October 2020) (in Russian).

34. Nassauer S. (2020) Walmart scraps plan to have robots scan shelves. The Wall Street Journal. Available at: https://www.wsj.com/articles/walmart-shelves-plan-to-have-robots-scan-shelves-11604345341 (accessed 04 November 2020).

35. JSAI (2017) The Japanese Society for Artificial Intelligence Ethical Guidelines. Available at: http://ai-elsi.org/wp-content/uploads/2017/05/JSAI-Ethical-Guidelines-1.pdf (accessed 14 May 2021).

36. Pichai S. (2018) AI at Google: our principles. Google. Available at: https://blog.google/topics/ai/ai-principles/ (accessed 05 November 2020).

37. Big Data Association. Institute of internet development (2019) Data usage code of ethics. Available at: https://ac.gov.ru/files/content/25949/kodeks-etiki-pdf.pdf (accessed 08 November 2020) (in Russian).

38. Hind M., Houde S., Martino J., Mojsilovic A., Piorkowski D., Richards J., Varshney K.R. (2020) Experiences with improving the transparency of AI models and services. Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (CHI EA '20), Honolulu, USA, 25-30 April 2020, pp. 1-8. DOI: 10.1145/3334480.3383051.

39. IBM Developer Staff (2018) AI Fairness 360. Available at: https://developer.ibm.com/technologies/artificial-intelligence/projects/ai-fairness-360/ (accessed 08 November 2020).

40. Jupyter nbviewer (2020) Detecting and mitigating age bias on credit decisions. Available at: https://nbviewer.jupyter.org/github/IBM/AIF360/blob/master/examples/tutorial_credit_scoring.ipynb (accessed 05 November 2020).

41. About GDPR in Russian. Information about the General Data Protection Regulation. Available at: https://ogdpr.eu/ (accessed 08 November 2020) (in Russian).

42. Kaminski M.E., Malgieri G. (2020) Algorithmic impact assessments under the GDPR: producing multi-layered explanations. International Data Privacy Law, ipaa020. DOI: 10.1093/idpl/ipaa020.

43. Elaeva M. (2020) We will rebuild all our business processes "from the client". KPMG. Available at: https://mustread.kpmg.ru/interviews/my-budem-perestraivat-vse-nashi-biznes-protsessy-ot-klienta/ (accessed 01 October 2020) (in Russian).

44. Lara F., Deckers J. (2020) Artificial intelligence as a Socratic assistant for moral enhancement. Neuroethics, no 13, pp. 275-287. DOI: 10.1007/s12152-019-09401-y.

45. Dolganova O.I., Deeva E.A. (2019) Company readiness for digital transformations: problems and diagnosis. Business Informatics, vol. 13, no 2, pp. 59-72. DOI: 10.17323/1998-0663.2019.2.59.72.

About the author

Olga I. Dolganova

Cand. Sci. (Econ.);

Associate Professor, Department of Business Informatics, Financial University under the Government of the Russian Federation, 38, Scherbakovskaya Street, Moscow 105187, Russia;

E-mail: [email protected]

ORCID: 0000-0001-6060-5421
