Interactive Applications with Artificial Intelligence: The Role of Trust among Digital
Assistant Users
Pur Purwanto
Associate Professor, Faculty of Economics and Business, cakpo3r@gmail.com
Supratman University of Surabaya, Jl. Arief Rahman Hakim No. 14, Keputih, Kec. Sukolilo, Kota SBY,
Jawa Timur 60111, Indonesia
Kuswandi Kuswandi
Lecturer, kuswandi56andi@gmail.com
Mahardhika School of Economics of Surabaya, Jl. Wisata Menanggal No. 42 A, Dukuh Menanggal, Kec. Gayungan,
Kota SBY, Jawa Timur 60234, Indonesia
Fatmah Fatmah
Lecturer, Faculty of Economics and Business
Sunan Ampel University of Surabaya, Jl. Ahmad Yani No. 117, Jemur Wonosari, Kec. Wonocolo,
Kota SBY, Jawa Timur 60237, Indonesia
Abstract
People are increasingly dependent upon technology. However, companies' large-scale investments in establishing ongoing loyalty to technology platforms and ecosystems show negative results, owing to lower levels of trust, concerns about risks, and growing privacy issues. Despite the continuous development of digital assistant applications to increase interactivity, there is no guarantee that the concept of interactivity is capable of gaining users' trust and addressing their concerns. The purpose of the present study is to analyze the effects of controllability, synchronicity, and bidirectionality on perceived performance and user satisfaction with digital assistant applications, as moderated by perceived trust. Amos 22.0 was used to analyze a sample of 150 users of digital assistants such as Samsung Bixby, Google Assistant, Apple Siri, and others.
The results show that bidirectionality is the feature that raises the most concern for the perceived performance of digital assistants, owing to trust and privacy issues surrounding personal data, whereas the other two features contribute to perceived performance and digital assistant users' satisfaction. Perceived trust moderates the relationships of controllability, synchronicity, and bidirectionality with perceived performance. Finally, perceived performance has an effect upon digital assistant users' satisfaction.
Keywords:
artificial intelligence; digital assistants; digital services; interactivity; technology innovation; perceived trust; perceived performance; satisfaction
Citation: Purwanto P., Kuswandi K., Fatmah F. (2020) Interactive Applications with Artificial Intelligence: The Role of Trust among Digital Assistant Users. Foresight and STI Governance, vol. 14, no 2, pp. 64-75. DOI: 10.17323/25002597.2020.2.64.75
© 2020 by the authors. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Emotionally, people are currently highly dependent upon digital technology [Peart, 2018; Karapanos, 2013], despite the ethical and social issues surrounding the privacy and security of personal data, such as the recent leak of Facebook's database. Even so, such incidents do not discourage people from continuing to use digital services for personal or business affairs [Brill et al., 2019; Pappas, 2016; Kumar et al., 2016; Hauswald et al., 2015].
Today, several smart digital assistant applications make work easier, such as Amazon Alexa, Samsung Bixby, Microsoft Cortana, Google Assistant, Apple Siri, and others. Digital assistants are artificial intelligence (AI) technologies capable of reasoning as though they were humans and of interacting with their users. According to Juniper Research, the number of digital assistant users is currently estimated at approximately 3.25 billion worldwide, and this figure is projected to reach 8 billion by 2023 [Moar, 2019]. Digital assistants offer consumers the variety of benefits that customers demand: they are contextually and personally relevant, work in real time, deliver high-quality results, and are reliable and convenient [Baier et al., 2018; Wise et al., 2016]. This technology can also help study consumer behavior dynamically and in detail, enabling companies to create more efficient business processes by completely automating customer service delivery [Kumar et al., 2016; Koehler, 2016]. Therefore, businesspeople are currently innovating by integrating this technology into their operations in the hope of significantly increasing productivity [Baier et al., 2018; Bittner et al., 2019; Brill et al., 2019].
Digital assistants work interactively and in real time with their users. Interactivity is the two-way communication between the user and the computer [Ha, James, 1998; Coyle, Thorson, 2001; Moar, 2019]. Digital assistants' interactive features support services such as chatbots, social media, mobile applications, inventory management, automated banking, feedback forms, bulletin boards, search engines, calendar and appointment management, sending text messages, making phone calls, home automation, song search on YouTube, car navigation, trade conversations, and health monitoring [Massey, Levy, 1999; McMillan, 1998; Brill et al., 2019; Moar, 2019].
Interactivity in the context of digital services consists of three dimensions: controllability, synchronicity, and bidirectionality [Yoo et al., 2010; McMillan, 2005; Fortin, Dholakia, 2005; Yadav, Varadarajan, 2005]. Controllability is the feature that enables users to manipulate the content, timing, and sequence of communication with the digital assistant [Fortin, Dholakia, 2005; Yadav, Varadarajan, 2005; Yoo et al., 2010; Hauswald et al., 2015; Brill et al., 2019]. Synchronicity is the speed of communication processes and the facility to respond quickly [McMillan, 2005; Novak et al., 2000]. Bidirectionality is the two-way communication facilitated by digital assistants as a form of information exchange [McMillan, 2005; Pavlik, 1998; Yoo et al., 2010; Baier et al., 2018]. Liu [Liu, 2003] asserts that the components of interactivity (controllability, synchronicity, and bidirectionality) are interrelated [Wu, 2005; Yoo et al., 2010; Brill et al., 2019]. The performance of a digital assistant is determined by these three dimensions.
Among the indicators of the performance of a digital assistant is customers' perceived trust in the goods and service providers [Brill et al., 2019]. One key factor in the success of information exchange in technology is trust [Ejdys et al., 2019] since, from the users' perspective, trust can distinguish the technological quality of a particular brand. Trust comprises the security, credibility, reliability, loyalty, and accuracy of a technology's performance [Ejdys, 2018]. A high level of perceived interactivity (controllability, synchronicity, and bidirectionality) can increase trust [Merrilees, Fry, 2003]. The quality of digital assistants' interactivity can build trust [Stewart, Pavlou, 2002; Mithas, Rust, 2016; Pappas, 2016]. Digital assistant features can improve decision quality and sensitivity to information, and they contribute to value creation and user satisfaction [Kim, LaRose, 2004; Brill et al., 2019].
Companies today are making massive investments and redesigning their product lines, competing to build state-of-the-art digital assistants that serve their users well [Mithas, Rust, 2016; Pappas, 2016]. Despite producers' efforts to develop increasingly interactive digital assistant applications that improve technology performance and create value capable of increasing user satisfaction, the empirical literature has paid scant attention to these efforts. In addition, many uncertainties remain with regard to the concept of interactivity in the context of personal digital assistants [Yoo et al., 2010; Yadav, Varadarajan, 2005]. The main purpose of the present study is to examine the relationship between the interactivity dimensions and perceived performance, which ultimately results in consumer satisfaction with artificial intelligence applications.
Given that individuals currently work with private data stored in their digital assistants, which must be accessible to the providers of digital assistant application services [Alpaydin, 2014; Pappas, 2016], a number of users are worried that their data will be misused [Bhatt, 2014; Belanger, Xu, 2015]. On the other hand, technology with decision support systems is designed for complex tasks that carry potential risks, making trust a success factor in the relationship between humans and digital application machines. As trust in and loyalty to technology increasingly decline, should service providers compromise on, or ignore, the trade-off between technological innovation and the risks to security, credibility, and accuracy?
It is therefore important to examine the extent to which cognitive considerations related to perceived trust moderate the relationships among the interactivity dimensions of digital applications. Furthermore, the issues of privacy and trust must also be investigated in the realm of digital assistants in order to fill the empirical gap in the field of digital application consumer behavior. The remainder of the paper reviews the literature, develops the research hypotheses, and presents the research methodology, including a delineation of the measures used to test the hypotheses. Following an examination of the results, we provide a discussion, managerial implications, limitations, and directions for further research.
Literature Review
The Concept of Interactivity
Interactivity represents a real-time communication interaction between individual users or organizations and computers that is not limited by space and time [Ha, James, 1998; Coyle, Thorson, 2001; Blattberg, Deighton, 1991; Kumar et al., 2016]. Interactivity is a form of user interaction through real-time content modification using artificial machine facilities [Steuer, 1992]. Interactivity is also defined as interactive human-machine communication to search for information [Zeithaml et al., 2002]. Stromer-Galley [Stromer-Galley, 2000] defines interactivity in terms of cybernetics, rooted in media interaction. Cybernetics concerns the use of information and feedback; thus, in cybernetic terms, interactivity is feedback on media [Wiener, 1948].
Interactivity consists of search engine interactions, user-user interactions, and user-message interactions [Hauswald et al., 2015; Kumar et al., 2016; Cho, Leckenby, 1997]. Interactivity has emerged with the rapid development of new communication technologies, such as the internet, making digital assistant users more interactive [Baier et al., 2018; Wise et al., 2016; Ha, James, 1998; Liu, Shrum, 2002]. These features contribute to the roles of the three dimensions of e-interactivity. For example, chatbots, social media, mobile apps, and feedback forms improve the perceived performance of digital assistants, which is affected by synchronicity since users can immediately find the necessary information [Brill et al., 2019; Moar, 2019; Ghose, Dou, 1998]. Search engines affect perceived performance since users can control which information is relevant to them [Brill et al., 2019; Moar, 2019; Hoffman, Novak, 1996].
Many researchers have paid special attention to the performance of digital assistants in terms of the level of interactivity as indicated by its three dimensions: controllability, synchronicity, and bidirectionality. The importance of these three dimensions is noted due to the two-way communication involved [van Dijk, 1999; Purwanto, Kuswandi, 2017]; thus, a high level of synchronicity and controllability is needed to achieve the highest interactivity. Therefore, based on previous studies, interactivity can describe the extent to which controllability, synchronicity, and bidirectionality play a role in digital assistant applications.
Interactivity Dimensions and Perceived Performance of Digital Assistants
A number of previous researchers examined the effect of interactivity on website marketplaces. Their results showed that a high level of interactivity increases trust [Merrilees, Fry, 2003]. Furthermore, it was found that interactivity can create value, thereby increasing trust in e-commerce [Stewart, Pavlou, 2002]. Interactivity and flexibility can increase customer value and satisfaction [Purwanto, Kuswandi, 2017]. Since digital assistants aim to help their users handle their jobs, various recommendation systems, such as personalized facilities, are used to assist in the decision-making process. This feature can improve the quality of customer decisions and customer trust. In addition, many researchers suggest that digital assistants' interactivity has an effect on perceived quality, self-regulation, trust, privacy, and satisfaction [Brill et al., 2019; Kim, LaRose, 2004].
The features of digital assistants positively impact perceived consumer value, such as a sense of security, trustworthiness, and the maintenance of users' privacy [Teo et al., 2003]. Given that state-of-the-art digital assistants are among the most important factors for business success [Brill et al., 2019; Zeithaml, 1988], the benefits of the various features of digital assistants are seen by users as an output of digital assistants' performance [Brill et al., 2019; Sheth et al., 1991].
Performance is subdivided into objective performance and perceived performance [Venkatesh et al., 2003]. Objective performance is the real performance of a product or service, while perceived performance is the result of a subjective assessment. Perceived performance is generally used as a guide to validate satisfaction models. Although perception is highly individual and difficult to measure [Yi, 1990], users of digital assistants have objectively equal access to their performance. Therefore, perceived performance can be measured objectively based on performance appraisals in general [Brill et al., 2019]. Performance is an individual's cognitive evaluation of a product's performance attributes [Spreng, Olshavsky, 1993]. Thus, the following hypotheses are proposed:
H1: Controllability of digital assistants has a significant effect upon perceived performance.
H2: Synchronicity of digital assistants has a significant effect upon perceived performance.
H3: Bidirectionality of digital assistants has a significant effect upon perceived performance.
Customer Satisfaction
Customer satisfaction is an indicator of a company's success in delivering services to consumers [Akbari et al., 2015; Minta, 2018]. In the marketing literature, customer satisfaction reflects various dimensions that offer value, quality, and loyalty to customers. Therefore, a definition of customer satisfaction cannot be universally accepted since it is highly dependent upon individual consumers [Giese, Cote, 2000]. The differences in definition are caused by the dynamic, complex, and specific nature of services [Zhao et al., 2012].
The present study adopts the definition proposed by Oliver [Oliver, 2014], namely that satisfaction is a consumer's response to the fulfillment of consumer expectations. This response is an assessment of products or services that either fail to meet or exceed expectations. If individual consumers' assessments are pleasant, consumers feel satisfied, and vice versa, owing to the dissonance between the expected and the perceived levels of satisfaction [Hasan, Nasreen, 2014]. Perceived performance is an antecedent of customer satisfaction through a confirming comparison of expectations with the actual performance of the products or services [Spreng, Page, 2003]. Thus, perceived performance serves as a standard between expectations and perceived reality: when reality exceeds expectations, there is satisfaction, and vice versa. Thus, the following hypothesis is proposed:
H4: Perceived performance has a significant effect upon the satisfaction of digital assistant users.
Moderating the Role of Perceived Trust
The concept of trust has been used in many ways, but it generally relates to one's attitude and intention to be vulnerable in anticipation of certain outcomes [Brill et al., 2019]. Perceived trust involves an individual's assessment of the certainty of the performance of products and services. Trust includes interpersonal trust (between at least two people), institutional/organizational trust, and technological trust [Ejdys, 2018]. Despite the distinctions between these types of trust, users' perceived trust focuses more on the vendor and its technological capabilities; with regard to the people behind the operation of a technology, the authors argue that an individual's performance integrity is implicitly the organization's responsibility. Thus, users hold the organization or company entirely responsible for the trusted people in question.
Thus, the trust referred to in the present study is specific to particular vendors (organizations) and the attributes of digital assistant applications (technology) in terms of competence, virtue, and integrity [Komiak, Benbasat, 2006; Ejdys, 2018]. Trust in technology represents expectations of the efficiency, reliability, and effectiveness of equipment and technical systems from the perspective of an individual toward a particular technology or material object and its creator [Ejdys, 2018]. Since perceived trust is very subjective, the trustworthiness of digital assistant applications can be determined by the quality of information, perceived privacy protection, perceived security of systems, third-party authentication systems, organizational reputation, and user experience [Ejdys, 2018].
The performance of the interactivity dimensions depends upon whether users perceive digital assistants as providing certainty in terms of content, timing, process speed, and data protection [Yoo et al., 2010; Bhatt, 2014]. Digital assistants' very promising potential for technology adoption is not without problems. Given that this technology leaves its users' digital footprints, personal data are vulnerable to misuse by others [Bhatt, 2014; Belanger, Xu, 2015; Pappas, 2016]. Smith et al. [Smith et al., 1996] describe such violations of rights as the unauthorized use of data, the stealing of access, and the misuse of personal information for publication. Thus, digital assistant users face difficult trade-offs between technological innovation and the risk of information privacy problems [Acquisti et al., 2015]. Digital assistants themselves are not sensitive to these problems [Belanger, Xu, 2015]. Therefore, consumers see these risks as an issue that needs to be mitigated, or avoided by not adopting technological innovation in the form of digital assistants. Thus, the performance of technology is inseparable from that of people and organizations. Therefore, perceived trust can act as either a synergistic or a buffering interaction between the interactivity dimensions and perceived performance [Brill et al., 2019; Cohen et al., 2003]. Thus, the following hypotheses are proposed:
H5: Perceived trust positively moderates the effect of controllability upon perceived performance.
H6: Perceived trust positively moderates the effect of synchronicity upon perceived performance.
H7: Perceived trust positively moderates the effect of bidirectionality upon perceived performance.
Research Methodology
Samples and Data Collection
A sample of digital assistant users with an average age of 41.5 years from the large city of Surabaya, East Java, Indonesia was used. Respondents tended to be younger and better educated than respondents who did not employ artificial intelligence technology [McKnight et al., 2002]. Data were collected online by means of questionnaires administered through a computer-assisted web interviewing system connected to the internet. The items were accompanied by instructions during the interviewing process in order to ensure rapid responses from participants.
Respondents were asked to share their personal experiences with using digital assistants and, at the same time, to describe their demographic characteristics; thus, the data describe real respondents. Participants who completed the survey and provided a valid email address and contact information were given an internet data package as a reward. Two hundred and sixty-five (N = 265) respondents took part, but 115 were eliminated because their responses did not meet the requirements, yielding a usable response rate of 56.6%. Thus, 150 respondents could be used, of whom 85 (56.6%) were male and the remaining 65 (43.4%) were female. The average age of the digital assistant users was 41.5 years.
Respondents were mostly concentrated among the top three brands: Samsung Bixby (65%), Google Assistant (15%), and Apple Siri (7%), with others accounting for 13%. Experience with using digital assistants averaged more than 18 months. Sample characteristics are shown in Table 1.
Measures
The measures used in the present study were adopted from a number of previous studies. The questionnaire consisted of five parts: controllability, synchronicity, bidirectionality, perceived performance, and customer satisfaction. Perceived controllability, synchronicity, and bidirectionality were measured on a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree) using nine items adopted from [Liu, 2003; Yoo et al., 2010]. Perceived performance, consisting of six items, was adopted from [Davis et al., 1989; Xiao, Benbasat, 2002; Malhotra et al., 2004; Kim et al., 2008].
Figure 1. Conceptual Framework
Source: authors.
Customer satisfaction, consisting of one item, was adopted from [Yoo et al., 2010]. Finally, perceived trust, consisting of six items, was adopted from [Ejdys, 2018; Ejdys et al., 2019; Brill et al., 2019]. These items are shown in Table 2.
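For readers who wish to reproduce the scoring step, the measurement structure described above can be organized as a simple construct-to-item mapping. The sketch below is illustrative only; the item column names are hypothetical placeholders, since the original questionnaire items are summarized in Table 2 rather than released as a dataset.

```python
import pandas as pd

# Hypothetical mapping of the latent constructs to questionnaire item columns
# (the wording of the items themselves is given in Table 2).
CONSTRUCTS = {
    "controllability": ["ctrl_1", "ctrl_2", "ctrl_3"],
    "synchronicity": ["sync_1", "sync_2", "sync_3"],
    "bidirectionality": ["bidi_1", "bidi_2", "bidi_3"],
    "perceived_performance": ["perf_1", "perf_2", "perf_3", "perf_4", "perf_5", "perf_6"],
    "perceived_trust": ["trust_1", "trust_2", "trust_3", "trust_4", "trust_5", "trust_6"],
    "satisfaction": ["sat_1"],
}

def composite_scores(items: pd.DataFrame) -> pd.DataFrame:
    """Average each construct's 5-point Likert items into a composite score per respondent."""
    return pd.DataFrame(
        {name: items[cols].mean(axis=1) for name, cols in CONSTRUCTS.items()}
    )
```

Composite scores of this kind are also what the moderation analysis reported in the Results section operates on.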
Results
Confirmatory Factor Analysis (CFA)
Anderson and Gerbing [Anderson, Gerbing, 1988] recommend the following procedure for structural analysis. First, test the fit of the hypothesized model as a whole. The test results show χ²/df = 2.155, GFI = 0.908, AGFI = 0.904, TLI = 0.922, CFI = 0.929, and RMSEA = 0.076. No standardized residual exceeds 2.0, and the chi-square of 637.315
Table 1. Sample Characteristics (N = 150)
Items Frequency Share (%)
Gender
Male 85 56.60
Female 65 43.40
Geographic Background
Megapolitan 30 20.00
Metropolitan 92 61.33
Small City 28 18.66
Digital Assistant Brand
Samsung Bixby 97 65.00
Google Assistant 22 15.00
Apple Siri 11 7.00
Other 20 13.00
User Experience
6-12 months 37 24.6
1-2 years 106 70.6
Over 2 years 7 4.6
Note: mean age of respondents is 41.5 years, standard deviation is 5.41. Source: authors.
(100 df, p = 0.000) means that the overall model fit is acceptable [Hair et al., 2010]. Second, test the adequacy of each scale, consisting of the questions on each construct. The test results show satisfactory residuals and a unidimensional scale, meaning that each item meets the standard for convergent validity.
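The original analysis was run in AMOS 22.0. As a rough open-source equivalent, the two-step procedure can be sketched in Python with the semopy package, assuming lavaan-style syntax and hypothetical item names: the measurement part mirrors the constructs in Table 2, and the regression lines mirror the structural paths tested below (H1-H4). This is a sketch under those assumptions, not the authors' actual script.

```python
import pandas as pd
import semopy

# Lavaan-style description: '=~' lines define the measurement (CFA) part,
# '~' lines define the structural paths corresponding to H1-H4.
# Item names are hypothetical placeholders for the Table 2 items.
MODEL_DESC = """
controllability  =~ ctrl_1 + ctrl_2 + ctrl_3
synchronicity    =~ sync_1 + sync_2 + sync_3
bidirectionality =~ bidi_1 + bidi_2 + bidi_3
performance      =~ perf_1 + perf_2 + perf_3 + perf_4 + perf_5 + perf_6
performance ~ controllability + synchronicity + bidirectionality
sat_1       ~ performance
"""

def fit_sem(items: pd.DataFrame) -> pd.DataFrame:
    """Fit the model and return global fit statistics (chi-square, CFI, TLI, RMSEA, ...)."""
    model = semopy.Model(MODEL_DESC)
    model.fit(items)                 # maximum-likelihood estimation on the raw item data
    print(model.inspect())           # factor loadings and structural path estimates
    return semopy.calc_stats(model)  # one-row DataFrame of fit indices
```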
The reliability of the instrument was tested by calculating Cronbach's alpha. The test results show that each construct has a Cronbach's alpha of at least 0.78, meaning that each item has moderate to high internal consistency. In addition, the average variance extracted (AVE) ranges from 0.57 to 0.81, indicating that the variance accounted for by each construct is greater than that
caused by measurement errors [Fornell, Larcker, 1981], as shown in Table 3.
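The reliability and validity checks reported here follow standard formulas: Cronbach's alpha, AVE as the mean squared standardized loading, and the Fornell-Larcker comparison of the square root of AVE with inter-construct correlations. A minimal sketch, with inputs assumed to come from the fitted measurement model rather than from the authors' data:

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

def average_variance_extracted(std_loadings) -> float:
    """AVE = mean of the squared standardized factor loadings of a construct's items."""
    lam = np.asarray(std_loadings, dtype=float)
    return float(np.mean(lam ** 2))

def fornell_larcker_ok(ave_a: float, ave_b: float, corr_ab: float) -> bool:
    """Discriminant validity: sqrt(AVE) of both constructs must exceed their correlation."""
    return np.sqrt(ave_a) > abs(corr_ab) and np.sqrt(ave_b) > abs(corr_ab)
```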
Structural Model and Hypothesis Testing
Since the proposed measurement model was consistent with the data, the hypotheses were tested with AMOS using the covariance matrix. As shown in Table 4, the three latent constructs account for 67% of the effect on the perceived performance of digital assistants, while bidirectionality alone accounts for 18%.
Thus, hypotheses 1-3 were supported. Perceived performance has a significant effect upon satisfaction
Table 2. Measurement Scale
Items Description Mean SD Cronbach's Alpha
Controllability [Liu, 2003; Yoo et al., 2010; Brill et al., 2019] - I feel a lot of control over this digital assistant application. 5.17 1.17 0.78
- I feel free to do anything with this digital assistant application. 5.23 1.19
- I gain a lot of experience from this digital assistant application. 5.28 1.27
Synchronicity [Liu, 2003; Yoo et al., 2010; Brill et al., 2019] - My digital assistant processes my request quickly. 4.30 1.56 0.81
- I get more information than what I expect from this application. 5.78 1.37
- I can obtain information immediately without delay. 5.21 1.28
Bidirectionality [Liu, 2003; Yoo et al., 2010; Brill et al., 2019] - Digital assistants provide feedback correctly. 5.86 1.31 0.79
- This digital assistant provides the user with the opportunity to interact more freely. 5.85 1.28
- This digital assistant makes me feel like continuing to use it. 5.72 1.32
Perceived Performance [Davis et al., 1989; Xiao, Benbasat, 2002; Malhotra et al., 2004; Kim et al., 2008] - This digital assistant is capable of increasing my work productivity. 1.74 1.54 0.85
- This digital assistant is capable of understanding my needs. 2.67 1.66
- I am convinced that other people are also concerned about the privacy of personal data. 2.89 1.58
- I am afraid that digital assistant application providers will use my personal data. 3.38 1.57
- Overall, the interactivity dimensions of digital assistant applications can be trusted. 2.55 1.57
- Overall, the interactivity dimensions of digital assistant application providers can be trusted. 2.51 1.84
Customer satisfaction [Yoo et al., 2010] - Overall, I am satisfied with the performance of digital assistants. 3.04 0.82 0.80
Perceived Trust [Ejdys, 2018; Ejdys et al., 2019; Brill et al., 2019] - All digital assistant application brands can be trusted. 2.91 0.76 0.87
- I believe that this digital assistant application brand gives me a sense of security. 2.50 1.82
- I believe that this digital assistant application brand protects users' personal data. 2.56 1.78
- I believe that service providers (companies) will not misuse users' personal data. 2.09 1.75
- All tasks are easier with this digital assistant application brand. 2.18 1.71
- I believe that this digital assistant application makes our lives better. 1.67 1.52
Note: all items were measured on a 5-point Likert scale, from 1 = strongly disagree to 5 = strongly agree. Source: compiled by the authors.
Table 3. Correlation Matrix CFA (Fornell-Larcker criterion)
Controllability Synchronicity Bi-directionality Perceived performance Satisfaction Perceived trust
Controllability 0.791
Synchronicity 0.241 0.852
Bidirectionality -0.021 0.111 0.794
Perceived performance 0.222 0.111 0.004 0.780
Satisfaction 0.251 0.080 -0.140 0.311 0.781
Perceived trust 0.311 0.651 0.231 0.541 0.376 0.787
Age 0.057 0.125 0.113 0.136 0.135 0.115
Gender -0.072 -0.041 -0.026 -0.023 -0.125 -0.165
Geographic background -0.053 -0.210 0.012 -0.008 0.041 -0.091
Digital Assistant Brand -0.076 -0.051 -0.041 -0.031 -0.022 -0.037
Experience -0.067 -0.032 0.021 -0.015 -0.012 -0.017
Composite Reliability (CR) 0.927 0.945 0.928 0.729 0.797 0.728
Average Variance Extracted (AVE) 0.768 0.811 0.765 0.641 0.571 N/A
Mean 0.913 0.946 0.928 0.729 0.792 0.732
Standard Deviation (SD) 0.014 0.008 0.007 0.045 0.034 0.018
Model fit: Chi-square = 2.155, p < 0.01, df = 1.407; CFI = 0.929; TLI = 0.922; RMSEA = 0.076; SRMR = 0.06.
Notes: (a) The square roots of AVE for each construct are presented in bold on the diagonal of the correlation matrix. (b) AVEs of formative indicators are not applicable. (c) N = 150.
Source: compiled by the authors.
since it can account for the satisfaction of users of digital assistants. Users were confident that digital assistants facilitated their work despite concerns about the security and privacy of personal data, and they assume that other people share the same concerns [Brill et al., 2019].
The moderation effects were tested using moderated multiple regression (MMR) analysis, as recommended by [Cohen et al., 2003]. The test results show adjusted R² = 0.48, 0.37, and 0.028 for the relationships of controllability, synchronicity, and bidirectionality, respectively, with perceived performance under the moderating interaction. This means that 48%, 37%, and 2.8%, respectively, of the variation in perceived performance can be accounted for by the three dimensions of interactivity and perceived trust. Despite the small adjusted R² values, the ANOVA (F-test) results show F = 3.147 with a probability of 0.026, meaning that the model can be accepted. The beta values of 0.13, 0.19, and 0.21 (p = 0.001, p = 0.004, and p = 0.012, respectively) are significant, meaning that perceived trust strengthens the relationships of controllability, synchronicity, and bidirectionality with perceived performance. Thus, H5, H6, and H7 are supported.
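As a sketch of this moderation test, the MMR procedure of [Cohen et al., 2003] can be approximated with ordinary least squares in statsmodels: each interactivity dimension is mean-centered, multiplied by mean-centered perceived trust, and the coefficient on the product term is examined. The variable names below are hypothetical composite scores, not the authors' original SPSS/AMOS setup.

```python
import pandas as pd
import statsmodels.api as sm

def moderated_regression(scores: pd.DataFrame, dimension: str):
    """MMR for one dimension: performance ~ dimension + trust + dimension*trust.

    `scores` holds composite scores per respondent; column names are hypothetical.
    A significant positive coefficient on 'interaction' indicates that perceived
    trust strengthens the dimension's effect on perceived performance (H5-H7).
    """
    x = scores[dimension] - scores[dimension].mean()                  # mean-centered predictor
    m = scores["perceived_trust"] - scores["perceived_trust"].mean()  # mean-centered moderator
    X = sm.add_constant(pd.DataFrame({dimension: x, "trust": m, "interaction": x * m}))
    return sm.OLS(scores["perceived_performance"], X).fit()

# Example: moderated_regression(scores, "controllability").summary()
```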
Discussion
The purpose of the present study was to examine the effect of controllability, synchronicity, and bidirectionality upon perceived performance and satisfaction. The model is generally capable of accounting for 77.2% of the variance in interactivity when predicting the perceived performance of and satisfaction with digital assistants. The results of the present study confirm the first three hypotheses, namely that controllability, synchronicity, and bidirectionality have a significant effect upon perceived performance. The moderation hypotheses (H5-H7) were also confirmed, namely that perceived trust positively and significantly moderates the relationships of controllability, synchronicity, and bidirectionality with perceived performance. Finally, perceived performance has an effect upon the satisfaction of digital assistant users (H4).
Results also show that users of artificial intelligence (AI) in the form of digital assistants need two-way interactions in which the user's wishes can be understood. In general, the present study is consistent with the previous literature. Interactivity, consisting of controllability, synchronicity, and
Table 4. Hypothesis Test
Hypothesis Structural path Standardized estimate t-statistic p-values
H1 Controllability → Perceived performance 0.676 15.685 0.007*
H2 Synchronicity → Perceived performance 0.681 23.114 0.001**
H3 Bidirectionality → Perceived performance 0.182 6.761 0.009*
H4 Perceived performance → Satisfaction 0.786 21.876 0.000**
H5 Moderating: Controllability → Perceived performance 0.128 11.621 0.002**
H6 Moderating: Synchronicity → Perceived performance 0.251 32.111 0.003**
H7 Moderating: Bidirectionality → Perceived performance -0.117 12.743 0.012**
Note: Significant at: * p < 0.05; ** p < 0.01; *** p < 0.001. Source: compiled by the authors.
bidirectionality, plays a significant role in improving the perceived performance of digital assistants [Yoo et al., 2010; Brill et al., 2019; Teo et al., 2003; Raney et al., 2003]. The present study provides new empirical findings about how the performance of digital assistants is measured by the three dimensions of interactivity.
First, controllability helps users to manage the content, timing, and sequence of activities; thus, a digital assistant performs like a personal assistant capable of thinking like a human and meeting most of the user's demands in natural language [Kumar et al., 2016; Hauswald et al., 2015]. Second, synchronicity shows the speed with which digital assistants respond to users by meeting their requests in real time with high quality, reliability, and convenience [Baier et al., 2018; Wise et al., 2016; Yoo et al., 2010]. Third, bidirectionality shows that digital assistants can exchange data reciprocally, serving as conversation agents employing the principle of equality in communication [Peart, 2018; Moar, 2019; Yoo et al., 2010].
This finding is also reinforced by the moderating role of perceived trust. Perceived trust has a positive and significant role in the relationship between the interactivity dimensions and perceived performance. The use of technology raises concerns that data can be misused [Bhatt, 2014]. Because of concerns about organizations misusing private information without permission, the unauthorized use of data, and errors in personal information and access, an individual's perceived trust can strengthen the effect of the interactivity dimensions on the performance of digital assistant applications. Although digital assistant applications are released by strong brands, managers should continue to reaffirm the principles of trust with customers in every interaction, as a factor that must be maintained. When users indicate that they have a high level of trust, perceived risks related to information quality, integrity, and reliability are reduced [Kim et al., 2012]. The present study confirms that a higher level of trust strengthens the relationship between the interactivity dimensions and perceived performance. Thus, given the extent of potential risks, managers should invest in securing personal information both physically and systematically.
Despite the significant effect of all three dimensions of interactivity, bidirectionality is the smallest factor affecting the performance of digital assistants. This finding is consistent with previous studies on trust in terms of concerns about the privacy and security of personal data with digital assistants [Brill et al., 2019; Fitzgerald, 2019]. According to data from Cohn & Wolfe1, 75% of consumers were prepared to share their personal information with brands they trust. The involvement of digital assistants with their users makes data exchange more vulnerable to abuse. These users' concerns are not unfounded since trust can be fragile and subjective [Yannopoulou et al., 2011]. Users sincerely expect that their personal information in digital assistants will be kept confidential, protected, and used only with the owner's approval; on that basis, they can integrate broader data into digital assistants for the benefit of their daily lifestyle. Therefore, owners of digital assistant brands must realize that trust constitutes a performance item of paramount importance for them. Finally, perceived performance has a significant effect upon satisfaction. This effect shows that digital assistant users assess, evaluate, compare, and ensure that the settings, the processing speed, and data exchange meet and even exceed their expectations.
1 Available at: http://www.authenticl00.com, accessed 17.01.2019.
Managerial Implications
Consumers use digital assistants for their personal and organizational tasks, expecting that the capabilities and features of these applications will be continuously improved in line with their needs [Baier et al., 2018], despite the varying features of each brand of digital assistant [Kumar et al., 2016]. Thus, digital assistant service providers should be aware of the important factors underlying the perceived performance of digital assistants.
Digital assistants can be involved in marketing activities as a medium of conversation in transactions, such as amplification tools, interface devices, feedback tools, and creative tools, to obtain value from customers [Harmeling et al., 2017]. Data collected by digital assistants can serve as a source of analysis for companies. Therefore, companies should monitor and evaluate these data as a whole to ensure that this technology remains in line with customer needs [Ranjan, Read, 2016]. The present study demonstrates that customer expectations are met through interaction with digital assistants. Thus, this technology can serve as a catalyst for the development of digital assistant technology in sustainable business activities. Additionally, users would obtain a greater understanding of how digital assistants can provide more recent, relevant information and efficiently perform important tasks for them [Brill et al., 2019].
Limitations and Future Research
The present study only examines the performance of digital assistants in terms of the interactivity dimensions (controllability, synchronicity, and bidirectionality) and user satisfaction in general. Thus, the performance of individual digital assistant brands cannot be inferred. However, user expectations and patterns of use of interactivity features can vary across brands of digital assistants. For example, the two-way communication provided by each digital assistant cannot respond to individual users' desires because of language differences in each country. Therefore, future studies can examine various brands of personal assistants specifically to gain more in-depth knowledge of the role of interactivity in the perceived performance of digital assistants.
The sample of the present study consisted entirely of current users of digital assistants, a number of whom were new users, whereas former users who quit using them for some reason were not included. Thus, this study is rather exclusive and cannot explore other predictors of perceived performance and user satisfaction in detail. Future studies can explore commitment and loyalty, examine the factors that cause users to quit using digital assistant applications, and, at the same time, consider how various features could be improved more fully. Finally, the unit of analysis of the present study was well-known brands (Samsung Bixby, Google Assistant, and Apple Siri), which is undoubtedly related to the performance delivered (image, a high level of trust, protection of user privacy). Therefore, future studies can explore more closely other brands of digital assistants that do not dominate the market for artificial intelligence application technology.
References
Acquisti A., Brandimarte L., Loewenstein G. (2015) Privacy and human behavior in the age of information. Science, vol. 347, pp. 509-514.
Akbari M., Salehi K., Samadi M. (2015) Brand heritage and word of mouth: The mediating role of brand personality, product involvement and customer satisfaction. Journal of Marketing Management, vol. 3, no 1, pp. 83-90.
Alpaydin E. (2014) Introduction to machine learning, Cambridge, MA: MIT Press.
Anderson J.C., Gerbing D.W. (1988) Structural equation modeling in practice: A review and recommended two-step approach. Psychology Bulletin, vol. 103, pp. 411-423.
Baier D., Rese A., Roglinger M. (2018) Conversational user interfaces for online shops? A categorization of use cases. Paper presented at the 39th International Conference on Information Systems (ICIS), December 2018, San Francisco, USA.
Belanger F., Xu H. (2015) The role of information systems research in shaping the future of information privacy. Information Systems Journal, vol. 25, no 6, pp. 573-578.
Bhatt A. (2014) Consumer attitude towards online shopping in selected regions of Gujarat. Journal of Marketing Management, vol. 2, no 2, pp. 29-56.
Bittner E., Oeste-Reiß S., Leimeister J.M. (2019) Where is the bot in our team? Toward a taxonomy of design option combinations for conversational agents in collaborative work. Paper presented at the 52nd Hawaii International Conference on System Sciences, January 8-11, Maui, Hawaii, USA.
Blattberg R.C., Deighton J. (1991) Interactive marketing: Exploiting the age of addressability. Sloan Management Review, vol. 33, no 1, pp. 5-14.
Bretz R. (1983) Media for Interactive Communication, Beverly Hills, CA: Sage.
Brill T., Munoz L., Richard J. (2019) Siri, Alexa, and other digital assistants: A study of customer satisfaction with artificial intelligence applications. Journal of Marketing Management, vol. 35, no 15-16, pp. 1401-1436.
Cho C.-H., Leckenby J.D. (1997) Internet-related programming technology and advertising. Proceedings of the Annual Conference of the American Academy of Advertising (ed. M.C. Macklin), Cincinnati, OH: American Academy of Advertising, p. 69.
Cohen J., Cohen P., West S., Aiken L. (2003) Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.), Mahwah, NJ: Lawrence Erlbaum Associates.
Coyle J.R., Thorson E. (2001) The effects of progressive levels of interactivity and vividness in web marketing sites. Journal of Advertising, vol. 30, no 3, pp. 65-77.
Davis F.D., Bagozzi R.P., Warshaw P.R. (1989) User acceptance of computer technology: A comparison of two theoretical models. Management Science, vol. 35, no 8, pp. 982-1003.
Ejdys J. (2018) Building technology trust in ICT application at a University. International Journal of Emerging Markets, vol. 13, no 5, pp. 980-997.
Ejdys J., Ginevicius R., Rozsa Z., Janoskova K. (2019) The Role of Perceived Risk and Security Level in Building Trust in E-Government Solutions. Ekonomie a Management, vol. 22, pp. 220-235. DOI: 10.15240/tul/001/2019-3-014.
Fitzgerald K. (2019) In the 'Opt-In' data economy, consumer confidence is key. Available at: https://www.forbes.com/sites/forbestechcouncil/2019/01/16/in-the-opt-in-data-economy-consumer-confidence-is-key/#7410e4736634, accessed 15.02.2020.
Fornell C., Larcker D.F. (1981) Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, vol. 18, no 1, pp. 39-50.
Fortin D.R., Dholakia R.R. (2005) Interactivity and vividness effects on social presence and involvement with a web-based advertisement. Journal of Business Research, vol. 58, no 3, pp. 387-396.
Ghose S., Dou W.Y. (1998) Interactive functions and their impacts on the appeal of internet presences sites. Journal of Advertising Research, vol. 38, no 2, pp. 29-43.
Giese J.L., Cote J.A. (2000) Defining consumer satisfaction. Academy of Marketing Science Review, vol. 1, pp. 1-24. Available at: https://www.proserv.nu/b/Docs/Defining%20Customer%20Satisfaction.pdf, accessed 18.11.2019.
Ha L., James E.L. (1998) Interactivity reexamined: A baseline analysis of early business web sites. Journal of Broadcasting and Electronic Media, vol. 42, no 4, pp. 457-469.
Hair J.F., Anderson R.E., Tatham R.L., Black W.C. (2010) Multivariate Data Analysis, Delhi: Pearson Education.
Hanssen L., Jankowski N.W., Reinier E. (1996) Interactivity from the perspective of communication studies. The Contours of Multimedia: Recent Technological, Theoretical, and Empirical Developments (eds. N.W. Jankowski, L. Hanssen), Luton (UK): University of Luton Press, pp. 61-73.
Harmeling C.M., Moffett J.W., Arnold M.J., Carlson B.D. (2017) Toward a theory of customer engagement marketing. Journal of the Academy of Marketing Science, vol. 45, no 3, pp. 312-335.
Hasan U., Nasreen R. (2014) The empirical study of relationship between post purchase dissonance and consumer behaviour. Journal of Marketing Management, vol. 2, no 2, pp. 65-77.
Hauswald J., Laurenzano M.A., Zhang Y., Li C., Rovinski A., Khurana A., Tang L. (2015) Sirius: An open end-to-end voice and vision personal assistant and its implications for future warehouse scale computers. Paper presented at the Twentieth International Conference on Architectural Support for Programming Languages and Operating Systems, March 14-18, 2015, Istanbul, Turkey.
Hoffman D.L., Novak T.P. (1996) Marketing in hypermedia computer-mediated environments: Conceptual foundations. Journal of Marketing, vol. 60, no 3, pp. 50-68.
Karapanos E. (2013) User experience over time, Heidelberg, New York, Dordrecht, London: Springer.
Kim D.J., Ferrin D.L., Rao H.R. (2008) A trust-based consumer decision-making model in electronic commerce: The role of trust, perceived risk, and their antecedents. Decision Support Systems, vol. 44, no 2, pp. 544-564.
Kim H.W., Xu Y., Gupta S. (2012) Which is more important in internet shopping, perceived price or trust? Electronic Commerce Research and Applications, vol. 11, no 3, pp. 241-252.
Kim J., LaRose R. (2004) Interactive e-commerce: Promoting consumer efficiency or impulsivity? Journal of Computer-Mediated Communication, vol. 10, no 1, pp. 211-219.
Koehler J. (2016) Business process innovation with artificial intelligence — benefits and operational risks. European Business & Management, vol. 4, no 2, pp. 55-66.
Komiak S.X., Benbasat I. (2006) The effects of personalization and familiarity on trust and adoption of recommendation agents. MIS Quarterly, vol. 30, no 4, pp. 941-960.
Kumar V., Dixit A., Javalgi R.R.G., Dass M. (2016) Research framework, strategies, and applications of intelligent agent technologies (IATs) in marketing. Journal of the Academy of Marketing Science, vol. 44, no 1, pp. 24-45.
Liu Y. (2003) Developing a scale to measure the interactivity of websites. Journal of Advertising Research, vol. 43, no 2, pp. 207-216.
Liu Y., Shrum L.J. (2002) What is interactivity and is it always such a good thing? Implications of definition, person, and situation for the influence of interactivity on advertising effectiveness. Journal of Advertising, vol. 31, no 4, pp. 53-64.
Malhotra N.K., Kim S.S., Agarwal J. (2004) Internet users' information privacy concerns (IUIPC): The construct, the scale, and a causal model. Information Systems Research, vol. 15, no 4, pp. 336-355.
Massey B.L., Levy M.R. (1999) Interactivity, online journalism, and English language web newspapers in Asia. Journalism & Mass Communication Quarterly, vol. 76, no 1, pp. 138-151.
McKnight D.H., Choudhury V., Kacmar C. (2002) Developing and validating trust measures for e-commerce: An integrative typology. Information Systems Research, vol. 13, pp. 334-359.
McMillan S.J. (1998) Who pays for content? Funding in interactive media. Journal of Computer Mediated Communication, vol. 4, no 1, pp. 89-96.
McMillan S.J. (2005) The researchers and the concept: Moving beyond a blind examination of interactivity. Journal of Interactive Advertising, vol. 5, no 1 (online). Available at: https://doi.org/10.1080/15252019.2005.10722096, accessed 17.12.2019
Merrilees B., Fry M.-L. (2003) E-trust: The influence of perceived interactivity on e-retailing users. Marketing Intelligence & Planning, vol. 21, no 2, pp. 123-128.
Minta Y. (2018) Link between satisfaction and customer loyalty in the insurance industry: Moderating effect of trust and commitment. Journal of Marketing Management, vol. 6, no 2, pp. 25-33.
Mithas S., Rust R.T. (2016) How information technology strategy and investments influence firm performance: Conjecture and empirical evidence. MIS Quarterly, vol. 40, no 1, pp. 223-246.
Moar J. (2019) The Digital Assistants of Tomorrow (White Paper), Basingstoke (UK): Juniper Research Ltd.
Novak T.P., Hoffman D.L., Yung Y.F. (2000) Measuring the customer experience in online environments: a structural modeling approach. Marketing Science, vol. 19, no 1, pp. 22-43.
Oliver R.L. (2014) Satisfaction: A behavioral perspective on the consumer (2nd ed.), New York: Routledge.
Pappas N. (2016) Marketing strategies, perceived risks, and consumer trust in online buying behaviour. Journal of Retailing and Consumer Services, vol. 29, pp. 92-103.
Pavlik J.V. (1998) New Media Technology: Cultural and Commercial Perspectives, Boston, MA: Allyn and Bacon.
Peart A. (2018) Conversational AI platforms demand is growing. Available at: https://blog.worldsummit.ai/ conversational-ai-platforms-demand-is-growing, accessed 04.02.2020.
Purwanto P., Kuswandi K. (2017) Effects of Flexibility and Interactivity on the Perceived Value of and Satisfaction with E-Commerce (Evidence from Indonesia). Market-Trziste, vol. 29, no 2, pp. 139-159. Available at: https://doi.org/10.22598/mt/2017.29.2.139, accessed 02.12.2019.
Raney A.A., Arpan L.M., Pashupati K., Brill D. (2003) At the movies, on the web: An investigation of the effects of entertaining and interactive web content on site and brand evaluations. Journal of Interactive Marketing, vol. 17, no 4, pp. 38-53.
Ranjan K.R., Read S. (2016) Value co-creation: Concept and measurement. Journal of the Academy of Marketing Science, vol. 44, no 3, pp. 290-315.
Sheth J.N., Newman B.I., Gross B.L. (1991) Consumption values and market choice, Cincinnati, OH: South Western Publishing.
Smith H.J., Milberg S.J., Burke S.J. (1996) Information privacy: Measuring individuals' concerns about organizational practices. MIS Quarterly, vol. 20, no 2, pp. 167-196.
Spreng R.A., Olshavsky R.W. (1993) A Desires Congruency Model of Consumer Satisfaction. Journal of the Academy of Marketing Science, vol. 21, no 3, pp. 169-177.
Spreng R.A., Page T.J. (2003) A test of alternative measures of disconfirmation. Decision Sciences, vol. 34, no 1, pp. 31-62.
Steuer J. (1992) Defining virtual reality: Dimensions determining telepresence. Journal of Communication, vol. 42, no 4, pp. 73-93.
Stewart D.W., Pavlou P.A. (2002) From consumer response to active consumer: Measuring the effectiveness of interactive media. Journal of the Academy of Marketing Science, vol. 30, no 4, pp. 376-396.
Stromer-Galley J. (2000) Online interaction and why candidates avoid it. Journal of Communication, vol. 50, no 4, pp. 111-132.
Teo H.H., Oh L.B., Liu C., Wei K.K. (2003) An empirical study of the effects of interactivity on web user attitude. International Journal of Human Computer Studies, vol. 58, no 3, pp. 281-305.
Van Dijk J. (1999) The Network Society: Social Aspects of New Media, London: Sage.
Venkatesh V., Morris M.G., Davis G.B., Davis F.D. (2003) User acceptance of information technology: Toward a unified view. MIS Quarterly, vol. 27, no 3, pp. 425-478.
Wiener N. (1948) Cybernetics, or Control and Communication in the Animal and the Machine, Cambridge, MA: Technology Press.
Wise J., VanBoskirk S., Liu S. (2016) The rise of intelligent agents, Cambridge, MA: Forrester Research. Available at: https://www.forrester.com/report/The+Rise+0f+Intelligent+Agents/-/E-RES128047#figure1, accessed 12.01.2016.
Wu G. (2005) The mediating role of perceived interactivity in the effect of actual interactivity on attitude toward the website. Journal of Interactive Advertising, vol. 5, no 2, pp. 29-39.
Xiao S., Benbasat I. (2002) The impact of internalization and familiarity on trust and adoption of recommendation agents (Working Paper 02-MIS-006), Vancouver: University of British Columbia.
Yadav M.S., Varadarajan P.R. (2005) Interactivity in the electronic marketplace: An exposition of the concept and implications for research. Journal of the Academy of Marketing Science, vol. 33, no 4, pp. 585-603.
Yannopoulou N., Koronis E., Elliott R. (2011) Media amplification of a brand crisis and its affect on brand trust. Journal of Marketing Management, vol. 27, no 5-6, pp. 530-546.
Yi Y. (1990) A critical review of consumer satisfaction. Review of Marketing, vol. 4, no 1, pp. 68-123.
Yoo W.S., Yunjung L., Jung K. P. (2010) The role of interactivity in e-tailing: Creating value and increasing satisfaction. Journal of Retailing and Consumer Services, vol. 17, pp. 89-96.
Zack M.H. (1993) Interactivity and communication mode choice in ongoing management groups. Information Systems Research, vol. 4, no 3, pp. 207-239.
Zeithaml V.A. (1988) Consumer perceptions of price, quality and value: A means-end model and synthesis of evidence. Journal of Marketing, vol. 52, no 3, pp. 2-22.
Zeithaml V.A., Parasuraman A., Malhotra A. (2002) An Empirical Examination of the Service Quality Value-Loyalty Chain in an Electronic Channel (Working Paper), Chapel Hill, NC: University of North Carolina.