
ORIGINAL RESEARCH article

Front. Psychol., 13 August 2021
Sec. Emotion Science
This article is part of the Research Topic Emotion Recognition Using Brain-Computer Interfaces and Advanced Artificial Intelligence.

Motivation, Social Emotion, and the Acceptance of Artificial Intelligence Virtual Assistants—Trust-Based Mediating Effects

  • 1Business and Economic Research Institute, Harbin University of Commerce, Harbin, China
  • 2School of Business, Dalian University of Technology, Dalian, China
  • 3Accounting and Auditing College, Guangxi University of Finance and Economics, Nanning, China

Users' emotional responses to Artificial Intelligence (AI) virtual assistants are complex, manifesting mainly in usage motivation and social emotion, yet current research lacks an effective conversion path from emotion to acceptance. This paper approaches the problem from the perspective of trust, establishes an AI virtual assistant acceptance model, and conducts an empirical study based on survey data from 240 questionnaires, analyzed with multilevel regression and the bootstrap method. The results show that functionality and social emotion have significant effects on trust, that perceived humanity has an inverted U-shaped relationship with trust, and that trust mediates the relationships between both functionality and social emotion and acceptance. The findings explain the emotional complexity of users' responses to AI virtual assistants and extend the transformation path of technology acceptance from the trust perspective, with implications for the development and design of AI applications.

Introduction

With the advancement of AI technology, Artificial Intelligence (AI) applications such as service robots, chatbots, and AI virtual assistants are becoming increasingly common (Gummerus et al., 2019). Because AI virtual assistants offer users convenient and more efficient services (van Doorn et al., 2017; Fernandes and Oliveira, 2021), people's interest in them and frequency of use are gradually increasing. Since technology acceptance is a key variable reflecting whether AI virtual assistants are accepted by users (Fernandes and Oliveira, 2021), it is important for product developers and corporate investors to explore the drivers of AI virtual assistant acceptance and their mechanisms of action.

However, current research on the acceptance of AI virtual assistants still suffers from three deficiencies. First, existing studies focus mainly on how functionality and social emotions (Wirtz et al., 2018) affect the acceptance of AI virtual assistants (AVA), which helps reveal consumers' intention to use them. However, satisfying technical and social needs alone does not induce users to trust AI virtual assistants, which leads to low loyalty. In that case, human users will find it difficult to collaborate with AI virtual assistants, limiting their application in society and preventing AI virtual assistants from being truly accepted by human users.

In fact, trust can reduce human users' negative emotions about new technologies and affect their tendency to accept them (Sparks and Browning, 2011); nevertheless, the applicability of trust in the AI field still needs further verification. Second, some scholars have tried to explore the potential mechanisms of users' trust in AI virtual assistants (Hassanein and Head, 2007; Glikson and Woolley, 2020); however, how functionality and social emotions relate to trust, and whether the established drivers of acceptance also affect trust, remain to be explored. Finally, there is still a lack of effective transformation paths between AVA and its drivers, and it is unclear whether trust can carry the transformation between the two. This situation is not conducive to expanding the potential paths to and intrinsic mechanisms of AVA from a trust perspective.

This paper reports the following findings. First, trust can reduce users' rejection of new things and thus promotes users' acceptance of AI virtual assistants at the psychological level, making it a direct driver of AI virtual assistant acceptance. Second, trust building depends on the user's motivation for and perception of using the AI virtual assistant; that is, it is affected by functionality and social emotion. In this process, a positive usage experience contributes to user trust; for example, users are particularly concerned with whether the assistant is useful or convenient. However, users' social-emotional perceptions of AI virtual assistants do not affect trust uniformly: perceived social presence and perceived social interaction have positive effects, whereas perceived humanity has an inverted U-shaped effect. In other words, satisfying users' social needs can effectively increase users' trust in AI virtual assistants, but because excessive perceived humanity reverses this effect, AI virtual assistants should be designed to maintain a moderate level of perceived humanity so that users trust their services more. Finally, this paper reveals the transformation path between functionality and social emotion, on the one hand, and AVA, on the other, by examining the mediating effect of trust. Trust exhibits two different mechanisms: it partially mediates between functionality and acceptance and fully mediates between social emotion and acceptance, and these two degrees of mediation also indicate the effectiveness of trust as a transformation path.

Research Framework and Hypothesis Development

Referring to the technology acceptance model and the service robot acceptance model, this paper studies AVA at three levels: first, it investigates the relationship between trust and acceptance; second, it investigates the relationships between trust and both functionality and social emotion; third, it investigates the mediating effect of trust between functionality and social emotion, on the one hand, and acceptance, on the other. Based on the above theoretical models, this study's theoretical model and hypotheses are shown in Figure 1.

Figure 1. Artificial intelligence virtual assistant acceptance model.

Trust is defined as the user's confidence that the AI virtual assistant can reliably deliver a service (Wirtz et al., 2018). The services of AI virtual assistants are based on artificial intelligence algorithms, but due to the inherent black-box problem of AI technology (Asatiani et al., 2020), users will not fully trust the information or services provided by AI virtual assistants (Kaplan and Haenlein, 2019). Existing research shows that merely meeting users' technical and social needs does not truly increase their loyalty to AI virtual assistants (Hassanein and Head, 2007). Trust prompts users to subjectively reduce their negative emotional perceptions of AI virtual assistants, lowering perceived complexity and vulnerability, and plays a key role in improving the acceptance of AI virtual assistants (Shin, 2021). Therefore, this paper introduces the trust variable to explore the mechanism underlying its effect on acceptance and proposes the following hypothesis:

H1: Users' trust in AI virtual assistants is positively correlated with AVA.

Perceived usefulness is the degree to which an individual perceives that a technology improves their performance and is an important factor determining user acceptance, adoption, and usage (Kulviwat et al., 2007; Jan and Contreras, 2011). Perceived ease of use is the degree to which an individual perceives that using a technology requires minimal physical and mental effort, and it is an important driver of technology acceptance and adoption (Kulviwat et al., 2007). Wirtz et al. (2018) place perceived usefulness and perceived ease of use at the core of the service robot acceptance model. McLean and Osei-Frimpong (2019) found that utilitarian benefits (namely, perceived usefulness and perceived ease of use) have a positive impact on users' use of AI virtual assistants. These results show that perceived usefulness and perceived ease of use are important antecedents of consumer trust. Venkatesh and Bala (2008), in turn, found that perceived usefulness and perceived ease of use are significant predictors of behavioral intention. Both have a positive effect on individuals' acceptance of a technology: the higher users rate perceived usefulness and perceived ease of use, the more positive their attitudes toward the AI virtual assistant, which in turn fosters trust in it. Glikson and Woolley (2020) argued that trust formation also depends on machine competence (i.e., the extent to which it does its job properly). Therefore, this paper makes the following hypotheses:

H2a: Perceived usefulness is positively correlated with users' trust in AI virtual assistants.

H2b: Perceived ease of use is positively correlated with users' trust in AI virtual assistants.

Perceived humanity, also known as anthropomorphism, refers to whether the user perceives the AI virtual assistant as human-like during interaction with it. Perceived humanity is an important determinant of customer use of AI virtual assistants (van Doorn et al., 2017). Scholars hold different views on its effect: some research shows that users tend to prefer anthropomorphized AI assistants (Epley et al., 2008). However, drawing on the "Uncanny Valley" effect, a highly anthropomorphic AI virtual assistant makes users more inclined to judge human-computer interaction by the rules of human interaction and to form higher expectations. When the AI virtual assistant then makes a low-level mistake, the inconsistency between the high degree of anthropomorphism and the mistaken behavior violates the user's expectations and creates a sense of aversion. Once the AI virtual assistant is anthropomorphized, the user experiences a sense of connection with it (van Pinxteren et al., 2019), but because the AI virtual assistant is not human, this can create a sense of unnaturalness and may even cause the user's interaction with the AI virtual assistant to break off entirely (Tinwell et al., 2011; van Doorn et al., 2017). Users who are more sensitive to perceived humanity believe that AI virtual assistants with human-like characteristics threaten human distinctiveness and self-identity (Gursoy et al., 2019). In addition, humans must learn how to interact with AI virtual assistants, which increases the burden on consumers using AI devices (Kim and McGill, 2018). According to existing research, a moderate level of perceived humanity will enhance trust between human users and AI virtual assistants, while excessive perceived humanity will make users feel threatened or even fearful and may cause them to break off their interactions. Based on this, the paper proposes the following hypothesis:

H3a: Perceived humanity has an inverted U-shaped relationship with user trust in artificial intelligence virtual assistants.

Interaction means that people connect with one another and with information: in interacting, they communicate and exchange emotions, energy, resources, and other content while forming judgments about and reacting to the activities and words of others. From the perspective of information dissemination, interaction is grounded in the relationship between people and computers, using new technologies to enhance the exchange between users and computers (Shin, 2020). Current research suggests that interaction promotes both emotional and behavioral loyalty to technology (Wirtz et al., 2018; Sundar, 2020) and that enhanced interaction can increase user satisfaction with a website (Song and Zinkhan, 2008; Jiang et al., 2019). Hence, the interactivity of AI virtual assistants can engage users to positive effect. Perceived social interactivity can be defined as the perception that the AI virtual assistant displays appropriate actions and "emotions" according to societal norms (Wirtz et al., 2018). If an AI virtual assistant interacts in a social manner, demonstrates its social capabilities, and provides favorable service to the user, its social appeal increases (McLean and Osei-Frimpong, 2019), thus promoting trust in it (Chattaraman et al., 2019). Therefore, this paper proposes the following hypothesis:

H3b: Perceived social interactivity is positively correlated with user trust in artificial intelligence virtual assistants.

Perceived social presence refers to the degree to which the user perceives the AI virtual assistant as a social entity. Drawing on social presence theory, perceived social presence is an inherent quality of AI virtual assistants: it means that the user perceives interaction with the AI virtual assistant as personal, social, warm, and sensitive. If an AI virtual assistant conveys a sense of interpersonal and social connection to the user, the user will have a positive experience with it (Holzwarth et al., 2006; Wirtz et al., 2018) and perceive the AI virtual assistant as a real social entity. When AI virtual assistants demonstrate a higher perceived social presence, users build stronger trust in them (Wang and Emurian, 2005). Moreover, AI virtual assistants possess real-time communication, voice, politeness, and other language-based communication skills, which can meet the social needs of human users, generate positive emotions, and create a harmonious social atmosphere, thus prompting human users to develop trust in AI virtual assistants (Fernandes and Oliveira, 2021). Therefore, the following hypothesis is proposed:

H3c: Perceived social presence is positively correlated with user trust in AI virtual assistants.

Functionality refers to the degree of usefulness and convenience of the AI virtual assistant. Social emotion refers to the social experience of human users during their interaction with the AI virtual assistant. However, there is no established transformation path between these two dimensions and acceptance; therefore, this paper introduces the trust variable to explore its mechanism of action between usage motivation and perception, on the one hand, and acceptance, on the other. According to the brand effect, AI virtual assistants that provide favorable services and information tend to give users a comfortable experience, forming a positive cycle: a positive product usage experience drives users to trust AI virtual assistants more, thus increasing their loyalty and acceptance. In environments lacking social emotion, users tend to withhold information and reduce their trusting behavior. Therefore, users will trust AI virtual assistants more in contexts where social emotions are stronger (Glikson and Woolley, 2020); that is, social emotions are necessary for trust development (Hassanein and Head, 2007). Research shows that trust is influenced by both rational (i.e., functionality) and emotional (i.e., social emotion) dimensions and that trust mediates between these dimensions and user acceptance (Palmatier et al., 2006; Glikson and Woolley, 2020). If the AI virtual assistant can inspire more trust in the user, it will reduce the user's suspicion of it and improve AVA. Therefore, this paper explores the mediating role of trust between functionality and social emotion, on the one hand, and acceptance, on the other, and proposes the following hypotheses:

H4: User trust behavior has a mediating role between functionality and acceptance.

H5: User trust behavior has a mediating role between social emotion and acceptance.

Data and Research Methodology

Scale Design and Data Sources

At present, AI virtual assistants are widely used in daily life; people of all ages and occupations can access them and can therefore understand the scenario of this study accurately. Accordingly, this paper selects the general public as the survey target. Online questionnaires were generated with the Wenjuanxing (Questionnaire Star) platform, and the questionnaire links and QR codes were distributed through WeChat groups, WeChat Moments, QQ groups, online forums, and other channels to invite the public to respond. A total of 240 valid questionnaires were obtained. The descriptive statistics of the sample are shown in Table 1.

Table 1. Results of descriptive statistics of the sample (N = 240).

Variable Measurement

To ensure the reliability and validity of the measurement scales, mature scale items were selected for this study and appropriately adapted to the study scenario. In particular, functionality is based on the technology acceptance model and, incorporating the findings of Venkatesh and Davis (2000) and others, is divided into two dimensions, perceived usefulness and perceived ease of use, each containing four items. Social emotion is based on the service robot acceptance model and is divided into three dimensions: perceived humanity, perceived social interactivity, and perceived social presence. Following Fernandes and Oliveira (2021), perceived humanity contains four items, perceived social interactivity contains two items, and perceived social presence contains three items. Trust is based on the service robot acceptance model and contains a single dimension with four items based on Shin (2021). AVA contains three items following Fernandes and Oliveira (2021). All items are scored on a 5-point Likert scale, with 1 indicating "does not conform at all" and 5 indicating "fully conforms."

Reliability and Validity Tests

The descriptive statistics and correlation coefficients are shown in Table 2. The results show that perceived usefulness is positively correlated with trust (r = 0.510, p < 0.01), perceived ease of use is positively correlated with trust (r = 0.464, p < 0.01), perceived social interactivity is positively correlated with trust (r = 0.547, p < 0.01), and perceived social presence is positively correlated with trust (r = 0.537, p < 0.01). These results preliminarily support hypotheses 2a, 2b, 3b, and 3c.

Table 2. Descriptive statistics results with correlation coefficients.
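As an illustration of this step, the following Python sketch computes the Pearson correlation matrix that Table 2 summarizes. It assumes the 240 responses are stored in a file named survey.csv with one averaged score per construct and per respondent; the file and column names are hypothetical, not taken from the paper.

```python
import pandas as pd

# Hypothetical construct-level data: each column is the mean of a construct's
# Likert items for one respondent (240 rows in total).
df = pd.read_csv("survey.csv")

constructs = ["perceived_usefulness", "perceived_ease_of_use",
              "perceived_humanity", "perceived_social_interactivity",
              "perceived_social_presence", "trust", "acceptance"]

# Pearson correlation matrix corresponding to Table 2.
print(df[constructs].corr(method="pearson").round(3))
```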

As shown in Table 3, the Cronbach's α values of the seven variables are all >0.7, indicating acceptable internal consistency of the measurement scales.

Table 3. Reliability test results.
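As a sketch of how the internal-consistency check can be reproduced, the function below computes Cronbach's α from a respondents-by-items matrix; the item file and column names (e.g., tr1-tr4 for the four trust items) are hypothetical.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a respondents-by-items matrix."""
    k = items.shape[1]                               # number of items
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Example: the four hypothetical trust items from the questionnaire.
items = pd.read_csv("survey_items.csv")
alpha_trust = cronbach_alpha(items[["tr1", "tr2", "tr3", "tr4"]])
print(f"Cronbach's alpha (trust): {alpha_trust:.3f}")  # acceptable if > 0.7
```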

Confirmatory factor analysis of the mature scales was conducted in SPSSAU to test validity directly (as shown in Table 4). The results show that the standardized factor loadings are all within the acceptable range (>0.400), indicating a strong correlation between the latent variables and their measurement items. In addition, the average variance extracted (AVE) of each construct is >0.5 and the composite reliability (CR) is >0.7, indicating good convergent validity.

Table 4. Scale items and validity tests.
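For reference, convergent validity indices can be computed directly from standardized loadings using the conventional formulas AVE = mean(λ²) and CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)). The sketch below uses illustrative loadings for a four-item construct, not the values reported in Table 4.

```python
import numpy as np

def ave_and_cr(loadings):
    """Average variance extracted (AVE) and composite reliability (CR)
    from the standardized factor loadings of one construct."""
    lam = np.asarray(loadings, dtype=float)
    error_var = 1.0 - lam ** 2        # residual variances under standardization
    ave = np.mean(lam ** 2)           # mean squared loading
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + error_var.sum())
    return ave, cr

# Illustrative loadings, not the paper's estimates.
ave, cr = ave_and_cr([0.72, 0.78, 0.81, 0.69])
print(f"AVE = {ave:.3f} (>0.5 desired), CR = {cr:.3f} (>0.7 desired)")
```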

The results of the KMO and Bartlett's tests are shown in Table 5. The KMO value exceeds 0.9 and the p-value of Bartlett's test is <0.05, indicating that the data are suitable for factor analysis.

Table 5. KMO and Bartlett's test.
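A minimal sketch of these sampling-adequacy checks in Python, using the factor_analyzer package and the same hypothetical item-level file as the earlier sketches:

```python
import pandas as pd
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                             calculate_kmo)

items = pd.read_csv("survey_items.csv")  # hypothetical item-level responses

# Bartlett's test of sphericity: a significant p-value indicates the items
# are sufficiently correlated for factor analysis.
chi_square, p_value = calculate_bartlett_sphericity(items)

# Kaiser-Meyer-Olkin measure: an overall value above 0.9 indicates
# excellent sampling adequacy.
kmo_per_item, kmo_overall = calculate_kmo(items)

print(f"Bartlett chi-square = {chi_square:.2f}, p = {p_value:.4f}")
print(f"Overall KMO = {kmo_overall:.3f}")
```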

Empirical Testing and Analysis

Selection of Research Method

The purpose of this study is to investigate the factors influencing acceptance and their mechanisms of action. Based on the existing literature, this paper structures the study of acceptance around four dimensions: functionality, social emotion, trust, and AVA. Multilevel (hierarchical) regression analysis is used to examine the relationships among the variables. Its core is ordinary regression analysis; the difference is that predictors are entered in successive blocks, each block adding terms to the previous model, which makes it possible to test whether the added terms provide incremental explanatory power. In addition, the mediating effects are tested with the product-of-coefficients approach, implemented with the bootstrap resampling method. The basic idea of the bootstrap is to resample the observed data with replacement many times and use the resulting estimates to construct a confidence interval when the population distribution is unknown. The method has relatively high statistical power and imposes no distributional assumptions on the mediating effect.
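The analysis itself was run in SPSSAU, but the block-entry logic can be illustrated with a short Python sketch using statsmodels; the construct and control-variable column names are hypothetical carryovers from the earlier sketches. Each block is an ordinary least-squares model, and the change in R-squared between blocks shows whether the added terms carry incremental explanatory power.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical construct-level data with controls

# Block 1: control variables only.
m1 = smf.ols("trust ~ C(gender) + age + C(education) + C(marital_status)",
             data=df).fit()

# Block 2: add the functionality dimension.
m2 = smf.ols("trust ~ C(gender) + age + C(education) + C(marital_status)"
             " + perceived_ease_of_use + perceived_usefulness", data=df).fit()

# Block 3: add the social-emotion dimension, including the squared humanity term.
df["perceived_humanity_sq"] = df["perceived_humanity"] ** 2
m3 = smf.ols("trust ~ C(gender) + age + C(education) + C(marital_status)"
             " + perceived_ease_of_use + perceived_usefulness"
             " + perceived_humanity + perceived_humanity_sq"
             " + perceived_social_interactivity + perceived_social_presence",
             data=df).fit()

for name, model in [("Model 1", m1), ("Model 2", m2), ("Model 3", m3)]:
    print(f"{name}: R-squared = {model.rsquared:.3f}")

# Incremental explanatory power of each block.
print(f"Delta R-squared, block 2: {m2.rsquared - m1.rsquared:.3f}")
print(f"Delta R-squared, block 3: {m3.rsquared - m2.rsquared:.3f}")
```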

Correlation Test

In this paper, a multilevel regression analysis was used to test the hypotheses using SPSSAU software.

As shown in Table 6, this hierarchical regression analysis involves three models. The independent variables in model 1 are the control variables (Gender, Age, Education, and Marital Status); model 2 adds perceived ease of use and perceived usefulness to model 1; and model 3 adds perceived humanity, perceived humanity squared, perceived social interactivity, and perceived social presence to model 2.

Table 6. Results of multilevel regression tests (dependent variable: Trust).

The dependent variable in this analysis is Trust. Model 1 examines the effect of the control variables; an F-test reveals that the model is not significant (F = 1.516, p > 0.05), indicating that the four control variables of Gender, Age, Education, and Marital Status have no significant effect on Trust.

The results of model 2 show that the change in the F-value is significant (p < 0.05) after perceived ease of use and perceived usefulness are added to model 1, meaning that these two variables add explanatory power to the model. In addition, the R-squared value increases from 0.025 to 0.296, implying that perceived ease of use and perceived usefulness explain an additional 27.1% of the variance in Trust. Specifically, the regression coefficient of perceived ease of use is 0.219 and is significant (t = 2.772, p = 0.006 < 0.01), implying that perceived ease of use has a significant positive relationship with Trust. The regression coefficient of perceived usefulness is 0.287 and is significant (t = 4.184, p = 0.000 < 0.01), implying that perceived usefulness has a significant positive relationship with Trust.

For model 3, adding perceived humanity, perceived humanity squared, perceived social interactivity, and perceived social presence to model 2 produces a significant change in the F-value (p < 0.05), implying that these variables add explanatory power to the model. In addition, the R-squared value increases from 0.296 to 0.525, implying that perceived humanity, perceived humanity squared, perceived social interactivity, and perceived social presence explain an additional 22.9% of the variance in Trust. Specifically, the regression coefficient of perceived humanity is 1.208 and is significant (t = 8.100, p = 0.000 < 0.01), implying that perceived humanity has a significant positive influence on Trust, while the coefficient of perceived humanity squared is −0.212 and is significant (t = −8.178, p = 0.000 < 0.01), implying a significant negative influence; together, these coefficients indicate an inverted U-shaped relationship between perceived humanity and Trust. The regression coefficient of perceived social interactivity is 0.176 and is significant (t = 2.919, p = 0.004 < 0.01), implying that perceived social interactivity has a significant positive influence on Trust. The regression coefficient of perceived social presence is 0.206 and is significant (t = 4.174, p = 0.000 < 0.01), implying that perceived social presence has a significant positive influence on Trust.
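To make the inverted-U interpretation concrete: with a positive linear coefficient and a negative quadratic coefficient, the fitted curve peaks at −b1/(2·b2). Using the coefficients reported above, and given that perceived humanity is measured on a 5-point scale, a quick check shows the turning point falls inside the observed range, which is what an inverted U requires.

```python
# Coefficients of model 3 reported above (Table 6).
b1 = 1.208    # perceived humanity (linear term)
b2 = -0.212   # perceived humanity squared

# Vertex of the fitted parabola: the humanity level at which trust peaks.
turning_point = -b1 / (2 * b2)
print(f"Trust peaks at perceived humanity = {turning_point:.2f}")  # about 2.85

# A turning point of roughly 2.85 on a 1-5 Likert scale lies inside the
# observed range, consistent with an inverted U rather than a monotone effect.
```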

As shown in Table 7, two models are involved in this hierarchical regression analysis. The independent variables in model 1 are the control variables (Gender, Age, Education, and Marital Status), and model 2 adds Trust to model 1.

Table 7. Multilevel regression test results (dependent variable: AVA).

The dependent variable in this analysis is AVA. Model 1 examines the effect of the control variables; an F-test reveals that the model is not significant (F = 0.432, p > 0.05), indicating that the four control variables of Gender, Age, Education, and Marital Status have no significant effect on AVA. The results of model 2 show that the change in the F-value is significant (p < 0.05) after Trust is added to model 1, meaning that Trust adds explanatory power to the model. In addition, the R-squared value increases from 0.007 to 0.224, implying that Trust explains an additional 21.7% of the variance in AVA. Specifically, the regression coefficient of Trust is 0.483 and is significant (t = 8.074, p = 0.000 < 0.01), implying that Trust has a significant positive relationship with AVA.

Mediation Effect Test

In this paper, the bias-corrected nonparametric percentile bootstrap method was applied to test the mediating effect (as shown in Table 8), and the confidence level was set at 95%.

Table 8. Summary of mediation effect test results.

In the path from functionality to acceptance, the 95% bootstrap confidence interval for the indirect effect was (0.012, 0.109), excluding zero, and the confidence interval for the direct effect was (0.398, 0.652), also excluding zero, indicating that trust partially mediates this relationship. In the path from social emotion to acceptance, the 95% bootstrap confidence interval for the indirect effect was (0.013, 0.129), excluding zero, while the confidence interval for the direct effect was (−0.096, 0.130), including zero, indicating that trust fully mediates the relationship between social emotion and acceptance. Hypotheses 4 and 5 are thus confirmed.
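A minimal sketch of the bootstrap mediation test is shown below. It uses a simple percentile interval rather than the bias-corrected variant applied in the paper, and it assumes hypothetical composite columns (functionality, trust, acceptance) in the construct-level file from the earlier sketches: the indirect effect a·b is re-estimated on resampled data, and mediation is supported when the resulting 95% interval excludes zero.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")          # hypothetical construct-level data
rng = np.random.default_rng(42)

def indirect_effect(data):
    """a*b indirect effect of functionality on acceptance through trust."""
    a = smf.ols("trust ~ functionality", data=data).fit().params["functionality"]
    b = smf.ols("acceptance ~ trust + functionality",
                data=data).fit().params["trust"]
    return a * b

# Resample respondents with replacement and re-estimate the indirect effect.
n = len(df)
boot = np.array([indirect_effect(df.iloc[rng.integers(0, n, size=n)])
                 for _ in range(5000)])

low, high = np.percentile(boot, [2.5, 97.5])
print(f"Indirect effect = {indirect_effect(df):.3f}, "
      f"95% bootstrap CI = ({low:.3f}, {high:.3f})")
# Mediation is supported when this interval excludes zero; full mediation
# additionally requires a non-significant direct effect of functionality.
```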

The mediation test results are shown in Figure 2.

Figure 2. Mediation test results.

Discussion

This paper develops an AI virtual assistant acceptance model based on the technology acceptance model and the service robot acceptance model. Overall, the model extends the potential paths to AVA and improves the study of the acceptance transformation mechanism from the trust perspective. The model shows high predictive power for AVA and explains both the differences in user acceptance and the reasons those differences arise. The results of the study can be summarized in the following three aspects.

First, this paper verifies the positive effect of trust on the acceptance of AI virtual assistants. The services of AI virtual assistants are executed based on AI algorithms, but users do not fully trust the information or services provided by AI virtual assistants due to the inherent black-box problem of AI technology. Existing studies suggest that trust is an essential driver of technology acceptance and can positively influence users' acceptance of new technologies (Kaplan and Haenlein, 2019; van Pinxteren et al., 2019; Asatiani et al., 2020). This paper confirms through empirical research that trust is significantly and positively correlated with AI virtual assistant acceptance, and that its ability to reduce users' negative emotions toward AI virtual assistants plays a key role in improving AI virtual assistant acceptance.

Second, this study explores the relationships between trust and the functionality and social-emotion dimensions. Most existing studies focus on the relationship between functionality and acceptance (King and He, 2006), but research on its relationship with trust is lacking. Based on the technology acceptance model, this paper examines the effect of functionality on trust along two dimensions, perceived usefulness and perceived ease of use, and confirms that both are significantly and positively related to trust. This shows that an efficient service experience helps users develop trust: effective service from an AI virtual assistant encourages users to increase their interactions with it and thus develop trust based on familiarity with its functions.

In addition, the degree of user trust in AI virtual assistants depends on their ability to satisfy users' social-emotional and relational needs. Drawing on the service robot acceptance model, this paper divides social emotion into three dimensions: perceived humanity, perceived social interactivity, and perceived social presence. Currently, there are two different views on perceived humanity. Social reaction theory and social presence theory hold that higher anthropomorphism leads to a positive customer response, which means that perceived humanity should foster user trust (Qiu and Benbasat, 2009; van Doorn et al., 2017). Conversely, some scholars have argued that the positive effects of highly anthropomorphic AI virtual assistants are not borne out in many scenarios and can even increase users' negative emotions (Tinwell et al., 2011; van Doorn et al., 2017). Consequently, this paper examines the relationship between perceived humanity and trust through empirical tests and finds a non-linear, inverted U-shaped relationship, meaning that only a moderate level of perceived humanity promotes user trust. In addition, the maturation of AI technology gives AI virtual assistants certain human-like attributes, such as voice, real-time interaction, verbal communication skills, and social etiquette, that enable human users to perceive their social presence. Scholars believe that these attributes can generate positive emotions and establish favorable social relationships, which enhance the interaction between users and AI virtual assistants and help increase their level of trust (Fernandes and Oliveira, 2021). This study further confirms that perceived social interactivity and perceived social presence are positively related to trust, which is consistent with existing findings.

Finally, this paper explores the mediating role of trust between functionality and social emotion, on the one hand, and acceptance, on the other. Trust helps AI virtual assistants build a favorable image and suppresses users' perceptions of various risks, which in turn positively motivates users' acceptance behavior. According to Schmitt (1999), customers' purchasing behavior is the result of a combination of rational and emotional factors. Drawing on customer delivered value theory, AI virtual assistants should make every effort to provide customers with quality services, achieve customer satisfaction, and help customers form a willingness to use them. AI virtual assistants that provide favorable services give users a comfortable experience, which generates a brand effect, increases user loyalty and trust, and forms a positive cycle. This paper confirms the mediating role of trust between functionality and acceptance, with the service experience serving as the bridge through which users and AI virtual assistants build a well-trusted relationship. The social-emotional capabilities of AI virtual assistants allow users to feel a sense of connection with social systems, which leads to satisfaction at the psychological level. Trust is the prerequisite for user identification and the key factor determining whether users are willing to interact deeply with an information source (Wirtz et al., 2018). Based on the above, this paper confirms the mediating role of trust between social emotion and acceptance, which is important for enhancing the contextualized services of AI virtual assistants from an emotional perspective.

In summary, this study establishes a new acceptance model for AI virtual assistants, verifying the inverted U-shaped effect of perceived humanity on trust and the mediating role of trust in the acceptance transformation mechanism. It fills a gap in existing technology acceptance models at the trust level and expands the transformation paths to AVA. To a certain extent, it extends the boundaries and application space of existing theories and helps to address the user acceptance problem from the trust perspective.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics Statement

Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.

Author Contributions

SZ and ZM: writing. BC and XY: providing revision advice. XZ: data processing. All authors contributed to the article and approved the submitted version.

Funding

This work was supported by the Social Science Foundation of Heilongjiang Province of China (18TQC239), Guangxi Science and Technology Plan Project (Guangxi Science and Technology Base and Talent Special Project: AD20159069).

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Asatiani, A., Malo, P., Nagbøl, P. R., Penttinen, E., Rinta-Kahila, T., and Salovaara, A. (2020). Challenges of explaining the behavior of black-box AI systems. MIS Q. Executive 19, 259–278. doi: 10.17705/2msqe.00037

Chattaraman, V., Kwon, W., Gilbert, J., and Ross, K. (2019). Should AI-Based, conversational digital assistants employ social-or task-oriented interaction style? a task-competency and reciprocity perspective for older adults. Comput. Hum. Behav. 90, 315–330. doi: 10.1016/j.chb.2018.08.048

Epley, N., Akalis, S., Waytz, A., and Cacioppo, J. (2008). Creating social connection through inferential reproduction. Psychol. Sci. 19, 114–120. doi: 10.1111/j.1467-9280.2008.02056.x

Fernandes, T., and Oliveira, E. (2021). Understanding consumers' acceptance of automated technologies in service encounters: drivers of digital voice assistants adoption. J. Bus. Res. 122, 180–191. doi: 10.1016/j.jbusres.2020.08.058

Glikson, E., and Woolley, A. W. (2020). Human trust in artificial intelligence: review of empirical research. Acad. Manag. Ann. 14, 627–660. doi: 10.5465/annals.2018.0057

Gummerus, J., Lipkin, M., Dube, A., and Heinonen, K. (2019). Technology in use-characterizing customer self-service devices. J. Serv. Mark. 33, 44–56. doi: 10.1108/JSM-10-2018-0292

Gursoy, D., Chi, O. H., Lu, L., and Nunkoo, R. (2019). Consumers acceptance of artificially intelligent (AI) device use in service delivery. Int. J. Inf. Manage 49, 157–169. doi: 10.1016/j.ijinfomgt.2019.03.008

Hassanein, K., and Head, M. (2007). Manipulating perceived social presence through the web interface and its impact on attitude towards online shopping. Int. J. Hum. Comput. Stud. 65, 689–708. doi: 10.1016/j.ijhcs.2006.11.018

Holzwarth, M., Janiszewski, C., and Neumann, M. M. (2006). The influence of avatars on online consumer shopping behavior. J. Mark. 70, 19–36. doi: 10.1509/jmkg.70.4.019

Jan, A. U., and Contreras, V. (2011). Technology acceptance model for the use of information technology in universities. Comput. Hum. Behav. 27, 845–851. doi: 10.1016/j.chb.2010.11.009

Jiang, C., Rashid, R. M., and Wang, J. (2019). Investigating the role of social presence dimensions and information support on consumers' trust and shopping intentions. J. Retail. Consum. Serv. 51, 263–270. doi: 10.1016/j.jretconser.2019.06.007

Kaplan, A., and Haenlein, M. (2019). Siri, siri in my hand, who is the fairest in the land? on the interpretations, illustrations and implications of artificial intelligence. Bus. Horiz. 62, 15–25. doi: 10.1016/j.bushor.2018.08.004

Kim, H. Y., and McGill, A. L. (2018). Minions for the rich? financial status changes how consumers see products with anthropomorphic features. J. Consum. Res. 45, 429–450. doi: 10.1093/jcr/ucy006

King, W. R., and He, J. (2006). A meta-analysis of the technology acceptance model. Inform. Manag. 43, 740–755. doi: 10.1016/j.im.2006.05.003

Kulviwat, S., Bruner, G. C. II, Kumar, A., Nasco, S. A., and Clark, T. (2007). Toward a unified theory of consumer acceptance technology. Psychol. Mark. 24, 1059–1084. doi: 10.1002/mar.20196

McLean, G., and Osei-Frimpong, K. (2019). Hey Alexa… examine the variables influencing the use of artificial intelligent in-home voice assistants. Comput. Hum. Behav. 99, 28–37. doi: 10.1016/j.chb.2019.05.009

Palmatier, R. W., Dant, R. P., Grewal, D., and Evans, K. R. (2006). Factors influencing the effectiveness of relationship marketing: a meta-analysis. J. Mark. 70, 136–153. doi: 10.1509/jmkg.70.4.136

Qiu, L., and Benbasat, I. (2009). Evaluating anthropomorphic product recommendation agents: a social relationship perspective to designing information systems. J. Manag. Inform. syst. 25, 145–182. doi: 10.2753/MIS0742-1222250405

Schmitt, B. (1999). Experiential marketing. J. Mark. Manag. 15, 53–67. doi: 10.1362/026725799784870496

Shin, D. (2020). How do users interact with algorithm recommender systems? the interaction of users, algorithms, and performance. Comput. Hum. Behav. 109:106344. doi: 10.1016/j.chb.2020.106344

Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. Int. J. Hum. Comput. Stud. 146:102551. doi: 10.1016/j.ijhcs.2020.102551

Song, J. H., and Zinkhan, G. M. (2008). Determinants of perceived web site interactivity. J. Mark. 72, 99–113. doi: 10.1509/jmkg.72.2.99

Sparks, B. A., and Browning, V. (2011). The impact of online reviews on hotel booking intentions and perceptions of trust. Tour. Manag. 32, 1310–1323. doi: 10.1016/j.tourman.2010.12.011

Sundar, S. S. (2020). Rise of machine agency: a framework for studying the psychology of human–AI interaction (HAII). J. Comput. Mediat. Commun. 25, 74–88. doi: 10.1093/jcmc/zmz026

Tinwell, A., Grimshaw, M., Nabi, D., and Williams, A. (2011). Facial expression of emotion and perception of the Uncanny Valley in virtual characters. Comput. Hum. Behav. 27, 741–749. doi: 10.1016/j.chb.2010.10.018

van Doorn, J., Mende, M., Noble, S., Hulland, J., Ostrom, A., Grewal, D., et al. (2017). Domo Arigato Mr. Roboto: emergence of automated social presence in organizational frontlines and customers' service experiences. J. Serv. Res. 20, 43–58. doi: 10.1177/1094670516679272

van Pinxteren, M., Wetzels, R., Rüger, J., and Wetzels, M. (2019). Trust in humanoid robots: implications for services marketing. J. Serv. Mark. 33, 507–518. doi: 10.1108/JSM-01-2018-0045

Venkatesh, V., and Bala, H. (2008). Technology acceptance model 3 and a research agenda on interventions. Decis. Sci. 39, 273–315. doi: 10.1111/j.1540-5915.2008.00192.x

Venkatesh, V., and Davis, F. D. (2000). A theoretical extension of the technology acceptance model: four longitudinal field studies. Manage. Sci. 46, 186–204. doi: 10.1287/mnsc.46.2.186.11926

Wang, Y. D., and Emurian, H. H. (2005). An overview of online trust: concepts, elements, and implications. Comput. Hum. Behav. 21, 105–125. doi: 10.1016/j.chb.2003.11.008

Wirtz, J., Patterson, P. G., Kunz, W. H., Gruber, T., Lu, V. N., Paluch, S., et al. (2018). Brave new world: service robots in the frontline. J. Serv. Manag. 29, 907–931. doi: 10.1108/JOSM-04-2018-0119

Keywords: AI virtual assistant, motivation, social emotion, trust, acceptance, mediating effects, inverted U relationship

Citation: Zhang S, Meng Z, Chen B, Yang X and Zhao X (2021) Motivation, Social Emotion, and the Acceptance of Artificial Intelligence Virtual Assistants—Trust-Based Mediating Effects. Front. Psychol. 12:728495. doi: 10.3389/fpsyg.2021.728495

Received: 21 June 2021; Accepted: 20 July 2021;
Published: 13 August 2021.

Edited by:

Yizhang Jiang, Jiangnan University, China

Reviewed by:

Fei Hou, Beijing Normal University, Zhuhai, China
Jing Xue, Wuxi People's Hospital Affiliated to Nanjing Medical University, China

Copyright © 2021 Zhang, Meng, Chen, Yang and Zhao. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Xiu Yang, 2017110016@gxufe.edu.cn
