
ORIGINAL RESEARCH article

Front. Psychol., 18 November 2025

Sec. Educational Psychology

Volume 16 - 2025 | https://doi.org/10.3389/fpsyg.2025.1636480

My digital mentor: a mixed-methods study of user-GAI interactions

  • 1School of Management, Xiamen University, Xiamen, China
  • 2School of Management, Xiamen University Tan Kah Kee College, Zhangzhou, China

Introduction: Generative Artificial Intelligence (GAI) has emerged as a powerful tool in online learning, offering dynamic, high-quality, and user-friendly content. While previous studies have primarily focused on GAI’s short-term impacts, such as users’ acceptance and initial adoption, a notable gap exists in understanding long-term usage (i.e., infusion use) and its underlying psychological mechanisms.

Method and results: This study employs a two-stage mixed-methods approach to investigate users’ infusion use of GAI in online learning scenarios. In the first stage, semi-structured interviews (N = 26) were conducted to develop a systematic framework of influencing factors: intelligence, explainability, response time, integrability, accuracy, source credibility, personalization, and emotional support. The second stage empirically validated the research framework using survey data from 327 participants. We find that the eight factors influence users’ infusion use through two key psychological mediators: perceived value and satisfaction. We also applied fuzzy-set qualitative comparative analysis (fsQCA) to identify configurations of these factors. These configurations demonstrate that no single factor alone is sufficient; rather, it is the combination of multiple factors that fosters users’ infusion use.

Discussion: Our findings extend the technology adoption literature to online learning contexts and provide practical implications for designing effective user-GAI interactions.

1 Introduction

Generative Artificial Intelligence (GAI) such as ChatGPT simulates human cognitive processes through deep learning and extensive training datasets to generate novel content (Yuan et al., 2024). In online learning, GAI serves diverse roles, such as virtual teacher, teaching assistant, and automatic grader (Liang et al., 2023; Peng and Wan, 2024). It supports real-time Q&A, tracks learning progress, and facilitates speaking practice through interactive dialogue (Du and Lv, 2024; Shao et al., 2025). One example is Quizlet Q-Chat, which adapts to different students’ learning habits and helps them master key concepts through customized Q&A sessions.1

Unlike traditional offline and online classes, GAI overcomes temporal and spatial constraints, enabling learners to access knowledge anytime and anywhere. This 24/7 learning support benefits non-student groups, such as working professionals, by providing flexible access to knowledge and enabling them to acquire cutting-edge industry skills rapidly. GAI engages users through communication that closely mimics human-to-human interaction, deeply understanding users’ needs and preferences and providing them with a customized learning approach. The technology’s adaptability facilitates seamless integration into daily routines and encourages comprehensive utilization of its functional capabilities. However, existing literature lacks an exploration of the key factors driving users’ infusion use in online learning contexts.

Infusion use refers to users’ profound integration of information technology/systems (IT/S) into their daily learning processes to maximize technological potential (Chen et al., 2021; Hassandoust and Techatassanasoontorn, 2022). It represents the ultimate state of technology adoption, i.e., when the technology is fully embedded in the users’ daily lives (Jones et al., 2002). Existing research has extensively investigated users’ long-term usage behaviors, such as continuous usage and deep use. Continuous usage primarily concerns IT adoption and long-term usage decisions (Bhattacherjee, 2001), yet it does not fully capture the nature of post-adoption behavior (Hu et al., 2024). Deep use, on the other hand, examines the extent to which users leverage IT to achieve personal goals (Ogbanufe and Gerhart, 2020). However, these post-adoption behaviors represent efforts toward achieving the ultimate state of infusion use, which reflects the optimal alignment among users, IT, and tasks (Hu et al., 2024).

In this study, infusion use refers to users’ active, repetitive, and long-term in-depth use of GAI. This study focuses on infusion use for three reasons. First, unlike traditional AI, infusion use requires users to adopt an open attitude toward GAI, deeply understand its functions, and actively explore its potential, imposing higher demands on users (Hu et al., 2024). Therefore, it may be challenging for users to realize the full potential of GAI. Second, existing studies focus on short-term behaviors such as users’ acceptance (Wong et al., 2023; Li Y. et al., 2024), adoption (Chang and Park, 2024; Pathak and Bansal, 2024), and intention to use (Kim et al., 2021; Camilleri, 2024) AI technologies, while neglecting users’ long-term behaviors (i.e., infusion use). In contrast to short-term use, infusion use emphasizes continuity and regularity, focusing on users’ ability to use GAI to support their learning, which is essential for creating superior business value (Chen et al., 2020; Chen et al., 2021). As a result, infusion use is considered a more promising pattern for technology adoption (Hu et al., 2024). Third, most studies have employed theoretical frameworks such as the technology acceptance model (TAM) (Zou et al., 2025; Kim et al., 2025), the unified theory of acceptance and use of technology (UTAUT) (Wang, 2025; Xu et al., 2025), the theory of planned behavior (TPB) (Al-Emran et al., 2024), and task-technology fit (TTF) (Du and Lv, 2024) to investigate users’ usage behavior of GAI. However, few scholars have systematically explored GAI’s long-term use (i.e., infusion use) from a comprehensive perspective. Because GAI is an emerging technology, the key factors influencing users’ infusion use have not been sufficiently explored. Therefore, it is essential to analyze the key factors and frameworks that drive the widespread infusion use of GAI.

Based on the above analysis, this study develops the following research questions:

(1) Which factors influence users’ infusion use of GAI?

(2) What are the influencing mechanisms of these factors on infusion use?

(3) What are the configurational effects of these factors on infusion use?

To answer these questions, a two-stage methodology was employed. In Stage 1, the qualitative study collected data from 26 users through semi-structured interviews to develop a comprehensive theoretical framework for understanding the factors influencing users’ infusion use of GAI in online learning contexts. In Stage 2, we conducted a quantitative study that empirically validated the research framework using survey data from 327 participants. Finally, fuzzy-set qualitative comparative analysis (fsQCA) was applied to the full sample to validate the configurational effects.

This study makes unique contributions to the literature. First, we expand the theoretical understanding of GAI in online learning scenarios by developing and empirically validating a research framework that systematically explains how GAI’s characteristics influence infusion use. Second, we verify the critical mediating roles of perceived value and satisfaction in the behavioral formation process, thereby gaining insights into users’ psychological mechanisms. Finally, our findings offer actionable guidance for GAI educators, technology developers, and policymakers to enhance technology integration and maximize educational impact.

2 Theoretical background

2.1 SOR model

The Stimulus-Organism-Response (SOR) model was proposed by Mehrabian (1974) as a theoretical framework for exploring the relationship between external stimuli and organismic responses. This model posits that behavior is not a simple stimulus–response paradigm; rather, it arises through the organism’s cognitive processing, which elicits a specific response. Specifically, stimuli (S) are defined as the various external factors in the environment that influence the internal mental state or cognitive processes of the organism (O), ultimately leading to a specific behavioral response (R). These stimuli activate various internal processes, including cognition, affect, and evaluation, ultimately determining the organism’s behavioral response (Xie et al., 2023). The concept of the organism’s perception establishes a link between stimulus and response, thereby explaining the process by which an organism is stimulated and responds.

While established models like TAM effectively explain technology acceptance through perceptual factors like usefulness and ease of use (Al-Adwan et al., 2023; Zou et al., 2025), the SOR model offers a more comprehensive perspective. It captures not only external technological characteristics but also the crucial mediating role of users’ internal states—particularly their cognitive processing (Fu et al., 2025). This model provides a more nuanced understanding of how external environmental stimuli interact with individual cognition and evaluation to shape behavioral responses (Chen, 2023; Liu Y. F. et al., 2023). Consequently, the SOR model is increasingly applied within online learning, including e-learning platforms (Fu et al., 2025), AI teaching assistants (Peng and Wan, 2024), and mobile-assisted language learning (Lee and Xiong, 2023).

This range of applications demonstrates the SOR model’s validity for GAI online learning contexts. It not only helps explain how GAI features (stimuli) interact with users’ internal psychological processes (organism) to shape behavior (response), but also provides actionable recommendations for optimizing the GAI online learning experience and enhancing users’ infusion use.

2.2 The mixed-methods approach design

Our study employed a mixed-methods approach to explore the infusion use of GAI in online learning scenarios, following established methodological guidelines (Venkatesh et al., 2016; Creswell and Creswell, 2018). The steps for an exploratory sequential design are as follows: First, the qualitative stage involved 26 semi-structured interviews to explore factors influencing users’ infusion use of GAI. We proposed our research model and hypotheses based on the results of stage 1. Then, stage 2 tested the hypotheses through an online survey. This study is conducted in two stages, as illustrated in Figure 1.


Figure 1. Mixed-method design.

The mixed-methods approach is particularly appropriate for our study. First, this design offers advantages over single-method approaches by simultaneously addressing both confirmatory and explanatory research questions (Creswell and Creswell, 2018), aligning with the dual nature of our investigation. Second, the application of GAI in the online education context is relatively novel, making it difficult for existing theories to provide a thorough description and explanation of the issues (Venkatesh et al., 2016).

3 Stage 1: the qualitative study

3.1 Data collection

This study employed semi-structured interviews conducted face-to-face and via online video conference. The research group first screened for participants with prior experience using GAI who regularly use GAI as a main learning tool and could express opinions based on their experiences. Participants were recruited using purposive sampling to ensure their suitability for the study topic. All participants were asked to confirm their willingness to participate in the semi-structured interviews. Among them, 13 were female (50%). The average age of participants was 29.77 years (SD = 8.75). Most participants were young and middle-aged individuals under 35, aligning with QuestMobile’s finding that AI users are predominantly concentrated in this age group.2

Before the interviews, the researcher established several guidelines with participants, including encouraging open sharing, preventing interruptions, and ensuring the anonymity of all information, to foster an open and secure communication environment. During the interviews, the researcher initially collected basic demographic information and explained GAI. Participants were then asked to describe specific examples of using GAI for learning. The interviews focused on the role of GAI in the participants’ learning, their most memorable experiences, challenges encountered in using GAI, and the subsequent impacts. The detailed interview protocol is shown in Appendix A.1. Each interview lasted 20–30 min, with all participants agreeing to audio recording. After the interviews, the audio recordings were transcribed verbatim into textual material for subsequent qualitative analysis. Data were collected between December 2024 and January 2025. Interviews ceased upon reaching data saturation, defined as the point when no significant new information emerged from the data (Shao et al., 2024). Demographic details of the participants are presented in Table 1.


Table 1. Participants’ basic information.

3.2 Data analysis

First, the semi-structured interview transcripts were pre-processed to remove content unrelated to GAI in online learning scenarios. Second, semantically ambiguous or irrelevant content was eliminated. Next, responses reflecting interviewees’ misinterpretation of the questions were excluded. Finally, the transcripts were coded and labeled line-by-line according to the logical sequence of the interview content. According to Liu Y. L. et al. (2023), we adopted thematic analysis combining “top-down” framework analysis (Ritchie and Spencer, 2002) and “bottom-up” grounded theory (Glaser and Strauss, 2017). We analyzed the interview data by coding, theming, decontextualizing, and recontextualizing.

This study utilized NVivo 11.0 software to conduct a thorough analysis of the interview data, examining the content word-by-word, sentence-by-sentence, and paragraph-by-paragraph. The researchers were instructed to use original phrases from the interview transcripts for labeling during the coding process. Two graduate students subsequently organized the data based on the initial nodes. Throughout this process, the original information was continuously compared and revised, with all meaningful themes and concepts being precisely extracted.

To minimize researcher bias in the coding process, two graduate students independently coded the data through a back-to-back approach. Both coders were native Chinese speakers and familiar with GAI. Before starting the coding process, a centralized meeting was conducted to align the coders on the procedures and clarify relevant concepts and theories. After each coding round, the results from both coders were compared, the same initial concepts were merged, and the coding conflicts were discussed with experts. Finally, the concepts were summarized and reorganized. After repeated discussions, only concepts agreed upon by both coders were retained. The complete coding process is presented in Appendix A.2.

3.3 Findings from interviews

Through semi-structured interviews, we identified several factors influencing the infusion use of GAI, such as intelligence, explainability, response time, integrability, accuracy, source credibility, personalization, emotional support, perceived value, and satisfaction. Building upon these exploratory findings, we subsequently designed a quantitative study to empirically validate the hypothesized relationships between these variables. Analysis of interview text reveals that users primarily employ ChatGPT, Doubao, ERNIE Bot, Deepseek, Gemini, Kimi, and other GAI platforms, reflecting the diversity of GAI tools within online learning contexts.

4 Stage 2: the quantitative study

4.1 Development of hypotheses

Intelligence reflects GAI’s capability for environmental perception, adaptive learning, problem-solving, and goal attainment. This capacity for continuous evolution through feedback leads users to recognize it as genuine intelligence (Moussawi et al., 2023). As one of the most critical factors of AI technology (Bartneck et al., 2009), intelligence fundamentally relies on natural language processing technologies that enable AI to simulate human cognitive processes in language comprehension and production (McLean et al., 2021). The statement is echoed by the qualitative investigation. For example, interviewee (P6) mentioned: “Because I think ChatGPT records all my previous conversations, I think its answer will be more professional and more in line with my heart.” GAI exemplifies this intelligence through its extensive knowledge repository and professional response capabilities, delivering not merely accurate and compelling answers but also formulating solutions that satisfy users through concise and coherent language outputs (Priya and Sharma, 2023). During interactive processes, GAI exhibits a high level of attentiveness to users’ needs while employing diverse response strategies, which significantly enhances the user experience and fosters positive attitudes toward the technology.

Priya and Sharma (2023) pointed out that intelligence manifests in three critical dimensions of information generation: effectiveness, efficiency, and reliability. These dimensions not only constitute the fundamental drivers of GAI advancement but also serve as critical factors shaping users’ perceptions. Some studies have demonstrated a direct relationship between GAI’s intelligent performance and its functional capabilities (Maroufkhani et al., 2022; Priya and Sharma, 2023), a relationship that improves users’ perceived value and satisfaction (Song et al., 2022; Lin and Lee, 2024; Song et al., 2024).

In the context of online learning, GAI’s intelligence capacity enables accurate comprehension of users’ inquiries and provision of elaborated responses, thereby enhancing users’ learning experience. Therefore, we hypothesize that:

H1a: Intelligence positively influences users’ perceived value.

H1b: Intelligence positively influences users’ satisfaction.

Explainable AI (XAI) has been defined as systems designed to provide transparent decision processes and clear explanations, enabling users to understand system capabilities and limitations (Dwivedi et al., 2023). Research has demonstrated that explainability enhances users’ trust in and acceptance of recommendation algorithms while enabling effective knowledge transfer, thereby increasing the adoptability of AI-generated suggestions (Zhang and Curley, 2018). Fundamentally, explainability refers to an AI system’s capacity to articulate its decision logic in a user-comprehensible format. This capability primarily aims to eliminate the “black box” nature of the AI decision-making process, consequently strengthening users’ confidence in the system (Shin, 2021). Within human-GAI interaction contexts, explainability facilitates rational evaluation of algorithmic outputs by providing decision-making rationales. The statement is echoed by the qualitative investigation. For example, interviewee (P23) mentioned: “AI can give more detailed content. This thought process is more in line with my idea of recognizing it and learning from it.” This mechanism significantly shapes users’ attitudes toward GAI (Cheung and Ho, 2025). Furthermore, by building a connection between users’ perceptions and GAI’s operation, explainability not only deepens understanding of specific responses but also elevates overall interaction quality. As users progressively acquire more comprehensive explanations of the system through successive cycles of inquiry, their perceived value and satisfaction increase. Therefore, we hypothesize that:

H2a: Explainability positively influences users’ perceived value.

H2b: Explainability positively influences users’ satisfaction.

Response time, as a key indicator of AI’s service efficacy, reflects the timeliness of the system in processing users’ requests and providing feedback (DeLone and McLean, 2003; Liu Y. L. et al., 2023). AI, powered by sophisticated machine learning algorithms and natural language processing capabilities, can analyze vast datasets and generate precise responses within milliseconds. This superior responsiveness significantly improves the efficiency of human-computer interaction (Neiroukh et al., 2024). The statement is echoed by the qualitative investigation. For example, interviewee (P14) mentioned: “If you use MOOC or bilibili, or some other large and well-known platforms, one thing they have in common is that you need to watch videos, which may require some investment in your time cost. But instead, using the generative AI software, it can give you a result in a few seconds. I really like it!” Research has shown that prolonged response times not only reduce task efficiency but may also convey negative impressions about the system’s predictive capabilities, because users generally assume that most prediction tasks should be inherently simple for AI systems (Efendić et al., 2020). Liu Y. L. et al. (2023) also pointed out that response time is one of the determining factors affecting users’ perceived value and satisfaction with an AI service. GAI’s timely response improves interaction quality, which subsequently strengthens users’ trust in GAI (Pham et al., 2024). Especially in online learning scenarios, faster responses not only sustain users’ engagement and concentration during GAI interactions but also foster more positive attitudes toward the technology. Therefore, we hypothesize that:

H3a: Response time positively influences users’ perceived value.

H3b: Response time positively influences users’ satisfaction.

Integrability means that GAI can effectively combine information from different sources to respond to users’ problems (Chen et al., 2025). It depends on the task and contextual environment, and thus also reflects task-related properties (Nelson et al., 2005). Chen et al. (2025) argued that highly interdependent complex tasks depend more on an integrated system’s outputs than task-independent ones. GAI can be optimized not only on the basis of existing databases but also by incorporating historical user interaction data in the content generation process (Chen et al., 2025). Existing studies have shown that integrability optimizes knowledge acquisition pathways, thereby enhancing learning effectiveness and enabling learners to achieve superior outcomes (Korayim et al., 2025). Furthermore, integrability facilitates rapid adaptation to environmental changes and effective utilization of pivotal opportunities (Ding, 2021). In online learning contexts, GAI generates systematic and related knowledge according to users’ needs, and this powerful integration capability enables users to respond flexibly to problems, thus fostering a more favorable attitude toward the technology. Interviewee (P14) mentioned: “GAI will organize these words into more complete results and present to me, so it may be more convenient in the ability to integrate information. I think it is better than before we searched the web page.” Therefore, we hypothesize that:

H4a: Integrability positively influences users’ perceived value.

H4b: Integrability positively influences users’ satisfaction.

Accuracy means that the system provides up-to-date information that is relevant to users’ intended goals (Chung et al., 2020). It is one of the important foundations for users’ adoption of smart service products (Cheng and Jiang, 2022). This viewpoint is echoed by P9, who mentioned: “There are some official data or real-time information, and I do not 100% believe the answers it gives me.” During the interaction process, users often need to devote considerable cognitive effort to evaluating the precision and relevance of GAI-generated content (Chen et al., 2023c). When users confirm that AI recommendations sufficiently address their requirements, this validation triggers a “cognitive resonance” phenomenon, the perception that the system genuinely comprehends their underlying needs, thereby substantially increasing recommendation acceptance (Li et al., 2021). Accuracy not only makes users feel that their needs are fully valued but, more importantly, provides effective solutions (Yuan et al., 2022). This positive experience strengthens users’ recognition of AI technology capabilities (Gursoy et al., 2019) and plays an important role in users’ satisfaction (Walle et al., 2023). In the context of online learning, accuracy is equally important. GAI should ensure that its solutions and methods are correct and feasible to establish perceived value, fostering positive user attitudes (Chen et al., 2023b). Therefore, we hypothesize that:

H5a: Accuracy positively influences users’ perceived value.

H5b: Accuracy positively influences users’ satisfaction.

Source credibility is defined as individuals’ perception of a source as trustworthy and expert (Hovland and Weiss, 1951). Camilleri (2024) stated that users often rely on pre-existing perceptions about information sources rather than objectively assessing content quality. It is a prerequisite for users to assess information’s usefulness (Camilleri and Filieri, 2023). Compared with non-expert sources, information disseminated by experts is usually perceived as more reliable and credible (Ismagilova et al., 2020). Users are more inclined to accept advice and knowledge provided by professional and authoritative sources (Hovland and Weiss, 1951; Wang and Scheinbaum, 2018). In the context of online learning, users expect source reliability from GAI. This viewpoint is echoed by P21, who mentioned: “If the results of GAI are different from those I searched in Baidu, I may not be able to judge which information is true.” Only when they confirm that information originates from credible and authoritative sources do they feel confident acquiring knowledge through GAI interactions. This credibility reinforces users’ perception of information utility (Camilleri and Kozak, 2022) and contributes to positive attitudes toward GAI. Therefore, we hypothesize that:

H6a: Source credibility positively influences users’ perceived value.

H6b: Source credibility positively influences users’ satisfaction.

Personalization provides customized services to users based on their needs, preferences, and intent (Ameen et al., 2021). In AI-based services, it has been defined as the capability to provide specialized services based on a user’s personal information and contextual usage (Liu and Tao, 2022; Kim and Hur, 2024). Highly personalized AI not only formulates precise inquiries to identify individual needs but also simulates users’ decision-making processes (Kim and Hur, 2024), generating customized content that better meets users’ expectations. Pappas et al. (2017) demonstrated that users are more inclined toward highly relevant information. Aw et al. (2024) further noted that higher levels of personalization in mobile AR shopping apps create a stronger sense of realism and coherence between virtual and real dimensions, which can lead to a significant increase in users’ immersion. In online learning scenarios, when GAI provides personalized answers to users’ specific questions, it optimizes satisfaction (Li and Zhang, 2023) and effectively enhances users’ perceived value. Such personalized learning support better matches educational content with the learner’s knowledge level and cognitive style, thus creating a more efficient and enjoyable learning experience (Baillifard et al., 2025). Interviewee (P6) mentioned: “Because many of my questions will be very professional, I would prefer to have such a personalized and professional AI.” Therefore, we hypothesize that:

H7a: Personalization positively influences users’ perceived value.

H7b: Personalization positively influences users’ satisfaction.

Emotional support, as a crucial dimension of social support, fulfills individuals’ psychological needs by conveying empathy, emotional validation, and encouragement (Meng and Dai, 2021). Existing research suggests that human-provided emotional support effectively enhances individuals’ sense of role meaning, thereby increasing well-being (Pai, 2023), significantly improving service evaluations (Menon and Dubé, 2007), and effectively alleviating psychological stress (Meng and Dai, 2021). GAI can simulate human emotional communication and is gradually taking on the role of an emotional support provider (Gelbrich et al., 2021). This viewpoint is echoed by interviewee P16, who mentioned: “I think GAI is completely different from human’s feedback. Maybe humans will tell you that the employment environment is not very good for finding a job, and the situation is not very optimistic now. But GAI is relatively neutral. It may just give me some vitality.” Lee et al. (2022) pointed out that AI chatbots equipped with emotionally intelligent dialogue systems can provide emotional understanding and encouragement to users. This interaction facilitates deeper communication and builds emotional connections. In our interviews, participants also reported that when they experienced academic stress and confided in GAI, they received both emotional consolation and personalized academic guidance. In online learning scenarios, GAI’s emotional support creates a friend-like interactive experience, and this humanized interaction significantly enhances users’ perceived value and satisfaction. Therefore, we hypothesize that:

H8a: Emotional support positively influences users’ perceived value.

H8b: Emotional support positively influences users’ satisfaction.

Perceived value represents users’ overall evaluation of a product or service’s usefulness (Chen et al., 2023b). Within human-GAI interaction contexts, this construct captures users’ assessment of GAI’s functionality, such as its 24/7 availability, problem-solving efficiency, and content generation (Carvalho and Ivanov, 2024; Xu et al., 2024). Lee et al. (2007) stated that perceived value is more significant than service quality in increasing satisfaction. Perceived value differs across scenarios, such as technology, service delivery, and tangible commitments (De Kervenoael et al., 2020). In AI service applications, robots can effectively balance temporal efficiency, economic considerations, and user experience, which directly influences users’ perceived value (De Kervenoael et al., 2020). Interviewee (P10) mentioned: “I may still need to spend time identifying what is right and wrong. But it will take less time than Baidu. This effectively offsets the time spent identifying the answer and is important for my experience of using it afterwards.” Infusion use is an active, repetitive, and long-term deep-use behavior of GAI (Hu et al., 2024), representing a more sustained engagement than continuous use. In new technology research, perceived value is a critical factor that influences users’ long-term use behavior (Lavado-Nalvaiz et al., 2022; Maroufkhani et al., 2022). Therefore, we hypothesize that:

H9: Perceived value positively influences users’ infusion use.

H11a: Perceived value mediates the relationship between intelligence and infusion use.

H11b: Perceived value mediates the relationship between explainability and infusion use.

H11c: Perceived value mediates the relationship between response time and infusion use.

H11d: Perceived value mediates the relationship between integrability and infusion use.

H11e: Perceived value mediates the relationship between accuracy and infusion use.

H11f: Perceived value mediates the relationship between source credibility and infusion use.

H11g: Perceived value mediates the relationship between personalization and infusion use.

H11h: Perceived value mediates the relationship between emotional support and infusion use.

Satisfaction is an indicator of service quality assessment, quantifying the gap between users’ actual service experience and their expectations (Xie et al., 2023). This concept encompasses not only immediate usage pleasure but also a comprehensive assessment process in which users compare their past experiences with their current expectations (Poushneh et al., 2024). When using AI services, satisfaction is one of the main factors that shape users’ subsequent behavior (Jiang et al., 2022; Chen et al., 2023b; Xie et al., 2024). This viewpoint is echoed by interviewee P25, who mentioned: “GAI, like Doubao, is helpful, because it will make me more and more fluent, and then the expression will become more and more natural, and the response will get better and better in all aspects. That’s why I’ve been sticking with it for oral speaking.” Specifically, when GAI provides information that aligns with user requirements, this positive experience fosters favorable beliefs, ultimately promoting sustained behavior (i.e., infusion use) (Ku and Chen, 2024). Therefore, we hypothesize that:

H10: Satisfaction positively influences users’ infusion use.

H12a: Satisfaction mediates the relationship between intelligence and infusion use.

H12b: Satisfaction mediates the relationship between explainability and infusion use.

H12c: Satisfaction mediates the relationship between response time and infusion use.

H12d: Satisfaction mediates the relationship between integrability and infusion use.

H12e: Satisfaction mediates the relationship between accuracy and infusion use.

H12f: Satisfaction mediates the relationship between source credibility and infusion use.

H12g: Satisfaction mediates the relationship between personalization and infusion use.

H12h: Satisfaction mediates the relationship between emotional support and infusion use.

Figure 2 describes the theoretical model.


Figure 2. The proposed research model.

4.2 Questionnaire design

This study collected data through a questionnaire comprising three main sections. The first section outlines the research purpose, defines GAI, and presents two examples of its application in online learning scenarios. The second section measures the eight antecedent factors, two mediators, and the outcome variable in the theoretical model. The third section captures participants’ demographic information, including gender, age, education level, profession, and frequency and years of GAI usage for learning.

To ensure the reliability and validity of the questionnaire, the measurement items were adapted from established scales in the literature, with appropriate modifications for the context of GAI in online learning scenarios. Intelligence, response time, and explainability used the scales developed by Mehmood et al. (2024), Darban (2024), and Liu Y. L. et al. (2023), respectively. The scales for integrability, accuracy, and source credibility mainly refer to Chen et al. (2025), Zhou and Wu (2024), Yuan et al. (2022), and Wilson and Baack (2023). Personalization and emotional support are mainly based on the scales developed by Chen et al. (2023a), Zhu et al. (2023), and Zhang et al. (2018). Perceived value is adapted from De Kervenoael et al. (2020) and Chen et al. (2023b). Satisfaction is based on Xu et al. (2023). Finally, infusion use is adapted from Hu et al. (2024). All items were measured on a seven-point Likert scale (1 = strongly disagree, 7 = strongly agree). Detailed measurement items are provided in Appendix B Table B1.

Before the formal survey, this study conducted a pilot test involving fifty participants and consulted experts to refine the wording and structure of the questionnaire items based on the participants’ feedback. The pilot test results indicated that the scale demonstrated strong reliability and validity.

4.3 Data collection

Questionnaire data for this study were collected via Sojump,3 a widely used online survey platform in China with 260 million registered users, similar to Amazon MTurk (Wu et al., 2024). This platform has also been widely used in previous related studies (Ding et al., 2023; Del Ponte et al., 2024; Javed et al., 2024). Before completing the questionnaire, participants were provided with a brief introduction to GAI and online learning, along with two screenshots demonstrating the use of GAI in online learning contexts. After completing the questionnaire, each participant received 3 RMB (about US$0.41) as a reward. The questionnaire was distributed and collected in February 2025, yielding 386 responses.

We obtained data through random sampling, but to ensure data quality, this study implemented three screening criteria. First, participants were required to have prior experience using GAI for learning purposes; the question “Have you ever used GAI to assist in learning?” was used to exclude participants without such experience (N = 17). Second, two attention checks were used to exclude respondents who failed to answer correctly (N = 23). Additionally, reverse-coded questions were included for the third item of explainability and the fourth item of accuracy to drop respondents with inconsistent responses (N = 19). In sum, 327 participants were included in the data analysis. To verify sample-size adequacy, we first followed Chin’s (1998) guideline for PLS-SEM, ensuring the sample size exceeded both (1) 10 times the number of items in the largest construct and (2) 10 times the number of independent variables. Second, we used G*Power 3.1.9 to calculate the required sample size: with a significance level α = 0.05, power (1 − β) = 0.95, and effect size = 0.15, the minimum sample size was 160. Our sample size satisfies these requirements. Among the participants, 160 were female (48.93%), with the majority aged 19–24 (41.59%). The data on years of usage indicated that users with 1–2 years of experience constitute the majority (51.07%). The demographics of the final sample are presented in Table 2.


Table 2. Sample demographics.
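To make the screening arithmetic and sample-size checks above concrete, the short Python sketch below reproduces them. The item count for the largest construct is a hypothetical placeholder, and the G*Power minimum of 160 is taken directly from the text rather than recomputed; this is an illustrative sketch, not the analysis script used in the study.

```python
# Sketch of the sample screening and sample-size adequacy checks.
# The item count for the largest construct is a hypothetical placeholder;
# the G*Power minimum (160) is taken as given from the text.

N_COLLECTED = 386
N_EXCLUDED = 17 + 23 + 19            # no GAI experience, failed attention checks, inconsistent reverse items
N_FINAL = N_COLLECTED - N_EXCLUDED   # 327

# Chin's (1998) "10-times" heuristics for PLS-SEM
items_in_largest_construct = 5       # hypothetical value
num_independent_variables = 8        # the eight antecedent factors
rule1_min = 10 * items_in_largest_construct
rule2_min = 10 * num_independent_variables

gpower_min = 160                     # reported G*Power result (alpha = 0.05, power = 0.95, effect size = 0.15)

required = max(rule1_min, rule2_min, gpower_min)
print(f"Final sample: {N_FINAL}, required minimum: {required}, adequate: {N_FINAL >= required}")
```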

4.4 Data analysis

We used partial least squares structural equation modeling (PLS-SEM) to test our theoretical model. PLS-SEM imposes no strict requirements on sample size and has strong predictive and explanatory ability (Hair et al., 2011), making it suitable for exploratory theory building. Because this study builds a new model of GAI’s infusion use in online learning scenarios, it is exploratory research and therefore well suited to the PLS-SEM method. The data were analyzed using SmartPLS 4.0. We followed the two-step approach of first examining the measurement model and then the structural model.

4.4.1 Measurement model

To ensure the reliability and validity of the questionnaire, we assessed its convergent and discriminant validity and reliability (MacKenzie et al., 2011), with the results detailed in Table 3. Specifically, reliability was assessed by Cronbach’s Alpha and Composite reliability (CR) for all variables. The results indicate that the Cronbach’s Alpha values of all variables range from 0.820 to 0.892, and the CR values range from 0.893 to 0.921, with all coefficients exceeding the threshold of 0.7, which indicates that the questionnaire has strong internal consistency. Additionally, the average variance extracted (AVE) values for each construct ranged between 0.675 and 0.752, all exceeding 0.5, providing evidence of convergent validity.


Table 3. Assessment of reliability and convergent validity.
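The reliability and convergent-validity indices reported above follow standard formulas. The sketch below is a minimal Python implementation of Cronbach’s Alpha, composite reliability (CR), and AVE; the loadings and simulated item scores are hypothetical illustrations, not the study’s estimates.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: (n_respondents, n_items) array of raw item responses."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]
    item_var_sum = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    loadings = np.asarray(loadings, dtype=float)
    error_var = 1.0 - loadings ** 2          # error variance of standardized indicators
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + error_var.sum())

def average_variance_extracted(loadings):
    """AVE = mean of squared standardized loadings."""
    loadings = np.asarray(loadings, dtype=float)
    return np.mean(loadings ** 2)

# Hypothetical standardized loadings for one construct (not the paper's estimates)
loadings = [0.84, 0.87, 0.81, 0.86]
print(round(composite_reliability(loadings), 3),       # should exceed 0.7
      round(average_variance_extracted(loadings), 3))  # should exceed 0.5

# Simulated item responses sharing one factor, just to exercise cronbach_alpha
rng = np.random.default_rng(2)
items = rng.normal(size=(327, 1)) + 0.6 * rng.normal(size=(327, 4))
print(round(cronbach_alpha(items), 3))                 # should exceed 0.7
```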

Discriminant validity was assessed using the Fornell-Larcker criterion and Heterotrait-monotrait ratio (HTMT). Table 4 shows that the square root of the AVE values of all constructs was higher than the inter-construct correlations (MacKenzie et al., 2011), demonstrating good discriminant validity. Furthermore, the HTMT values among the constructs in Table 5 are below the critical value of 0.85.


Table 4. Discriminant validity (Fornell-Larcker criterion).


Table 5. Discriminant validity (HTMT method).
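For readers who want to reproduce these discriminant-validity checks, the sketch below computes the Fornell-Larcker comparison and an HTMT ratio for one pair of constructs. The simulated item scores, AVE values, and correlation are assumptions for illustration only; the study’s results come from SmartPLS 4.0.

```python
import numpy as np

def htmt(items_a, items_b):
    """Heterotrait-monotrait ratio for two constructs.
    items_a, items_b: (n_respondents, n_items) arrays of item scores."""
    def mean_offdiag(corr):
        mask = ~np.eye(corr.shape[0], dtype=bool)
        return corr[mask].mean()
    n_a = items_a.shape[1]
    full = np.corrcoef(np.hstack([items_a, items_b]), rowvar=False)
    hetero = full[:n_a, n_a:].mean()            # between-construct item correlations
    mono_a = mean_offdiag(full[:n_a, :n_a])     # within-construct correlations, construct A
    mono_b = mean_offdiag(full[n_a:, n_a:])     # within-construct correlations, construct B
    return hetero / np.sqrt(mono_a * mono_b)

def fornell_larcker_ok(ave_a, ave_b, corr_ab):
    """sqrt(AVE) of each construct must exceed their inter-construct correlation."""
    return np.sqrt(ave_a) > abs(corr_ab) and np.sqrt(ave_b) > abs(corr_ab)

# Illustrative run on simulated items for two correlated constructs
rng = np.random.default_rng(0)
f1 = rng.normal(size=(327, 1))
f2 = 0.5 * f1 + rng.normal(size=(327, 1))
a = f1 + 0.6 * rng.normal(size=(327, 4))
b = f2 + 0.6 * rng.normal(size=(327, 4))
print(round(htmt(a, b), 3),                     # should stay below 0.85
      fornell_larcker_ok(0.72, 0.70, 0.45))     # hypothetical AVEs and correlation
```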

4.4.2 Common method bias

Since the sample data were collected from a single source, with participants answering all questions at one time, common method bias (CMB) is a potential concern. To mitigate its impact, this study adopted several control measures based on established studies: (1) informing participants that the survey was anonymous and that there were no right or wrong answers; (2) incorporating reverse-coded items and attention-check questions; and (3) balancing the order of questionnaire items. Additionally, we used Harman’s single-factor test for CMB. A principal component analysis indicated that a single factor explains 38.50% of the variance in the data, which is below the recommended threshold of 40% (Podsakoff et al., 2003). Furthermore, this study employed the method proposed by Liang et al. (2007), and the results are presented in Appendix B Table B2. The average substantive factor loading (0.746) was larger than the average method factor loading (0.002), yielding a ratio of 493:1. Both tests imply that CMB is not a serious concern.
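Harman’s single-factor test amounts to checking the variance share of the first unrotated principal component across all items. A minimal sketch, assuming simulated item data and scikit-learn’s PCA (the study reports 38.50% on the actual items):

```python
import numpy as np
from sklearn.decomposition import PCA

def harman_single_factor(item_matrix):
    """Variance share captured by the first unrotated principal component.
    item_matrix: (n_respondents, n_items) array of all measurement items."""
    X = np.asarray(item_matrix, dtype=float)
    X = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # standardize items first
    return PCA(n_components=1).fit(X).explained_variance_ratio_[0]

# Illustrative run on simulated data with 40 hypothetical items
rng = np.random.default_rng(1)
common = rng.normal(size=(327, 1))
items = 0.6 * common + rng.normal(size=(327, 40))
print(f"First factor explains {harman_single_factor(items):.1%} of the variance")
```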

4.4.3 Structure model

To assess the structural model, this study evaluated the variance explained (R2), effect size (f2), and Stone-Geisser’s Q2 for the endogenous variables. The coefficient of determination (R2) was 0.648 for perceived value, 0.661 for satisfaction, and 0.576 for infusion use, indicating a good level of predictive power (Hair et al., 2019). Effect size analysis (f2) showed values ranging from 0.016 to 0.218, indicating small to medium effects across constructs (Chin, 1998). Additionally, all Q2 values exceeded zero, confirming the model’s predictive relevance for all endogenous variables (Hair et al., 2019).
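The effect sizes reported above follow Cohen’s f2 formula, which compares the model’s R2 with and without a given predictor. A minimal sketch with hypothetical values (only the 0.648 figure comes from the text):

```python
def f_squared(r2_included, r2_excluded):
    """Cohen's f2 for one predictor: the drop in R2 when the predictor is omitted,
    scaled by the unexplained variance of the full model."""
    return (r2_included - r2_excluded) / (1 - r2_included)

# 0.648 is the reported R2 for perceived value; the reduced-model R2 is hypothetical
print(round(f_squared(r2_included=0.648, r2_excluded=0.620), 3))  # ~0.08, a small effect
```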

A bootstrapping procedure was chosen to measure the significance of the path coefficient, standard error, and t-statistics (Table 6). (1) Perceived value. The results of the path analysis suggested that perceived value was positively influenced by intelligence (β = 0.084, p < 0.05), explainability (β = 0.162, p < 0.001), response time (β = 0.101, p < 0.05), integrability (β = 0.111, p < 0.05), accuracy (β = 0.106, p < 0.01), source credibility (β = 0.172, p < 0.01), personalization (β = 0.129, p < 0.05) and emotional support (β = 0.285, p < 0.001). These results supported H1a, H2a, H3a, H4a, H5a, H6a, H7a and H8a. (2) Satisfaction. The results of the path analysis suggested that satisfaction was positively associated with intelligence (β = 0.111, p < 0.01), explainability (β = 0.167, p < 0.001), response time (β = 0.161, p < 0.001), integrability (β = 0.134, p < 0.01), accuracy (β = 0.165, p < 0.01), source credibility (β = 0.162, p < 0.01), personalization (β = 0.111, p < 0.05) and emotional support (β = 0.188, p < 0.001). These results supported H1b, H2b, H3b, H4b, H5b, H6b, H7b and H8b.


Table 6. Direct effects test.

Perceived value was found to be positively associated with infusion use (β = 0.356, p < 0.001), thus supporting H9. Satisfaction also positively influenced infusion use (β = 0.456, p < 0.001), supporting H10. As shown in Figure 3, the hypotheses proposed in this study are supported.


Figure 3. Direct effects test.

Furthermore, we conducted mediation tests on the effects of perceived value and satisfaction. Specifically, we utilized the bootstrapping method with 5,000 repetitions to construct confidence intervals (CIs) (Edwards and Lambert, 2007; Hayes, 2009). Table 7 presents the bootstrapping results along with the corresponding 95% CIs. It shows that perceived value partially or fully mediates the relationship between intelligence, explainability, response time, integrability, accuracy, source credibility, personalization, emotional support, and infusion use. These results supported H11a, H11b, H11c, H11d, H11e, H11f, H11g, H11h. At the same time, satisfaction partially or fully mediates the relationship between intelligence, explainability, response time, integrability, accuracy, source credibility, personalization, emotional support, and infusion use. These results supported H12a, H12b, H12c, H12d, H12e, H12f, H12g, H12h.


Table 7. Indirect and mediating effects test.
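The mediation tests rely on percentile bootstrap confidence intervals for the indirect effect a × b. The sketch below illustrates the procedure with 5,000 resamples on simulated data using ordinary least squares; it is a simplified stand-in for the SmartPLS bootstrapping routine, not a reproduction of it.

```python
import numpy as np

def bootstrap_indirect_effect(x, m, y, n_boot=5000, seed=0):
    """Percentile bootstrap CI for the indirect effect a*b of x -> m -> y.
    a is the slope of m on x; b is the slope of y on m, controlling for x."""
    rng = np.random.default_rng(seed)
    x, m, y = (np.asarray(v, dtype=float) for v in (x, m, y))
    n = len(x)
    estimates = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                       # resample respondents with replacement
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]
        design = np.column_stack([np.ones(n), xb, mb])
        b = np.linalg.lstsq(design, yb, rcond=None)[0][2]
        estimates[i] = a * b
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    return estimates.mean(), (lo, hi)

# Illustrative run on simulated data; a 95% CI excluding zero indicates mediation
rng = np.random.default_rng(42)
x = rng.normal(size=327)
m = 0.5 * x + rng.normal(size=327)
y = 0.4 * m + 0.2 * x + rng.normal(size=327)
estimate, ci = bootstrap_indirect_effect(x, m, y)
print(round(estimate, 3), tuple(round(v, 3) for v in ci))
```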

4.4.4 Fuzzy-set qualitative comparative analysis (fsQCA)

The theoretical foundation of SEM is based on the principle of correlational causation. This implies that variations in independent variables systematically influence dependent variable values. However, this assumption’s reliability may be limited due to the fundamentally asymmetric nature of most real-world relationships (Chakraborty et al., 2024). fsQCA tackles these concerns and integrates the strengths of both qualitative and quantitative research methods, accommodating sample sizes ranging from very small to very large (Lin et al., 2024). It is particularly well-suited for examining complex relationships among multiple factors in social phenomena. This study employs fsQCA for two primary reasons. First, it can better explore the causal complexity (Wang et al., 2024). Given that GAI infusion use is driven by multiple factors, traditional quantitative methods, which often isolate the individual effects of each factor, are less suitable (Hu et al., 2024). Second, fsQCA is based on Boolean algebra rather than regression analysis, enabling any situation to be described as a combination of causal conditions and their outcomes. Through logical and non-statistical procedures, fsQCA can establish logical links between combinations of causal conditions and outcomes (Lin et al., 2024), thereby providing deeper insights into the mechanisms that shape users’ infusion use. Figure 4 shows the configuration of antecedents and outcome conditions.


Figure 4. Configurational model of users’ infusion use of GAI.

According to Pappas and Woodside (2021), the fsQCA method has three stages: (1) data calibration; (2) analysis of necessary conditions; (3) configuration analysis. We used the fsQCA 3.0 software to analyze the 327 cases. As recommended in fsQCA studies, variable calibration was conducted before the analysis of necessary conditions. Specifically, three calibration anchors (full membership, the crossover point, and full non-membership) were defined. Following previous research, the 95th, 50th, and 5th percentiles of each construct were used to set full membership, the crossover point, and full non-membership, respectively (Lalicic and Weismayer, 2021). This process transformed the questionnaire data into continuous membership scores ranging from 0 to 1 (Zhou and Wu, 2024). Additionally, following Ragin (2006), a value of 0.001 was added to the calibrated values to avoid membership scores of exactly 0.5, which would cause cases to be excluded during truth table construction.
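The percentile-based calibration described above can be sketched as follows. This is a common implementation of the direct calibration method (a logistic transformation with log-odds of ±3 at the outer anchors) plus the 0.001 adjustment; the study itself used the fsQCA 3.0 software, so treat this as an illustrative approximation.

```python
import numpy as np

def calibrate(raw, anchors=(95, 50, 5)):
    """Direct calibration into fuzzy membership scores: the 95th, 50th, and 5th
    percentiles serve as full membership, crossover, and full non-membership,
    and deviations from the crossover are mapped to log-odds of +/-3 at the
    outer anchors before a logistic transformation."""
    raw = np.asarray(raw, dtype=float)
    full_in, crossover, full_out = np.percentile(raw, anchors)
    dev = raw - crossover
    scale = np.where(dev >= 0, 3.0 / (full_in - crossover), 3.0 / (crossover - full_out))
    membership = 1.0 / (1.0 + np.exp(-dev * scale))
    return membership + 0.001        # shift to avoid exact 0.5 memberships (Ragin, 2006)

# Illustrative calibration of simulated 7-point Likert responses
rng = np.random.default_rng(3)
scores = rng.integers(1, 8, size=327).astype(float)
fuzzy = calibrate(scores)
print(round(fuzzy.min(), 3), round(float(np.median(fuzzy)), 3), round(fuzzy.max(), 3))
```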

We further used the fsQCA 3.0 software to conduct the necessary condition analysis. If the consistency of an antecedent condition exceeds 0.9, it is regarded as a necessary condition. The results are presented in Table 8. The maximum consistency among the antecedent conditions influencing infusion use is 0.841, indicating that none of them is a necessary condition. Therefore, users’ infusion use must be understood through configurational analysis.


Table 8. Analysis of necessary conditions of fsQCA method.
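The necessity test above compares each condition’s consistency against the 0.9 threshold. A minimal sketch of the underlying fuzzy-set formulas, applied to simulated membership scores rather than the study data:

```python
import numpy as np

def necessity_consistency(condition, outcome):
    """Consistency of 'condition is necessary for outcome': sum(min(X, Y)) / sum(Y)."""
    condition, outcome = np.asarray(condition), np.asarray(outcome)
    return np.minimum(condition, outcome).sum() / outcome.sum()

def necessity_coverage(condition, outcome):
    """Coverage (relevance) of a necessary condition: sum(min(X, Y)) / sum(X)."""
    condition, outcome = np.asarray(condition), np.asarray(outcome)
    return np.minimum(condition, outcome).sum() / condition.sum()

# Illustrative check of one condition against the 0.9 necessity threshold
rng = np.random.default_rng(7)
condition = rng.uniform(size=327)
outcome = np.clip(0.8 * condition + rng.uniform(-0.1, 0.1, size=327), 0, 1)
cons = necessity_consistency(condition, outcome)
print(round(cons, 3), "necessary" if cons > 0.9 else "not necessary")
```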

We constructed a truth table for all logically possible antecedent configurations. Following Pappas and Woodside (2021), the case frequency threshold was set at 3 because the sample size exceeded 150. The consistency threshold was set at 0.85, and rows with PRI consistency below 0.75 were coded as 0. Variables that appear in both the intermediate and parsimonious solutions are considered core conditions, while those appearing only in the intermediate solution are identified as peripheral conditions (Fiss, 2011). The results of the configurations are shown in Table 9. Large filled circles represent the presence of core conditions, small filled circles indicate the presence of peripheral conditions, large crossed-out circles denote the absence of core conditions, and small crossed-out circles signify the absence of peripheral conditions; a blank cell indicates that the condition may be either present or absent. As shown in the table, six configurations explain users’ infusion use, with an overall solution consistency of 0.957 and a coverage of 0.655. Both indicators exceed the recommended thresholds, confirming the reliability of the results (Li F. et al., 2024) and showing that the six configurations are highly explanatory of users’ infusion use. Among these, S4a and S4b constitute a second-order equivalent configuration, as their core conditions are identical.


Table 9. Sufficient configurations for infusion use.
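The consistency, PRI, and coverage values used to screen truth-table rows and evaluate the solutions can be computed directly from the calibrated memberships. A minimal sketch with simulated memberships, taking the fuzzy AND across a configuration’s conditions as the row minimum; this illustrates the standard fuzzy-set formulas rather than reproducing the fsQCA 3.0 output.

```python
import numpy as np

def sufficiency_metrics(condition_matrix, outcome):
    """Consistency, PRI consistency, and coverage for one configuration.
    condition_matrix: (n_cases, n_conditions) fuzzy memberships of the conditions
    in the configuration (pass negated conditions as 1 - membership)."""
    X = np.min(np.asarray(condition_matrix), axis=1)   # fuzzy AND across conditions
    Y = np.asarray(outcome)
    overlap = np.minimum(X, Y).sum()
    simultaneous = np.minimum.reduce([X, Y, 1 - Y]).sum()
    consistency = overlap / X.sum()
    pri = (overlap - simultaneous) / (X.sum() - simultaneous)
    coverage = overlap / Y.sum()
    return consistency, pri, coverage

# Illustrative run with simulated membership scores for three conditions
rng = np.random.default_rng(11)
conditions = rng.uniform(size=(327, 3))
outcome = np.clip(conditions.min(axis=1) + rng.uniform(0, 0.2, size=327), 0, 1)
cons, pri, cov = sufficiency_metrics(conditions, outcome)
print(round(cons, 3), round(pri, 3), round(cov, 3))
```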

S1 indicates that when GAI possesses integrability, accuracy, source credibility, and emotional support as core conditions, along with intelligence and response time as peripheral conditions, users are more likely to infuse it into their everyday learning. S2 demonstrates that GAI with integrability, accuracy, personalization, and emotional support as core conditions, complemented by response time and explainability as peripheral conditions, enhances users’ infusion use. S3 shows that GAI with accuracy, source credibility, personalization, and emotional support as core conditions, and response time and explainability as peripheral conditions, leads to high infusion use. S4a shows that when GAI has integrability, accuracy, source credibility, personalization, and emotional support as core conditions, with intelligence as a peripheral condition, it significantly strengthens users’ infusion use. In addition, S4b exhibits the highest raw coverage and represents the core configuration for infusion use. It shares the same core conditions as S4a but includes explainability as a peripheral condition, both positively influencing users’ infusion use. S5 illustrates that when integrability and personalization exist as core conditions, combined with intelligence, response time, explainability, and accuracy as peripheral conditions, GAI facilitates users’ infusion use even in the absence of source credibility and emotional support.

We tested the sensitivity of the solutions to both the sample and the calibration. First, the consistency threshold was reduced from 0.85 to 0.8, with all other parameters unchanged, resulting in configurations identical to the original. Second, the PRI consistency threshold was increased from 0.75 to 0.8, with the rest unchanged; compared with the PRI threshold of 0.75, only S5 was eliminated, and the overall coverage decreased from 0.655 to 0.631. The results therefore prove largely robust.

5 Discussion

With the widespread application of GAI, individuals are increasingly shifting from traditional online learning platforms (e.g., MOOCs) to GAI-assisted problem-solving. This transition not only transforms users’ learning habits but also raises questions regarding GAI’s impact on user behavior and psychological processes (Kim et al., 2021; Ahmad et al., 2023; Germinder and Capizzo, 2024). To address these questions, this study employed a mixed-methods approach. In the first stage, we conducted 26 semi-structured interviews to identify factors influencing users’ infusion use of GAI in online learning contexts. The second stage comprises a quantitative study that employs 327 participants to validate the proposed research model. Finally, fsQCA was applied to examine the configurational effects among these factors, revealing the distinct pathways that lead to users’ infusion use of GAI.

Guided by “top-down” framework analysis and “bottom-up” grounded theory, stage 1 identified eight critical factors influencing users’ infusion use of GAI: intelligence, explainability, response time, integrability, accuracy, source credibility, personalization, and emotional support. These factors collectively help users establish deep, long-term engagement with GAI, enabling its integration into their daily lives (Jones et al., 2002). The empirical evidence confirms that these eight factors influence users’ infusion use through the parallel mediation of perceived value and satisfaction. Together, these findings constitute a framework for understanding GAI infusion use in online learning contexts. Beyond the factors identified in previous studies of usage intention, such as response time (Baabdullah, 2024), accuracy (Zhou and Wu, 2024), and source credibility (Chakraborty et al., 2024), users also place equal importance on other dimensions, such as emotional support and personalization. This distinction highlights GAI’s unique position as an emerging learning tool that combines the accessibility of traditional online education with adaptive capabilities that enhance its responsiveness to individual learning needs.

Our research extends beyond previous studies that examined single or combined factors such as hedonic motivation, habit, perceived usefulness, perceived risk, and perceived responsiveness (Sanusi et al., 2024; Zhao et al., 2024; Kim et al., 2025). This study employs a mixed-methods approach to systematically identify and validate multidimensional factors and to propose a more comprehensive theoretical framework. Furthermore, our findings regarding response time diverge from Gnewuch et al. (2022), who concluded that delayed responses align better with human conversational rhythms and thus effectively stimulate social reactions. However, we found that rapid responses significantly enhance perceived value and satisfaction in online learning contexts. This discrepancy may reflect users’ expectations that modern technology can maintain both speed and quality (Neiroukh et al., 2024), leading them to associate faster responses with more enjoyable experiences (Yang, 2023).

Finally, through fsQCA analysis, we demonstrate that GAI infusion use is driven by synergistic combinations of multiple factors. While most studies have examined users’ attitudes from a single-factor perspective (Chakraborty et al., 2024; Wang, 2025), our configurational analysis reveals six distinct pathways to infusion use. S4a and S4b form a second-order equivalence configuration. The eight factors function as core or peripheral conditions. Among these, integrability, accuracy, source credibility, personalization, and emotional support emerge as core conditions, with explainability as a peripheral condition, representing the most generalized configurations. Importantly, no single factor constitutes a necessary condition for infusion use.

6 Implications and conclusions

6.1 Theoretical implications

First, this study systematically proposes a theoretical framework explaining how GAI’s characteristics influence users’ infusion use in online learning scenarios. Previous studies have investigated intelligence and explainability in promoting positive GAI usage (Al-Emran et al., 2024; Darban, 2024; Theresiawati et al., 2025). However, these studies often focus on a single or limited set of characteristics. Our work comprehensively identifies eight key GAI characteristics influencing users’ infusion use: intelligence, explainability, response time, integrability, accuracy, source credibility, personalization, and emotional support. This integrated theoretical framework not only enriches our understanding of GAI attributes but also provides a systematic theoretical foundation for subsequent research.

Second, this study extends theoretical understanding of perceived value and satisfaction in online learning contexts. Previous research has examined either attitudes (i.e., satisfaction) or cognition (i.e., perceived value) as independent mediators influencing GAI usage in online learning (Chan and Zhou, 2023; Kim et al., 2025). However, the synergistic mechanism through which these two psychological mediators jointly shape users’ behaviors has not been fully elucidated. Based on the SOR model, we demonstrate that the eight GAI features (stimuli) influence infusion use (response) through the parallel mediation of perceived value and satisfaction (organism). These findings provide deeper insights into how individual cognition and attitudes jointly evolve when responding to external stimuli.
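To make this parallel mediation structure concrete, the minimal sketch below bootstraps the two indirect effects (stimulus → perceived value → infusion use and stimulus → satisfaction → infusion use) in the spirit of Hayes (2009). The ordinary least squares estimation, variable names, and resampling settings are simplifying assumptions for illustration; the sketch does not reproduce the structural model estimated in this study.

```python
import numpy as np

def parallel_mediation_bootstrap(x, m1, m2, y, n_boot=5000, seed=1):
    """Percentile bootstrap of the indirect effects a1*b1 (via m1) and
    a2*b2 (via m2) in a parallel mediation model (cf. Hayes, 2009)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    indirect = np.empty((n_boot, 2))
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                 # resample cases with replacement
        xb, m1b, m2b, yb = x[idx], m1[idx], m2[idx], y[idx]
        a1 = np.polyfit(xb, m1b, 1)[0]              # a-path: x -> perceived value
        a2 = np.polyfit(xb, m2b, 1)[0]              # a-path: x -> satisfaction
        design = np.column_stack([np.ones(n), xb, m1b, m2b])
        b = np.linalg.lstsq(design, yb, rcond=None)[0]  # y regressed on x, m1, m2
        indirect[i] = [a1 * b[2], a2 * b[3]]        # b-paths taken from the joint model
    ci_low, ci_high = np.percentile(indirect, [2.5, 97.5], axis=0)
    return indirect.mean(axis=0), ci_low, ci_high
```

An indirect effect is taken as supported when its bootstrap confidence interval excludes zero.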

Third, these findings contribute to the literature on deep usage of GAI in online learning contexts. With the advancement of GAI technology, increasing research focuses on users’ long-term usage behaviors in online learning (Ngo et al., 2024; Holzmann et al., 2025; Liu et al., 2025). As the ultimate stage in post-adoption, infusion use represents not only the deep integration of technology into the learning process but also the latent commercial value within the education domain (Hu et al., 2024). Although infusion use has been investigated in contexts such as information technology (Sundaram et al., 2007), customer relationship management (Chen et al., 2021), and smart objects (Hu et al., 2024), its examination in online learning environments remains limited. This study addresses this gap by providing a theoretical framework for understanding GAI infusion use, thereby supplementing and enriching research on user behavior in online learning contexts.

6.2 Practical implications

This study offers significant practical implications for GAI researchers and developers. First, GAI service providers should adopt a long-term strategic perspective to foster users’ infusion use, thereby unlocking greater business value. As demonstrated by globally successful products like ChatGPT, Gemini, and DeepSeek, deep engagement and comprehensive feature utilization are critical to commercial success. Service providers should focus on enhancing all eight identified dimensions—intelligence, explainability, response time, integrability, accuracy, source credibility, personalization, and emotional support—while prioritizing core user needs and establishing clear strategic objectives.

Second, educational institutions and teachers should establish multi-dimensional evaluation criteria when selecting GAI learning tools. Beyond conventional metrics such as “intelligence,” these criteria should also cover factors such as explainability and emotional support, so that users receive both comprehensive knowledge and psychological encouragement during challenging learning phases. Additionally, leveraging GAI to facilitate personalized learning plans can transform GAI into a valuable educational partner.

Third, developers can enhance perceived value and satisfaction by emphasizing GAI’s practical benefits and distinctive advantages. Effective strategies include implementing intelligent summaries (e.g., “I have summarized the key points of this chapter for you, saving your research time”) and generating personalized learning progress reports that help users visualize their achievement. Furthermore, incorporating empathetic interactions during complex tasks (e.g., “This question is indeed challenging. Let us tackle it step by step.”) can create a pleasant and efficient learning experience.
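As a purely hypothetical illustration of these strategies, the short sketch below wraps a learner’s question with an empathetic preface for difficult tasks and an intelligent-summary instruction. The template wording and the function name are assumptions for illustration and do not describe any specific GAI product.

```python
def build_learning_prompt(user_question: str, is_difficult: bool) -> str:
    """Compose a hypothetical GAI prompt that pairs an empathetic preface
    (for difficult tasks) with a request to summarize the key points."""
    empathy = (
        "This question is indeed challenging. Let us tackle it step by step. "
        if is_difficult
        else ""
    )
    summary_request = (
        "After answering, summarize the key points of this topic in three short bullet points."
    )
    return f"{empathy}{user_question}\n\n{summary_request}"

print(build_learning_prompt("Explain gradient descent to a beginner.", is_difficult=True))
```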

Finally, the configuration analysis suggests that when facing technical resource constraints, developers should prioritize five key dimensions: integrability, accuracy, source credibility, personalization, and emotional support. Specific implementations may include regularly updating knowledge bases to ensure content authority and relevance, enhancing information capture and summarization capabilities, maintaining dynamic user profiles, and supporting emotional interaction.

6.3 Limitations and future research

This study has several limitations. First, it is primarily based on a sample of Chinese users, and its conclusions may be shaped by specific cultural and contextual factors, which may limit the generalizability of the findings. Future research could incorporate more diverse participants across different countries and regions, which would help validate the transferability of our findings; exploring how cultural differences influence the use of GAI in online learning contexts could further enhance both the theoretical robustness and the practical applicability of the research. Second, this study relies on cross-sectional data. Future studies could employ longitudinal methods to capture the dynamic nature of users’ perceptions over time. Third, while this study is grounded in the SOR model, future work could integrate other relevant theories (e.g., diffusion of innovations theory, the CASA paradigm) to further enrich the understanding of factors influencing users’ infusion use.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

The studies involving humans were approved by the Institutional Ethics Committee of the School of Management, Xiamen University Tan Kah Kee College. The studies were conducted in accordance with the local legislation and institutional requirements. Written informed consent for participation was not required from the participants or the participants’ legal guardians/next of kin because semi-structured interviews and questionnaires were collected anonymously through a nationwide online platform, and participants’ names and other sensitive personal information were not identified. Participants were informed that there were no correct or incorrect responses and that their participation did not have any personal repercussions.

Author contributions

LX: Writing – review & editing, Conceptualization, Writing – original draft, Data curation, Formal analysis, Methodology. GC: Project administration, Supervision, Writing – review & editing. NZ: Writing – review & editing, Visualization, Validation.

Funding

The author(s) declare that financial support was received for the research and/or publication of this article. This work was supported by the Reform Project of Xiamen University Tan Kah Kee College (Project No. 2022J06).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The authors declare that no Gen AI was used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2025.1636480/full#supplementary-material

Footnotes

1. ^For detailed information about Quizlet Q-Chat, please visit https://quizlet.com/blog/meet-q-chat.

2. ^For a detailed description of the report, see: https://www.questmobile.com.cn/research/report/1818126420037177346.

3. ^www.Sojump.cn

References

Ahmad, N., Du, S., Ahmed, F., ul Amin, N., and Yi, X. (2023). Healthcare professionals satisfaction and AI-based clinical decision support system in public sector hospitals during health crises: a cross-sectional study. Inf. Technol. Manag. 26, 205–217. doi: 10.1007/s10799-023-00407-w

Crossref Full Text | Google Scholar

Al-Adwan, A. S., Li, N., Al-Adwan, A., Abbasi, G. A., Albelbisi, N. A., and Habibi, A. (2023). Extending the technology acceptance model (TAM) to predict university students’ intentions to use metaverse-based learning platforms. Educ. Inf. Technol. 28, 15381–15413. doi: 10.1007/s10639-023-11816-3

PubMed Abstract | Crossref Full Text | Google Scholar

Al-Emran, M., Abu-Hijleh, B., and Alsewari, A. A. (2024). Exploring the effect of generative AI on social sustainability through integrating AI attributes, TPB, and T-EESST: a deep learning-based hybrid SEM-ANN approach. IEEE Trans. Eng. Manag. 71, 14512–14524. doi: 10.1109/tem.2024.3454169

Crossref Full Text | Google Scholar

Ameen, N., Tarhini, A., Reppel, A., and Anand, A. (2021). Customer experiences in the age of artificial intelligence. Comput. Hum. Behav. 114:106548. doi: 10.1016/j.chb.2020.106548

PubMed Abstract | Crossref Full Text | Google Scholar

Aw, E. C.-X., Tan, G. W.-H., Ooi, K.-B., and Hajli, N. (2024). Tap here to power up! Mobile augmented reality for consumer empowerment. Internet Res. 34, 960–993. doi: 10.1108/INTR-07-2021-0477

Crossref Full Text | Google Scholar

Baabdullah, A. M. (2024). Generative conversational AI agent for managerial practices: the role of IQ dimensions, novelty seeking and ethical concerns. Technol. Forecast. Soc. Change 198:122951. doi: 10.1016/j.techfore.2023.122951

Crossref Full Text | Google Scholar

Baillifard, A., Gabella, M., Lavenex, P. B., and Martarelli, C. S. (2025). Effective learning with a personal AI tutor: a case study. Educ. Inf. Technol. 30, 297–312. doi: 10.1007/s10639-024-12888-5

Crossref Full Text | Google Scholar

Bartneck, C., Kulić, D., Croft, E., and Zoghbi, S. (2009). Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. Int. J. Soc. Robot. 1, 71–81. doi: 10.1007/s12369-008-0001-3

Crossref Full Text | Google Scholar

Bhattacherjee, A. (2001). Understanding information systems continuance: an expectation-confirmation model. MIS Q. 25, 351–370. doi: 10.2307/3250921

Crossref Full Text | Google Scholar

Camilleri, M. A. (2024). Factors affecting performance expectancy and intentions to use ChatGPT: using SmartPLS to advance an information technology acceptance framework. Technol. Forecast. Soc. Change 201:123247. doi: 10.1016/j.techfore.2024.123247

Crossref Full Text | Google Scholar

Camilleri, M. A., and Filieri, R. (2023). Customer satisfaction and loyalty with online consumer reviews: factors affecting revisit intentions. Int. J. Hosp. Manag. 114:103575. doi: 10.1016/j.ijhm.2023.103575

Crossref Full Text | Google Scholar

Camilleri, M. A., and Kozak, M. (2022). Utilitarian motivations to engage with travel websites: An interactive technology adoption model. J. Serv. Mark. 37, 96–109. doi: 10.1108/JSM-12-2021-0477

Crossref Full Text | Google Scholar

Carvalho, I., and Ivanov, S. (2024). ChatGPT for tourism: applications, benefits and risks. Tour. Rev. 79, 290–303. doi: 10.1108/TR-02-2023-0088

Crossref Full Text | Google Scholar

Chakraborty, D., Kar, A. K., Patre, S., and Gupta, S. (2024). Enhancing trust in online grocery shopping through generative AI chatbots. J. Bus. Res. 180:114737. doi: 10.1016/j.jbusres.2024.114737

Crossref Full Text | Google Scholar

Chan, C. K. Y., and Zhou, W. X. (2023). An expectancy value theory (EVT) based instrument for measuring student perceptions of generative AI. Smart Learn. Environ. 10:64. doi: 10.1186/s40561-023-00284-4

Crossref Full Text | Google Scholar

Chang, W., and Park, J. (2024). A comparative study on the effect of ChatGPT recommendation and AI recommender systems on the formation of a consideration set. J. Retail. Consum. Serv. 78:103743. doi: 10.1016/j.jretconser.2024.103743

Crossref Full Text | Google Scholar

Chen, M. J. (2023). Antecedents and outcomes of virtual presence in online shopping: a perspective of SOR (stimulus-organism-response) paradigm. Electron. Mark. 33:58. doi: 10.1007/s12525-023-00674-z

Crossref Full Text | Google Scholar

Chen, Q., Gong, Y., and Lu, Y. (2023a). User experience of digital voice assistant: conceptualization and measurement. ACM Trans. Comput. Human Interact. 31, 1–35. doi: 10.1145/3622782

Crossref Full Text | Google Scholar

Chen, L. W., Hsieh, J., Rai, A., and Xu, S. A. X. (2021). How does employee infusion use of CRM systems drive customer satisfaction? Mechanism differences between face-to-face and virtual channels. MIS Q. 45, 719–754. doi: 10.25300/misq/2021/13265

Crossref Full Text | Google Scholar

Chen, Q., Lu, Y., Gong, Y., and Xiong, J. (2023b). Can AI chatbots help retain customers? Impact of AI service quality on customer loyalty. Internet Res. 33, 2205–2243. doi: 10.1108/INTR-09-2021-0686

Crossref Full Text | Google Scholar

Chen, R. R., Ou, C. X., Wang, W., Peng, Z., and Davison, R. M. (2020). Moving beyond the direct impact of using CRM systems on frontline employees' service performance: the mediating role of adaptive behaviour. Inf. Syst. J. 30, 458–491. doi: 10.1111/isj.12265

Crossref Full Text | Google Scholar

Chen, H., Wang, P., and Hao, S. (2025). AI in the spotlight: the impact of artificial intelligence disclosure on user engagement in short-form videos. Comput. Human Behav. 162:108448. doi: 10.1016/j.chb.2024.108448

Crossref Full Text | Google Scholar

Chen, Q., Yin, C. Q., and Gong, Y. M. (2023c). Would an AI chatbot persuade you: an empirical answer from the elaboration likelihood model. Inf. Technol. People 38, 937–962. doi: 10.1108/itp-10-2021-0764

Crossref Full Text | Google Scholar

Cheng, Y., and Jiang, H. (2022). Customer–brand relationship in the era of artificial intelligence: understanding the role of chatbot marketing efforts. J. Prod. Brand. Manag. 31, 252–264. doi: 10.1108/JPBM-05-2020-2907

Crossref Full Text | Google Scholar

Cheung, J. C., and Ho, S. S. (2025). Explainable AI and trust: how news media shapes public support for AI-powered autonomous passenger drones. Public Underst. Sci. 34, 344–362. doi: 10.1177/09636625241291192

PubMed Abstract | Crossref Full Text | Google Scholar

Chin, W. W. (1998). The partial least squares approach to structural equation modeling. Mod. Methods Bus. Res. 295, 295–336.

Google Scholar

Chung, M., Ko, E., Joung, H., and Kim, S. J. (2020). Chatbot e-service and customer satisfaction regarding luxury brands. J. Bus. Res. 117, 587–595. doi: 10.1016/j.jbusres.2018.10.004

Crossref Full Text | Google Scholar

Creswell, J. W., and Creswell, J. D. (2018). Research design: qualitative, quantitative, and mixed methods approaches. Los Angeles: Sage publications.

Google Scholar

Darban, M. (2024). Navigating virtual teams in generative AI-led learning: the moderation of team perceived virtuality. Educ. Inf. Technol. 29, 23225–23248. doi: 10.1007/s10639-024-12681-4

Crossref Full Text | Google Scholar

De Kervenoael, R., Hasan, R., Schwob, A., and Goh, E. (2020). Leveraging human-robot interaction in hospitality services: incorporating the role of perceived value, empathy, and information sharing into visitors' intentions to use social robots. Tour. Manag. 78:104042. doi: 10.1016/j.tourman.2019.104042

Crossref Full Text | Google Scholar

Del Ponte, A., Li, L., Ang, L., Lim, N., and Seow, W. J. (2024). Evaluating SoJump.com as a tool for online behavioral research in China. J. Behav. Exp. Financ. 41:100905. doi: 10.1016/j.jbef.2024.100905

Crossref Full Text | Google Scholar

DeLone, W. H., and McLean, E. R. (2003). The DeLone and McLean model of information systems success: a ten-year update. J. Manag. Inf. Syst. 19, 9–30. doi: 10.1080/07421222.2003.11045748

Crossref Full Text | Google Scholar

Ding, L. (2021). Employees’ challenge-hindrance appraisals toward STARA awareness and competitive productivity: a micro-level case. Int. J. Contemp. Hospit. Manag. 33, 2950–2969. doi: 10.1108/IJCHM-09-2020-1038

Crossref Full Text | Google Scholar

Ding, Z., Tu, M., and Guo, Z. (2023). Exploring the inhibitory effect of maximising mindset on impulse buying. J. Consum. Behav. 22, 676–687. doi: 10.1002/cb.2153

Crossref Full Text | Google Scholar

Du, L., and Lv, B. (2024). Factors influencing students’ acceptance and use generative artificial intelligence in elementary education: An expansion of the UTAUT model. Educ. Inf. Technol. 29, 24715–24734. doi: 10.1007/s10639-024-12835-4

Crossref Full Text | Google Scholar

Dwivedi, R., Dave, D., Naik, H., Singhal, S., Omer, R., Patel, P., et al. (2023). Explainable AI (XAI): core ideas, techniques, and solutions. ACM Comput. Surv. 55, 1–33. doi: 10.1145/3561048

Crossref Full Text | Google Scholar

Edwards, J. R., and Lambert, L. S. (2007). Methods for integrating moderation and mediation: a general analytical framework using moderated path analysis. Psychol. Methods 12, 1–22. doi: 10.1037/1082-989x.12.1.1

PubMed Abstract | Crossref Full Text | Google Scholar

Efendić, E., Van de Calseyde, P. P., and Evans, A. M. (2020). Slow response times undermine trust in algorithmic (but not human) predictions. Organ. Behav. Hum. Decis. Process. 157, 103–114. doi: 10.1016/j.obhdp.2020.01.008

Crossref Full Text | Google Scholar

Fiss, P. C. (2011). Building better causal theories: a fuzzy set approach to typologies in organization research. Acad. Manag. J. 54, 393–420. doi: 10.5465/amj.2011.60263120

Crossref Full Text | Google Scholar

Fu, Y., Ma, S., Xie, C. F., Li, S. H., and Liu, X. Y. (2025). Exploring the impact of environmental variables on learning performance and persistence in E-learning platforms. Educ. Inf. Technol. 30, 20099–20123. doi: 10.1007/s10639-025-13572-y

Crossref Full Text | Google Scholar

Gelbrich, K., Hagel, J., and Orsingher, C. (2021). Emotional support from a digital assistant in technology-mediated services: effects on customer satisfaction and behavioral persistence. Int. J. Res. Mark. 38, 176–193. doi: 10.1016/j.ijresmar.2020.06.004

Crossref Full Text | Google Scholar

Germinder, L. A., and Capizzo, L. (2024). A strategic communication practitioner imperative: contextualizing responsible AI as part of responsible advocacy. Int. J. Strateg. Commun. 19, 1–17. doi: 10.1080/1553118x.2024.2430959

Crossref Full Text | Google Scholar

Glaser, B., and Strauss, A. (2017). Discovery of grounded theory: strategies for qualitative research. New York: Routledge.

Google Scholar

Gnewuch, U., Morana, S., Adam, M. T., and Maedche, A. (2022). Opposing effects of response time in human–chatbot interaction: the moderating role of prior experience. Bus. Inf. Syst. Eng. 64, 773–791. doi: 10.1007/s12599-022-00755-x

Crossref Full Text | Google Scholar

Gursoy, D., Chi, O. H., Lu, L., and Nunkoo, R. (2019). Consumers acceptance of artificially intelligent (AI) device use in service delivery. Int. J. Inf. Manag. 49, 157–169. doi: 10.1016/j.ijinfomgt.2019.03.008

Crossref Full Text | Google Scholar

Hair, J. F., Ringle, C. M., and Sarstedt, M. (2011). PLS-SEM: indeed a silver bullet. J. Mark. Theory Pract. 19, 139–152. doi: 10.2753/MTP1069-6679190202

Crossref Full Text | Google Scholar

Hair, J. F., Risher, J. J., Sarstedt, M., and Ringle, C. M. (2019). When to use and how to report the results of PLS-SEM. Eur. Bus. Rev. 31, 2–24. doi: 10.1108/EBR-11-2018-0203

Crossref Full Text | Google Scholar

Hassandoust, F., and Techatassanasoontorn, A. A. (2022). Antecedents of IS infusion behaviours: an integrated IT identity and empowerment perspective. Behav. Inf. Technol. 41, 2390–2414. doi: 10.1080/0144929x.2021.1928287

Crossref Full Text | Google Scholar

Hayes, A. F. (2009). Beyond baron and Kenny: statistical mediation analysis in the new millennium. Commun. Monogr. 76, 408–420. doi: 10.1080/03637750903310360

Crossref Full Text | Google Scholar

Holzmann, P., Gregori, P., and Schwarz, E. J. (2025). Students’ little helper: investigating continuous-use determinants of generative AI and ethical judgment. Educ. Inf. Technol., 1–21. doi: 10.1007/s10639-025-13708-0

Crossref Full Text | Google Scholar

Hovland, C. I., and Weiss, W. (1951). The influence of source credibility on communication effectiveness. Public Opin. Q. 15, 635–650. doi: 10.1086/266350

Crossref Full Text | Google Scholar

Hu, Q., Pan, Z., Lu, Y., and Gupta, S. (2024). How does material adaptivity of smart objects shape infusion use? The pivot role of social embeddedness. Internet Res. 34, 1219–1248. doi: 10.1108/INTR-04-2022-0253

Crossref Full Text | Google Scholar

Ismagilova, E., Slade, E., Rana, N. P., and Dwivedi, Y. K. (2020). The effect of characteristics of source credibility on consumer behaviour: a meta-analysis. J. Retail. Consum. Serv. 53:101736. doi: 10.1016/j.jretconser.2019.01.005

Crossref Full Text | Google Scholar

Javed, S., Rashidin, M. S., and Jian, W. (2024). Effects of heuristic and systematic cues on perceived content credibility of Sina Weibo influencers: the moderating role of involvement. Humanit. Soc. Sci. Commun. 11, 1–16. doi: 10.1057/s41599-024-04107-w

Crossref Full Text | Google Scholar

Jiang, H., Cheng, Y., Yang, J., and Gao, S. (2022). AI-powered chatbot communication with customers: dialogic interactions, satisfaction, engagement, and customer behavior. Comput. Human Behav. 134:107329. doi: 10.1016/j.chb.2022.107329

Crossref Full Text | Google Scholar

Jones, E., Sundaram, S., and Chin, W. (2002). Factors leading to sales force automation use: a longitudinal analysis. J. Pers. Sell. Sales Manage. 22, 145–156. doi: 10.1080/08853134.2002.10754303

Crossref Full Text | Google Scholar

Kim, W. B., and Hur, H. J. (2024). What makes people feel empathy for AI chatbots? Assessing the role of competence and warmth. Int. J. Human Comput. Interact. 40, 4674–4687. doi: 10.1080/10447318.2023.2219961

Crossref Full Text | Google Scholar

Kim, J. S., Kim, M., and Baek, T. H. (2025). Enhancing user experience with a generative AI chatbot. Int. J. Human Comput. Interact. 41, 651–663. doi: 10.1080/10447318.2024.2311971

Crossref Full Text | Google Scholar

Kim, J., Merrill, K., Xu, K., and Sellnow, D. D. (2021). My teacher is a machine: understanding students' perceptions of AI teaching assistants in online education. Int. J. Human Comput Interact. 37:98. doi: 10.1080/10447318.2020.1855708

Crossref Full Text | Google Scholar

Korayim, D., Bodhi, R., Badghish, S., Yaqub, M. Z., and Bianco, R. (2025). Do generative artificial intelligence related competencies, attitudes and experiences affect employee outcomes? An intellectual capital perspective. J. Intellect. Cap. 26, 595–615. doi: 10.1108/JIC-09-2024-0295

Crossref Full Text | Google Scholar

Ku, E. C., and Chen, C.-D. (2024). Artificial intelligence innovation of tourism businesses: from satisfied tourists to continued service usage intention. Int. J. Inf. Manag. 76:102757. doi: 10.1016/j.ijinfomgt.2024.102757

Crossref Full Text | Google Scholar

Lalicic, L., and Weismayer, C. (2021). Consumers’ reasons and perceived value co-creation of using artificial intelligence-enabled travel service agents. J. Bus. Res. 129, 891–901. doi: 10.1016/j.jbusres.2020.11.005

Crossref Full Text | Google Scholar

Lavado-Nalvaiz, N., Lucia-Palacios, L., and Pérez-López, R. (2022). The role of the humanisation of smart home speakers in the personalisation–privacy paradox. Electron. Commer. Res. Appl. 53:101146. doi: 10.1016/j.elerap.2022.101146

Crossref Full Text | Google Scholar

Lee, C. T., Pan, L.-Y., and Hsieh, S. H. (2022). Artificial intelligent chatbots as brand promoters: a two-stage structural equation modeling-artificial neural network approach. Internet Res. 32, 1329–1356. doi: 10.1108/INTR-01-2021-0030

Crossref Full Text | Google Scholar

Lee, S. Y., Petrick, J. F., and Crompton, J. (2007). The roles of quality and intermediary constructs in determining festival attendees' behavioral intention. J. Travel Res. 45, 402–412. doi: 10.1177/00472875072995

Crossref Full Text | Google Scholar

Lee, J. C., and Xiong, L. N. (2023). Exploring learners' continuous usage decisions regarding mobile-assisted language learning applications: a social support theory perspective. Educ. Inf. Technol. 28, 16743–16769. doi: 10.1007/s10639-023-11884-5

Crossref Full Text | Google Scholar

Li, L., Lee, K. Y., Emokpae, E., and Yang, S.-B. (2021). What makes you continuously use chatbot services? Evidence from Chinese online travel agencies. Electron. Mark. 31, 1–25. doi: 10.1007/s12525-020-00454-z

Crossref Full Text | Google Scholar

Li, Y., Li, Y., Chen, Q., and Chang, Y. (2024). Humans as teammates: the signal of human–AI teaming enhances consumer acceptance of chatbots. Int. J. Inf. Manag. 76:102771. doi: 10.1016/j.ijinfomgt.2024.102771

Crossref Full Text | Google Scholar

Li, C.-Y., and Zhang, J.-T. (2023). Chatbots or me? Consumers’ switching between human agents and conversational agents. J. Retail. Consum. Serv. 72:103264. doi: 10.1016/j.jretconser.2023.103264

Crossref Full Text | Google Scholar

Li, F., Zhang, H., Wong, C. U. I., and Chen, X. (2024). Interpreting the mixed model of sustained engagement in online gamified learning: a dual analysis based on MPLUS and FSQCA. Entertain. Comput. 50:100643. doi: 10.1016/j.entcom.2024.100643

Crossref Full Text | Google Scholar

Liang, H. G., Saraf, N., Hu, Q., and Xue, Y. J. (2007). Assimilation of enterprise systems: the effect of institutional pressures and the mediating role of top management. MIS Q. 31, 59–87. doi: 10.2307/25148781

Crossref Full Text | Google Scholar

Liang, J., Wang, L. L., Luo, J., Yan, Y. F., and Fan, C. (2023). The relationship between student interaction with generative artificial intelligence and learning achievement: serial mediating roles of self-efficacy and cognitive engagement. Front. Psychol. 14:1285392. doi: 10.3389/fpsyg.2023.1285392

PubMed Abstract | Crossref Full Text | Google Scholar

Lin, R. R., and Lee, J. C. (2024). The supports provided by artificial intelligence to continuous usage intention of mobile banking: evidence from China. Aslib J. Inf. Manag. 76, 293–310. doi: 10.1108/ajim-07-2022-0337

Crossref Full Text | Google Scholar

Lin, J., Luo, X., Li, L., and Hsu, C. (2024). Unraveling the effect of organisational resources and top management support on e-commerce capabilities: evidence from ADANCO-SEM and fsQCA. Eur. J. Inf. Syst. 33, 403–421. doi: 10.1080/0960085X.2023.2169202

Crossref Full Text | Google Scholar

Liu, Y. F., Cai, L. H., Ma, F., and Wang, X. Q. (2023). Revenge buying after the lockdown: based on the SOR framework and TPB model. J. Retail. Consum. Serv. 72:103263. doi: 10.1016/j.jretconser.2023.103263

Crossref Full Text | Google Scholar

Liu, Y. L., Hu, B., Yan, W., and Lin, Z. (2023). Can chatbots satisfy me? A mixed-method comparative study of satisfaction with task-oriented chatbots in mainland China and Hong Kong. Comput. Human Behav. 143:107716. doi: 10.1016/j.chb.2023.107716

Crossref Full Text | Google Scholar

Liu, K., and Tao, D. (2022). The roles of trust, personalization, loss of privacy, and anthropomorphism in public acceptance of smart healthcare services. Comput. Human Behav. 127:107026. doi: 10.1016/j.chb.2021.107026

Crossref Full Text | Google Scholar

Liu, Y., Zhang, Z. Z., and Wu, Y. K. (2025). What drives Chinese university students' long-term use of GenAI? Evidence from the heuristic-systematic model. Educ. Inf. Technol. 30, 14967–15000. doi: 10.1007/s10639-025-13403-0

Crossref Full Text | Google Scholar

MacKenzie, S. B., Podsakoff, P. M., and Podsakoff, N. P. (2011). Construct measurement and validation procedures in MIS and behavioral research: integrating new and existing techniques. MIS Q. 35, 293–334. doi: 10.2307/23044045

Crossref Full Text | Google Scholar

Maroufkhani, P., Asadi, S., Ghobakhloo, M., Jannesari, M. T., and Ismail, W. K. W. (2022). How do interactive voice assistants build brands' loyalty? Technol. Forecast. Soc. Change 183:121870. doi: 10.1016/j.techfore.2022.121870

Crossref Full Text | Google Scholar

McLean, G., Osei-Frimpong, K., and Barhorst, J. (2021). Alexa, do voice assistants influence consumer brand engagement?–examining the role of AI powered voice assistants in influencing consumer brand engagement. J. Bus. Res. 124, 312–328. doi: 10.1016/j.jbusres.2020.11.045

Crossref Full Text | Google Scholar

Mehmood, K., Kautish, P., and Shah, T. R. (2024). Embracing digital companions: unveiling customer engagement with anthropomorphic AI service robots in cross-cultural context. J. Retail. Consum. Serv. 79:103825. doi: 10.1016/j.jretconser.2024.103825

Crossref Full Text | Google Scholar

Mehrabian, A. (1974). An approach to environmental psychology. Cambridge: MIT Press.

Google Scholar

Meng, J., and Dai, Y. (2021). Emotional support from AI chatbots: should a supportive partner self-disclose or not? J. Comput.-Mediat. Commun. 26, 207–222. doi: 10.1093/jcmc/zmab005

Crossref Full Text | Google Scholar

Menon, K., and Dubé, L. (2007). The effect of emotional provider support on angry versus anxious consumers. Int. J. Res. Mark. 24, 268–275. doi: 10.1016/j.ijresmar.2007.04.001

Crossref Full Text | Google Scholar

Moussawi, S., Koufaris, M., and Benbunan-Fich, R. (2023). The role of user perceptions of intelligence, anthropomorphism, and self-extension on continuance of use of personal intelligent agents. Eur. J. Inf. Syst. 32, 601–622. doi: 10.1080/0960085X.2021.2018365

Crossref Full Text | Google Scholar

Neiroukh, S., Emeagwali, O. L., and Aljuhmani, H. Y. (2024). Artificial intelligence capability and organizational performance: unraveling the mediating mechanisms of decision-making processes. Manag. Decis. doi: 10.1108/MD-10-2023-1946

Crossref Full Text | Google Scholar

Nelson, R. R., Todd, P. A., and Wixom, B. H. (2005). Antecedents of information and system quality: an empirical examination within the context of data warehousing. J. Manag. Inf. Syst. 21, 199–235. doi: 10.1080/07421222.2005.11045823

Crossref Full Text | Google Scholar

Ngo, T. T. A., Tran, T. T., An, G. K., and Nguyen, P. T. (2024). ChatGPT for educational purposes: investigating the impact of knowledge management factors on student satisfaction and continuous usage. IEEE Trans. Learn. Technol. 17, 1367–1378. doi: 10.1109/tlt.2024.3383773

Crossref Full Text | Google Scholar

Ogbanufe, O., and Gerhart, N. (2020). The mediating influence of smartwatch identity on deep use and innovative individual performance. Inf. Syst. J. 30, 977–1009. doi: 10.1111/isj.12288

Crossref Full Text | Google Scholar

Pai, P. (2023). Becoming a mother: a role learning perspective on the use of online community resources to facilitate a life-role transition. Inf. Manag. 60:103861. doi: 10.1016/j.im.2023.103861

Crossref Full Text | Google Scholar

Pappas, I. O., Kourouthanassis, P. E., Giannakos, M. N., and Chrissikopoulos, V. (2017). Sense and sensibility in personalized e-commerce: how emotions rebalance the purchase intentions of persuaded customers. Psychol. Mark. 34, 972–986. doi: 10.1002/mar.21036

Crossref Full Text | Google Scholar

Pappas, I. O., and Woodside, A. G. (2021). Fuzzy-set qualitative comparative analysis (fsQCA): guidelines for research practice in information systems and marketing. Int. J. Inf. Manag. 58:102310. doi: 10.1016/j.ijinfomgt.2021.102310

Crossref Full Text | Google Scholar

Pathak, A., and Bansal, V. (2024). Ai as decision aid or delegated agent: the effects of trust dimensions on the adoption of AI digital agents. Comput. Hum. Behav. 2:100094. doi: 10.1016/j.chbah.2024.100094

Crossref Full Text | Google Scholar

Peng, Z., and Wan, Y. (2024). Human vs. AI: exploring students’ preferences between human and AI TA and the effect of social anxiety and problem complexity. Educ. Inf. Technol. 29, 1217–1246. doi: 10.1007/s10639-023-12374-4

Crossref Full Text | Google Scholar

Pham, H. C., Duong, C. D., and Nguyen, G. K. H. (2024). What drives tourists’ continuance intention to use ChatGPT for travel services? A stimulus-organism-response perspective. J. Retail. Consum. Serv. 78:103758. doi: 10.1016/j.jretconser.2024.103758

Crossref Full Text | Google Scholar

Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., and Podsakoff, N. P. (2003). Common method biases in behavioral research: a critical review of the literature and recommended remedies. J. Appl. Psychol. 88, 879–903. doi: 10.1037/0021-9010.88.5.879

PubMed Abstract | Crossref Full Text | Google Scholar

Poushneh, A., Vasquez-Parraga, A., and Gearhart, R. S. (2024). The effect of empathetic response and consumers’ narcissism in voice-based artificial intelligence. J. Retail. Consum. Serv. 79:103871. doi: 10.1016/j.jretconser.2024.103871

Crossref Full Text | Google Scholar

Priya, B., and Sharma, V. (2023). Exploring users' adoption intentions of intelligent virtual assistants in financial services: an anthropomorphic perspectives and socio-psychological perspectives. Comput. Human Behav. 148:107912. doi: 10.1016/j.chb.2023.107912

Crossref Full Text | Google Scholar

Ragin, C. C. (2006). Set relations in social research: evaluating their consistency and coverage. Polit. Anal. 14, 291–310. doi: 10.1093/pan/mpj019

Crossref Full Text | Google Scholar

Ritchie, J., and Spencer, L. (2002). “Qualitative data analysis for applied policy research” in Analyzing Qualitative Data. (London: Routledge), 173–194.

Google Scholar

Sanusi, I. T., Ayanwale, M. A., and Chiu, T. K. (2024). Investigating the moderating effects of social good and confidence on teachers' intention to prepare school students for artificial intelligence education. Educ. Inf. Technol. 29, 273–295. doi: 10.1007/s10639-023-12250-1

Crossref Full Text | Google Scholar

Shao, A. P., Lu, Z., Zhong, B., Liu, S. Q., and Lu, W. (2025). Human touch vs. AI tech: understanding user preferences in the future of education. Comput. Hum. Behav. 164:108492. doi: 10.1016/j.chb.2024.108492

Crossref Full Text | Google Scholar

Shao, Z., Zhang, J., Zhang, L., and Benitez, J. (2024). Uncovering post-adoption usage of AI-based voice assistants: a technology affordance lens using a mixed-methods approach. Eur. J. Inf. Syst. 34, 475–501. doi: 10.1080/0960085x.2024.2363322

PubMed Abstract | Crossref Full Text | Google Scholar

Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: implications for explainable AI. Int. J. Human Comput. Stud. 146:102551. doi: 10.1016/j.ijhcs.2020.102551

Crossref Full Text | Google Scholar

Song, M., Du, J., Xing, X., and Mou, J. (2022). Should the chatbot “save itself” or “be helped by others”? The influence of service recovery types on consumer perceptions of recovery satisfaction. Electron. Commer. Res. Appl. 55:101199. doi: 10.1016/j.elerap.2022.101199

Crossref Full Text | Google Scholar

Song, X. X., Gu, H. M., Li, Y. P., Leung, X. Y., and Ling, X. D. (2024). The influence of robot anthropomorphism and perceived intelligence on hotel guests' continuance usage intention. Inf. Technol. Tour. 26, 89–117. doi: 10.1007/s40558-023-00275-8

Crossref Full Text | Google Scholar

Sundaram, S., Schwarz, A., Jones, E., and Chin, W. W. (2007). Technology use on the front line: how information technology enhances individual performance. J. Acad. Mark. Sci. 35, 101–112. doi: 10.1007/s11747-006-0010-4

Crossref Full Text | Google Scholar

Theresiawati, Hidayanto, A. N., Seta, H. B., Widyatmoko, A. M., Muhammad, M. A., Said, M. F. A., et al. (2025). Analysis of factors influencing students’ adoption of generative AI as a programming learning resource. Interact. Learn. Environ., 1–18. doi: 10.1080/10494820.2025.2546630

Crossref Full Text | Google Scholar

Venkatesh, V., Brown, S. A., and Sullivan, Y. W. (2016). Guidelines for conducting mixed-methods research: an extension and illustration. J. Assoc. Inf. Syst. 17, 435–494. doi: 10.17705/1jais.00433

Crossref Full Text | Google Scholar

Walle, A. D., Demsash, A. W., Ferede, T. A., and Wubante, S. M. (2023). Healthcare professionals' satisfaction toward the use of district health information system and its associated factors in Southwest Ethiopia: using the information system success model. Front. Digit. Health 5:1140933. doi: 10.3389/fdgth.2023.1140933

PubMed Abstract | Crossref Full Text | Google Scholar

Wang, Q. R. (2025). EFL learners' motivation and acceptance of using large language models in English academic writing: an extension of the UTAUT model. Front. Psychol. 15:1514545. doi: 10.3389/fpsyg.2024.1514545

PubMed Abstract | Crossref Full Text | Google Scholar

Wang, Y. D., Jiang, Y. S., Liu, R. H., and Miao, M. (2024). A configurational analysis of the causes of the discontinuance behavior of augmented reality (AR) apps in e-commerce. Electron. Commer. Res. Appl. 63:101355. doi: 10.1016/j.elerap.2023.101355

Crossref Full Text | Google Scholar

Wang, S. W., and Scheinbaum, A. C. (2018). Enhancing brand credibility via celebrity endorsement: trustworthiness trumps attractiveness and expertise. J. Advert. Res. 58, 16–32. doi: 10.2501/JAR-2017-042

Crossref Full Text | Google Scholar

Wilson, R. T., and Baack, D. W. (2023). How the credibility of places affects the processing of advertising claims: a partial test of the B2B communication effects model. J. Bus. Res. 168:114238. doi: 10.1016/j.jbusres.2023.114238

Crossref Full Text | Google Scholar

Wong, I. A., Zhang, T., Lin, Z. C., and Peng, Q. (2023). Hotel AI service: are employees still needed? J. Hosp. Tour. Manag. 55, 416–424. doi: 10.1016/j.jhtm.2023.05.005

Crossref Full Text | Google Scholar

Wu, X., Zhou, Z., and Chen, S. (2024). A mixed-methods investigation of the factors affecting the use of facial recognition as a threatening AI application. Internet Res. 34, 1872–1897. doi: 10.1108/INTR-11-2022-0894

Crossref Full Text | Google Scholar

Xie, Y., Liang, C., Zhou, P., and Zhu, J. (2024). When should chatbots express humor? Exploring different influence mechanisms of humor on service satisfaction. Comput. Human Behav. 156:108238. doi: 10.1016/j.chb.2024.108238

Crossref Full Text | Google Scholar

Xie, Y. G., Zhu, K. Y., Zhou, P. Y., and Liang, C. Y. (2023). How does anthropomorphism improve human-AI interaction satisfaction: a dual-path model. Comput. Human Behav. 148:107878. doi: 10.1016/j.chb.2023.107878

Crossref Full Text | Google Scholar

Xu, H., Law, R., Lovett, J., Luo, J. M., and Liu, L. (2024). Tourist acceptance of ChatGPT in travel services: the mediating role of parasocial interaction. J. Travel Tour. Mark. 41, 955–972. doi: 10.1080/10548408.2024.2364336

Crossref Full Text | Google Scholar

Xu, J., Li, Y., Shadiev, R., and Li, C. X. (2025). College students' use behavior of generative AI and its influencing factors under the unified theory of acceptance and use of technology model. Educ. Inf. Technol. 30, 1–24. doi: 10.1007/s10639-025-13508-6

Crossref Full Text | Google Scholar

Xu, Y., Niu, N., and Zhao, Z. (2023). Dissecting the mixed effects of human-customer service chatbot interaction on customer satisfaction: an explanation from temporal and conversational cues. J. Retail. Consum. Serv. 74:103417. doi: 10.1016/j.jretconser.2023.103417

Crossref Full Text | Google Scholar

Yang, X. (2023). The effects of AI service quality and AI function-customer ability fit on customer's overall co-creation experience. Ind. Manag. Data Syst. 123, 1717–1735. doi: 10.1108/IMDS-08-2022-0500

Crossref Full Text | Google Scholar

Yuan, S., Li, F., Browning, M., Bardhan, M., Zhang, K. R., McAnirlin, O., et al. (2024). Leveraging and exercising caution with ChatGPT and other generative artificial intelligence tools in environmental psychology research. Front. Psychol. 15:1295275. doi: 10.3389/fpsyg.2024.1295275

PubMed Abstract | Crossref Full Text | Google Scholar

Yuan, C., Zhang, C., and Wang, S. (2022). Social anxiety as a moderator in consumer willingness to accept AI assistants based on utilitarian and hedonic values. J. Retail. Consum. Serv. 65:102878. doi: 10.1016/j.jretconser.2021.102878

Crossref Full Text | Google Scholar

Zhang, J., and Curley, S. P. (2018). Exploring explanation effects on consumers’ trust in online recommender agents. Int. J. Human Comput. Interact. 34, 421–432. doi: 10.1080/10447318.2017.1357904

Crossref Full Text | Google Scholar

Zhang, X., Liu, S., Chen, X., Wang, L., Gao, B. J., and Zhu, Q. (2018). Health information privacy concerns, antecedents, and information disclosure intention in online health communities. Inf. Manag. 55, 482–493. doi: 10.1016/j.im.2017.11.003

Crossref Full Text | Google Scholar

Zhao, L., Rahman, M. H., Yeoh, W., Wang, S., and Ooi, K.-B. (2024). Examining factors influencing university students’ adoption of generative artificial intelligence: a cross-country study. Stud. High. Educ., 1–23. doi: 10.1080/03075079.2024.2427786

Crossref Full Text | Google Scholar

Zhou, T., and Wu, X. (2024). Examining user migration intention from social Q&A communities to generative AI. Humanit. Soc. Sci. Commun. 11, 1–10. doi: 10.1057/s41599-024-03540-1

PubMed Abstract | Crossref Full Text | Google Scholar

Zhu, Y., Zhang, R., Zou, Y., and Jin, D. (2023). Investigating customers’ responses to artificial intelligence chatbots in online travel agencies: the moderating role of product familiarity. J. Hospit. Tour. Technol. 14, 208–224. doi: 10.1108/JHTT-02-2022-0041

Crossref Full Text | Google Scholar

Zou, B., Lyu, Q., Han, Y., Li, Z., and Zhang, W. (2025). Exploring students’ acceptance of an artificial intelligence speech evaluation program for EFL speaking practice: an application of the integrated model of technology acceptance. Comput. Assist. Lang. Learn. 38, 1366–1391. doi: 10.1080/09588221.2023.2278608

Crossref Full Text | Google Scholar

Keywords: generative artificial intelligence (GAI), online learning, infusion use, mixed-method, configurations

Citation: Xian L, Cao G and Zhang N (2025) My digital mentor: a mixed-methods study of user-GAI interactions. Front. Psychol. 16:1636480. doi: 10.3389/fpsyg.2025.1636480

Received: 18 June 2025; Accepted: 09 October 2025;
Published: 18 November 2025.

Edited by:

Daniel H. Robinson, The University of Texas at Arlington College of Education, United States

Reviewed by:

Gaojun Shi, Hangzhou Normal University, China
Yao Qin, Handan University, China
Saqar Moisan F. Alotaibi, Taibah University, Saudi Arabia

Copyright © 2025 Xian, Cao and Zhang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Guangqiu Cao, gqcao@xujc.com
