- 1School of Management, Xiamen University, Xiamen, China
- 2School of Management, Xiamen University Tan Kah Kee College, Zhangzhou, China
Introduction: Generative Artificial Intelligence (GAI) has emerged as a powerful tool in online learning, offering dynamic, high-quality, and user-friendly content. While previous studies have primarily focused on GAI’s short-term impacts, such as users’ acceptance and initial adoption, a notable gap exists in understanding long-term usage (i.e., infusion use) and its underlying psychological mechanisms.
Method and results: This study employs a two-stage mixed-methods approach to investigate users’ infusion use of GAI in online learning scenarios. In the first stage, semi-structured interviews (N = 26) were conducted to develop a systematic framework of influencing factors: intelligence, explainability, response time, integrability, accuracy, source credibility, personalization, and emotional support. The second stage empirically validated the research framework using survey data from 327 participants. We find that the eight factors influence users’ infusion use through two key psychological mediators: perceived value and satisfaction. We further applied fuzzy-set qualitative comparative analysis (fsQCA) to identify configurations of these factors. The configurations demonstrate that no single factor alone is sufficient; rather, it is the combination of multiple factors that fosters users’ infusion use.
Discussion: Our findings extend the theoretical literature on technology adoption to online learning contexts and provide practical implications for developing effective user-GAI interaction.
1 Introduction
Generative Artificial Intelligence (GAI) like ChatGPT simulates human cognitive processes through deep learning and extensive training datasets to generate innovative content (Yuan et al., 2024). In online learning, GAI serves diverse roles, such as virtual teacher, teaching assistant, and automated grader (Liang et al., 2023; Peng and Wan, 2024). It supports real-time Q&A, tracks learning progress, and facilitates speaking practice through interactive dialogue (Du and Lv, 2024; Shao et al., 2025). One example is Quizlet Q-Chat, which adapts to different students’ learning habits and helps them master key concepts through customized Q&A sessions.1
Unlike traditional offline and online classes, GAI overcomes temporal and spatial constraints, enabling learners to access knowledge anytime and anywhere. This 24/7 learning support benefits non-student groups, such as working professionals, by providing flexible access to knowledge and enabling them to acquire cutting-edge industry skills rapidly. GAI engages users through communication that closely mimics human-to-human interaction, deeply understands users’ needs and preferences, and provides them with a customized learning approach. The technology’s adaptability facilitates seamless integration into daily routines and encourages comprehensive utilization of its functional capabilities. However, existing literature lacks an exploration of the key factors driving users’ infusion use in online learning contexts.
Infusion use refers to users’ profound integration of information technology/systems (IT/S) into their daily learning processes to maximize technological potential (Chen et al., 2021; Hassandoust and Techatassanasoontorn, 2022). It represents the ultimate state of technology adoption, i.e., when the technology is fully embedded in the users’ daily lives (Jones et al., 2002). Existing research has extensively investigated users’ long-term usage behaviors, such as continuous usage and deep use. Continuous usage primarily concerns IT adoption and long-term usage decisions (Bhattacherjee, 2001), yet it does not fully capture the nature of post-adoption behavior (Hu et al., 2024). Deep use, on the other hand, examines the extent to which users leverage IT to achieve personal goals (Ogbanufe and Gerhart, 2020). However, these post-adoption behaviors represent efforts toward achieving the ultimate state of infusion use, which reflects the optimal alignment among users, IT, and tasks (Hu et al., 2024).
In this study, infusion use refers to users’ active, repetitive, and long-term in-depth use of GAI. This study focuses on infusion use for three reasons. First, unlike traditional AI, infusion use requires users to adopt an open attitude toward GAI, deeply understand its functions, and actively explore its potential, imposing higher demands on users (Hu et al., 2024). Therefore, it may be challenging for users to realize the full potential of GAI. Second, existing studies focus on short-term behaviors such as users’ acceptance (Wong et al., 2023; Li Y. et al., 2024), adoption (Chang and Park, 2024; Pathak and Bansal, 2024), and intention to use (Kim et al., 2021; Camilleri, 2024) of AI technologies, while neglecting users’ long-term behaviors (i.e., infusion use). In contrast to short-term use, infusion use emphasizes continuity and regularity, focusing on users’ ability to use GAI to support their learning, which is essential for creating superior business value (Chen et al., 2020; Chen et al., 2021). As a result, infusion use is considered a more promising pattern for technology adoption (Hu et al., 2024). Third, most studies have employed theoretical frameworks such as the technology acceptance model (TAM) (Zou et al., 2025; Kim et al., 2025), the unified theory of acceptance and use of technology (UTAUT) (Wang, 2025; Xu et al., 2025), the theory of planned behavior (TPB) (Al-Emran et al., 2024), and task-technology fit (TTF) (Du and Lv, 2024) to investigate users’ usage behavior of GAI. However, few scholars have systematically explored GAI’s long-term use (i.e., infusion use) from a comprehensive perspective. As an emerging technology, the key factors influencing users’ infusion use of GAI have not been sufficiently explored. Therefore, it is essential to analyze the key factors and frameworks that drive the widespread infusion use of GAI.
Based on the above analysis, this study develops the following research questions:
(1) Which factors influence users’ infusion use of GAI?
(2) What are the influencing mechanisms of these factors on infusion use?
(3) What are the configurational effects of these factors on infusion use?
To answer these questions, a two-stage methodology was employed. In Stage 1, the qualitative study, we collected data from 26 users through semi-structured interviews to develop a comprehensive theoretical framework of the factors influencing users’ infusion use of GAI in online learning contexts. In Stage 2, we conducted a quantitative study that empirically validated the research framework using survey data from 327 participants. Finally, fuzzy-set qualitative comparative analysis (fsQCA) was applied to all samples to validate the configurational effects.
This study makes unique contributions to the literature. First, we expand the theoretical understanding of GAI in online learning scenarios by developing and empirically validating a research framework that systematically explains how GAI’s characteristics influence infusion use. Second, we verify the critical mediating roles of perceived value and satisfaction in the behavioral formation process, thereby gaining insights into users’ psychological mechanisms. Finally, our findings offer actionable guidance for GAI educators, technology developers, and policymakers to enhance technology integration and maximize educational impact.
2 Theoretical background
2.1 SOR model
The Stimulus-Organism-Response (SOR) model was proposed by Mehrabian (1974) as a theoretical framework for exploring the relationship between external stimuli and organismic responses. This model posits that behavioral performance is not merely a stimulus–response paradigm; rather, it emerges through the organism’s cognitive processing, which elicits a specific response. Specifically, stimuli (S) are defined as the various external factors in the environment that influence the internal mental state or cognitive processes of the organism (O), ultimately leading to a specific behavioral response (R). These stimuli activate various internal processes, including cognition, affect, and evaluation, ultimately determining the organism’s behavioral response (Xie et al., 2023). The concept of the organism’s perception establishes a link between stimulus and response, thereby explaining the process by which an organism is stimulated and responds.
While established models like TAM effectively explain technology acceptance through perceptual factors like usefulness and ease of use (Al-Adwan et al., 2023; Zou et al., 2025), the SOR model offers a more comprehensive perspective. It captures not only external technological characteristics but also the crucial mediating role of users’ internal states—particularly their cognitive processing (Fu et al., 2025). This model provides a more nuanced understanding of how external environmental stimuli interact with individual cognition and evaluation to shape behavioral responses (Chen, 2023; Liu Y. F. et al., 2023). Consequently, the SOR model is increasingly applied within online learning, including e-learning platforms (Fu et al., 2025), AI teaching assistants (Peng and Wan, 2024), and mobile-assisted language learning (Lee and Xiong, 2023).
This range of applications demonstrates the SOR model’s validity for GAI online learning contexts. It not only aids in understanding how GAI features (stimuli) interact with users’ internal psychological processes (organism) to shape infusion use (response), but also provides actionable recommendations for optimizing the GAI online learning experience and enhancing users’ infusion use.
2.2 The mixed-methods approach design
Our study employed a mixed-methods approach to explore the infusion use of GAI in online learning scenarios, following established methodological guidelines (Venkatesh et al., 2016; Creswell and Creswell, 2018). The steps for an exploratory sequential design are as follows: First, the qualitative stage involved 26 semi-structured interviews to explore factors influencing users’ infusion use of GAI. We proposed our research model and hypotheses based on the results of stage 1. Then, stage 2 tested the hypotheses through an online survey. This study is conducted in two stages, as illustrated in Figure 1.
The mixed-methods approach is particularly appropriate for our study. First, this design offers advantages over single-method approaches by simultaneously addressing both confirmatory and explanatory research questions (Creswell and Creswell, 2018), which aligns well with the dual nature of our investigation. Second, the application of GAI in the online education context is relatively novel, making it difficult for existing theories to provide a thorough description and explanation of the issues (Venkatesh et al., 2016).
3 Stage 1: the qualitative study
3.1 Data collection
This study employed semi-structured interviews conducted through face-to-face interviews and online video conferences. The research group first screened participants with prior experience using GAI, who usually use GAI as their main learning tool and can express their opinions based on their experiences. Participants were recruited using purposive sampling techniques to ensure their suitability for the study topic. All participants were requested to indicate their willingness to participate in semi-structured interviews. Among them, 13 were female (50%). The average age of participants was 29.77 (SD = 8.75). Most participants were young and middle-aged individuals under 35, aligning with QuestMobile’s finding that AI users are predominantly concentrated in this age group.2
Before the interviews, the researcher established several guidelines with participants, including encouraging open sharing, preventing interruptions, and ensuring the anonymity of all information, to foster an open and secure communication environment. During the interviews, the researcher initially collected basic demographic information and explained GAI. Participants were then asked to describe specific examples of using GAI for learning. The interviews focused on the role of GAI in the participants’ learning, their most memorable experiences, challenges encountered in using GAI, and the subsequent impacts. The detailed interview protocol is shown in Appendix A.1. Each interview lasted 20–30 min, with all participants agreeing to audio recording. After the interviews, the audio recordings were transcribed verbatim into textual material for subsequent qualitative analysis. Data were collected between December 2024 and January 2025. Interviews ceased upon reaching data saturation, defined as the point when no significant new information emerged from the data (Shao et al., 2024). Demographic details of the participants are presented in Table 1.
3.2 Data analysis
First, the semi-structured interview transcripts were pre-processed to remove content unrelated to GAI in online learning scenarios. Second, semantically ambiguous or irrelevant content was eliminated. Next, responses reflecting interviewees’ misinterpretation of the questions were excluded. Finally, the transcripts were coded and labeled line-by-line according to the logical sequence of the interview content. Following Liu Y. L. et al. (2023), we adopted thematic analysis combining “top-down” framework analysis (Ritchie and Spencer, 2002) and “bottom-up” grounded theory (Glaser and Strauss, 2017). We analyzed the interview data by coding, theming, decontextualizing, and recontextualizing.
This study utilized NVivo 11.0 software to conduct a thorough analysis of the interview data, examining the content word-by-word, sentence-by-sentence, and paragraph-by-paragraph. The researchers were instructed to use original phrases from the interview transcripts for labeling during the coding process. Two graduate students subsequently organized the data based on the initial nodes. Throughout this process, the original information was continuously compared and revised, with all meaningful themes and concepts being precisely extracted.
To minimize researcher bias in the coding process, two graduate students independently coded the data through a back-to-back approach. Both coders were native Chinese speakers and familiar with GAI. Before starting the coding process, a centralized meeting was conducted to align the coders on the procedures and clarify relevant concepts and theories. After each coding round, the results from both coders were compared, the same initial concepts were merged, and the coding conflicts were discussed with experts. Finally, the concepts were summarized and reorganized. After repeated discussions, only concepts agreed upon by both coders were retained. The complete coding process is presented in Appendix A.2.
3.3 Findings from interviews
Through semi-structured interviews, we identified several factors influencing the infusion use of GAI, such as intelligence, explainability, response time, integrability, accuracy, source credibility, personalization, emotional support, perceived value, and satisfaction. Building upon these exploratory findings, we subsequently designed a quantitative study to empirically validate the hypothesized relationships between these variables. Analysis of interview text reveals that users primarily employ ChatGPT, Doubao, ERNIE Bot, Deepseek, Gemini, Kimi, and other GAI platforms, reflecting the diversity of GAI tools within online learning contexts.
4 Stage 2: the quantitative study
4.1 Development of hypotheses
Intelligence reflects GAI’s capability for environmental perception, adaptive learning, problem-solving, and goal attainment. This capacity for continuous evolution through feedback leads users to recognize it as genuinely intelligent (Moussawi et al., 2023). As one of the most critical factors of AI technology (Bartneck et al., 2009), intelligence fundamentally relies on natural language processing technologies that enable AI to simulate human cognitive processes in language comprehension and production (McLean et al., 2021). This is echoed by our qualitative investigation; for example, interviewee (P6) mentioned: “Because I think ChatGPT records all my previous conversations, I think its answer will be more professional and more in line with my heart.” GAI exemplifies this intelligence through its extensive knowledge repository and professional response capabilities, delivering not merely accurate and compelling answers but also formulating solutions that satisfy users through concise and coherent language outputs (Priya and Sharma, 2023). During interactive processes, GAI exhibits a high level of attentiveness to users’ needs while employing diverse response strategies, which significantly enhances users’ experience and fosters positive attitudes toward the technology.
Priya and Sharma (2023) pointed out that intelligence manifests in three critical dimensions of information generation: effectiveness, efficiency, and reliability. These dimensions not only constitute the fundamental drivers of GAI advancement but also serve as critical factors shaping users’ perceptions. Several studies have demonstrated a direct relationship between GAI’s intelligent performance and its functional capabilities (Maroufkhani et al., 2022; Priya and Sharma, 2023). This relationship improves users’ perceived value and satisfaction (Song et al., 2022; Lin and Lee, 2024; Song et al., 2024).
In the context of online learning, GAI’s intelligence capacity enables accurate comprehension of users’ inquiries and provision of elaborated responses, thereby enhancing users’ learning experience. Therefore, we hypothesize that:
H1a: Intelligence positively influences users’ perceived value.
H1b: Intelligence positively influences users’ satisfaction.
Explainable AI (XAI) has been defined as systems designed to provide transparent decision processes and clear explanations, enabling users to understand system capabilities and limitations (Dwivedi et al., 2023). Research has demonstrated that this explainability feature enhances users’ trust and acceptance of recommendation algorithms while simultaneously enabling effective knowledge transfer, thereby increasing the adoptability of AI-generated suggestions (Zhang and Curley, 2018). Fundamentally, explainability refers to an AI system’s capacity to articulate its decision logic in a user-comprehensible format. This capability primarily aims to eliminate the “black box” nature of the AI decision-making process, consequently strengthening users’ confidence in the system (Shin, 2021). Within human-GAI interaction contexts, explainability facilitates rational evaluation of algorithmic outputs by providing decision-making rationales. This is echoed by our qualitative investigation; for example, interviewee (P23) mentioned: “AI can give more detailed content. This thought process is more in line with my idea of recognizing it and learning from it.” This mechanism significantly shapes users’ attitudes toward GAI (Cheung and Ho, 2025). Furthermore, by bridging users’ perceptions and GAI’s operations, explainability not only deepens understanding of specific responses but also elevates the overall interaction quality. As users progressively acquire more comprehensive explanations of the system through successive cycles of inquiry, their perceived value and satisfaction increase. Therefore, we hypothesize that:
H2a: Explainability positively influences users’ perceived value.
H2b: Explainability positively influences users’ satisfaction.
Response time, as a key indicator of AI’s service efficacy, reflects the timeliness of the system in processing users’ requests and providing feedback (DeLone and McLean, 2003; Liu Y. L. et al., 2023). AI, powered by sophisticated machine learning algorithms and natural language processing capabilities, can analyze vast datasets and generate precise responses within milliseconds. This superior responsiveness significantly improves the efficiency of human-computer interaction (Neiroukh et al., 2024). This is echoed by our qualitative investigation; for example, interviewee (P14) mentioned: “If you use MOOC or bilibili, or some other large and well-known platforms, one thing they have in common is that you need to watch videos, which may require some investment in your time cost. But instead, using the generative AI software, it can give you a result in a few seconds. I really like it!” Research has shown that prolonged response times not only reduce task efficiency but may also convey negative impressions about the system’s predictive capabilities, as users prevailingly assume that most prediction tasks should be inherently simple for AI systems (Efendić et al., 2020). Liu Y. L. et al. (2023) also pointed out that response time is one of the determining factors affecting users’ perceived value and satisfaction with an AI service. GAI’s timely response impacts the interaction quality, which subsequently strengthens users’ trust in the GAI (Pham et al., 2024). Especially in online learning scenarios, faster response speed not only sustains users’ engagement and concentration during GAI interactions but also fosters more positive attitudes toward the technology. Therefore, we hypothesize that:
H3a: Response time positively influences users’ perceived value.
H3b: Response time positively influences users’ satisfaction.
Integrability means that GAI can effectively combine information from different sources to respond to users’ problems (Chen et al., 2025). It depends on the task and contextual environment, which also reflects task-related properties (Nelson et al., 2005). Chen et al. (2025) argued that highly interdependent, complex tasks depend more on an integrated system’s outputs than task-independent ones. GAI can be optimally adapted not only based on existing databases but also by combining historical user interaction data in the content generation process (Chen et al., 2025). Existing studies have shown that such integration optimizes knowledge acquisition pathways, thereby enhancing learning effectiveness and enabling learners to achieve superior outcomes (Korayim et al., 2025). Furthermore, integrability facilitates rapid adaptation to environmental changes and effective utilization of pivotal opportunities (Ding, 2021). In online learning contexts, GAI generates systematic and related knowledge according to users’ needs, and this powerful integration capability enables users to respond flexibly to problems, thus fostering a more favorable attitude toward the technology. Interviewee (P14) mentioned: “GAI will organize these words into more complete results and present to me, so it may be more convenient in the ability to integrate information. I think it is better than before we searched the web page.” Therefore, we hypothesize that:
H4a: Integrability positively influences users’ perceived value.
H4b: Integrability positively influences users’ satisfaction.
Accuracy refers to the extent to which the system provides up-to-date information relevant to users’ intended goals (Chung et al., 2020). It is one of the important foundations for users to use smart service products (Cheng and Jiang, 2022). This viewpoint is echoed by P9, who mentioned: “There are some official data or real-time information, and I do not 100% believe the answers it gives me.” During the interaction process, users often need to devote considerable cognitive effort to evaluating the precision and relevance of GAI-generated content (Chen et al., 2023c). When users confirm that AI recommendations sufficiently address their requirements, this validation triggers a “cognitive resonance” phenomenon (the perception that the system genuinely comprehends their underlying needs), thereby substantially increasing recommendation acceptance (Li et al., 2021). Accuracy not only makes users feel that their needs are fully valued but, more importantly, provides effective solutions (Yuan et al., 2022). This positive experience strengthens users’ recognition of AI technology capabilities (Gursoy et al., 2019) and plays an important role in users’ satisfaction (Walle et al., 2023). In the context of online learning, accuracy is of equal importance: GAI should ensure that its solutions and methods are correct and feasible to establish perceived value and foster positive user attitudes (Chen et al., 2023b). Therefore, we hypothesize that:
H5a: Accuracy positively influences users’ perceived value.
H5b: Accuracy positively influences users’ satisfaction.
Source credibility is defined as individuals’ perception of a source as trustworthy and expert (Hovland and Weiss, 1951). Camilleri (2024) stated that users often rely on pre-existing perceptions about information sources rather than objectively assessing content quality. It is a prerequisite for users to assess information’s usefulness (Camilleri and Filieri, 2023). Compared to non-expert sources, information disseminated by experts is usually perceived as more reliable and credible (Ismagilova et al., 2020). Users are more inclined to accept the advice and knowledge provided by professional and authoritative sources (Hovland and Weiss, 1951; Wang and Scheinbaum, 2018). In the context of online learning, users expect source reliability from GAI. This viewpoint is echoed by P21, who mentioned: “If the results of GAI are different from those I searched in Baidu, I may not be able to judge which information is true.” Only when they confirm that information originates from credible and authoritative sources do they feel confident acquiring knowledge through GAI interactions. This credibility reinforces users’ perception of information utility (Camilleri and Kozak, 2022) and contributes to positive attitudes toward GAI. Therefore, we hypothesize that:
H6a: Source credibility positively influences users’ perceived value.
H6b: Source credibility positively influences users’ satisfaction.
Personalization provides customized services to users based on their needs, preferences, and intent (Ameen et al., 2021). In AI-based services, personalization has been defined as the capability to provide specialized services based on a user’s personal information and contextual usage (Liu and Tao, 2022; Kim and Hur, 2024). Highly personalized AI not only formulates precise inquiries to identify individual needs but also simulates users’ decision-making processes (Kim and Hur, 2024), thereby generating customized content that better meets users’ expectations. Pappas et al. (2017) demonstrated that users are more inclined toward highly relevant information. Aw et al. (2024) further noted that higher levels of personalization in mobile AR shopping apps can create a stronger sense of realism and coherence between virtual and real dimensions, leading to a significant increase in users’ immersion. In online learning scenarios, when GAI provides personalized answers to users’ specific questions, it optimizes satisfaction (Li and Zhang, 2023) and effectively enhances users’ perceived value. Such personalized learning support better matches the educational content with the learner’s knowledge level and cognitive style, thus creating a more efficient and enjoyable learning experience (Baillifard et al., 2025). Interviewee (P6) mentioned: “Because many of my questions will be very professional, I would prefer to have such a personalized and professional AI.” Therefore, we hypothesize that:
H7a: Personalization positively influences users’ perceived value.
H7b: Personalization positively influences users’ satisfaction.
Emotional support, as a crucial dimension of social support, fulfills individuals’ psychological needs by conveying empathy, emotional validation, and encouragement (Meng and Dai, 2021). Existing research suggests that human-provided emotional support effectively enhances individuals’ sense of role meaning, thereby increasing well-being (Pai, 2023), significantly improving service evaluations (Menon and Dubé, 2007), and effectively alleviating psychological stress (Meng and Dai, 2021). GAI can simulate human emotional communication and is gradually taking on the role of an emotional supporter (Gelbrich et al., 2021). This viewpoint is echoed by interviewee P16, who mentioned: “I think GAI is completely different from human’s feedback. Maybe humans will tell you that the employment environment is not very good for finding a job, and the situation is not very optimistic now. But GAI is relatively neutral. It may just give me some vitality.” Lee et al. (2022) pointed out that AI chatbots equipped with emotional intelligence dialogue systems can provide emotional understanding and encouragement to users. This interaction facilitates deeper communication and builds emotional connections. In our interviews, participants also reported that when they experienced academic stress and confided in GAI, they received both emotional consolation and personalized academic guidance. In online learning scenarios, GAI’s emotional support creates a friend-like interactive experience, and this humanized interaction significantly enhances users’ perceived value and satisfaction. Therefore, we hypothesize that:
H8a: Emotional support positively influences users’ perceived value.
H8b: Emotional support positively influences users’ satisfaction.
Perceived value represents users’ overall evaluation of a product or service’s usefulness (Chen et al., 2023b). Within human-GAI interaction contexts, this construct reflects users’ assessment of GAI’s functionality, such as its 24/7 availability, problem-solving efficiency, and content generation (Carvalho and Ivanov, 2024; Xu et al., 2024). Lee et al. (2007) stated that perceived value is more significant than service quality in increasing satisfaction. Perceived value also differs across scenarios, such as technology, service delivery, and tangible commitments (De Kervenoael et al., 2020). In AI service applications, robots can effectively balance temporal efficiency, economic considerations, and user experience, which directly influences users’ perceived value (De Kervenoael et al., 2020). Interviewee (P10) mentioned: “I may still need to spend time identifying what is right and wrong. But it will take less time than Baidu. This effectively offsets the time spent identifying the answer and is important for my experience of using it afterwards.” Infusion use is an active, repetitive, and long-term deep-use behavior toward GAI (Hu et al., 2024), representing more sustained engagement than continuous use. In research on new technologies, perceived value is a critical factor influencing users’ long-term use behavior (Lavado-Nalvaiz et al., 2022; Maroufkhani et al., 2022). Therefore, we hypothesize that:
H9: Perceived value positively influences users’ infusion use.
H11a: Perceived value mediates the relationship between intelligence and infusion use.
H11b: Perceived value mediates the relationship between explainability and infusion use.
H11c: Perceived value mediates the relationship between response time and infusion use.
H11d: Perceived value mediates the relationship between integrability and infusion use.
H11e: Perceived value mediates the relationship between accuracy and infusion use.
H11f: Perceived value mediates the relationship between source credibility and infusion use.
H11g: Perceived value mediates the relationship between personalization and infusion use.
H11h: Perceived value mediates the relationship between emotional support and infusion use.
Satisfaction is an indicator of service quality assessment, quantifying the gap between users’ actual service experience and their expectations (Xie et al., 2023). This concept encompasses not only the immediate pleasure of use but also a comprehensive assessment process that compares users’ past experiences with their current expectations (Poushneh et al., 2024). When using AI services, satisfaction is one of the main factors that shape users’ subsequent behavior (Jiang et al., 2022; Chen et al., 2023b; Xie et al., 2024). This viewpoint is echoed by interviewee P25, who mentioned: “GAI, like Doubao, is helpful, because it will make me more and more fluent, and then the expression will become more and more natural, and the response will get better and better in all aspects. That’s why I’ve been sticking with it for oral speaking.” Specifically, when GAI provides information that aligns with user requirements, this positive experience fosters favorable beliefs, ultimately promoting sustained behavior (i.e., infusion use) (Ku and Chen, 2024). Therefore, we hypothesize that:
H10: Satisfaction positively influences users’ infusion use.
H12a: Satisfaction mediates the relationship between intelligence and infusion use.
H12b: Satisfaction mediates the relationship between explainability and infusion use.
H12c: Satisfaction mediates the relationship between response time and infusion use.
H12d: Satisfaction mediates the relationship between integrability and infusion use.
H12e: Satisfaction mediates the relationship between accuracy and infusion use.
H12f: Satisfaction mediates the relationship between source credibility and infusion use.
H12g: Satisfaction mediates the relationship between personalization and infusion use.
H12h: Satisfaction mediates the relationship between emotional support and infusion use.
Figure 2 describes the theoretical model.
4.2 Questionnaire design
This study collected data through a questionnaire comprising three main sections. The first section outlines the research purpose, defines GAI, and presents two examples of its application in online learning scenarios. The second section measures the eight antecedent factors, two mediators, and the outcome variable in the theoretical model. The third section captures participants’ demographic information, including gender, age, education level, profession, and the frequency and years of GAI usage for learning.
To ensure the reliability and validity of the questionnaire, the measurement items in this study were adapted from established scales in the literature, with appropriate modifications for the context of GAI in online learning scenarios. Intelligence, response time, and explainability use the scales developed by Mehmood et al. (2024), Darban (2024), and Liu Y. L. et al. (2023), respectively. The scales of integrability, accuracy, and source credibility mainly refer to the studies of Chen et al. (2025), Zhou and Wu (2024), Yuan et al. (2022), and Wilson and Baack (2023). Personalization and emotional support are mainly based on the scales developed by Chen et al. (2023a), Zhu et al. (2023), and Zhang et al. (2018). Perceived value is adapted from De Kervenoael et al. (2020) and Chen et al. (2023b). Satisfaction is based on Xu et al. (2023). Finally, infusion use is adapted from Hu et al. (2024). All items were measured on a seven-point Likert scale (1 = strongly disagree, 7 = strongly agree). Detailed measurement items are provided in Appendix B Table B1.
Before the formal survey, this study conducted a pilot test involving fifty participants and consulted experts to refine the wording and structure of the questionnaire items based on the participants’ feedback. The pilot test results indicated that the scale demonstrated strong reliability and validity.
4.3 Data collection
Questionnaire data for this study were collected via Sojump,3 a widely used online survey platform in China with 260 million registered users, similar to Amazon MTurk (Wu et al., 2024). This platform has also been widely used in previous related studies (Ding et al., 2023; Del Ponte et al., 2024; Javed et al., 2024). Before the questionnaire, participants were provided with a brief introduction to GAI and online learning, along with two screenshots demonstrating the use of GAI in online learning contexts. After completing the questionnaire, each participant received 3 RMB (about US$0.41) as a reward. The questionnaire was distributed and collected in February 2025, yielding 386 responses.
We obtained data through random sampling, but to ensure data quality, this study implemented three screening criteria. First, participants were required to have prior experience using GAI for learning purposes; the question “Have you ever used GAI to assist in learning?” was used to exclude participants without such experience (N = 17). Second, two attention tests were conducted to exclude samples that failed to answer correctly (N = 23). Additionally, reverse-coded questions were included for the third item of explainability and the fourth item of accuracy to drop samples with inconsistent responses (N = 19). In sum, 327 participants were included in the data analysis. We verified the adequacy of this sample size in two ways. First, following Chin’s (1998) guideline for PLS-SEM, we ensured the sample size exceeded both (1) 10 times the number of items in the largest construct and (2) 10 times the number of independent variables. Second, we used G*Power 3.1.9 to calculate the required sample size; with a significance level (α) of 0.05, statistical power (1 − β) of 0.95, and an effect size (f²) of 0.15, the minimum sample size was 160. Our sample satisfies these requirements. Among the participants, 160 were female (48.93%), with the majority aged 19–24 (41.59%). The data on years of usage indicated that users with 1–2 years of experience constitute the majority (51.07%). The demographics of the final sample are presented in Table 2.
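For readers who wish to reproduce the power analysis outside G*Power, the sketch below approximates the a-priori calculation for the overall F-test of a multiple regression using scipy. It is an illustration only: the choice of eight predictors (the eight antecedent factors) is our assumption, so the computed minimum only approximates the 160 reported above.

```python
# Approximate G*Power's "linear multiple regression: R^2 deviation from zero" a-priori test.
from scipy.stats import f as f_dist, ncf

alpha, target_power, f2 = 0.05, 0.95, 0.15
n_predictors = 8  # assumption: the eight antecedent factors

def power_for_n(n):
    """Power of the overall F-test at sample size n."""
    u = n_predictors           # numerator degrees of freedom
    v = n - n_predictors - 1   # denominator degrees of freedom
    if v <= 0:
        return 0.0
    lam = f2 * n               # noncentrality parameter (f^2 * N)
    f_crit = f_dist.ppf(1 - alpha, u, v)
    return 1 - ncf.cdf(f_crit, u, v, lam)

n = n_predictors + 2
while power_for_n(n) < target_power:
    n += 1
print(f"minimum N = {n} (power = {power_for_n(n):.3f})")
```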
4.4 Data analysis
We used Partial Least Squares Structural Equation Modeling (PLS-SEM) to test our theoretical model. PLS-SEM has no strict requirements on sample size and has strong predictive and explanatory ability (Hair et al., 2011), making it suitable for exploratory theory building. Because this study builds a model of the influence mechanism underlying GAI’s infusion use in online learning scenarios and is exploratory in nature, the PLS-SEM method is appropriate. The data were analyzed using SmartPLS 4.0, following the two-step approach of examining the measurement model and then the structural model.
4.4.1 Measurement model
To ensure the reliability and validity of the questionnaire, we assessed its convergent and discriminant validity and reliability (MacKenzie et al., 2011), with the results detailed in Table 3. Specifically, reliability was assessed by Cronbach’s Alpha and Composite reliability (CR) for all variables. The results indicate that the Cronbach’s Alpha values of all variables range from 0.820 to 0.892, and the CR values range from 0.893 to 0.921, with all coefficients exceeding the threshold of 0.7, which indicates that the questionnaire has strong internal consistency. Additionally, the average variance extracted (AVE) values for each construct ranged between 0.675 and 0.752, all exceeding 0.5, providing evidence of convergent validity.
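The reliability and validity indices in Table 3 follow standard formulas. As an illustration only (the loadings below are hypothetical, not values from our data), the sketch shows how Cronbach’s alpha, composite reliability, and AVE are conventionally computed from item scores and standardized outer loadings.

```python
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items array for one construct."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings):
    """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    loadings = np.asarray(loadings, dtype=float)
    num = loadings.sum() ** 2
    return num / (num + (1 - loadings ** 2).sum())

def ave(loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    loadings = np.asarray(loadings, dtype=float)
    return float((loadings ** 2).mean())

# Hypothetical standardized outer loadings for a four-item construct
lam = [0.82, 0.85, 0.88, 0.84]
print(f"CR = {composite_reliability(lam):.3f}, AVE = {ave(lam):.3f}")
```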
Discriminant validity was assessed using the Fornell-Larcker criterion and Heterotrait-monotrait ratio (HTMT). Table 4 shows that the square root of the AVE values of all constructs was higher than the inter-construct correlations (MacKenzie et al., 2011), demonstrating good discriminant validity. Furthermore, the HTMT values among the constructs in Table 5 are below the critical value of 0.85.
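Likewise, the HTMT values in Table 5 can be derived directly from the item correlation matrix. The sketch below is a generic illustration (construct and item names are placeholders), not SmartPLS’s implementation; the Fornell-Larcker check simply compares each construct’s square root of AVE with its correlations to the other constructs.

```python
import numpy as np
import pandas as pd
from itertools import combinations

def htmt(data, blocks):
    """Heterotrait-monotrait ratio of correlations for every pair of constructs.
    data: respondents x items DataFrame; blocks: {construct: [its item columns]}, each with >= 2 items."""
    corr = data.corr().abs()
    names = list(blocks)
    out = pd.DataFrame(np.nan, index=names, columns=names)
    for a, b in combinations(names, 2):
        hetero = corr.loc[blocks[a], blocks[b]].values.mean()  # heterotrait-heteromethod correlations
        mono_a = corr.loc[blocks[a], blocks[a]].values[np.triu_indices(len(blocks[a]), 1)].mean()
        mono_b = corr.loc[blocks[b], blocks[b]].values[np.triu_indices(len(blocks[b]), 1)].mean()
        out.loc[a, b] = hetero / np.sqrt(mono_a * mono_b)  # flag values above ~0.85
    return out

# Usage with hypothetical column names:
# htmt(df, {"PV": ["PV1", "PV2", "PV3"], "SAT": ["SAT1", "SAT2", "SAT3"]})
```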
4.4.2 Common method bias
Since the sample data were collected from a single source, with participants answering all questions at one time, common method bias (CMB) may be a concern. To mitigate the potential impact of CMB, this study adopted several control measures based on established studies: (1) informing participants that the survey was anonymous and that there were no right or wrong answers; (2) incorporating reverse items and attention-test questions; and (3) balancing the order of questionnaire items. Additionally, we used Harman’s single-factor test for CMB. The results of a principal component analysis indicated that a single factor explains 38.50% of the variance in the data, which is below the recommended threshold of 40% (Podsakoff et al., 2003). Furthermore, this study employed the method proposed by Liang et al. (2007), and the results are presented in Appendix B Table B2. The average substantively explained factor loading (0.746) was larger than the average method factor loading (0.002), yielding a ratio of 493:1. Both tests imply that CMB is not a serious concern.
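A minimal sketch of Harman’s single-factor test is shown below: an unrotated principal component analysis on all items, reporting the variance captured by the first component. This is a generic illustration of the procedure, not the exact software routine used in this study.

```python
import numpy as np
from sklearn.decomposition import PCA

def harman_single_factor(items):
    """Share of total variance captured by the first unrotated principal component.
    items: respondents x all-questionnaire-items array."""
    items = np.asarray(items, dtype=float)
    z = (items - items.mean(axis=0)) / items.std(axis=0, ddof=1)  # standardize items
    return float(PCA(n_components=1).fit(z).explained_variance_ratio_[0])

# A first-factor share below roughly 0.40-0.50 is commonly read as evidence
# that common method bias is unlikely to be a serious concern.
```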
4.4.3 Structural model
To assess the structural model, this study evaluated the variance explained (R²), effect size (f²), and Stone-Geisser’s predictive relevance (Q²) of the variables. The model explained a substantial portion of the variance, with coefficients of determination (R²) of 0.648 for perceived value, 0.661 for satisfaction, and 0.576 for infusion use as the dependent variable, indicating a good level of predictive power (Hair et al., 2019). Effect size analysis (f²) showed all values ranging from 0.016 to 0.218, indicating low to medium impacts across constructs (Chin, 1998). Additionally, all Q² values exceeded the threshold of zero, confirming the model’s predictive relevance for all endogenous variables (Hair et al., 2019).
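For clarity, the f² effect size compares the explained variance of an endogenous construct with and without a given predictor: f² = (R²_included − R²_excluded) / (1 − R²_included). The sketch below illustrates the computation; the excluded-model R² is a hypothetical value, not one of our results.

```python
def f_squared(r2_included: float, r2_excluded: float) -> float:
    """Cohen's f^2: change in explained variance when one predictor is dropped."""
    return (r2_included - r2_excluded) / (1 - r2_included)

# Hypothetical example: R^2 of infusion use with all predictors (0.576, as reported)
# versus an assumed 0.50 without perceived value.
print(round(f_squared(0.576, 0.50), 3))  # ~0.18, a medium effect by Chin's (1998) benchmarks
```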
A bootstrapping procedure was used to assess the significance of the path coefficients, along with their standard errors and t-statistics (Table 6). (1) Perceived value. The results of the path analysis suggested that perceived value was positively influenced by intelligence (β = 0.084, p < 0.05), explainability (β = 0.162, p < 0.001), response time (β = 0.101, p < 0.05), integrability (β = 0.111, p < 0.05), accuracy (β = 0.106, p < 0.01), source credibility (β = 0.172, p < 0.01), personalization (β = 0.129, p < 0.05), and emotional support (β = 0.285, p < 0.001). These results supported H1a, H2a, H3a, H4a, H5a, H6a, H7a, and H8a. (2) Satisfaction. The results of the path analysis suggested that satisfaction was positively associated with intelligence (β = 0.111, p < 0.01), explainability (β = 0.167, p < 0.001), response time (β = 0.161, p < 0.001), integrability (β = 0.134, p < 0.01), accuracy (β = 0.165, p < 0.01), source credibility (β = 0.162, p < 0.01), personalization (β = 0.111, p < 0.05), and emotional support (β = 0.188, p < 0.001). These results supported H1b, H2b, H3b, H4b, H5b, H6b, H7b, and H8b.
Perceived value was found to be positively associated with infusion use (β = 0.356, p < 0.001), thus supporting H9. Satisfaction also positively influenced infusion use (β = 0.456, p < 0.001), supporting H10. As shown in Figure 3, the hypotheses proposed in this study are supported.
Furthermore, we conducted mediation tests on the effects of perceived value and satisfaction. Specifically, we utilized the bootstrapping method with 5,000 repetitions to construct confidence intervals (CIs) (Edwards and Lambert, 2007; Hayes, 2009). Table 7 presents the bootstrapping results along with the corresponding 95% CIs. It shows that perceived value partially or fully mediates the relationship between intelligence, explainability, response time, integrability, accuracy, source credibility, personalization, emotional support, and infusion use. These results supported H11a, H11b, H11c, H11d, H11e, H11f, H11g, H11h. At the same time, satisfaction partially or fully mediates the relationship between intelligence, explainability, response time, integrability, accuracy, source credibility, personalization, emotional support, and infusion use. These results supported H12a, H12b, H12c, H12d, H12e, H12f, H12g, H12h.
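The logic of the percentile bootstrap for indirect effects can be illustrated with a simplified, OLS-based sketch (it is not the SmartPLS inner-model estimation): resample respondents with replacement, re-estimate the a-path (factor to mediator) and the b-path (mediator to infusion use, controlling for the factor), and take percentiles of the resulting a*b products. Variable names are placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_indirect_ci(x, m, y, n_boot=5000, alpha=0.05):
    """Percentile bootstrap CI for the indirect effect a*b of x -> m -> y.
    x, m, y: 1-D arrays of equal length (e.g., an antecedent, a mediator, infusion use scores)."""
    x, m, y = map(np.asarray, (x, m, y))
    n = len(x)
    indirect = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                   # resample respondents with replacement
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]                  # a-path: x -> m
        X = np.column_stack([np.ones(n), mb, xb])
        b = np.linalg.lstsq(X, yb, rcond=None)[0][1]  # b-path: m -> y, controlling for x
        indirect[i] = a * b
    lo, hi = np.percentile(indirect, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi  # the indirect effect is deemed significant if the CI excludes zero
```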
4.4.4 Fuzzy-set qualitative comparative analysis (fsQCA)
The theoretical foundation of SEM is based on the principle of correlational causation. This implies that variations in independent variables systematically influence dependent variable values. However, this assumption’s reliability may be limited due to the fundamentally asymmetric nature of most real-world relationships (Chakraborty et al., 2024). fsQCA tackles these concerns and integrates the strengths of both qualitative and quantitative research methods, accommodating sample sizes ranging from very small to very large (Lin et al., 2024). It is particularly well-suited for examining complex relationships among multiple factors in social phenomena. This study employs fsQCA for two primary reasons. First, it can better explore the causal complexity (Wang et al., 2024). Given that GAI infusion use is driven by multiple factors, traditional quantitative methods, which often isolate the individual effects of each factor, are less suitable (Hu et al., 2024). Second, fsQCA is based on Boolean algebra rather than regression analysis, enabling any situation to be described as a combination of causal conditions and their outcomes. Through logical and non-statistical procedures, fsQCA can establish logical links between combinations of causal conditions and outcomes (Lin et al., 2024), thereby providing deeper insights into the mechanisms that shape users’ infusion use. Figure 4 shows the configuration of antecedents and outcome conditions.
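As a generic illustration of the fsQCA workflow (not the authors’ exact calibration choices), the sketch below shows the direct method of calibration, which maps 7-point Likert scores onto fuzzy-set memberships via three anchors, together with the standard sufficiency consistency and coverage measures; the anchor values are illustrative assumptions.

```python
import numpy as np

def calibrate(x, full_non, crossover, full_in):
    """Direct method of calibration: map raw scores onto fuzzy memberships in [0, 1]
    using a logistic function anchored at full non-membership, crossover, and full membership."""
    x = np.asarray(x, dtype=float)
    scale_hi = 3.0 / (full_in - crossover)   # log-odds of ~0.95 membership at the full-in anchor
    scale_lo = 3.0 / (crossover - full_non)  # log-odds of ~0.05 membership at the full-out anchor
    log_odds = np.where(x >= crossover, (x - crossover) * scale_hi, (x - crossover) * scale_lo)
    return 1.0 / (1.0 + np.exp(-log_odds))

def consistency(condition, outcome):
    """Consistency of 'condition is sufficient for outcome' (set-theoretic measure)."""
    return np.minimum(condition, outcome).sum() / condition.sum()

def coverage(condition, outcome):
    """Coverage: the share of the outcome that the condition accounts for."""
    return np.minimum(condition, outcome).sum() / outcome.sum()

# Illustrative anchors for 7-point Likert scores: 2 (full non-membership),
# 4 (crossover), 6 (full membership).
print(calibrate([1, 3, 4, 5, 7], 2, 4, 6))
```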
Figure 4. Configurational model of users’ infusion use of GAI.
In Figure 4, large filled circles represent the presence of core conditions, while small filled circles indicate the presence of peripheral conditions. Conversely, large crossed-out circles denote the absence of core conditions, and small crossed-out circles signify the absence of peripheral conditions. A blank cell indicates that the condition may be either present or absent. As shown in the table, six configurations explain users’ infusion use, with an overall solution consistency of 0.655 and a coverage of 0.957. Both indicators exceed the recommended thresholds, confirming the reliability of the results (