
ORIGINAL RESEARCH article

Front. Psychol., 03 December 2025

Sec. Educational Psychology

Volume 16 - 2025 | https://doi.org/10.3389/fpsyg.2025.1599478

This article is part of the Research Topic: AI Innovations in Education: Adaptive Learning and Beyond

Perceived satisfaction, perceived usefulness, and interactive learning environments as predictors of university students’ self-regulation in the context of GenAI-assisted learning: an empirical study in mainland China

Zhiwei Liu1, Yan Zhao2, Haode Zuo1*, Yongjing Lu1
  • 1School of Mathematics, Yangzhou University, Yangzhou, China
  • 2Department of Mathematics, Taizhou University, Taizhou, Zhejiang, China

Given the potential risks of learners’ misuse of generative artificial intelligence (GenAI), including over-reliance, privacy concerns, and exposure to biased outputs, it is essential to investigate university students’ self-regulation in GenAI-assisted learning. Self-regulated learning enables university students to set goals, monitor their learning progress, and adjust strategies, thereby enhancing the effectiveness of GenAI-assisted learning. Guided by the three-tier model of self-regulation, which encompasses individual characteristics, cognitive and emotional factors, and behavioral intention, this study employed a mixed-method approach. Structural equation modeling (SEM) was used to quantitatively examine the relationships among key variables, while interviews provided qualitative insights, enabling a comprehensive exploration of the factors influencing self-regulation in GenAI-assisted learning. Using a sample of 607 university students (including prospective mathematics teachers) from Mainland China, this study found that, compared with perceived self-efficacy and interactive learning environments, information system quality exerted a stronger influence on learners’ perceived usefulness and satisfaction in GenAI-assisted learning. In predicting learners’ perceived self-regulation, perceived usefulness was a stronger predictor than the interactive learning environment and perceived satisfaction. Similarly, perceived usefulness was a stronger predictor of behavioral intention than perceived satisfaction and self-regulation. This study further identified partial mediating effects of perceived usefulness, perceived satisfaction, and perceived self-regulation among the other variables. The study proposes a conceptual model to explore the interconnectedness of these factors in GenAI-assisted learning. It highlights the importance of information system quality for educators and recommends that researchers further investigate the dynamic factors influencing self-regulation in GenAI-assisted learning environments.

1 Introduction

GenAI, represented by tools such as OpenAI’s ChatGPT and Baidu’s Ernie Bot, has recently gained significant attention in higher education (Liu et al., 2025a). These tools can assist learners by delivering timely feedback, generating customized learning materials, and supporting technology-enhanced instruction (Zhang and Tur, 2024). With the growing integration of GenAI tools in higher education, their potential to create engaging learning experiences has been widely recognized, for example, by enabling multilingual translation, correcting programming errors, generating narratives, and acknowledging mistakes (Al-Emran et al., 2023; Hwang and Chang, 2023; Jeon, 2024; Pan et al., 2025).

However, the rapid adoption of GenAI has also raised concerns about potential risks, including privacy risks, over-reliance on GenAI outputs, exposure to inaccurate or biased information, and diminished critical thinking (Rospigliosi, 2023). Strengthening learners’ self-regulation is widely recognized as essential for the effective use of GenAI in higher education (Karal and Sarialioglu, 2025). Self-regulation empowers learners to set goals, monitor their progress, and adjust strategies proactively, enabling them to critically evaluate GenAI-generated content, avoid over-reliance on GenAI, and manage potential privacy risks (Sallam et al., 2025). Consequently, enhancing self-regulation can reduce the risks of GenAI misuse and support deeper autonomous learning. Despite the rapid integration of GenAI tools such as OpenAI’s ChatGPT and Baidu’s Ernie Bot into higher education, most existing studies have focused on technology acceptance (Du and Lv, 2024; Biyiri et al., 2024), behavioral intention (Garcia, 2025), and specific applications (e.g., AI-assisted writing and chatbot adoption; Xia et al., 2023; Hu et al., 2025), rather than on learners’ self-regulated learning. Current studies have predominantly adopted the Unified Theory of Acceptance and Use of Technology (UTAUT), the Technology Acceptance Model (TAM), or extended acceptance models, focusing on variables such as perceived usefulness, perceived ease of use, and behavioral intention (Şimşek et al., 2025; Tang et al., 2025; Liu et al., 2025b). However, this line of work has largely overlooked the psychological and contextual mechanisms that shape learners’ self-regulated learning in GenAI-assisted environments, including cognitive strategies, emotional regulation, and learning environment factors. To date, empirical research integrating psychological, emotional, and contextual dimensions is scarce, and very few studies have used SEM to systematically explain how these factors shape learners’ self-regulation in higher education. This underscores a critical thematic gap, namely the need to develop an integrative understanding of how cognitive, emotional, and contextual factors jointly influence learners’ self-regulation in GenAI-assisted learning. Addressing this gap is essential for promoting the effective use of GenAI tools and guiding the design of pedagogical strategies in the era of GenAI-supported education.

To address this gap, the present study draws upon Liaw and Huang’s (2013, p. 6) three-tier model of self-regulation, which is expanded in this study into eight variables, namely perceived self-efficacy, perceived anxiety, information system quality, interactive learning environment, perceived usefulness, perceived satisfaction, self-regulation, and behavioral intention. Prior studies have consistently identified these variables as key determinants of learners’ self-regulation in educational settings (Bandura, 1986; Davis, 1989; DeLone and McLean, 1992; Liaw and Huang, 2007; Sun et al., 2008). These variables collectively capture the cognitive, emotional, and contextual dimensions essential for understanding how learners interact with GenAI and regulate their learning behaviors.

2 Literature review

2.1 GenAI in higher education

GenAI is transforming higher education by offering students innovative approaches to accessing knowledge and enhancing their learning processes (Strzelecki and ElArabawy, 2024). GenAI tools, such as OpenAI’s ChatGPT and Baidu’s Ernie Bot, have transformed the learning process by providing personalized support, thereby enhancing students’ learning performance (Wang et al., 2024). These tools foster deeper cognitive processing and enhance students’ interest in problem-solving (Jin et al., 2025a). For instance, they can generate case studies, role-playing scenarios, and debate topics, which stimulate students’ ideas (Chan and Hu, 2023; Lodge et al., 2023). By using GenAI, students can enhance their ability to organize learning tasks, monitor progress, and receive immediate feedback, all of which are essential components of self-regulated learning. Furthermore, GenAI can provide customized learning paths that enable students to engage in adaptive learning experiences (Yu et al., 2025). This contributes to improved time management, higher engagement, and increased motivation, as learners gain a greater sense of control over their learning processes. Recognizing these benefits, educators have increasingly integrated GenAI tools to foster dynamic learning environments that promote active learning and critical thinking (Tang et al., 2025).

In the context of Mainland China, the rapid integration of GenAI into higher education has fostered a distinctive environment for student learning. GenAI tools, such as OpenAI’s ChatGPT and domestic platforms like Baidu’s ERNIE Bot, are being increasingly adopted in Chinese universities, providing functions such as intelligent question-and-answer (Q&A) systems, academic writing support, and content generation. Survey data indicate that GenAI-assisted functions, including paper editing and Q&A systems, have reached a penetration rate of approximately 84% in Chinese universities (Zhang and Wang, 2025). This widespread adoption reflects China’s policy emphasis on advancing educational digitalization and enhancing university students’ digital literacy. However, Chinese higher education is characterized by unique cultural and institutional features that affect how students engage with GenAI-assisted learning (Huang et al., 2019). For example, Chinese students are typically socialized within teacher-centered and exam-oriented educational environments, which may limit their capacity for self-regulation when interacting with GenAI tools (Yan et al., 2024). Moreover, concerns regarding academic integrity and data privacy are pronounced in Mainland China, where institutional trust and adherence to national AI regulations play a central role in shaping student adoption behavior (Teo and Huang, 2019). Therefore, investigating Chinese university students’ self-regulated learning in GenAI-assisted environments is timely and significant, as it captures an educational ecosystem characterized by distinctive cultural, institutional, and technological dynamics.

However, the integration of OpenAI’s ChatGPT and Baidu’s Ernie Bot into higher education also presents risks and challenges, particularly concerning self-regulated learning (Jin et al., 2025b). A primary concern is the potential overreliance on these tools, which may hinder students’ active engagement in critical thinking and reflection (Strzelecki, 2024). By relying on GenAI for immediate responses, students may bypass essential cognitive processes, which can impair their ability to set learning goals and monitor progress (Zhou et al., 2024). Furthermore, sensitive information disclosed during interactions may be vulnerable to breaches or misuse, potentially eroding students’ confidence in employing these tools (Chiu, 2024). Also, GenAI-generated content may carry biases from its training data, potentially resulting in inaccuracies that distort students’ decision-making (Farrokhnia et al., 2024). This limitation undermines students’ ability to critically evaluate generated content and hinders the development of their self-regulated learning. Concerns about academic integrity also arise because GenAI tools can generate assignments, essays, or creative works, undermining the authenticity of student submissions and hindering the development of independent learning (Bhullar et al., 2024). Finally, university students with limited technical proficiency may encounter technological barriers that hinder their effective use of GenAI, thereby constraining their self-regulated learning (Fergus et al., 2023).

To address these risks and foster students’ self-regulated learning, universities should focus on cultivating students’ digital literacy, enabling them to use GenAI effectively while retaining autonomy (Wang et al., 2024). Establishing clear ethical standards and safeguarding data privacy are essential for creating a responsible educational environment (Cotton et al., 2024). Ensuring transparency in AI algorithms and content generation further enables students to critically evaluate GenAI outputs and ensures that these tools support their self-regulated learning (Arthur et al., 2025).

2.2 Learners’ self-regulation in GenAI-assisted learning based on the three-tier model

In education, self-regulation refers to learners’ ability to actively manage their cognitive, emotional, and behavioral processes during learning (Kim et al., 2023). It involves setting goals, monitoring progress, adjusting learning strategies, and maintaining motivation. In the context of GenAI-assisted learning, self-regulation is particularly critical. Using GenAI tools such as ChatGPT or Baidu’s ERNIE Bot requires learners to effectively balance tool interaction with the active management of their own learning processes (Wang, 2024). Self-regulation encompasses not only traditional cognitive strategies but also the ability to adapt to the distinctive affordances of GenAI tools (e.g., content generation, feedback loops, and personalized learning; Hew et al., 2025). These tools support learners in organizing their learning process, while also requiring students to actively monitor their interactions with the technology to prevent over-reliance (Zhang and Tur, 2024). Therefore, in this study, self-regulation is defined as students’ capacity to manage their learning experiences in GenAI-assisted learning environments by adjusting their strategies, behavior, and emotions to achieve the intended learning outcomes (Xu et al., 2025).

To understand and measure self-regulation in GenAI-assisted learning, this study draws on the three-tier model proposed by Liaw and Huang (2013), which has become an important framework in e-learning research (Figure 1). This model comprises three interrelated tiers that shape how learners regulate their behaviors in technology-supported learning. The first tier highlights individual characteristics (e.g., self-efficacy and anxiety) and environmental factors (e.g., information system quality), which influence learners’ cognitive and affective responses during the learning process. High-quality GenAI tools (e.g., OpenAI’s ChatGPT, Baidu’s Ernie Bot) can alleviate learners’ cognitive load and offer more personalized learning experiences, thereby fostering their self-regulated learning. The second tier encompasses cognitive and affective factors, including perceived satisfaction and usefulness. These factors shape learners’ perceptions of the tools they use, thereby influencing their cognitive and affective responses. The model posits that learners who perceive tools as satisfying and useful are more likely to engage in self-regulated learning. Finally, the model highlights the behavioral intention tier, which is driven by perceived self-regulation. In this study, this tier reflects learners’ intention to continue using GenAI, based on its perceived usefulness and the satisfaction it provides. In other words, learners who can effectively regulate their learning with GenAI are more likely to sustain their use of these tools in the future.

Figure 1
Flowchart showing three tiers: The individual characteristics and system quality tier with perceived anxiety, self-efficacy, and interactive learning environments; the affective and cognitive tier with perceived satisfaction and usefulness; and the behavioral tier with perceived self-regulation. Arrows connect the tiers sequentially.

Figure 1. The conceptual model of understanding self-regulation (Liaw and Huang, 2013).

Although TAM and UTAUT are widely applied to examine technology acceptance, they primarily emphasize perceived ease of use, perceived usefulness, and behavioral intention, focusing on the technological aspects of users’ interactions with tools. However, these models overlook the cognitive, emotional, and behavioral dynamics that are crucial for understanding self-regulated learning in the context of educational technologies (Şimşek et al., 2025; Liu et al., 2025b). In contrast, Liaw and Huang’s (2013) three-tier self-regulation model offers a more comprehensive framework by integrating individual characteristics, emotional responses, and behavioral intention, providing a deeper understanding of how learners interact with GenAI tools such as ChatGPT and Baidu’s ERNIE Bot in higher education. This model extends the TAM/UTAUT frameworks by addressing the psychological and emotional factors that influence self-regulation in technology-mediated learning environments. While TAM and UTAUT primarily examine user acceptance based on technology perceptions, the three-tier model incorporates learners’ self-efficacy, anxiety, and emotional responses, which are particularly important in GenAI-assisted learning. It highlights the role of individual traits and contextual factors in shaping how students engage with GenAI tools, balancing cognitive, emotional, and behavioral aspects to achieve their learning goals. The novelty of applying the three-tier model in the context of GenAI-assisted learning lies in its ability to address the interplay between learners’ personal characteristics, their emotional responses, and the quality of the GenAI tools they use. Unlike the more limited scope of TAM/UTAUT, this model provides a richer, more nuanced understanding of how self-regulation influences learners’ use of GenAI tools. This theoretical framework offers valuable insights into the psychological, emotional, and contextual factors that shape learners’ self-regulation and how they perceive and interact with GenAI tools in their academic learning environments. Consequently, this expanded framework contributes to a deeper theoretical understanding of self-regulated learning in GenAI-enhanced education.

2.3 SEM

SEM is a powerful statistical technique for examining complex relationships between observed and latent variables (Anderson and Gerbing, 1988). Unlike traditional regression analysis, SEM enables researchers to test simultaneous causal pathways among multiple variables, including direct and indirect effects (Fan et al., 2016). This is particularly valuable for investigating constructs such as self-regulation, which encompass interrelated cognitive, emotional, and behavioral factors that cannot be directly measured. In this study, SEM was employed to construct a theoretical framework for examining the factors that influence learners’ self-regulation in GenAI-assisted learning environments. By using SEM, this study examined how individual characteristics (e.g., self-efficacy and anxiety), environmental factors (e.g., information system quality), and cognitive and affective responses (e.g., perceived satisfaction and usefulness) interrelate to shape learners’ self-regulation and behavioral intentions. This approach allows for the evaluation of both the direct effects of these factors and the modeling of their indirect effects. The use of SEM in this study is justified by its ability to handle complex data structures, accommodate measurement error, and model latent variables, which are central to this research (Fornell and Larcker, 1981). Unlike other modeling techniques, SEM offers a more nuanced understanding of how multiple factors, including learners’ emotional and cognitive responses, influence their learning behaviors. Given the dynamic and personalized nature of GenAI (e.g., OpenAI’s ChatGPT and Baidu’s Ernie Bot), SEM enables the capture of the complex interactions between learners and these tools, as well as their potential to support or hinder self-regulated learning. Overall, SEM serves as an appropriate and essential methodological approach for exploring the psychological, emotional, and contextual factors that influence self-regulation in GenAI-assisted learning. It provides a holistic understanding of how interactions with GenAI tools influence learners’ behaviors, forming a cornerstone for this study’s design.
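To make this analytical approach concrete, the sketch below illustrates how the hypothesized measurement and structural relations could be specified in Python using the open-source semopy package. This is a minimal illustration only: the present study conducted its analyses in SPSS 26 and AMOS 28, and the file name and item labels (SE1, AX1, and so on) are hypothetical placeholders for the 29 questionnaire items.

```python
# Illustrative sketch only; the study itself used SPSS 26 and AMOS 28.
# Assumes a CSV of the 29 Likert items with hypothetical column labels.
import pandas as pd
import semopy

MODEL_SPEC = """
SelfEfficacy   =~ SE1 + SE2 + SE3
Anxiety        =~ AX1 + AX2 + AX3 + AX4
SystemQuality  =~ SQ1 + SQ2 + SQ3 + SQ4
Interactivity  =~ IL1 + IL2 + IL3 + IL4
Satisfaction   =~ PS1 + PS2 + PS3
Usefulness     =~ PU1 + PU2 + PU3
SelfRegulation =~ SR1 + SR2 + SR3 + SR4
Intention      =~ BI1 + BI2 + BI3 + BI4
Usefulness     ~ SelfEfficacy + Anxiety + SystemQuality + Interactivity
Satisfaction   ~ SelfEfficacy + Anxiety + SystemQuality + Interactivity + Usefulness
SelfRegulation ~ Satisfaction + Usefulness + Interactivity
Intention      ~ Satisfaction + Usefulness + SelfRegulation
"""

data = pd.read_csv("genai_survey.csv")   # hypothetical item-level data file
model = semopy.Model(MODEL_SPEC)
model.fit(data)                          # maximum likelihood estimation
print(model.inspect())                   # loadings, path coefficients, p-values
print(semopy.calc_stats(model))          # chi-square, CFI, TLI, RMSEA, etc.
```

In this specification, the first block corresponds to the measurement model (items loading on the eight latent constructs), while the last four lines encode the structural paths implied by hypotheses H1–H15.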

2.4 Perceived self-efficacy

Bandura (1986) defined perceived self-efficacy as an individual’s confidence in their ability to successfully accomplish a specific task. In this study, perceived self-efficacy refers to learners’ confidence in using GenAI tools to achieve their learning goals (Börekci and Uyangör, 2025). Kumar (2021) reported that the effect of learners’ self-efficacy on chatbot-assisted learning was limited. However, some studies have suggested that higher self-efficacy can enhance learners’ learning performance and persistence, as it is associated with a more positive attitude toward this learning approach (Chang et al., 2022; Divekar et al., 2022). For instance, Lee et al. (2022) found that GenAI not only supported students in understanding knowledge but also offered immediate solutions to the problems they encountered, thereby enhancing their learning confidence. Therefore, perceived self-efficacy is crucial for GenAI-assisted learning.

2.5 Perceived anxiety

Computer anxiety is an uncomfortable emotional state characterized by nervousness, worry, and apprehension (Doll and Torkzadeh, 1988). In this study, perceived anxiety refers to an individual’s tendency to feel uneasy or fearful about the current or potential use of GenAI (Sallam et al., 2025). This anxiety may be exacerbated by the need for users to learn new terminology and understand unfamiliar applications associated with GenAI (Caporusso, 2023). Also, issues such as user privacy breaches and virus intrusions increase the risks of internet usage, further intensifying learners’ anxiety (Barbeite and Weiss, 2004). Existing research has indicated a negative correlation between perceived anxiety and the frequency of GenAI use (Zhu et al., 2024). Therefore, perceived anxiety is a critical factor influencing the use of GenAI.

2.6 Information system quality

In this study, information system quality is defined as the extent to which the information generated by GenAI accurately conveys the intended meaning (Baabdullah, 2024). It encompasses the technical performance of the system, the quality of the generated content, and the effectiveness of its practical applications. Mun and Hwang (2024) indicated that the information system quality of ChatGPT significantly influenced learners’ intention to use it. Similarly, Tlili et al. (2023) found that the reliability and flexibility of GenAI were crucial for ensuring stable and efficient operation. Furthermore, the content quality of GenAI is reflected in the accuracy and completeness of its generated information, which is intended to provide users with high-quality responses (Goli et al., 2023). For example, a high level of information system quality can effectively encourage learners to engage with live chat features during the learning process (McLean and Osei-Frimpong, 2019). Therefore, information system quality is a crucial determinant of GenAI-assisted learning.

2.7 Interactive learning environments

An interactive learning environment refers to a setting that enhances learning experience by facilitating interactions among learners, instructors, peers, and learning systems (Liaw, 2008). GenAI fosters interactive learning by responding to learners’ queries, which lies at the core of interactive engagement (Rospigliosi, 2023). In GenAI-assisted learning, synchronous and asynchronous features create dynamic communication channels. Synchronous communication enables real-time interaction between teachers and students, while asynchronous communication supports flexible participation without simultaneous engagement (Sharma et al., 2007). These interaction modes not only enable learners to share information but also allow them to access valuable knowledge autonomously. Because learning inherently occurs in social contexts, interactions among learners, instructors, and peers offer critical opportunities for knowledge construction (Liaw and Huang, 2013). Consequently, an interactive learning environment is essential for GenAI-assisted learning.

2.8 Perceived usefulness

The usefulness of learning systems is recognized as a critical factor influencing learning performance (Virvou and Katsionis, 2008). In this study, perceived usefulness is defined as the extent to which learners perceive GenAI as effective, efficient, and satisfying in supporting their learning (Kim et al., 2025). Prior research has shown that interactive learning environments are key determinants of learners’ perceived usefulness in e-learning systems (Liaw and Huang, 2007). Börekci and Uyangör (2025) found that perceived self-efficacy significantly impacted perceived usefulness. Zheng W. et al. (2024) and Zheng Y. et al. (2024) also suggested that information system quality significantly affected perceived usefulness. Furthermore, Sallam et al. (2025) demonstrated a negative correlation between perceived anxiety and perceived usefulness. Based on these findings, we propose the following hypotheses:

H1: Perceived self-efficacy has a positive impact on learners’ perceived usefulness in GenAI-assisted learning.

H2: Perceived anxiety has a negative impact on learners’ perceived usefulness in GenAI-assisted learning.

H3: Information system quality has a positive impact on learners’ perceived usefulness in GenAI-assisted learning.

H4: The interactive learning environment has a positive impact on learners’ perceived usefulness in GenAI-assisted learning.

2.9 Perceived satisfaction

Perceived satisfaction is considered a key indicator of the effectiveness of learning systems (Virvou and Katsionis, 2008). In this study, it is defined as the degree of comfort and contentment learners experience in using GenAI (Kim et al., 2025). Existing studies have shown that perceived self-efficacy, information system quality, interactive learning environments, and perceived usefulness significantly influence perceived satisfaction (Kim and Ong, 2005; Liaw, 2008). Specifically, Liaw et al. (2007) found that perceived self-efficacy played an important role in shaping learners’ satisfaction with e-learning systems. Liu et al. (2009) reported that learner satisfaction increases when e-learning environments provide more interactive activities. Further, information system quality is closely associated with learners’ satisfaction (Almufarreh, 2024; Wut and Lee, 2022). Al-Sharafi et al. (2023) also showed that learners are satisfied with GenAI-assisted learning when the tools meet their needs. Moreover, prior studies have identified a negative correlation between perceived anxiety and perceived satisfaction (Sun et al., 2008; Tsai, 2009). Based on these findings, the following hypotheses were formulated:

H5: Perceived self-efficacy has a positive impact on learners’ perceived satisfaction in GenAI-assisted learning.

H6: Perceived anxiety has a negative impact on learners’ perceived satisfaction in GenAI-assisted learning.

H7: Information system quality has a positive impact on learners’ perceived satisfaction in GenAI-assisted learning.

H8: The interactive learning environment has a positive impact on learners’ perceived satisfaction in GenAI-assisted learning.

H9: Perceived usefulness has a positive impact on learners’ perceived satisfaction in GenAI-assisted learning.

2.10 Perceived self-regulation

Existing research has found a strong correlation between perceived satisfaction and perceived self-regulation (Kramarski and Gutman, 2006). For example, Ji et al. (2025) suggested that perceived satisfaction enhanced learners’ self-regulation, which in turn increased their intention to continue using GenAI. Zhou et al. (2024) also found that perceived usefulness was a crucial factor in promoting self-regulation in GenAI-assisted learning environments. Similarly, Vighnarajah et al. (2009) suggested that an interactive e-learning environment could enhance learners’ self-regulation. Therefore, the following hypotheses were proposed:

H10: Perceived satisfaction has a positive impact on learners’ perceived self-regulation in GenAI-assisted learning.

H11: Perceived usefulness has a positive impact on learners’ perceived self-regulation in GenAI-assisted learning.

H12: The interactive learning environment has a positive impact on learners’ perceived self-regulation in GenAI-assisted learning.

2.11 Behavioral intention

This study defines behavioral intention as learners’ willingness to continue using GenAI (Cai et al., 2023). Existing studies have suggested that perceived satisfaction and usefulness can significantly influence learners’ behavioral intention in GenAI-assisted learning (Jung and Jo, 2025; Mohamed Eldakar et al., 2025). For example, Al-Sharafi et al. (2023) found that when students perceive AI chatbots as effective in assisting their learning tasks, their satisfaction also increases, enhancing their likelihood of continued use. Furthermore, Ma and Lee (2019) found that learners’ perceived usefulness and self-regulation were positively correlated with their behavioral intentions. Based on the above findings, we proposed the following hypotheses and research model (Figure 2).

Figure 2
Diagram showing relationships between factors impacting behavioral intention. Perceived self-efficacy (H1, H5), perceived anxiety (H2, H6, H7), information system quality (H3, H8), and interactive learning environments (H4) lead to perceived satisfaction (H10) and perceived usefulness (H9, H11). These influence behavioral intention (H13, H14), alongside perceived self-regulation (H15).

Figure 2. Research hypotheses.

H13: Perceived satisfaction has a positive impact on learners’ behavioral intention in GenAI-assisted learning.

H14: Perceived usefulness has a positive impact on learners’ behavioral intention in GenAI-assisted learning.

H15: Perceived self-regulation has a positive impact on learners’ behavioral intention in GenAI-assisted learning.

3 Methods

This study employed SEM and qualitative interviews to examine the relationships among variables in GenAI-assisted learning. The methodological design was guided by Grant and Osanloo’s (2014) framework, which emphasizes that a theoretical framework serves as the blueprint of a research project. Their guidelines highlight the importance of (a) grounding a study in a clearly defined theoretical model, (b) ensuring alignment between the research questions, hypotheses, literature review, instruments, and data analysis procedures, and (c) providing a clear rationale for the selection of the theoretical foundation. Compared with other general methodological recommendations, such as Creswell and Creswell’s (2017) research design framework or Yin’s (2018) guidelines for case study research, Grant and Osanloo’s (2014) approach places a more explicit emphasis on integrating theory at every stage of the research process. This characteristic aligns with the current study, which requires a strong theoretical grounding to investigate the psychological and contextual factors affecting learners’ self-regulation in GenAI-assisted learning.

Following this guideline, the present study adopted Liaw and Huang’s (2013) three-tier model of self-regulation as its theoretical foundation and expanded it into eight constructs to develop the research model. This alignment ensured theoretical coherence across the survey instrument, variable selection, and subsequent SEM analysis. To complement the quantitative findings, an open-ended interview question was appended to provide qualitative insights that enrich the interpretation of the results.

3.1 Instrument development

This study aimed to systematically test the hypotheses of the research model using a survey methodology. The questionnaire comprised three sections, with the first section collecting participants’ demographic information, including gender, age, educational level, major, and experience using GenAI. The second section comprised eight scales, each corresponding to a construct in the research model. The scale items were primarily based on the frameworks developed by Liaw (2008) and Liaw and Huang (2013), with relevant items revised to fit the context of this study. The third section included an open-ended question designed to elicit participants’ views on the advantages, disadvantages, and suggestions for GenAI-assisted learning. The final research instrument consisted of 29 items measured on a seven-point Likert scale, ranging from 1 (strongly disagree) to 7 (strongly agree). Items 1–3 evaluated perceived self-efficacy. Items 4–7 measured perceived anxiety. Information system quality was gauged through Items 8–11. Items 12–15 were formulated to assess the interactive learning environment, whereas Items 16–18 appraised perceived satisfaction. Items 19–21 quantified perceived usefulness. Items 22–25 evaluated perceived self-regulation. Lastly, Items 26–29 determined behavioral intention. The final version of the questionnaire is presented in Table A1.

Before the formal survey, a pilot survey was conducted to assess the suitability of the questionnaire items. However, the discriminant validity between perceived self-efficacy and perceived satisfaction was not satisfactory. Therefore, one of the items related to perceived self-efficacy was removed based on these results. Another item related to perceived usefulness was also removed due to its unacceptable factor loading.

To further improve the scientific validity of the scale, two experts with substantial experience in higher education and educational technology were invited to review the content of the questionnaire. The first expert is a professor specializing in technology integration in teacher education, with over 20 years of experience in developing and validating research instruments. The second expert is a senior researcher in psychometrics who has collaborated on several large-scale studies involving survey validation. These experts provided detailed feedback on the relevance, clarity, and validity of the questionnaire items, ensuring alignment with the study’s objectives and the theoretical framework. For example, in this study, the definition of “use” was based on the expected frequency of participants’ use of GenAI in their current learning, rather than their future use. All items were carefully screened and adjusted for content validity. The items were then subjected to a back-translation validation process to ensure cross-cultural consistency and linguistic accuracy. Specifically, one researcher translated the items from English into Chinese, and another researcher then translated the Chinese version back into English.

3.2 Data collection

This study was conducted in Jiangsu Province, an economically developed coastal province in eastern Mainland China. Participants included undergraduate and graduate students majoring in mathematics and other disciplines. The Questionnaire Star (WJX) website was used to administer the questionnaire. Counsellors from the schools of mathematics at several universities in Jiangsu assisted us in sharing the link to the questionnaire with their university students via WeChat and Tencent QQ. While these recruitment methods helped us efficiently reach participants, it is important to acknowledge that these platforms may limit the diversity of the sample. WeChat and Tencent QQ are popular among younger, tech-savvy students, which may lead to an overrepresentation of certain demographic groups, such as more active and engaged students. Additionally, relying on counselors from specific universities may have led to a concentration of participants from a limited number of institutions, potentially affecting the generalizability of the findings to students from other provinces or fields of study. At the beginning of the questionnaire, respondents were informed of their right to withdraw and of the confidentiality, anonymity, and data-protection measures in place. Participants voluntarily completed the survey and were entered into a small gift card lottery as appreciation for their participation. They were also informed that the data would be used solely for research purposes. Ethical guidelines approved by the first author’s university academic committee were followed during data collection. Ethical approval was obtained under protocol number yzumath20240801 on August 1, 2024.

After data cleaning, the final analysis included data from 607 university students (including prospective mathematics teachers) with experience using GenAI, aged 18–27. Participants were selected based on their use of GenAI, with the most popular tools being Baidu’s Ernie Bot and OpenAI’s ChatGPT. The demographic characteristics show that 37.4% of the respondents identify as male and 57.3% are undergraduates. Additional details regarding the demographic characteristics of the respondents, together with descriptive statistics, are presented in Table 1.


Table 1. Demographic information about participants.

While we believe that these samples provide valuable insights, we also acknowledge that these recruitment methods introduce the possibility of selection bias. These samples may not fully represent the broader university student population, especially in terms of demographic diversity. Future work could benefit from expanding the recruitment methods to include a more varied range of platforms and institutions, which would enhance the representativeness of the sample and improve the external validity of the findings.

Once data collection was completed, we downloaded the datasets from the WJX platform for further analysis. The data were imported into SPSS 26 and saved in SAV format for normality testing.

3.3 Statistical approaches

Skewness and kurtosis indices were examined to assess the normality of each item, which is a prerequisite for SEM analysis. A dataset is generally considered normally distributed when the absolute skewness is below 3 and the absolute kurtosis is below 10 (Kline, 2023). The distribution of all scale items was examined using SPSS 26. Then, SPSS 26 and AMOS 28 were employed to evaluate the reliability and validity of the measurements and to test the proposed structural model. We also conducted a confirmatory factor analysis (CFA) using AMOS 28 to assess the measurement model. Prior to the CFA, the Kaiser–Meyer–Olkin (KMO) measure and Bartlett’s test of sphericity were examined to ensure sampling adequacy. The model included eight latent variables, and standardized factor loadings were evaluated to confirm construct validity. A factor loading threshold of 0.50 was adopted, and items with loadings below this threshold were removed to improve the reliability and validity of the measurement model.
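As a hedged illustration of these screening steps, the sketch below computes item-level skewness and kurtosis against the thresholds of Kline (2023), along with the two sampling-adequacy statistics, in Python; the file and column names are hypothetical, and the actual checks in this study were run in SPSS 26.

```python
# Illustrative sketch; the study performed these checks in SPSS 26.
import pandas as pd
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

items = pd.read_csv("genai_survey.csv")  # hypothetical file with the 29 items

# Item-level normality screen: |skewness| < 3 and |kurtosis| < 10 (Kline, 2023)
normality = pd.DataFrame({"skewness": items.skew(), "kurtosis": items.kurt()})
violations = normality[
    (normality["skewness"].abs() >= 3) | (normality["kurtosis"].abs() >= 10)
]
print(violations)  # empty if all items fall within the acceptable range

# Sampling adequacy prior to factor analysis
chi_square, p_value = calculate_bartlett_sphericity(items)
_, kmo_total = calculate_kmo(items)
print(f"KMO = {kmo_total:.3f}, Bartlett chi2 = {chi_square:.1f}, p = {p_value:.4f}")
```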

Subsequently, Cronbach’s alpha (α), average variance extracted (AVE), and composite reliability (CR) were calculated for each variable based on the retained items. Specifically, Cronbach’s α should exceed 0.80, the AVE should be greater than 0.50, and the CR should be above 0.70 (Hair et al., 2012). To assess the discriminant validity of the constructs, we employed the “Validity and Reliability Test” plugin in AMOS, which incorporates both the Fornell–Larcker criterion and the heterotrait–monotrait (HTMT) ratio of correlations.
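For reference, these indices follow their standard definitions. For a construct measured by k items with item variances σi², total score variance σT², and standardized loadings λi:

$$
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_T^{2}}\right),\qquad
\mathrm{AVE} = \frac{1}{k}\sum_{i=1}^{k}\lambda_i^{2},\qquad
\mathrm{CR} = \frac{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}}{\left(\sum_{i=1}^{k}\lambda_i\right)^{2} + \sum_{i=1}^{k}\left(1-\lambda_i^{2}\right)}
$$

In this study, the AVE and CR values were derived from the standardized CFA loadings of the retained items.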

After confirming the validity and reliability of the constructs, we used AMOS 28 to perform standardized model estimation via the maximum likelihood method. The analysis generated path coefficients (β) and significance levels (p), which were used to test the research hypotheses and evaluate the mediating effects. According to Baron and Kenny (1986), mediating effects can be classified into three types: full mediation, partial mediation, and suppressing mediation. In this study, mediation effects were tested using the bias-corrected and accelerated bootstrap method with 5,000 resamples, as recommended in recent SEM practice (Hu et al., 2025; Tang et al., 2025). The explanatory power of the model for the outcome variables was evaluated by calculating the squared multiple correlations (R2). According to Hair and Alamer (2022), R2 values of 0.25, 0.50, and 0.75 represent weak, moderate, and substantial explanatory power, respectively.
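To illustrate the logic of the bootstrap test of an indirect effect, the sketch below resamples composite construct scores and computes a product-of-coefficients estimate with a simple percentile confidence interval. This is a simplified illustration under assumed variable and file names; the actual analysis used the bias-corrected and accelerated procedure implemented in AMOS 28.

```python
# Illustrative sketch of bootstrapping an indirect effect a*b, e.g.,
# system quality -> perceived usefulness -> perceived satisfaction.
# The study used AMOS 28 with 5,000 bias-corrected and accelerated resamples;
# for brevity this sketch reports a simple percentile confidence interval.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("construct_scores.csv")  # hypothetical composite scores
rng = np.random.default_rng(42)

def indirect_effect(sample: pd.DataFrame) -> float:
    # a path: predictor -> mediator; b path: mediator -> outcome (controlling for predictor)
    a = smf.ols("usefulness ~ system_quality", data=sample).fit().params["system_quality"]
    b = smf.ols("satisfaction ~ usefulness + system_quality", data=sample).fit().params["usefulness"]
    return a * b

boot = []
for _ in range(5000):
    idx = rng.integers(0, len(df), size=len(df))  # resample rows with replacement
    boot.append(indirect_effect(df.iloc[idx]))

lower, upper = np.percentile(boot, [2.5, 97.5])
print(f"Indirect effect = {indirect_effect(df):.3f}, 95% CI [{lower:.3f}, {upper:.3f}]")
```

A confidence interval that excludes zero, together with a significant direct path, corresponds to the partial mediation pattern reported in Section 4.2.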

Model fit indices in this study were evaluated based on widely accepted criteria, with the chi-square divided by degrees of freedom (χ2/df, CMIN/DF) expected to be ≤ 3.0 (Hayduk, 1987). The root mean square error of approximation (RMSEA) was expected to be ≤ 0.05 (Hair et al., 2017). The incremental fit index (IFI), Tucker–Lewis index (TLI), and comparative fit index (CFI) were required to be ≥ 0.90 (Bagozzi and Yi, 1988). The standardized root mean square residual (SRMR) was considered acceptable if ≤ 0.08 (Hu and Bentler, 1999).
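For completeness, the indices referenced above are commonly computed as follows (definitions vary slightly across software, for example N versus N − 1 in the RMSEA denominator), where χ²_M and df_M denote the model chi-square and degrees of freedom, χ²_B and df_B those of the baseline model, and N the sample size:

$$
\chi^{2}/df = \frac{\chi^{2}_{M}}{df_{M}},\qquad
\mathrm{RMSEA} = \sqrt{\frac{\max\!\left(\chi^{2}_{M}-df_{M},\,0\right)}{df_{M}\,(N-1)}},\qquad
\mathrm{CFI} = 1 - \frac{\max\!\left(\chi^{2}_{M}-df_{M},\,0\right)}{\max\!\left(\chi^{2}_{B}-df_{B},\;\chi^{2}_{M}-df_{M},\;0\right)}
$$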

3.4 Qualitative analysis

To gain deeper insight into learners’ experiences with GenAI-assisted learning, we included one open-ended interview question at the end of the questionnaire: “Please tell us about the advantages, disadvantages and suggestions of GenAI.” This question was designed to elicit participants’ reflective opinions about GenAI, including perceived strengths, limitations, and potential improvements. The goal was to capture the subjective aspects of their experiences that might not be fully revealed through quantitative scales.

We conducted a thematic analysis on the open-ended responses to explore learners’ perceptions. All responses were reviewed and manually coded into three main themes—advantages, disadvantages, and suggestions—as aligned with the structure of the prompt. Each theme was further categorized into sub-themes based on lexical frequency and semantic similarity. Representative expressions were extracted for illustration.
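As a purely illustrative aid, the sketch below shows how lexical frequencies could be tabulated to support the identification of sub-themes within coded responses; the example responses and keyword lists are hypothetical, and the actual coding in this study was performed manually and validated in NVivo 12.

```python
# Illustrative sketch only; the actual coding was done manually in NVivo 12.
from collections import Counter
import re

responses = [
    "It gives fast and accurate answers, but sometimes the answers are wrong.",
    "Very efficient for assignments, although I worry about relying on it too much.",
]  # hypothetical open-ended answers (translated)

subthemes = {  # hypothetical keyword lists mapping to candidate sub-themes
    "efficiency": {"fast", "efficient", "quick", "time"},
    "accuracy": {"accurate", "wrong", "incorrect", "error"},
    "dependency": {"relying", "reliance", "dependence"},
}

counts = Counter()
for text in responses:
    tokens = set(re.findall(r"[a-z\-]+", text.lower()))
    for theme, keywords in subthemes.items():
        counts[theme] += len(tokens & keywords)

print(counts.most_common())  # rough frequency ranking of candidate sub-themes
```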

The coding process was conducted using NVivo 12, and all codes were discussed and validated by two researchers to enhance reliability. The process involved iterative comparison and resolution of discrepancies through discussion.

4 Results

4.1 Measurement validity, reliability, and model fit indices

The results of the normality test are presented in Table 2. The skewness values ranged from −0.921 to 0.161, and the kurtosis values ranged from −1.014 to 1.890. These results indicate that the skewness and kurtosis of the dataset fall within acceptable limits, suggesting that all 29 items across the eight variables approximate a normal distribution.


Table 2. Descriptive statistics and normality tests.

The proposed model demonstrated an excellent fit to the collected data. As shown in Table 3, the model fit indices were as follows: CMIN/DF = 1.865, RMSEA = 0.038, IFI = 0.973, TLI = 0.969, CFI = 0.973, and SRMR = 0.040. All six indices met the recommended thresholds, indicating a satisfactory model fit.


Table 3. Model fit indices.

As shown in Table 4, all items loaded satisfactorily onto the eight factors corresponding to the proposed variables. Both the composite reliability (CR) and average variance extracted (AVE) values for all constructs exceeded the recommended thresholds. Additionally, the KMO value was 0.934, and Bartlett’s test of sphericity was significant (χ2 = 11,507.763, p < 0.001), confirming sampling adequacy. Cronbach’s alpha values for all variables ranged from 0.811 to 0.946, indicating satisfactory internal consistency. Collectively, these results confirmed that the measurement model met the prerequisites for subsequent SEM analysis.


Table 4. Construct validity and convergent validity.

Furthermore, the Fornell–Larcker criterion and the heterotrait–monotrait (HTMT) ratio were employed to assess discriminant validity. As shown in Table 5, the square roots of the AVE for each construct exceeded the corresponding inter-construct correlations, indicating satisfactory discriminant validity according to the Fornell–Larcker criterion (Fornell and Larcker, 1981). To further confirm these findings, the HTMT ratio of correlations was calculated, reflecting the average inter-item correlations across constructs relative to those within the same construct (Henseler et al., 2015). As shown in Table 6, all HTMT values were below the recommended threshold of 0.90, indicating acceptable discriminant validity (Hair et al., 2019).
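For constructs i and j measured by K_i and K_j items, the HTMT ratio referenced above is commonly defined as the mean heterotrait–heteromethod correlation divided by the geometric mean of the average monotrait–heteromethod correlations (Henseler et al., 2015):

$$
\mathrm{HTMT}_{ij} =
\frac{\dfrac{1}{K_i K_j}\displaystyle\sum_{g=1}^{K_i}\sum_{h=1}^{K_j} r_{ig,jh}}
{\sqrt{\dfrac{2}{K_i\,(K_i-1)}\displaystyle\sum_{g<h} r_{ig,ih}\;\cdot\;\dfrac{2}{K_j\,(K_j-1)}\displaystyle\sum_{g<h} r_{jg,jh}}}
$$

where r_{ig,jh} denotes the correlation between item g of construct i and item h of construct j.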


Table 5. The Fornell-Larcker test results.


Table 6. The Heterotrait-Monotrait (HTMT) ratio testing results.

4.2 Path analysis and research model power

The explanatory power of the hypothesized model is presented in Table 7. The R2 values for perceived usefulness, perceived satisfaction, perceived self-regulation, and behavioral intention were 0.625, 0.642, 0.590, and 0.496, respectively. These results indicate that the model demonstrates moderate to strong explanatory power for perceived usefulness, perceived satisfaction, and perceived self-regulation, whereas the R2 value for behavioral intention is comparatively lower. This suggests that the model accounts for only about half of the variance in behavioral intention, leaving a substantial portion unexplained. The unexplained variance may be attributed to additional factors that were not considered in the current model. For example, social influence (e.g., peers, colleagues, or societal norms) may play a significant role in shaping students’ behavioral intentions, especially in technology adoption contexts (Zheng W. et al., 2024; Zheng Y. et al., 2024). Furthermore, platform trust could be another critical factor, as users’ trust in the GenAI platform may strongly affect their intention to use it (Wang et al., 2025). Also, perceived ease of use and previous experience with similar technologies could contribute to explaining the remaining variance in behavioral intention (Unal and Uzun, 2021). These factors, which were not included in this study, could provide a more comprehensive understanding of the determinants of behavioral intention.


Table 7. Results of model path analysis.

Overall, while the model shows a strong fit for several key variables, the relatively lower R2 for behavioral intention highlights the need for further exploration of other potential influences, such as social influence and platform trust, that might contribute to the formation of behavioral intention.

Additionally, the results of the path analysis showed that information system quality and interactive learning environments contributed more to perceived usefulness than perceived self-efficacy and perceived anxiety. Information system quality and perceived usefulness explained and predicted perceived satisfaction more strongly than perceived self-efficacy, perceived anxiety, and interactive learning environments. In addition, perceived usefulness had a greater impact on perceived self-regulation than perceived satisfaction and interactive learning environments. Perceived usefulness and perceived self-regulation emerged as stronger predictors of behavioral intention than perceived satisfaction.

Table 8 presents the mediating effects of perceived satisfaction, perceived usefulness, and perceived self-regulation. The path from perceived anxiety to perceived satisfaction was not significant, nor were the mediated paths from perceived satisfaction and perceived usefulness to behavioral intention. All other mediating paths exhibited partial mediation effects. Figure 3 illustrates the final structural model.


Table 8. Mediating effects.

Figure 3
A causal diagram illustrating relationships between various psychological and system factors. Ovals represent factors: perceived self-efficacy, perceived anxiety, information system quality, interactive learning environments, perceived satisfaction, perceived usefulness, perceived self-regulation, and behavioral intention. Arrows indicate relationships between factors with path coefficients labeled. Higher significance levels are shown with asterisks.

Figure 3. Results of the hypothesis tests. The green numbers in the figure represent the indirect effect values. Superscript 1 indicates mediation through PU, superscript 2 indicates mediation through PS, and superscript 3 indicates mediation through PSR. ***p < 0.001, **p < 0.010, *p < 0.050.

4.3 Qualitative results

Table 9 shows the key themes that emerged from the qualitative analysis. For advantages, learners frequently highlighted intelligent responses, fast and efficient interaction, and personalized learning support. Disadvantages focused on dependency on the tool, potential inaccuracies, and operational difficulties such as needing precise prompts. Suggestions included calls to enhance efficiency, improve personalization, strengthen accuracy, and promote broader awareness and usability of GenAI.


Table 9. Thematic analysis of learners’ perceptions in GenAI-assisted Learning.

5 Discussion

This study explored the effects of perceived anxiety on perceived usefulness and satisfaction in GenAI-assisted learning (H2 and H6). The results showed that perceived anxiety had no significant effect on these variables, which contradicts previous findings (Sallam et al., 2025; Zhu et al., 2024). One possible explanation is that reducing barriers to using GenAI may streamline the learning process, allowing learners to focus on self-regulation rather than being distracted by technology-related anxiety (Mohamed et al., 2025). Moreover, as GenAI tools (e.g., Ernie Bot and ChatGPT) have become widely adopted, learners have developed greater adaptability to their use, while the associated complexity and barriers have progressively diminished (Laun and Wolff, 2025). Therefore, perceived anxiety may no longer serve as a fundamental predictor of perceived usefulness and satisfaction. Over time, students have likely become more comfortable and confident in utilizing GenAI tools, reducing the emotional response to potential technology-related stressors. It is important to consider the possibility that the measurement limitations in this study may have contributed to these non-significant findings. The scales used to measure perceived anxiety, usefulness, and satisfaction may not have fully captured the complex relationship between these variables. Future studies could benefit from refining these measures or exploring alternative approaches to assess anxiety and its impact more effectively, particularly in contexts involving rapidly evolving technology like GenAI.

The study revealed that information system quality exerted a stronger influence on perceived usefulness and satisfaction than did the interactive learning environment and perceived self-efficacy (H1, H3, H4, H5, H7, and H8). This finding is consistent with the research of Zheng W. et al. (2024) and Zheng Y. et al. (2024). High-quality information systems provide learners with a seamless user experience, reducing cognitive load and enabling them to concentrate on the learning process. In this study, the GenAI tools delivered clear, structured responses that effectively supported knowledge acquisition. This capability enables learners to quickly identify relevant information and complete learning tasks more efficiently. The dominant impact of information system quality might suggest that learners’ emotional and motivational responses to technology are less salient in GenAI-assisted learning. However, this does not diminish the importance of psychological factors; in fact, they might become more significant in environments where learners are encouraged to explore and use these tools more independently. Emotional responses (e.g., satisfaction) related to learning challenges, the sense of control, and self-regulation could influence learners’ ability to fully engage with the tool and sustain its use over time. These findings imply that both technical and psychological factors must be integrated when constructing theoretical models for GenAI-assisted learning. A more holistic model would recognize the interaction between learner characteristics (e.g., self-efficacy, motivation), the technical quality of the tools (e.g., usability, personalization), and the emotional experiences during the learning process. Future work should focus on how these elements interact, as this could provide a more nuanced understanding of how GenAI tools affect learning outcomes, adoption, and sustained engagement.

Moreover, the interactive learning environment contributed more to perceived satisfaction than perceived self-efficacy, a finding consistent with the results of Jin et al. (2025b). A possible explanation is that university students develop foundational computer and AI interaction skills during their studies, leading to an almost negligible level of computer illiteracy in this population (Zhang and Wang, 2025). Consequently, learners tend to demonstrate high confidence in using GenAI tools, which diminishes the predictive effect of self-efficacy. Finally, the enhanced satisfaction observed in this study can be attributed to the accurate information, concise content, and engaging interface design offered by the GenAI tools (Mun and Hwang, 2024). By presenting content in logically coherent and easily comprehensible components, these tools facilitate smoother learning experiences and enable the efficient completion of tasks. However, this finding is closely tied to the capabilities of Ernie Bot and ChatGPT and may not necessarily generalize to all GenAI applications. Future research could extend this work by comparing a broader range of GenAI models to determine whether similar effects on perceived satisfaction and usefulness are observed across tools with varying capabilities. Examining how different GenAI functionalities shape learning outcomes would yield deeper insights to inform both tool selection and instructional design.

Additionally, this study found a significant and strong correlation between perceived usefulness and satisfaction (H9), aligning with previous findings (Kim et al., 2025). According to social exchange theory (Cook and Rice, 2003), learners who perceive the learning content provided by GenAI as highly beneficial are more likely to experience satisfaction with their learning experience. In other words, when learners perceive that GenAI enhances their learning performance and facilitates the effortless completion of tasks, their overall learning satisfaction tends to increase (Jung and Jo, 2025).

Perceived usefulness also played a significant partial mediating role in the effects of perceived self-efficacy, information system quality, and the interactive learning environment on perceived satisfaction. Among these paths, perceived usefulness served as the strongest mediator between information system quality and perceived satisfaction. The GenAI tools used in this study (e.g., Ernie Bot and ChatGPT) have gained widespread adoption among learners, mainly because they deliver rapid responses and high-quality learning content (Liu et al., 2025a). These capabilities help learners achieve their academic goals more effectively and enhance their overall satisfaction. In other words, higher system usability is positively associated with increased learner satisfaction (Almufarreh, 2024). Similarly, the perceived usefulness of GenAI tools partially mediated the effect of the interactive learning environment on perceived self-regulation. When learners perceive these tools as useful, they are more likely to engage actively in technology-mediated learning interactions (Bai and Wang, 2025). When the quality and frequency of teacher–learner communication improve, learners are also more likely to receive timely feedback and adjust their learning strategies.

Perceived satisfaction, perceived usefulness, and the interactive learning environment were identified as crucial predictors of perceived self-regulation in GenAI-assisted learning (H10–H12), which aligns with previous research (Ji et al., 2025). Specifically, the interactive learning environment plays a critical role in fostering self-regulation. For example, pedagogical strategies such as group discussions and structured assignments help learners actively use GenAI, enhancing their self-regulatory skills (Zhou et al., 2024). Additionally, motivation serves as a crucial factor in GenAI-assisted learning. Intrinsic motivation, such as the satisfaction derived from using the tool, encourages deeper cognitive engagement, whereas extrinsic motivation, such as the perceived usefulness of GenAI, drives more practical, goal-oriented learning behaviors. Together, these motivational factors and an interactive environment help learners integrate GenAI effectively into their studies, promoting self-directed learning and alignment with individual learning goals.

Furthermore, perceived satisfaction was found to partially mediate the effect of perceived usefulness on self-regulation in GenAI-assisted learning with Ernie Bot and ChatGPT. As learners’ satisfaction with these GenAI tools increases, their motivation to engage in the learning process rises, which in turn enhances their self-regulation abilities (Ji et al., 2025). This positive feedback loop underscores the essential role of perceived usefulness in supporting self-regulated learning within these specific GenAI environments. Surprisingly, perceived satisfaction did not mediate the effect of perceived usefulness on behavioral intention. This may be because learners prioritize the usefulness of tools like ChatGPT over overall satisfaction. For example, even if learners feel dissatisfied with the subscription cost of ChatGPT, they are still likely to continue using it if they believe that it contributes to their academic achievement (Jo, 2024).

Perceived usefulness had a stronger impact on behavioral intention than perceived satisfaction and perceived self-regulation (H13–H15), which aligns with previous studies (Mohamed Eldakar et al., 2025; Mohamed et al., 2025). This finding suggests that when learners perceive GenAI tools such as Ernie Bot and ChatGPT to be more useful, they are more likely to adopt them for learning in the future. However, it is important to consider the cultural and institutional contexts in which these findings were observed, particularly Mainland China. The Chinese education system is largely exam-oriented and places significant emphasis on performance in standardized examinations. This may lead students to judge GenAI tools primarily by their efficacy and utility for exam preparation rather than by overall satisfaction with the tools themselves. As a result, students may prioritize the perceived usefulness of GenAI over factors such as satisfaction, which could explain why perceived usefulness was a stronger predictor of behavioral intention than perceived satisfaction in our data. In contrast, in Western educational contexts, where greater emphasis is often placed on self-regulated learning and personalized learning tools, students may attach more importance to the satisfaction they derive from using GenAI (Al-Sharafi et al., 2023). These institutional and cultural differences in educational priorities could explain why findings in Western studies (e.g., Sallam et al., 2025) emphasize perceived satisfaction as a stronger predictor of students’ behavioral intentions.

Additionally, perceived self-regulation partially mediated the effect of perceived usefulness on behavioral intention. When learners recognize the usefulness of Ernie Bot and ChatGPT, they are likely to adjust their learning behaviors more actively to maximize the benefits of these tools. Zimmerman and Schunk (2001) also emphasized that self-regulation is a crucial mediator in technology acceptance, strengthening the effect of perceived usefulness on learners’ behavioral intentions. However, perceived self-regulation did not mediate the effect of perceived satisfaction on behavioral intention. This may be because self-regulation primarily involves learners’ internal cognitive and behavioral processes (Liaw and Huang, 2013), which may not fully correspond to the external feedback and interactions provided by tools such as Ernie Bot and ChatGPT. This misalignment may weaken the link between learners’ self-regulatory behaviors and their intentions to continue using GenAI tools.

In this study, both quantitative and qualitative data were collected to explore learners’ perceptions of GenAI-assisted learning. The quantitative data, derived from the survey responses, provided numerical insights into constructs such as perceived usefulness, self-regulation, and behavioral intention, while the qualitative data, gathered from the open-ended responses, captured participants’ subjective experiences and suggestions for improvement. To enhance the robustness of our findings, we triangulated the two data sources, comparing and integrating them to cross-validate the results and enrich our interpretation. For example, the quantitative data indicated a strong positive relationship between perceived usefulness and behavioral intention, which was supported by qualitative feedback in which many participants described how GenAI’s efficiency and convenience led them to use it more frequently for academic tasks. One participant stated, “It saves a lot of time, especially when working on assignments and mathematical proofs. I can quickly check solutions to problems.” This response aligns with the quantitative finding that perceived usefulness was a key predictor of behavioral intention in our model.

Furthermore, the quantitative results revealed that the path from perceived anxiety to perceived satisfaction was not significant, indicating that the level of anxiety users felt did not significantly affect their satisfaction with GenAI. The qualitative data nuanced this result: some participants expressed concerns about accuracy and ease of use but did not associate these concerns with their overall satisfaction. As one participant mentioned, “I sometimes feel anxious when using GenAI because I’m not sure if the answers are correct, but it does not really affect my overall satisfaction.” This suggests that while anxiety may shape users’ perceptions of GenAI, it does not directly reduce their satisfaction with its use, possibly because of the tool’s overall convenience and efficiency. Integrating the two types of data in this way strengthens the validity of our conclusions and provides a more nuanced understanding of learners’ experiences with GenAI. The qualitative insights, particularly the suggestions for improving personalization and accuracy, help contextualize the quantitative results and offer actionable recommendations for future system optimization. Future studies that continue to combine qualitative and quantitative methods will allow a deeper understanding of the complex factors influencing learners’ engagement with GenAI.

6 Limitations

This study has several limitations. First, the sample consisted of undergraduate and graduate students, with limited representation from other educational levels. Future research should include learners from primary and special education settings to capture a broader spectrum of learning experiences and enhance the generalizability of the findings. Second, the study used a cross-sectional design, which restricts the ability to capture dynamic changes in learners’ self-regulation. Researchers should adopt longitudinal designs to examine how self-regulation in GenAI-assisted learning evolves over time. Third, the qualitative component relied on a single open-ended question embedded in the questionnaire. This choice kept responses focused on the research question but may not have fully captured participants’ subjective experiences, such as their motivations, challenges, and perceived benefits. Future work should incorporate in-depth interviews or a broader set of open-ended questions to obtain richer and more varied qualitative insights. Fourth, this study primarily reflects learners’ experiences with Ernie Bot and ChatGPT, as these were the most popular GenAI tools among Chinese learners. Although the research instrument was designed to capture general perceptions of GenAI in learning, the findings may not fully reflect learners’ experiences with other GenAI tools, which limits their generalizability. Future work should examine specific GenAI applications or conduct comparative analyses across platforms to better understand learners’ self-regulation in different GenAI-assisted learning environments. Fifth, this study relied on self-reported data, which may be susceptible to common method bias (CMB); participants may have provided socially desirable responses, potentially inflating the relationships between variables. Although self-report is common in educational research, the risk of CMB cannot be overlooked, and future work could incorporate techniques such as Harman’s single-factor test or a marker variable to check for its presence.
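To illustrate, the following is a minimal sketch of Harman’s single-factor test in Python. The `factor_analyzer` package, the file name, and the column layout (a hypothetical DataFrame containing only the Likert-scale item responses) are assumptions for illustration, not part of this study’s analysis pipeline. If one unrotated factor accounts for the majority of the total variance (a common rule of thumb is more than 50%), CMB may be a concern.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical wide-format data: one row per respondent, one column per survey item.
items = pd.read_csv("survey_items.csv")  # assumed file name

# Harman's single-factor test: force all items onto a single unrotated factor.
fa = FactorAnalyzer(n_factors=1, rotation=None)
fa.fit(items)

# get_factor_variance() returns (SS loadings, proportion of variance, cumulative variance).
_, proportion, _ = fa.get_factor_variance()
single_factor_variance = proportion[0]

print(f"Variance explained by a single factor: {single_factor_variance:.2%}")
if single_factor_variance > 0.50:
    print("A single factor explains most of the variance; common method bias may be a concern.")
else:
    print("No single dominant factor; common method bias is less likely to be a major concern.")
```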
Finally, a notable limitation is that multi-group SEM (MGSEM) was not used to examine potential differences in path relationships across subpopulations. This study estimated a general model without considering that groups defined by gender, education level, or other demographic factors may exhibit distinct learning behaviors and responses to GenAI-assisted learning environments. MGSEM allows a more nuanced view of such heterogeneity by testing whether path relationships vary across subgroups; for instance, the effects of perceived usefulness or satisfaction on self-regulation or behavioral intention may differ between male and female participants or between education levels. Future work could therefore integrate MGSEM to explore these group differences and gain a more refined understanding of diverse learning behaviors in GenAI-assisted learning environments.
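As a rough illustration of the direction such an analysis could take, the sketch below fits the same structural model separately for each group with the Python package semopy and compares the resulting path estimates. The model syntax, variable names, and data file are hypothetical, and a full MGSEM would additionally impose and test cross-group equality constraints (e.g., measurement invariance), which this simplified per-group comparison does not do.

```python
import pandas as pd
import semopy

# Hypothetical dataset with composite scores per construct and a grouping column.
data = pd.read_csv("genai_survey.csv")  # assumed columns: PU, PS, ILE, PSR, BI, gender

# Simplified structural model (lavaan-style syntax); illustrative only.
model_desc = """
PSR ~ PU + PS + ILE
BI  ~ PU + PS + PSR
"""

# Fit the same model separately in each gender group and collect the path estimates.
estimates = {}
for group, subset in data.groupby("gender"):
    model = semopy.Model(model_desc)
    model.fit(subset)
    est = model.inspect()  # DataFrame of parameter estimates
    paths = est[est["op"] == "~"][["lval", "rval", "Estimate"]]
    estimates[group] = paths.set_index(["lval", "rval"])["Estimate"]

# Side-by-side comparison of structural path coefficients across groups.
comparison = pd.DataFrame(estimates)
print(comparison)
```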

7 Implications

From a theoretical perspective, this study offers empirical insights into learners’ self-regulation in GenAI-assisted learning, addressing a gap in the existing literature and strengthening the theoretical foundation for applying GenAI in interdisciplinary learning. From a practical perspective, first, teachers are encouraged to integrate high-quality GenAI tools (e.g., ChatGPT in global contexts and Ernie Bot in China) into their teaching practices to enhance students’ learning experiences. With these tools, educators can create personalized learning environments in which students receive tailored feedback based on their individual needs, and engagement is strengthened through dynamic, creative content such as AI-driven simulations, interactive lessons, and innovative assignments. Second, learners can actively discuss their GenAI-assisted learning experiences (e.g., with ChatGPT and Ernie Bot) with teachers and adjust their learning behaviors in a timely manner. Learners should also prioritize identifying valuable information and leveraging the high-quality resources provided by GenAI to develop deeper understanding, and they can use GenAI as a supportive tool for setting learning goals, managing their learning schedules, and monitoring progress, thereby enhancing their self-regulation. Third, system operators should develop emotion-sensing functions to identify learners’ emotional states, and developers should prioritize user experience and system stability by creating learning systems that are user-friendly and accessible. Finally, developers can incorporate dynamic, interactive command designs into ChatGPT (e.g., layered prompts and visual navigation) to guide users step by step through operations; this not only lowers the learning threshold but also enhances usability.

8 Conclusion

This study expanded Liaw and Huang’s (2013) three-tier model of self-regulation into eight factors (perceived self-efficacy, perceived anxiety, interactive learning environments, information system quality, perceived satisfaction, perceived usefulness, perceived self-regulation, and behavioral intention) to systematically examine the key determinants of learners’ self-regulation in GenAI-assisted learning. Findings revealed that information system quality and interactive learning environments were stronger predictors of perceived usefulness than perceived self-efficacy. Information system quality was a more significant predictor of perceived satisfaction than both perceived self-efficacy and interactive learning environments, while perceived usefulness also effectively predicted perceived satisfaction. Additionally, perceived usefulness was a stronger predictor of perceived self-regulation than interactive learning environments and perceived satisfaction, and it outperformed both perceived satisfaction and perceived self-regulation in predicting behavioral intention. Furthermore, perceived usefulness partially mediated the effects of perceived self-efficacy, information system quality, and interactive learning environments on perceived satisfaction, and it partially mediated the effect of interactive learning environments on perceived self-regulation. Perceived satisfaction partially mediated the relationship between perceived usefulness and perceived self-regulation, and perceived self-regulation partially mediated the relationship between perceived usefulness and behavioral intention. This study thus identified factors influencing learners’ (e.g., prospective mathematics teachers’) self-regulation in GenAI-assisted learning and further enriched the extended three-tier model of self-regulation.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

The studies involving humans were approved by the ethics committee/institutional review board of Yangzhou University, School of Mathematical Sciences. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.

Author contributions

ZL: Writing – review & editing, Writing – original draft. YZ: Data curation, Funding acquisition, Writing – review & editing. HZ: Writing – review & editing, Funding acquisition, Software. YL: Data curation, Writing – review & editing.

Funding

The author(s) declare that financial support was received for the research and/or publication of this article. This work was supported by the National Natural Science Foundation of China (Grant Nos. 11901426 and 12371140) and the Qinglan Project of Jiangsu Province of China. This study was also supported by the Jiangsu Provincial Higher Education Teaching Reform Project, “Construction and Practice of a General Education Curriculum System on Mathematical Principles of Artificial Intelligence for Science and Engineering Undergraduates” (Project Approval No. 2025JGYB800); the 2025 Yangzhou University Humanities and Social Sciences Research Fund Special Project “Research on Teacher Education Talent Cultivation Mechanisms and Feasible Pathways for ‘Optimized Combination and Transformational Integration’ Facilitated by Artificial Intelligence” (Project Approval No. XJJZX2025-1); the Yangzhou University Graduate Education Teaching Reform Project (Project Approval No. XJGKT25_003); the Sanlian Shuyuan Master Teacher Studio; and the Excellent Teaching Team of the Yangzhou University “Qinglan Project”.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The authors declare that Gen AI was used in the creation of this manuscript. Generative AI was employed solely for partial language refinement in the preparation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2025.1599478/full#supplementary-material

References

Al-Emran, M., AlQudah, A. A., Abbasi, G. A., Al-Sharafi, M. A., and Iranmanesh, M. (2023). Determinants of using AI-based chatbots for knowledge sharing: evidence from PLS-SEM and fuzzy sets (fsQCA). IEEE Trans. Eng. Manag. 71, 4985–4999. doi: 10.1109/TEM.2023.3237789

Almufarreh, A. (2024). Determinants of students’ satisfaction with AI tools in education: a PLS-SEM-ANN approach. Sustainability 16:5354. doi: 10.3390/su16135354

Al-Sharafi, M. A., Al-Emran, M., Iranmanesh, M., Al-Qaysi, N., Iahad, N. A., and Arpaci, I. (2023). Understanding the impact of knowledge management factors on the sustainable use of AI-based chatbots for educational purposes using a hybrid SEM-ANN approach. Interact. Learn. Environ. 31, 7491–7510. doi: 10.1080/10494820.2022.2075014

Anderson, J. C., and Gerbing, D. W. (1988). Structural equation modeling in practice: a review and recommended two-step approach. Psychol. Bull. 103, 411–423. doi: 10.1037/0033-2909.103.3.411

Arthur, F., Salifu, I., and Abam Nortey, S. (2025). Predictors of higher education students’ behavioural intention and usage of ChatGPT: the moderating roles of age, gender and experience. Interact. Learn. Environ. 33, 993–1019. doi: 10.1080/10494820.2024.2362805

Baabdullah, A. M. (2024). Generative conversational AI agent for managerial practices: the role of IQ dimensions, novelty seeking and ethical concerns. Technol. Forecast. Soc. Change 198:122951. doi: 10.1016/j.techfore.2023.122951

Bagozzi, R. P., and Yi, Y. (1988). On the evaluation of structural equation models. J. Acad. Mark. Sci. 16, 74–94. doi: 10.1007/BF02723327

Bai, Y., and Wang, S. (2025). Impact of generative AI interaction and output quality on university students’ learning outcomes: a technology-mediated and motivation-driven approach. Sci. Rep. 15:24054. doi: 10.1038/s41598-025-08697-6

Bandura, A. (1986). Social foundations of thought and action. Englewood Cliffs, NJ: Prentice-Hall, 94–106.

Barbeite, F. G., and Weiss, E. M. (2004). Computer self-efficacy and anxiety scales for an internet sample: testing measurement equivalence of existing measures and development of new scales. Comput. Human Behav. 20, 1–15. doi: 10.1016/S0747-5632(03)00049-9

Baron, R. M., and Kenny, D. A. (1986). The moderator–mediator variable distinction in social psychological research: conceptual, strategic, and statistical considerations. J. Pers. Soc. Psychol. 51, 1173–1182. doi: 10.1037/0022-3514.51.6.1173

Bhullar, P. S., Joshi, M., and Chugh, R. (2024). ChatGPT in higher education-a synthesis of the literature and a future research agenda. Educ. Inf. Technol. 29, 21501–21522. doi: 10.1007/s10639-024-12723-x

Biyiri, E., Dahanayake, S., Dassanayake, D., Nayyar, A., Dayangana, K., and Jayasinghe, J. (2024). ChatGPT in self-directed learning: exploring acceptance and utilization among undergraduates of state universities in Sri Lanka. Educ. Inf. Technol. 30, 10381–10409. doi: 10.1007/s10639-024-13269-8

Börekci, C., and Uyangör, N. (2025). The role of academic self-efficacy in pre-service mathematics and science teachers’ use of generative artificial intelligence tools. Balıkesir Üniv. Fen Bilimleri Enstitüsü Dergisi 27, 681–704. doi: 10.25092/baunfbed.1596547

Cai, Q., Lin, Y., and Yu, Z. (2023). Factors influencing learner attitudes towards ChatGPT-assisted language learning in higher education. Int. J. Hum. Comput. Interact. 40, 7112–7126. doi: 10.1080/10447318.2023.2261725

Caporusso, N. (2023). Generative artificial intelligence and the emergence of creative displacement anxiety. Res. Directs Psychol. Behav. 3, 1–12. doi: 10.53520/rdpb2023.10795

Chan, C. K. Y., and Hu, W. (2023). Students’ voices on generative AI: perceptions, benefits, and challenges in higher education. Int. J. Educ. Technol. High. Educ. 20:43. doi: 10.1186/s41239-023-00411-8

Chang, C. Y., Hwang, G. J., and Gau, M. L. (2022). Promoting students’ learning achievement and self-efficacy: a mobile chatbot approach for nursing training. Br. J. Educ. Technol. 53, 171–188. doi: 10.1111/bjet.13158

Chiu, T. K. (2024). The impact of generative AI (GenAI) on practices, policies and research direction in education: a case of ChatGPT and Midjourney. Interact. Learn. Environ. 32, 6187–6203. doi: 10.1080/10494820.2023.2253861

Cook, K. S., and Rice, E. (2003). “Social exchange theory” in Handbook of social psychology. ed. J. Delamater (New York: Kluwer), 61–88.

Cotton, D. R., Cotton, P. A., and Shipway, J. R. (2024). Chatting and cheating: ensuring academic integrity in the era of ChatGPT. Innov. Educ. Teach. Int. 61, 228–239. doi: 10.1080/14703297.2023.2190148

Creswell, J. W., and Creswell, J. D. (2017). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). London: Sage publications.

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 13, 319–340. doi: 10.2307/249008

DeLone, W. H., and McLean, E. R. (1992). Information systems success: the quest for the dependent variable. Inf. Syst. Res. 3, 60–95. doi: 10.1287/isre.3.1.60

Divekar, R. R., Drozdal, J., Chabot, S., Zhou, Y., Su, H., Chen, Y., et al. (2022). Foreign language acquisition via artificial intelligence and extended reality: design and evaluation. Comput. Assist. Lang. Learn. 35, 2332–2360. doi: 10.1080/09588221.2021.1879162

Doll, W. J., and Torkzadeh, G. (1988). The measurement of end-user computing satisfaction. MIS Q. 12, 259–274. doi: 10.2307/248851

Du, L., and Lv, B. (2024). Factors influencing students’ acceptance and use generative artificial intelligence in elementary education: an expansion of the UTAUT model. Educ. Inf. Technol. 29, 24715–24734. doi: 10.1007/s10639-024-12835-4

Fan, Y., Chen, J., Shirkey, G., John, R., Wu, S. R., Park, H., et al. (2016). Applications of structural equation modeling (SEM) in ecological studies: an updated review. Ecol. Process. 5, 1–12. doi: 10.1186/s13717-016-0063-3

Farrokhnia, M., Banihashem, S. K., Noroozi, O., and Wals, A. (2024). A SWOT analysis of ChatGPT: implications for educational practice and research. Innov. Educ. Teach. Int. 61, 460–474. doi: 10.1080/14703297.2023.2195846

Fergus, S., Botha, M., and Ostovar, M. (2023). Evaluating academic answers generated using ChatGPT. J. Chem. Educ. 100, 1672–1675. doi: 10.1021/acs.jchemed.3c00087

Fornell, C., and Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 18, 39–50. doi: 10.2307/3151312

Garcia, M. B. (2025). ChatGPT as an academic writing tool: factors influencing researchers’ intention to write manuscripts using generative artificial intelligence. Int. J. Hum.-Comput. Interact. 2, 1–15. doi: 10.1080/10447318.2025.2499158

Goli, M., Sahu, A. K., Bag, S., and Dhamija, P. (2023). Users’ acceptance of artificial intelligence-based chatbots: an empirical study. Int. J. Technol. Human Interact. 19, 1–18. doi: 10.4018/IJTHI.318481

Grant, C., and Osanloo, A. (2014). Understanding, selecting, and integrating a theoretical framework in dissertation research: creating the blueprint for your “house”. Admin. Issues J. 4:4. doi: 10.5929/2014.4.2.9

Hair, J., and Alamer, A. (2022). Partial least squares structural equation modeling (PLS-SEM) in second language and education research: guidelines using an applied example. Res. Methods Appl. Linguist. 1:100027. doi: 10.1016/j.rmal.2022.100027

Hair, J. F., Babin, B. J., and Krey, N. (2017). Covariance-based structural equation modeling in the journal of advertising: review and recommendations. J. Advert. 46, 163–177. doi: 10.1080/00913367.2017.1281777

Hair, J. F., Risher, J. J., Sarstedt, M., and Ringle, C. M. (2019). When to use and how to report the results of PLS-SEM. Eur. Bus. Rev. 31, 2–24. doi: 10.1108/EBR-11-2018-0203

Hair, J. F., Sarstedt, M., Ringle, C. M., and Mena, J. A. (2012). An assessment of the use of partial least squares structural equation modeling in marketing research. J. Acad. Mark. Sci. 40, 414–433. doi: 10.1007/s11747-011-0261-6

Hayduk, L. (1987). Structural equation modeling with LISREL: Essentials and advances. Baltimore, MD: Johns Hopkins University Press.

Henseler, J., Ringle, C. M., and Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Mark. Sci. 43, 115–135. doi: 10.1007/s11747-014-0403-8

Hew, K. F., Huang, W., Wang, S., Luo, X., and Gonda, D. E. (2025). Towards a large-language-model-based chatbot system to automatically monitor student goal setting and planning in online learning. Educ. Technol. Soc. 28, 112–132. doi: 10.30191/ETS.202507_28.SP08

Hu, L. T., and Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct. Equ. Model. 6, 1–55. doi: 10.1080/10705519909540118

Hu, L., Wang, H., and Xin, Y. (2025). Factors influencing Chinese pre-service teachers’ adoption of generative AI in teaching: an empirical study based on UTAUT2 and PLS-SEM. Educ. Inf. Technol. 30, 12609–12631. doi: 10.1007/s10639-025-13353-7

Huang, F., Teo, T., Sánchez-Prieto, J. C., García-Peñalvo, F. J., and Olmos-Migueláñez, S. (2019). Cultural values and technology adoption: a model comparison with university teachers from China and Spain. Comput. Educ. 133, 69–81. doi: 10.1016/j.compedu.2019.01.012

Hwang, G.-J., and Chang, C.-Y. (2023). A review of opportunities and challenges of chatbots in education. Interact. Learn. Environ. 31, 4099–4112. doi: 10.1080/10494820.2021.1952615

Jeon, J. (2024). Exploring AI chatbot affordances in the EFL classroom: young learners’ experiences and perspectives. Comput. Assist. Lang. Learn. 37, 1–26. doi: 10.1080/09588221.2021.2021241

Ji, Y., Zhong, M., Lyu, S., Li, T., Niu, S., and Zhan, Z. (2025). How does AI literacy affect individual innovative behavior: the mediating role of psychological need satisfaction, creative self-efficacy, and self-regulated learning. Educ. Inf. Technol. 30, 16133–16162. doi: 10.1007/s10639-025-13437-4

Jin, F., Lin, C.-H., and Lai, C. (2025b). Modeling AI-assisted writing: how self-regulated learning influences writing outcomes. Comput. Human Behav. 165:108538. doi: 10.1016/j.chb.2024.108538

Jin, Y., Yan, L., Echeverria, V., Gašević, D., and Martinez-Maldonado, R. (2025a). Generative AI in higher education: a global perspective of institutional adoption policies and guidelines. Comput. Educ. Art. Intellig. 8:100348. doi: 10.1016/j.caeai.2024.100348

Jo, H. (2024). Subscription intentions for ChatGPT plus: a look at user satisfaction and self-efficacy. Mark. Intell. Plann. 42, 1052–1073. doi: 10.1108/MIP-08-2023-0411

Jung, Y. M., and Jo, H. (2025). Understanding continuance intention of generative AI in education: an ECM-based study for sustainable learning engagement. Sustainability 17:6082. doi: 10.3390/su17136082

Karal, Y., and Sarialioglu, R. O. (2025). Examining the relationship between undergraduate students' acceptance, anxiety and online self-regulation of generative artificial intelligence. Int. J. Technol. Educ. 8, 445–466. doi: 10.46328/ijte.1065

Kim, M., Kim, J., Knotts, T. L., and Albers, N. D. (2025). AI for academic success: investigating the role of usability, enjoyment, and responsiveness in ChatGPT adoption. Educ. Inf. Technol. 30, 14393–14414. doi: 10.1007/s10639-025-13398-8

Kim, G.-M., and Ong, S. M. (2005). An exploratory study of factors influencing m-learning success. J. Comput. Inf. Syst. 46, 92–97. doi: 10.1080/08874417.2005.11645872

Kim, Y.-e., Zepeda, C. D., and Butler, A. C. (2023). An interdisciplinary review of self-regulation of learning: bridging cognitive and educational psychology perspectives. Educ. Psychol. Rev. 35:92. doi: 10.1007/s10648-023-09800-x

Kline, R. B. (2023). Principles and practice of structural equation modeling. New York, NY: Guilford publications.

Kramarski, B., and Gutman, M. (2006). How can self-regulated learning be supported in mathematical e-learning environments? J. Comput. Assist. Learn. 22, 24–33. doi: 10.1111/j.1365-2729.2006.00157.x

Kumar, J. A. (2021). Educational chatbots for project-based learning: investigating learning outcomes for a team-based design course. Int. J. Educ. Technol. High. Educ. 18:65. doi: 10.1186/s41239-021-00302-w

Laun, M., and Wolff, F. (2025). Chatbots in education: hype or help? A meta-analysis. Learn. Individ. Differ. 119:102646. doi: 10.1016/j.lindif.2025.102646

Lee, Y.-F., Hwang, G.-J., and Chen, P.-Y. (2022). Impacts of an AI-based chatbot on college students’ after-class review, academic performance, self-efficacy, learning attitude, and motivation. Educ. Technol. Res. Dev. 70, 1843–1865. doi: 10.1007/s11423-022-10142-8

Liaw, S.-S. (2008). Investigating students’ perceived satisfaction, behavioral intention, and effectiveness of e-learning: a case study of the blackboard system. Comput. Educ. 51, 864–873. doi: 10.1016/j.compedu.2007.09.005

Liaw, S.-S., and Huang, H.-M. (2007). Developing a collaborative e-learning system based on users’ perceptions. Comput. Support. Coop. Work Design 4402, 751–759. doi: 10.1007/978-3-540-72863-4_76

Liaw, S.-S., and Huang, H.-M. (2013). Perceived satisfaction, perceived usefulness and interactive learning environments as predictors to self-regulation in e-learning environments. Comput. Educ. 60, 14–24. doi: 10.1016/j.compedu.2012.07.015

Liaw, S.-S., Huang, H.-M., and Chen, G.-D. (2007). An activity-theoretical approach to investigate learners’ factors toward e-learning systems. Comput. Human Behav. 23, 1906–1920. doi: 10.1016/j.chb.2006.02.002

Liu, N., Deng, W., and Ayub, A. F. M. (2025b). Exploring the adoption of AI-enabled English learning applications among university students using extended UTAUT2 model. Educ. Inf. Technol. 30, 13351–13383. doi: 10.1007/s10639-025-13349-3

Liu, S.-H., Liao, H.-L., and Pratt, J. A. (2009). Impact of media richness and flow on e-learning technology acceptance. Comput. Educ. 52, 599–607. doi: 10.1016/j.compedu.2008.11.002

Liu, Z., Zuo, H., and Lu, Y. (2025a). The impact of ChatGPT on students’ academic achievement: a meta-analysis. J. Comput. Assist. Learn. 41:e70096. doi: 10.1111/jcal.70096

Lodge, J. M., Thompson, K., and Corrin, L. (2023). Mapping out a research agenda for generative artificial intelligence in tertiary education. Australas. J. Educ. Technol. 39, 1–8. doi: 10.14742/ajet.8695

Ma, L., and Lee, C. S. (2019). Investigating the adoption of MOOCs: a technology–user–environment perspective. J. Comput. Assist. Learn. 35, 89–98. doi: 10.1111/jcal.12314

McLean, G., and Osei-Frimpong, K. (2019). Chat now… examining the variables influencing the use of online live chat. Technol. Forecast. Soc. Change 146, 55–67. doi: 10.1016/j.techfore.2019.05.017

Mohamed Eldakar, M. A., Khafaga Shehata, A. M., and Abdelrahman Ammar, A. S. (2025). What motivates academics in Egypt toward generative AI tools? An integrated model of TAM, SCT, UTAUT2, perceived ethics, and academic integrity. Inf. Dev. 41, 747–765. doi: 10.1177/02666669251314859

Mohamed, M. G., Goktas, P., Khalaf, S. A., Kucukkuya, A., Al-Faouri, I., Seleem, E. A. E. S., et al. (2025). Generative artificial intelligence acceptance, anxiety, and behavioral intention in the Middle East: a TAM-based structural equation modelling approach. BMC Nurs. 24:703. doi: 10.1186/s12912-025-03436-8

Mun, I. B., and Hwang, K.-H. (2024). Understanding ChatGPT continuous usage intention: the role of information quality, information usefulness, and source trust. Inf. Dev. 41, 675–691. doi: 10.1177/02666669241307595

Pan, M., Lai, C., and Guo, K. (2025). Effects of GenAI-empowered interactive support on university EFL students’ self-regulated strategy use and engagement in reading. Internet High. Educ. 65:100991. doi: 10.1016/j.iheduc.2024.100991

Rospigliosi, P. a. (2023). Artificial intelligence in teaching and learning: what questions should we ask of ChatGPT? Interact. Learn. Environ. 31, 1–3. doi: 10.1080/10494820.2023.2180191

Sallam, M., Al-Adwan, A. S., Mijwil, M. M., Abdelaziz, D. H., Al-Qaisi, A., Ibrahim, O. M., et al. (2025). Technology readiness, social influence, and anxiety as predictors of university educators’ perceptions of generative AI usefulness and effectiveness. Dev. Psychol. 9, 1–27. doi: 10.20944/preprints202505.0338.v1

Sharma, S., Dick, G., Chin, W., and Land, L. (2007). Self-regulation and e-learning. In Proceedings of the Fifteenth European Conference on Information Systems, 383–394.

Şimşek, A. S., Cengiz, G. Ş. T., and Bal, M. (2025). Extending the TAM framework: exploring learning motivation and agility in educational adoption of generative AI. Educ. Inf. Technol. 30, 1–30. doi: 10.1007/s10639-025-13591-9

Strzelecki, A. (2024). To use or not to use ChatGPT in higher education? A study of students’ acceptance and use of technology. Interact. Learn. Environ. 32, 5142–5155. doi: 10.1080/10494820.2023.2209881

Strzelecki, A., and ElArabawy, S. (2024). Investigation of the moderation effect of gender and study level on the acceptance and use of generative AI by higher education students: comparative evidence from Poland and Egypt. Br. J. Educ. Technol. 55, 1209–1230. doi: 10.1111/bjet.13425

Sun, P.-C., Tsai, R. J., Finger, G., Chen, Y.-Y., and Yeh, D. (2008). What drives a successful e-learning? An empirical investigation of the critical factors influencing learner satisfaction. Comput. Educ. 50, 1183–1202. doi: 10.1016/j.compedu.2006.11.007

Tang, X., Yuan, Z., and Qu, S. (2025). Factors influencing university students’ behavioural intention to use generative artificial intelligence for educational purposes based on a revised UTAUT2 model. J. Comput. Assist. Learn. 41:e13105. doi: 10.1111/jcal.13105

Teo, T., and Huang, F. (2019). Investigating the influence of individually espoused cultural values on teachers’ intentions to use educational technologies in Chinese universities. Interact. Learn. Environ. 27, 813–829. doi: 10.1080/10494820.2018.1489856

Tlili, A., Shehata, B., Adarkwah, M. A., Bozkurt, A., Hickey, D. T., Huang, R., et al. (2023). What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learn. Environ. 10:15. doi: 10.1186/s40561-023-00237-x

Tsai, M. J. (2009). The model of strategic e-learning: understanding and evaluating student e-learning from metacognitive perspectives. J. Educ. Technol. Soc. 12, 34–48. Available at: https://www.jstor.org/stable/jeductechsoci.12.1.34

Unal, E., and Uzun, A. M. (2021). Understanding university students’ behavioral intention to use Edmodo through the lens of an extended technology acceptance model. Br. J. Educ. Technol. 52, 619–637. doi: 10.1111/bjet.13046

Vighnarajah, Wong, S. L., and Abu Bakar, K. (2009). Qualitative findings of students’ perception on practice of self-regulated strategies in online community discussion. Comput. Educ. 53, 94–103. doi: 10.1016/j.compedu.2008.12.021

Virvou, M., and Katsionis, G. (2008). On the usability and likeability of virtual reality games for education: the case of VR-ENGAGE. Comput. Educ. 50, 154–178. doi: 10.1016/j.compedu.2006.04.004

Wang, Y. (2024). Cognitive and sociocultural dynamics of self-regulated use of machine translation and generative AI tools in academic EFL writing. System 126:103505. doi: 10.1016/j.system.2024.103505

Wang, F., Li, N., Cheung, A. C., and Wong, G. K. (2025). In GenAI we trust: an investigation of university students’ reliance on and resistance to generative AI in language learning. Int. J. Educ. Technol. High. Educ. 22:59. doi: 10.1186/s41239-025-00547-9

Wang, C., Wang, H., Li, Y., Dai, J., Gu, X., and Yu, T. (2024). Factors influencing university students’ behavioral intention to use generative artificial intelligence: integrating the theory of planned behavior and AI literacy. Int. J. Hum. Comput. Interact. 41, 6649–6671. doi: 10.1080/10447318.2024.2383033

Wut, T. M., and Lee, S. W. (2022). Factors affecting students’ online behavioral intention in using discussion forum. Interact. Technol. Smart Educ. 19, 300–318. doi: 10.1108/ITSE-02-2021-0034

Xia, Q., Chiu, T. K., Chai, C. S., and Xie, K. (2023). The mediating effects of needs satisfaction on the relationships between prior knowledge and self-regulated learning through artificial intelligence chatbot. Br. J. Educ. Technol. 54, 967–986. doi: 10.1111/bjet.13305

Xu, X., Qiao, L., Cheng, N., Liu, H., and Zhao, W. (2025). Enhancing self-regulated learning and learning experience in generative AI environments: the critical role of metacognitive support. Br. J. Educ. Technol. 56, 1842–1863. doi: 10.1111/bjet.13599

Yan, L., Sha, L., Zhao, L., Li, Y., Martinez-Maldonado, R., Chen, G., et al. (2024). Practical and ethical challenges of large language models in education: a systematic scoping review. Br. J. Educ. Technol. 55, 90–112. doi: 10.1111/bjet

Yin, R. K. (2018). Case study research and applications. Thousand Oaks, CA: Sage.

Yu, S., Hou, Y., and Li, H. (2025). Suitability of Chinese GenAI platforms for early childhood education: a multifaceted evaluation. AI Brain Child 1:7. doi: 10.1007/s44436-025-00008-0

Zhang, P., and Tur, G. (2024). A systematic review of ChatGPT use in K-12 education. Eur. J. Educ. 59:e12599. doi: 10.1111/ejed

Zhang, R., and Wang, J. (2025). Perceptions, adoption intentions, and impacts of generative AI among Chinese university students. Curr. Psychol. 44, 11276–11295. doi: 10.1007/s12144-025-07928-3

Zheng, W., Ma, Z., Sun, J., Wu, Q., and Hu, Y. (2024). Exploring factors influencing continuance intention of pre-service teachers in using generative artificial intelligence. Int. J. Hum. Comput. Interact. 41, 10325–10338. doi: 10.1080/10447318.2024.2433300

Zheng, Y., Wang, Y., Liu, K. S.-X., and Jiang, M. Y.-C. (2024). Examining the moderating effect of motivation on technology acceptance of generative AI for English as a foreign language learning. Educ. Inf. Technol. 29, 23547–23575. doi: 10.1007/s10639-024-12763-3

Zhou, X., Teng, D., and Al-Samarraie, H. (2024). The mediating role of generative AI self-regulation on students’ critical thinking and problem-solving. Educ. Sci. 14:302. doi: 10.3390/educsci14121302

Zhu, W., Huang, L., Zhou, X., Li, X., Shi, G., Ying, J., et al. (2024). Could AI ethical anxiety, perceived ethical risks and ethical awareness about AI influence university students’ use of generative AI products? An ethical perspective. Int. J. Hum. Comput. Interact. 41, 742–764. doi: 10.1080/10447318.2024.2323277

Zimmerman, B. J., and Schunk, D. H. (2001). “Reflections on theories of self-regulated learning and academic achievement” in Self-regulated learning and academic achievement: Theoretical perspectives, vol. 2, 289–307.

Keywords: GenAI-assisted learning, perceived self-regulation, structural equation modeling, three-tier models of self-regulation, prospective mathematics teachers

Citation: Liu Z, Zhao Y, Zuo H and Lu Y (2025) Perceived satisfaction, perceived usefulness, and interactive learning environments as predictors of university students’ self-regulation in the context of GenAI-assisted learning: an empirical study in mainland China. Front. Psychol. 16:1599478. doi: 10.3389/fpsyg.2025.1599478

Received: 25 March 2025; Revised: 23 October 2025; Accepted: 12 November 2025;
Published: 03 December 2025.

Edited by:

Daniel H. Robinson, The University of Texas at Arlington College of Education, United States

Reviewed by:

Janika Leoste, Tallinn University, Estonia
Yunjie Tang, Peking University, China

Copyright © 2025 Liu, Zhao, Zuo and Lu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Haode Zuo, yzzxzhd@foxmail.com
