ORIGINAL RESEARCH article

Front. Artif. Intell., 22 January 2026

Sec. AI for Human Learning and Behavior Change

Volume 9 - 2026 | https://doi.org/10.3389/frai.2026.1738774

Exploring the role of agentic AI in fostering self-efficacy, autonomy support, and self-learning motivation in higher education

  • Department of Educational Technologies, College of Education, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia

Introduction: The rapid adoption of Artificial Intelligence (AI) in learning has transformed learners’ engagement, but understanding of the psychological and technological drivers of successful AI-enabled learning remains limited. This research investigates how students’ perceived agency of AI, perceived usefulness, ease of use, trust, autonomy support, and self-efficacy collectively shape students’ self-learning behavior and motivation. Drawing on the Technology Acceptance Model (TAM), Social Cognitive Theory (SCT), and Self-Determination Theory (SDT), our research proposes an integrated model of the motivational and behavioral processes underlying AI adoption in learning settings.

Methods: We adopted a quantitative research design with a structured questionnaire administered to 280 higher education students in Saudi Arabia. Data were analyzed with Structural Equation Modeling (SEM) using SmartPLS 4.

Results: Findings indicate that students’ perceived agency of AI significantly predicts perceived usefulness, ease of use, and autonomy support, while ease of use significantly enhances AI-supported self-efficacy. Self-efficacy and autonomy support significantly affect self-learning motivation, which in turn positively drives self-learning behavior. However, usefulness and trust in AI did not directly influence self-efficacy, pointing to cultural and contextual factors.

Discussion: This research enriches the integration of TAM, SCT, and SDT by illustrating how AI’s perceived autonomy and usability jointly promote self-directed learning motivation. It also offers guidance to educators and system designers for building AI tools that support learner autonomy, usability, and confidence. Future research should undertake longitudinal and cross-cultural validations to refine the model theoretically.

1 Introduction

Over the past few decades, education has changed significantly under the influence of computing and communication technologies. Traditional classrooms, characterized by teacher-centered instruction, fixed curricula, uniform pacing, and minimal feedback, have repeatedly failed to accommodate diverse students (Dahri et al., 2024; Almogren et al., 2024a). Students have remained disengaged, unable to influence the direction of learning, and restricted by narrow assessment procedures. Traditional systems generally assume that all students learn at the same rate, benefit from the same teaching methodology, and need the same facilitation from instructors (Dahri et al., 2024; Almogren et al., 2024a). This homogeneity ignores individual variability in background knowledge, motivation, self-regulation ability, learning preferences, and learning pace. Furthermore, feedback in most traditional contexts is late, generic, and poorly matched to students’ constantly changing needs (Kalantzis and Cope, 2004). Educational research has increasingly challenged these practices because of the constraints they place on students’ motivation and on the self-regulated learning behaviors that are critically needed in higher education and lifelong learning contexts (de Bruijn-Smolders et al., 2016; Roth et al., 2016).

With the rise of digital tools, online learning platforms, intelligent tutoring systems, adaptive learning systems, and AI-driven learning technologies, Strielkowski et al. (2025) noted growing enthusiasm about how technology could compensate for some of those weaknesses. Innovations such as Learning Management Systems (LMS), MOOCs, interactive multimedia, and simulation-based learning have begun to allow greater flexibility in pacing, more diverse instructional modalities, immediate feedback, and access to a wider range of resources (Munna et al., 2024; Ok, 2025). Studies have found that well-designed digital technology can enhance students’ engagement, cognitive learning outcomes, and satisfaction. But many of those tools still function in reactive or prescriptive modes: students respond to system directions, while the system’s own adaptation remains limited, all too often underpinned by simple heuristics or pre-defined branching logic. Such tools can increase utility and ease of access, but they do not necessarily cultivate deeper motivational constructs—such as perceived agency, self-determination, or self-efficacy—and may do little to encourage genuinely self-directed learning behavior.

Recent advances in Artificial Intelligence (AI), notably in agentic AI, adaptive systems, reinforcement learning, large language models, and explainable AI, have opened new possibilities in educational settings. Agentic AI refers to AI systems that can act with a degree of autonomy: making decisions, adapting dynamically to learner needs, guiding learning paths proactively, offering personalized scaffolding, and even initiating interventions or suggestions rather than merely responding to users. In education, these systems include adaptive tutoring platforms, AI agents that monitor student progress and provide timely feedback, and intelligent companions capable of supporting students’ decision making about what, when, and how to learn. Educational research has begun to document benefits of AI for increasing self-efficacy and motivation. For example, a systematic review found that AI tools significantly contribute to the development of computational thinking and self-efficacy among learners across levels when the systems adapt to learner performance and provide supportive feedback (Massaty et al., 2024). Another study of nursing students in China showed that AI literacy correlated positively with AI self-efficacy, which in turn was linked to higher engagement (He et al., 2025). Similarly, investigations among pre-service special education teachers in China have revealed that perceived usefulness and ease of use influence their intention to adopt AI tools, mediated by self-efficacy (Yao and Wang, 2024). Moreover, teacher studies in K-12 contexts indicate that while attitudes toward AI are generally positive, actual readiness—measured via self-efficacy, access, and support—varies widely across individuals and institutional contexts (Bergdahl and Sjöberg, 2025). These findings suggest that agentic AI holds promise not only for content delivery but for motivational and self-regulatory dimensions of learning. Prior studies have predominantly focused on China, Europe, and Western contexts, with limited empirical evidence from Middle Eastern higher education systems (Dahri et al., 2025; Bandi et al., 2025).

Despite these promising developments, there remains a lack of clarity around exactly how different perceptions of an AI system (such as its perceived agency, usefulness, or ease of use), together with factors like trust, autonomy support, and self-efficacy, combine to influence students’ self-learning motivation and ultimate self-learning behavior. While prior work has examined pairwise relationships (for example, AI literacy → engagement, or usefulness → behavioral intention), comprehensive models that integrate these constructs and test mediated pathways are relatively rare (Fan and Zhang, 2024; Wang et al., 2025; Gkanatsiou et al., 2025; Han et al., 2025). Furthermore, many studies are localized to particular domains, such as language learning, special education, or higher vocational education, leaving out broader student populations and contexts (Tyler et al., 2004; Gersten and Woodward, 1994; Boud and Walker, 1998). Also, there is little empirical work on the role of perceived agency of AI—that is, how much students view the AI tools as acting independently or adaptively—and how that perceived agency interacts with trust, autonomy support, and self-efficacy to drive motivation and behavior. Theoretical perspectives from Social Cognitive Theory (which emphasizes self-efficacy, observational and mastery experience) (Schunk, 2012; Stajkovic and Luthans, 1998) and Self-Determination Theory (which emphasizes autonomy, competence, and relatedness) offer useful lenses for this investigation (Rigby and Ryan, 2018; Moore et al., 2020; Vansteenkiste et al., 2018). Integrating these theories in a model that includes agentic AI notions promises to yield richer understanding of motivational and behavioral dynamics in AI-supported self-learning. To address these gaps, the present study proposes and empirically validates a comprehensive structural model that integrates perceived AI agency, autonomy support, trust, and AI-supported self-efficacy to explain students’ self-learning motivation and self-learning behavior in AI-assisted educational settings. By doing so, this study extends existing AI adoption research beyond intention-based models and offers context-specific empirical insights into how agentic AI tools shape meaningful learning behaviors. In particular:

1. To define the associations between perceived agency of AI and (a) perceived usefulness, (b) ease of use, and (c) autonomy support.

2. To understand how perceived usefulness, ease of use, and trust in AI combine to build AI-supported self-efficacy.

3. To examine how self-efficacy and autonomy support enhance self-learning motivation.

4. To investigate how self-learning motivation affects self-learning behavior in AI-assisted learning settings.

5. To test mediating relationships among these constructs, pinpointing indirect pathways between perceptions of AI and actual self-learning behavior.

In line with these aims, the research addresses the following research questions:

RQ1: How do students’ perceptions of AI’s agency affect their perceptions of its usefulness, ease of use, and autonomy support?

RQ2: How do perceptions of usefulness, ease of use, and trust in AI influence students’ self-efficacy with AI?

RQ3: How do self-efficacy and autonomy support benefit students’ self-learning motivation?

RQ4: How does self-learning motivation influence actual self-learning behavior in AI-assisted learning settings?

RQ5: Which indirect (mediating) pathways play important roles in connecting students’ perceptions of AI to their self-learning behavior?

This research contributes to theory, practice, and policy. Theoretically, it advances knowledge on agentic AI by placing measures of perceived agency, trust, and autonomous motivation within a Structural Equation Modeling (SEM) framework grounded in Social Cognitive Theory and Self-Determination Theory. Empirically, it gathers data from diverse higher education students to show how perceptions translate into motivation and self-learning in AI-enabled contexts. Practically, the findings can help designers of educational AI systems identify features—such as strengthening perceived agency, usability, autonomy, and trust—that reinforce self-learning. Finally, the findings can inform policy by guiding institutions and planners in setting standards, allocating funds, and designing professional development so that AI tools advance learning motivation and autonomy rather than dependency or superficial engagement.

2 Theoretical background and literature review

This study draws primarily on three theories: the Technology Acceptance Model (TAM), Social Cognitive Theory (SCT), and Self-Determination Theory (SDT). TAM (Davis, 1989) posits that two core beliefs, perceived usefulness (PU) and perceived ease of use (PEU), are key determinants of users’ attitudes toward adopting and using technology, which then lead to behavioral intention and actual use (Tam, 2024; Dahri et al., 2024). Mun et al. (2006) and Pan and Jordan-Marsh (2010) defined PU as the degree to which an individual believes that using a specific technology enhances their performance, while PEU is the degree to which using that technology is free of effort (Yen et al., 2010). TAM has been widely applied in educational technology studies to explain students’ and teachers’ adoption of e-learning, AI tools, and ICT more broadly. For example, pre-service teachers’ intention to adopt generative AI has been modeled via extended TAM, showing strong paths from PU and PEU to behavioral intention (Şimşek et al., 2025; Falebita and Kok, 2025). Social Cognitive Theory (Bandura, 1986) emphasizes that learning occurs in a social context through dynamic and reciprocal interaction of person, environment, and behavior. Central to SCT is self-efficacy, the belief in one’s capabilities to organize and execute the courses of action required to manage prospective situations. In education, self-efficacy has been shown to influence motivation, persistence, strategy use, and ultimately learning outcomes. SCT also supports consideration of how trust and agency (agency meaning control, autonomy, or action) influence beliefs and behaviors. Self-Determination Theory (Deci and Ryan, 2000), meanwhile, focuses on three psychological needs: autonomy, competence, and relatedness. When these needs are satisfied, intrinsic motivation and engagement are higher. In technology-enhanced learning settings, SDT has been used to explore how autonomy support (from tools or instructors) and competence (often via self-efficacy) foster students’ motivation and self-regulated learning. Together, these theories provide a strong foundation for analyzing how perceptions of an AI system (agentic or autonomous AI) combine with beliefs and environmental and psychological supports to predict motivation and behavior. Table 1 summarizes selected existing studies that examine constructs similar to those in this model (PU, PEU, self-efficacy, autonomy, trust, motivation, and behavior with AI or technology in education).

Table 1. Summary of prior empirical studies related to technology acceptance.

These studies collectively indicate that PU and PEU are robust predictors of attitudes, intentions, and usage in educational contexts involving AI or other technologies; that self-efficacy is a critical mediator, especially under SCT; that autonomy and psychological need satisfaction (from SDT) matter for motivation and self-regulated or self-directed learning; and that trust (in AI or technology) is increasingly included and shown to matter. Given these precedents, this study includes the following key latent variables, each with theoretical and empirical rationale:

• Perceived Agency of AI: This captures how much students believe the AI tool acts adaptively, proactively, or independently. While few studies have directly measured agency, related notions of autonomy support, control, and adaptivity are emerging (Hidayat-ur-Rehman, 2024). Agentic properties may enhance users’ perceived usefulness, self-efficacy, and autonomy, aligning with both SCT and SDT.

• Perceived Usefulness (PU): Central in TAM, PU reflects beliefs about performance enhancement (Pan and Jordan-Marsh, 2010; Yen et al., 2010). In education, believing a tool will help academic performance strongly influences motivation and uptake, as seen in many of the studies above (e.g., generative AI in teacher studies, Google Classroom, technology readiness). High PU likely boosts self-efficacy (believing the tool will help me succeed), autonomy perception, and motivation.

• Perceived Ease of Use (PEU): Also central in TAM, PEU influences PU and reduces barriers. When a tool is easy to use, students can focus on learning rather than struggling with interface or interaction difficulties (Venkatesh, 2000). PEU is often a predictor of PU and of attitude or intention, and studies consistently find a strong PEU → PU path. It may also contribute to self-efficacy by lowering perceived obstacles.

• Trust in AI: Trust is about belief in recommendations, decisions, reliability, and integrity (privacy, fairness) (Flavián and Guinalíu, 2006). Trust enhances willingness to rely on AI suggestions, accept guidance, and engage in deeper interactions. Under SCT, trust influences beliefs about how well one can use the system (Lauer and Deng, 2007). It also moderates or mediates relationships between perceptions and behavior in some literature.

• Autonomy Support: Rooted in SDT, autonomy support refers to environment or tool features that let learners make decisions, choose strategies, and set their own pacing (Núñez and León, 2015). When students feel supported in their autonomy, their intrinsic motivation is stronger. Autonomy support also helps satisfy the psychological need for autonomy (Yuan and Kim, 2018).

• AI-Supported Self-Efficacy: Under SCT, self-efficacy is vital: believing one can succeed when using AI tools drives both motivation and behavior. AI support can enhance this through scaffolding, feedback, and adaptivity.

• Self-Learning Motivation: Reflects intrinsic drive, interest, enjoyment, and responsibility for learning. In SDT, motivation is an outcome of psychological need satisfaction (autonomy, competence). Motivation is often the proximal predictor of behavior.

• Self-Learning Behavior: The ultimate outcome—observable or self-reported behaviors of taking initiative, exploring resources, managing one’s own learning, and engaging independently with AI tools.

Integrating these variables under TAM, SCT, and SDT yields a model in which perceptions of AI agency, usefulness, ease of use, plus trust and autonomy support, build self-efficacy, which strengthens motivation (especially intrinsic), leading to self-learning behavior. Each element is supported by prior literature (see Table 2), though typically in simpler models; relatively few studies simultaneously integrate perceived agency and autonomy support with TAM and self-efficacy and link through to behavior (Figure 1).

Table 2. Constructs, codes, and descriptions.

Figure 1. Proposed research model.

2.1 Perceived usefulness and AI-supported self-efficacy

Perceived usefulness (PU), the pivotal construct in the Technology Acceptance Model (TAM), denotes the extent to which a user believes that using a system will improve performance (Dahri et al., 2024; Davis, 1989). In learning settings, if a student finds an AI tool useful for learning or knowledge attainment, that belief reinforces confidence in using it to perform sophisticated tasks (Shahzad et al., 2024; Lin and Chen, 2024). From the perspective of Social Cognitive Theory (SCT), self-efficacy arises from mastery experiences in combination with perceived enabling structures: if users believe a system will improve outcomes (Igbaria and Iivari, 1995; Compeau and Higgins, 1995), they develop faith in their ability to perform tasks with its help. In the domain of agentic AI—systems that adapt, guide, or initiate help—the significance of PU becomes elevated: an AI that users believe to be useful is likely to be seen as a good collaborator and hence to strengthen the student’s belief in their ability to perform (AI-supported self-efficacy). Empirical research on AI adoption supports this link. For example, in research on students’ adoption of AI, PU significantly and positively influenced self-efficacy and mediated its relation with behavioral intention (Musyaffi et al., 2024). In research on humanoid robots in learning contexts, the higher students’ perceptions of usefulness, the greater their self-efficacy in communicating with those AI systems (Compeau and Higgins, 1995; Jia et al., 2014). In the context of AI acceptance among teachers, self-efficacy was related to perceived gains and trust, so that belief in usefulness affects confidence in using AI tools (Viberg et al., 2024). Therefore, in our integrated model—in which agentic AI presents choices, timely scaffolds, and adaptive intelligence—if students also believe that the AI genuinely enhances their learning, their AI-supported self-efficacy should grow. Hence, we propose:

H1: Perceived usefulness positively affects AI-supported self-efficacy.

2.2 Perceived ease of use and AI-supported self-efficacy

Perceived ease of use (PEU), another foundational TAM construct, denotes how effortless the user expects interacting with the system to be (Davis, 1989). In educational technology settings, a user who anticipates few obstacles in navigating, commanding, or interpreting AI features can devote more cognitive resources to substantive learning rather than interface struggle. According to SCT, lower perceived barriers (i.e., easier use) reduce anxiety and increase the sense of control, thereby enhancing self-efficacy. When combined with agentic AI, ease of use becomes even more critical: if an AI system operates autonomously, makes intuitive decisions, and offers a smooth interaction interface, users are more likely to perceive it as supportive rather than burdensome, reinforcing their belief that they can perform learning tasks effectively with it. Empirical evidence supports this linkage: in a study on humanoid robot acceptance, self-efficacy significantly enhanced perceived ease of use, and ease of use in turn predicted attitudes and intention (Al Darayseh, 2023). In recent work investigating student intentions to use AI, PEU positively influenced students’ self-efficacy and, indirectly via attitude, their intention (Jeilani and Abubakar, 2025; Almogren et al., 2024b). In teacher acceptance of AI systems, ease of use was strongly associated with higher usability, lower resistance, and increased self-efficacy in utilization (Al Darayseh, 2023). In a nutshell, when students believe AI is easy to use, they feel more capable of harnessing its features, increasing their AI-supported self-efficacy. Therefore, we posit:

H2: Perceived ease of use positively affects AI-supported self-efficacy.

2.3 Perceived agency of AI and perceived usefulness

Perceived agency of AI refers to the degree to which a learner views the AI system as capable of acting autonomously: making decisions, adapting its behavior, and initiating scaffolds or suggestions, rather than merely reacting to user input. When an AI system is perceived to possess such autonomy, users are more likely to ascribe competence and utility to it, thereby influencing their judgments about its usefulness. From a Technology Acceptance Model (TAM) perspective, perceived usefulness is determined partly by the perceived capability of the system to perform tasks effectively. The more agentic an AI appears, the more it may be seen as capable of delivering useful support (e.g., proactively guiding learning, anticipating needs). Social Cognitive Theory further supports this: agency enhances the perceived legitimacy of the tool as a collaborator, fostering confidence in its intended benefits. Empirical research in human-AI interaction finds that increasing perceived autonomy or adaptiveness raises user expectations of usefulness, for example in work on AI decision agents (Pathak and Bansal, 2024). Moreover, studies of consumer AI services demonstrate that perceived autonomy or agency supports perceptions of utility and value in technology use (Han and Ko, 2025). Thus, within the educational context, if students perceive the AI as agentic, they will more readily believe it helps their learning goals, increasing its perceived usefulness. Hence:

H3: Perceived agency of AI positively affects perceived usefulness.

2.4 Perceived agency of AI and perceived ease of use

Beyond usefulness, perceived agency also influences the ease with which users believe they can work with the system. If an AI acts autonomously and intelligently, some burdens of decision-making, navigation, or interface complexity may be masked or managed by the system itself, thereby reducing the user’s effort. In TAM, technology that appears to reduce required effort is judged easier to use. An agentic AI can anticipate learner intentions, present options, and automate background tasks, making interaction more seamless. From a Human-Computer Interaction (HCI) lens, agency and autonomy in design can be leveraged to hide complexity and scaffold interaction, boosting perceived usability. Research in automation and autonomy suggests that users interacting with more autonomous systems often perceive lower effort and smoother operation (Salatino et al., 2025). Studies of AI decision agents show that perceived ease of use is positively influenced by agentic behavior (i.e., the AI implicitly “does more”) (Pathak and Bansal, 2024). In sum, in our model of AI in education, a more agentic AI is expected to be viewed as easier to engage with, because it lowers the cognitive and operational load on learners. Therefore:

H4: Perceived agency of AI positively affects perceived ease of use.

2.5 Perceived agency of AI and autonomy support

One of the central promises of agentic AI in education is to enhance learners’ autonomy: giving them choices, guiding without over-controlling, responding to learner preferences, and supporting self-directed pathways. Autonomy support, derived from Self-Determination Theory (SDT), refers to the extent to which the environment or tool permits learners to make decisions, select strategies, and feel control over their process. When an AI is perceived as agentic, learners may interpret its adaptability and initiative as granting them freedom—because the AI can adjust to their chosen path without rigid constraints. In that way, agency in the system translates into psychological autonomy support. Theoretical perspectives on human–technology agency show that as a system becomes more agentic, it can serve as an enabler of human autonomy rather than a constraining tool (i.e., co-agent rather than master) (Faas et al., 2024). Empirical work in human-AI collaboration shows that when users feel restricted by AI choices (e.g., limited options), perceived autonomy declines and intrinsic motivation suffers (Bennett et al., 2023). In contrast, AI systems designed with shared autonomy tend to preserve or enhance the human sense of autonomy. In line with this, we argue that higher perceived agency in AI will lead students to feel more autonomy support from the system. Therefore:

H5: Perceived agency of AI positively influences autonomy support.

2.6 Trust in AI and AI-supported self-efficacy

Trust in artificial intelligence (AI) encapsulates learners’ conviction that the AI system will operate reliably, offer accurate recommendations, safeguard privacy, and prioritize the learner’s best interests rather than deceive or function improperly (Lauer and Deng, 2007). Within the framework of Social Cognitive Theory, trust can shape self-efficacy, as a tool regarded as trustworthy mitigates uncertainty, anxiety, and perceived risk, thereby enhancing confidence in its use. In educational contexts, when students trust an AI agent, they are more inclined to explore, make errors, and engage deeply in ways that foster belief in their capability to succeed with AI (Suriano et al., 2025; Gu et al., 2024). For agentic AI—systems capable of autonomous action—trust becomes even more paramount: if students perceive the AI as both competent and trustworthy, they are more likely to have faith in their ability to collaborate effectively with it (Bedué and Fritzsche, 2022). Empirical studies reinforce this notion: in public sector AI adoption, trust in AI positively affected AI self-efficacy and mediated the influence of perceived system characteristics on behavioral intention (Khan et al., 2024). In technology adoption research, trust in automated systems bolsters user confidence and readiness to depend on those systems, reinforcing stronger self-efficacy beliefs (for instance, in human-robot interaction and driver assistance systems). Consequently, in our model, we propose that trust in AI will positively influence students’ AI-supported self-efficacy. Thus:

H6: Trust in AI positively affects AI-supported self-efficacy.

2.7 Trust in AI and self-learning motivation

In addition to acting on efficacy beliefs, trust in AI may have a more immediate impact on self-learning motivation. From an SDT perspective, learners’ intrinsic motivation is nurtured when they perceive the learning environment (or instrument) as reliable, caring, and non-controlling (Wong et al., 2024). When students trust the AI, they are more likely to accept its suggestions, follow its guidance, feel psychologically safe, and value the learning alliance, thereby kindling intrinsic motivation. In addition, trust reduces cognitive and affective friction (e.g., concern over errors, bias, or misuse of data), freeing resources to focus on learning targets rather than doubts about the system (Shi and Zhang, 2025; Wu et al., 2024). In the agentic AI scenario, where the system can intervene or make suggestions proactively, trust is a necessity; without it, users may resist or distrust interventions and thereby disengage (Murugesan, 2025; Hughes et al., 2025). Empirical research on AI and human–machine systems parallels this: confidence in autonomous agents positively correlates with user engagement and acceptance, which are strongly linked to motivation (Murugesan, 2025; Vanneste and Puranam, 2024). In educational AI adoption studies, trust has also been shown to affect motivational constructs such as enjoyment, satisfaction, and continuance intention. On this basis, we suggest that trust in AI will positively influence self-learning motivation in our model. Hence:

H7: Trust in AI positively affects self-learning motivation.

2.8 Autonomy support and self-learning motivation

Autonomy support—rooted in Self-Determination Theory (SDT)—refers to the extent to which the learning environment or tool enables learners to make choices, follow their interests, and feel volitional in their actions (Deci and Ryan, 2000). SDT meta-analytic and intervention evidence shows that autonomy-supportive contexts reliably increase intrinsic motivation and need satisfaction (autonomy and competence), which in turn foster engagement and deeper learning (Howard et al., 2025; Wang, 2024). In AI-mediated learning, agentic AI has the potential to be autonomy-supportive when it adapts to learner preferences, offers meaningful choices (what to learn, when, and how), and scaffolds rather than controls the learning path; such design aligns AI activity with SDT’s autonomy need and can promote intrinsic self-learning motivation (Howard et al., 2025; Saleh, 2025). Empirical work demonstrates that autonomy support—whether provided by teachers, instructional design, or adaptive technologies—predicts agentic engagement and increases students’ willingness to take initiative in learning (Reeve, 2013; Reeve and Shin, 2020). In AI contexts, learners who perceive the system as supporting their agency report higher interest, enjoyment, and persistence, because the tool both reduces external controls and enhances perceived competence through tailored scaffolding (Patall et al., 2022). Thus, when an AI system is experienced as enabling choice and self-direction (i.e., autonomy support), intrinsic motives for self-learning are strengthened, producing more sustained and self-regulated engagement. On this theoretical and empirical basis, we posit:

H8: Autonomy support positively influences self-learning motivation.

2.9 AI-supported self-efficacy and self-learning motivation

Self-efficacy—belief in one’s ability to plan and take the actions needed to achieve desired results—is a core construct in Social Cognitive Theory (Bandura, 1986) and a potent antecedent of motivation and perseverance. In technology-enhanced learning, AI-supported self-efficacy refers to learners’ beliefs that they can achieve academic intentions with the help of AI tools (Hughes et al., 2025; Saleh, 2025). Theoretically, self-efficacy promotes intrinsic motivation because confident learners see tasks as manageable, set ambitious goals, and interpret setbacks as temporary and recoverable, and so maintain interest and effort (Bandura and Schunk, 1981). Agentic AI boosts efficacy via timely, personalized feedback, scaffolds, and just-in-time guidance—mechanisms that produce mastery experiences and vicarious learning opportunities (by watching solutions or demonstrations), both of which enhance efficacy beliefs (Yang et al., 2025). Recent experiments document that AI-based personalization and adaptive feedback significantly bolster students’ self-efficacy, and that high AI self-efficacy goes along with high engagement and learning motivation (Shi and Zhang, 2025; Lyu and Salam, 2025). In addition, systematic reviews of AI learning tools conclude that the most substantial efficacy gains result from interactive and explainable systems, since explainability diminishes uncertainty and reinforces learners’ feelings of competence (Lyu and Salam, 2025; Zhou et al., 2025). With this theory-guided and evidential background, we anticipate that AI-supported self-efficacy will be a direct positive antecedent of intrinsic self-learning motivation. Hence:

H9: AI-supported self-efficacy positively influences self-learning motivation.

2.10 Self-learning motivation and self-learning behavior

Throughout motivation and self-regulation research, intrinsic motivation is a proximal predictor of self-regulated learning behaviors—planning, strategy use, persistence, exploration, and initiative (Xu et al., 2023; Chen, 2022). SDT and SCT converge on the proposition that motivated learners (intrinsically interested and volitionally engaged) who also perceive efficacy will engage in the behaviors that characterize autonomous learning (forethought, monitoring, and reflection), producing observable self-learning actions (Glenn, 2018). In AI-enabled settings, agentic systems can amplify this translation from motivation to behavior by suggesting resources, stimulating reflection, and reducing friction in exploratory behavior—thus converting motivation into tangible learning behaviors (Bandi et al., 2025; Hosseini and Seilani, 2025) (Figure 2). Experimental research on SRL and motivation likewise reports that motivation predicts autonomous study frequency, resource access, and persistence on self-directed tasks (ZabihiAtergeleh et al., 2025). Recent research on AI in education also supports this pathway: students reporting stronger intrinsic motivation in adaptive AI environments later showed more initiative, exploratory behavior, and independent problem solving (Liu, 2025; Zhao et al., 2025). Hence, in line with theory and evidence, we hypothesize that motivation will positively predict observable self-learning behavior in AI settings. Thus:

Figure 2. Hypothesized relationships in the proposed research model.

H10: Self-learning motivation positively influences self-learning behavior.

3 Methodology

3.1 Research design

In this study, a quantitative research design was adopted to investigate the relationships between perceived agency of AI, perceived usefulness, perceived ease of use, trust in AI, AI-supported self-efficacy, autonomy support, self-learning motivation, and self-learning behavior. A quantitative methodology was appropriate for testing hypothesized relationships and confirming a conceptual model through statistical analysis (Creswell and Hirose, 2019). As the purpose of this research was to empirically test the proposed model and the interrelationships between its latent constructs, Structural Equation Modeling (SEM) was used. SEM allows the concurrent testing of multiple associations between latent factors and yields results superior to traditional regression-based methods (Sarstedt et al., 2021; Hair Jr et al., 2021). SmartPLS 4 software was used to conduct Partial Least Squares SEM (PLS-SEM), which excels in exploratory and predictive research with intricate models and comparatively small samples (Hair Jr et al., 2021). PLS-SEM was selected over covariance-based SEM owing to its capacity to accommodate non-normally distributed data, its stability with small samples, and its focus on maximizing explained variance (Hair et al., 2019). The model comprises a measurement component, which establishes the validity and reliability of the constructs, and a structural component, which tests the hypothesized relationships between them. The study was cross-sectional, with data collected at a single point in time from the higher education sector in Saudi Arabia. This setting was selected in light of the fast-paced digital transformation efforts under Vision 2030, which give primacy to AI-based learning and technology-based staff development (Aldegether, 2020). The research thus addresses a contemporary concern: understanding learners’ self-learning behaviors as influenced by agentic AI through psychological and technological mediators.

3.2 Data collection and sample

Data were gathered from teaching staff members and faculty in Saudi Arabian universities using a structured questionnaire in Google Forms. This approach was selected for its effectiveness in reaching geographically dispersed respondents while maintaining anonymity and convenience (Bryman, 2016). A non-probability purposive sampling scheme was used, targeting respondents with experience using AI-enabled learning tools (such as ChatGPT, AI tutors, or adaptive learning systems) for teaching or professional development. A total of 320 questionnaires were distributed, and 280 valid responses were retained for final analysis after data screening. The sample size satisfies the prerequisite for PLS-SEM, since Hair Jr et al. (2021) recommend at least 10 times the largest number of structural paths pointing at any construct; with at most three such paths in the proposed model, the minimum would be 30 cases, far below the 280 obtained. Demographic characteristics revealed that 58% of respondents were male and 42% female, with an average teaching experience of 7 years, and all respondents had previous exposure to AI-assisted learning settings. Prior to the main survey, a pilot study was carried out with 40 respondents to check the clarity, reliability, and face validity of the instrument; responses guided minor adjustments in item phrasing. Pilot analysis revealed Cronbach’s alpha values exceeding 0.80 for all constructs, representing high internal consistency (Nunnally and Bernstein, 1994). Regarding ethics, informed consent was obtained from all participants, with voluntary participation, data confidentiality, and anonymity emphasized. No personally identifiable data were gathered. The study protocol was reviewed and cleared by the research ethics committee and complied with institutional and national ethical guidelines (Alwakid and Dahri, 2025).

3.3 Questionnaire development and validation

The questionnaire was constructed from validated scales in the literature, adapted to the context of AI-supported self-learning at the tertiary level (see Table 2 for construct information). It comprised eight latent constructs, each operationalized with multiple reflective indicators on a five-point Likert scale (1 = strongly disagree, 5 = strongly agree). Items for perceived usefulness and perceived ease of use were drawn from Davis (1989) and Venkatesh and Davis (2000). Items for perceived agency of AI were drawn from recent work on agentic AI in learning. Trust in AI used items drawn from Choung et al. (2023). Items for AI-supported self-efficacy were drawn from Compeau and Higgins (1995). Items from Ryan and Deci (2020) were used to operationalize autonomy support. Self-learning behavior and self-learning motivation items were drawn from Zimmerman (2000).

The content validity of the instrument was established with the guidance of five professionals in educational technology and AI integration, whose suggestions ensured the relevance, breadth, and clarity of the measurement items. Construct validity was then established through confirmatory factor analysis (CFA) in SmartPLS. To address common method bias (CMB), procedural and statistical remedies were employed. Procedurally, anonymity was ensured and the order of the items was randomized. Statistically, Harman’s single-factor test indicated that the first factor explained less than 40% of the variance, so CMB was not a serious concern (Podsakoff et al., 2003). Furthermore, the full collinearity VIF values were below 3.3, confirming minimal multicollinearity and CMB issues (Kock, 2015).
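
For illustration, the sketch below shows how these two CMB checks can be reproduced outside SmartPLS. It is a minimal Python sketch, not the authors’ code: the DataFrames items (one column per questionnaire item) and scores (one column per construct score) are hypothetical stand-ins for the survey data.

    import numpy as np
    import pandas as pd

    def harman_single_factor_share(items: pd.DataFrame) -> float:
        """Variance share of the first (largest) principal component."""
        z = (items - items.mean()) / items.std(ddof=1)   # standardize each item
        eigvals = np.linalg.eigvalsh(np.cov(z.to_numpy(), rowvar=False))
        return eigvals.max() / eigvals.sum()

    def full_collinearity_vif(scores: pd.DataFrame) -> pd.Series:
        """VIF of each construct score regressed on all the others (Kock, 2015)."""
        out = {}
        for col in scores.columns:
            X = scores.drop(columns=col).assign(_const=1.0).to_numpy()
            y = scores[col].to_numpy()
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            r2 = 1.0 - ((y - X @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
            out[col] = 1.0 / (1.0 - r2)
        return pd.Series(out)

    # CMB is judged unproblematic when the first-factor share is < 0.40
    # and every full collinearity VIF is < 3.3, as reported in the paper.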

3.4 Data analysis procedure

Data analysis was conducted in two primary phases, measurement model evaluation and structural model evaluation, in alignment with Hair and Alamer (2022). (1) Measurement model analysis: construct reliability and validity were assessed first. Internal consistency reliability was determined through Cronbach’s alpha and Composite Reliability (CR), with all values above the cut-off of 0.70. Convergent validity was confirmed via the Average Variance Extracted (AVE), with values above 0.50 for all constructs. Discriminant validity was assessed with the Fornell–Larcker criterion and the Heterotrait–Monotrait (HTMT) ratio, confirming satisfactory distinctiveness among the constructs (Henseler et al., 2015). (2) Structural model analysis: after verification of the measurement model, the structural model was used to test the hypothesized associations. A bootstrapping procedure (5,000 resamples) estimated the path coefficients, t-values, and p-values for hypothesis testing. The coefficient of determination (R2) and effect sizes (f2) were computed to assess the explanatory and practical value of the model, and predictive relevance (Q2) was examined via the blindfolding procedure (Hair et al., 2019). Most of the hypothesized paths proved significant, providing strong empirical support for the proposed associations among perceptions of agentic AI, trust, self-efficacy, motivation, and self-learning behavior, and the R2 values for the main endogenous variables indicated moderate to substantial explanatory power. This design—combining a validated measurement instrument, an adequate sample size, and robust statistical modeling in SmartPLS 4—supported a careful, reliable, and ethically sound empirical exploration of how agentic AI and the associated psychological constructs influence learners’ self-learning behavior, reinforcing the validity and generalizability of the findings.
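
To make the resampling logic behind the SmartPLS output concrete, the following minimal Python sketch bootstraps a single standardized path with 5,000 resamples. It is a simplified bivariate stand-in (PLS-SEM re-estimates the full model in each resample), and the construct_scores DataFrame and its column names are hypothetical.

    import numpy as np
    import pandas as pd

    def bootstrap_path(scores: pd.DataFrame, x: str, y: str,
                       n_boot: int = 5000, seed: int = 1) -> dict:
        """Bootstrap a standardized bivariate path (slope = Pearson r)."""
        rng = np.random.default_rng(seed)
        n = len(scores)
        est = np.corrcoef(scores[x], scores[y])[0, 1]
        boot = np.empty(n_boot)
        for b in range(n_boot):
            idx = rng.integers(0, n, size=n)          # resample rows with replacement
            s = scores.iloc[idx]
            boot[b] = np.corrcoef(s[x], s[y])[0, 1]
        se = boot.std(ddof=1)
        return {"beta": est, "t": est / se,
                "ci95": tuple(np.percentile(boot, [2.5, 97.5]))}

    # Example: bootstrap_path(construct_scores, "SLM", "SLB") would estimate
    # the motivation -> behavior path tested in H10.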

4 Results

4.1 Demographic information of respondents

Table 3 gives the demographic profile of the respondents (N = 256). (1) Gender: male students (n = 138, 53.9%) and female students (n = 112, 43.8%) made up the sample, with 6 respondents (2.3%) declining to indicate gender. (2) Age: the majority of participants were aged 21 to 25 (n = 142, 55.5%), followed by those aged 26–30 (n = 60, 23.4%), under 20 (n = 32, 12.5%), and over 30 (n = 22, 8.6%). (3) Academic level: most were enrolled in undergraduate programs (n = 134, 52.3%), followed by graduate (n = 78, 30.5%) and postgraduate (n = 44, 17.2%) students. (4) Field of specialization: Education had the greatest number of students (n = 92, 35.9%), followed by Computer Science/Information Technology (n = 68, 26.6%), Engineering (n = 47, 18.4%), Business/Management (n = 31, 12.1%), and others (n = 18, 7.0%). In addition, the majority of respondents (n = 187, 73.0%) indicated prior exposure to AI tools, while 69 respondents (27.0%) identified as first-time users. In summary, the demographic data reveal a balanced sample of students spanning diverse disciplines, academic levels, and levels of experience with AI technology (Table 3).

Table 3. Demographic profile of participants (N = 256).

The findings identify a generally positive sentiment toward the implementation of AI technologies in learning. Students’ interaction with AI-based tools (GAI1) averaged 4.12 (SD = 0.83), meaning most respondents were regularly exposed to AI applications in their learning activities. The belief that AI technologies can facilitate learning experiences (GAI2) recorded the highest mean score, 4.36 (SD = 0.76), indicating strong agreement among students about the learning potential and value of AI. Trust in utilizing AI-driven learning platforms (GAI3) was also rated highly, with a mean of 4.05 (SD = 0.88), indicating that students commonly feel assured and capable of operating AI-driven tools properly. Overall, these findings indicate that students not only recognize the potential benefit of AI for improving their learning but are also ready and willing to use such technologies. The high mean scores across all statements suggest that AI adoption in higher education will be well accepted if institutions provide adequate training and infrastructural support. These findings carry practical implications for education policymakers and institutions interested in incorporating AI technologies into teaching and learning, underscoring the value of both digital competence and positive user attitudes in realizing the pedagogical benefits of AI (Table 4).

Table 4. General AI in education perceptions.

4.2 Measurement model results

The measurement model was validated to confirm that the constructs employed in this work were reliable and valid before moving to the structural model assessment. The evaluation comprised testing indicator reliability, internal consistency reliability, convergent validity, and multicollinearity in SmartPLS 4, following the PLS-SEM reporting guidelines of Hair Jr et al. (2021) and Sarstedt et al. (2021).

4.2.1 Indicator reliability and multicollinearity

Table 5 displays the standardized factor loadings and the respective Variance Inflation Factor (VIF) values for all measurement items. As shown, all item loadings range from 0.755 to 0.885, above the 0.70 threshold (Hair Jr et al., 2021), meaning each item sufficiently represents its latent construct. In addition, all VIF values are below 3.3, verifying that no multicollinearity problem prevails among the indicators (Diamantopoulos and Siguaw, 2006). These results confirm that all items possess high individual reliability and that each construct captures a distinct conceptual dimension without redundancy.

Table 5. Indicator loadings and VIF values.

All factor loadings exceeded 0.70, ensuring satisfactory indicator reliability, while VIF values ranged between 1.37 and 2.75, confirming the absence of multicollinearity concerns.

4.2.2 Internal consistency reliability and convergent validity

Measurement model reliability was analyzed using Cronbach’s alpha (α) and Composite Reliability (CR). As indicated in Table 6, all Cronbach’s alpha values ranged from 0.762 to 0.869 and all CR values from 0.863 to 0.911, all above the threshold of 0.70 (Nunnally and Bernstein, 1994; Hair Jr et al., 2021). These findings confirm that each construct shows satisfactory reliability, meaning the items consistently measure their respective latent constructs. Convergent validity was analyzed using the Average Variance Extracted (AVE). The AVE values for all constructs ranged from 0.643 to 0.719, above the threshold of 0.50 (Fornell and Larcker, 1981), indicating that more than 50% of the variance in each construct is explained by its indicators. Thus, the measurement items sufficiently converge onto their respective constructs, establishing convergent validity.

Table 6. Construct reliability and convergent validity.
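
To make the three reported statistics concrete, the following minimal Python sketch computes Cronbach’s alpha, CR, and AVE for one reflective construct; it illustrates the standard formulas rather than the exact SmartPLS implementation, and the item scores and standardized loadings passed in are hypothetical.

    import numpy as np
    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
        k = items.shape[1]
        return k / (k - 1) * (1 - items.var(ddof=1).sum()
                              / items.sum(axis=1).var(ddof=1))

    def composite_reliability(loadings: np.ndarray) -> float:
        """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
        s = loadings.sum() ** 2
        return s / (s + (1 - loadings ** 2).sum())

    def average_variance_extracted(loadings: np.ndarray) -> float:
        """AVE = mean of the squared standardized loadings."""
        return (loadings ** 2).mean()

    # With hypothetical loadings [0.78, 0.80, 0.82], AVE = 0.64 and CR = 0.84,
    # comfortably above the 0.50 and 0.70 cut-offs used in Table 6.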

These results collectively indicate that the measurement model shows excellent internal consistency and convergent validity. All the reliability coefficients and AVE are well above conventional cut-offs, thereby confirming that the latent constructs are properly measured and the indicators appropriately capture their theoretical domains. The measurement model testing indicates all the constructs in this study exhibit strong psychometric properties and satisfy the threshold recommendation for indicator reliability, internal consistency reliability, and convergent validity. Hence, the measurement model can be deemed appropriate for proceeding with discriminant validity testing (through Fornell–Larcker and HTMT criteria) and further structural model investigation.

4.2.2.1 Discriminant validity

Discriminant validity was assessed with two well-known criteria—the Heterotrait–Monotrait ratio (HTMT) and the Fornell–Larcker criterion—to verify that each construct in the model is empirically distinct. Table 7 shows the results: all HTMT values fall below the conservative cut-off of 0.85 (Henseler et al., 2015), ensuring satisfactory discriminant validity. In particular, inter-construct values ranged from 0.47 (TAI–AISS) and 0.54 (SLM–TAI) up to 0.83 (PEU–AISS and PU–PA), implying that while some constructs are strongly related, the values remain within the acceptable range and reflect theoretical relatedness among constructs without redundancy. The lower, but still clearly acceptable, HTMT values for pairs such as TAI–AISS (0.47) and SLM–TAI (0.54) further underscore construct distinctiveness.

Table 7. HTMT matrix.
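
The HTMT statistic reported in Table 7 can be illustrated with a short sketch. The following Python function is a minimal implementation of Henseler et al.’s (2015) ratio for one pair of constructs, assuming hypothetical DataFrames a_items and b_items that hold the indicator responses of each construct (with distinct column names).

    import numpy as np
    import pandas as pd

    def htmt(a_items: pd.DataFrame, b_items: pd.DataFrame) -> float:
        """Mean heterotrait (between-block) correlation over the geometric
        mean of the mean monotrait (within-block) correlations."""
        joint = pd.concat([a_items, b_items], axis=1).corr()
        hetero = joint.loc[a_items.columns, b_items.columns].to_numpy().mean()
        def mean_within(block: pd.DataFrame) -> float:
            c = block.corr().to_numpy()
            return c[np.triu_indices_from(c, k=1)].mean()  # upper off-diagonal mean
        return hetero / np.sqrt(mean_within(a_items) * mean_within(b_items))

    # Values below the conservative 0.85 cut-off (Henseler et al., 2015), as in
    # Table 7, indicate that the two constructs are empirically distinct.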

The Fornell–Larcker criterion also offered strong support for discriminant validity (Table 8). For every construct, the square root of the AVE (diagonal values) exceeded the respective inter-construct correlations (off-diagonal values), verifying that each construct shares more variance with its own indicators than with other constructs (Fornell and Larcker, 1981). For example, the square root of the AVE for AI-supported self-efficacy (AISS) (0.802) exceeds its correlations with other constructs such as AS (0.477) and PA (0.449), confirming discriminant distinctiveness. Similarly, Trust in AI (TAI) (√AVE = 0.804) exceeds its correlations with SLM (0.447) and PU (0.516), confirming that TAI represents a distinct theoretical dimension.

Table 8. Fornell–Larcker criterion.
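
A minimal sketch of the Fornell–Larcker check, assuming a hypothetical construct correlation matrix corr and a Series ave of AVE values indexed by the same construct names:

    import numpy as np
    import pandas as pd

    def fornell_larcker_holds(corr: pd.DataFrame, ave: pd.Series) -> pd.Series:
        """True per construct if sqrt(AVE) exceeds all its inter-construct correlations."""
        sqrt_ave = np.sqrt(ave)
        off_diag = corr.where(~np.eye(len(corr), dtype=bool))  # blank out the diagonal
        return sqrt_ave > off_diag.abs().max()

    # Worked check from the reported values: sqrt(AVE) for AISS is 0.802, which
    # exceeds its reported correlations in Table 8 (e.g., 0.477 with AS), so the
    # criterion holds for that construct.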

Both the Fornell–Larcker and HTMT tests confirm that the constructs in the measurement model enjoy sufficient discriminant validity. Together with the strong convergent validity and internal consistency findings, these results verify that the measurement model is statistically sound and theoretically valid, providing a solid platform for the structural model analysis (Henseler et al., 2015; Hair Jr et al., 2021). In sum, the measurement model shows satisfactory reliability and convergent and discriminant validity, confirming that all constructs are conceptually distinct yet theoretically coherent within the domain of agentic AI-driven self-directed learning research.

4.3 Structural model results

The structural model was tested in SmartPLS 4 to assess the hypothesized construct relationships and the explanatory power of the model. Tables 9, 10 report the path coefficients, t-values, and p-values, along with the R2, adjusted R2, and f2 effect sizes for each endogenous construct. The R2 values reflect moderate to substantial explanatory power (Sarstedt et al., 2021): AI-supported self-efficacy (AISS, R2 = 0.486), autonomy support (AS, R2 = 0.336), perceived ease of use (PEU, R2 = 0.364), perceived usefulness (PU, R2 = 0.475), self-learning motivation (SLM, R2 = 0.358), and self-learning behavior (SLB, R2 = 0.254). These outcomes show that the model accounts for a substantial amount of variance in all dependent variables, supporting its robustness and predictive validity.

Table 9. R2 and f2 values.

Table 10. Hypothesis testing results.

H1: Perceived usefulness → AI-supported self-efficacy (β = 0.137, t = 1.683, p = 0.093). Although positive, the relationship between perceived usefulness (PU) and AI-supported self-efficacy (AISS) was not significant, indicating that even when learners recognize the benefits AI can provide (see Figure 3), perceived usefulness alone may lack the potency required to build confidence in using AI tools. This supports recent findings that efficacy beliefs require effortful engagement and trust in the system in addition to usefulness perceptions (Liu, 2025; Aldraiweesh and Alturki, 2025). H1 was not supported.

H2: Perceived ease of use → AI-supported self-efficacy (β = 0.570, t = 6.932, p < 0.001). A significant positive relationship confirms that learners become more confident in using AI tools for self-directed learning when they find them easy to use. The result supports the Technology Acceptance Model (TAM), among other studies, in its emphasis on the role of the interface in developing user self-efficacy (Thabet et al., 2023). H2 was supported.

H3: Perceived agency of AI → Perceived usefulness (β = 0.689, t = 14.436, p < 0.001). This very strong and highly significant relationship indicates that learners who view AI systems as capable, responsive, and independent are more likely to view them as useful. The result supports recent agentic-AI research showing that perceived agency reinforces the value learners attribute to AI tools. H3 was supported.

H4: Perceived agency of AI → Perceived ease of use (β = 0.604, t = 12.487, p < 0.001). The significant effect indicates that AI systems perceived as capable and intelligent also seem easier to use, mirroring the confidence users place in systems that exhibit adaptive, human-like responsiveness. H4 was supported.

H5: Perceived agency of AI → Autonomy support (β = 0.579, t = 12.144, p < 0.001). A strong positive relationship indicates that learners who perceive more agency in AI tools feel more autonomous in the learning process. The result supports Self-Determination Theory (SDT), which argues that AI systems encouraging learner control provide greater autonomy support (Deci and Ryan, 2020). H5 was supported.

H6: Trust in AI → AI-supported self-efficacy (β = −0.053, t = 0.882, p = 0.378). The non-significant path suggests that trust in AI, while theoretically related to confidence, did not directly predict AI-supported self-efficacy. The absence of an effect may stem from contextual and cultural differences, whereby learners' trust in AI does not necessarily foster self-confidence in using AI tools (Dahri et al., 2025). H6 was not supported.

H7: Trust in AI → Self-learning motivation (β = 0.181, t = 2.464, p = 0.014). A significant positive association indicates that greater trust in AI strengthens learners' motivation to pursue self-directed learning, confirming that trust in AI provides emotional security and the desire to learn with its aid (Gu et al., 2024; Bedué and Fritzsche, 2022). H7 was supported.

H8: Autonomy support → Self-learning motivation (β = 0.192, t = 2.705, p = 0.007). Autonomy support significantly improves self-learning motivation, supporting the SDT premise that perceived freedom and control govern intrinsic motivation (Ryan and Deci, 2020). AI tools that let learners make independent learning decisions reinforce motivation. H8 was supported.

H9: AI-supported self-efficacy → Self-learning motivation (β = 0.356, t = 5.368, p < 0.001). The significant path indicates that learners who feel capable of using AI effectively also exhibit more motivation for self-learning. The result agrees with prior studies showing that self-efficacy acts as a motivator in AI-aided learning (Polisetty et al., 2024). H9 was supported.

H10: Self-learning motivation → Self-learning behavior (β = 0.504, t = 10.957, p < 0.001). A strong and significant relationship indicates that motivated learners are likely to transform their motivation into active learning behaviors, consistent with expectancy–value theory and with empirical findings on AI-mediated learning persistence (Rasheed et al., 2023). H10 was supported.
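For readers unfamiliar with how the t-values in Table 10 are produced, the sketch below illustrates the standard PLS-SEM bootstrapping logic: the path coefficient is re-estimated on resampled data, and the t-statistic is the original estimate divided by the standard deviation of the bootstrap estimates. This is a minimal illustration only, with a simple standardized regression standing in for the full structural model that SmartPLS estimates; the variable names, the 5,000-resample default, and the synthetic data are assumptions, not taken from the article.

```python
import numpy as np

def bootstrap_path_t(x, y, n_boot=5000, seed=42):
    """Illustrative bootstrap t-value for one standardized path.

    An OLS slope on standardized variables stands in for a PLS path
    coefficient; SmartPLS applies the same resampling idea to the
    whole structural model at once.
    """
    rng = np.random.default_rng(seed)
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    beta = np.polyfit(x, y, 1)[0]        # original path estimate
    n = len(x)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)      # resample cases with replacement
        boot[b] = np.polyfit(x[idx], y[idx], 1)[0]
    t = beta / boot.std(ddof=1)          # t = estimate / bootstrap SE
    return beta, t

# Synthetic data mimicking a strong path such as H10 (n = 280 cases):
rng = np.random.default_rng(0)
motivation = rng.normal(size=280)
behavior = 0.5 * motivation + rng.normal(scale=0.85, size=280)
print(bootstrap_path_t(motivation, behavior))
```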

Figure 3. Significant and insignificant paths.

Moreover, the f² effect sizes indicate moderate to strong effects for the salient associations (e.g., PA → PU, f² = 0.34; PEU → AISS, f² = 0.505), confirming the importance of AI's ease of use and perceived agency in predicting learners' psychological and behavioral outcomes. Overall, the structural model shows high explanatory power and theoretical consistency with SDT and TAM. The significant paths indicate that learners' perceived agency, the usability of the agentic AI, and trust in it all foster motivation and learning behavior, answering the study's primary research questions about how AI promotes self-directed learning efficacy and engagement.
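As a reminder of how these effect sizes are defined, Cohen's f² in PLS-SEM compares the endogenous construct's explained variance with and without the predictor in question; values of roughly 0.02, 0.15, and 0.35 are conventionally read as small, medium, and large:

```latex
f^2 = \frac{R^2_{\text{included}} - R^2_{\text{excluded}}}{1 - R^2_{\text{included}}}
```

By this convention, PEU → AISS (f² = 0.505) is a large effect and PA → PU (f² = 0.34) a medium-to-large one, consistent with the interpretation above.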

5 Discussion

The purpose of this research was to examine the interactions among key psychological and perceptual factors that influence students' adoption of self-learning with agentic AI in higher education. In line with the Technology Acceptance Model (TAM) (Davis et al., 1989) and Self-Determination Theory (SDT) (Ryan and Deci, 2020), the study postulated and tested a model linking perceived agency of AI, perceived usefulness, perceived ease of use, trust in AI, autonomy support, AI-supported self-efficacy, self-learning motivation, and self-learning behavior. Based on data from 280 university students in Saudi Arabia analyzed with structural equation modeling (SmartPLS 4), the findings revealed both significant and non-significant relationships that offer theoretical and practical insight into how learners use AI-based tools to learn independently, as summarized in Table 10.

The results showed that perceived usefulness (PU) had a positive but non-significant effect on AI-supported self-efficacy (AISS) (β = 0.137, p = 0.093), whereas perceived ease of use (PEU) had a significant positive effect (β = 0.570, p < 0.001). The non-significance of PU deviates from classical TAM-informed studies that assert its primacy in predicting user confidence and adoption (Venkatesh et al., 2012). It does, however, correspond with recent AI-related research indicating that usefulness perceptions do not necessarily build self-efficacy in the absence of experiential engagement and trust (Chang et al., 2024; Shao et al., 2025). In AI-assisted learning, usefulness alone does not guarantee that learners will feel confident in their ability to interact effectively with intelligent systems: students may perceive AI tools as valuable yet still doubt their ability to master or fully understand them. By contrast, the strong influence of perceived ease of use confirms the TAM proposition that user confidence and satisfaction arise from usability (Amin et al., 2014; Calisir and Calisir, 2004). When AI systems are intelligent, adaptive, and user-friendly, cognitive load decreases and confidence in task performance increases. Xu et al. (2025) and Xia et al. (2025) likewise validated the role of ease of use in enhancing technology self-efficacy and perceived control in learning contexts. In the context of agentic AI, this result underscores the importance of interface design, feedback, and personalization: students gain confidence in AI tools that feel responsive and natural. Hence, H1 was not supported and H2 was supported, confirming that usability is a more potent predictor of AI self-efficacy than perceived utility.

Perceived agency of AI (PA) exerted strong positive influences on perceived usefulness (β = 0.689, p < 0.001), perceived ease of use (β = 0.604, p < 0.001), and autonomy support (β = 0.579, p < 0.001). These findings confirm that students' perception of AI as an intelligent, adaptive, and autonomous agent increases both the perceived usefulness and the perceived ease of use of AI systems. The results align with Al-Abdullatif and Alsubaie (2024), who highlighted that AI systems with human-like agency promote user engagement, credibility, and a sense of value. Likewise, Hosseini and Seilani (2025) found that agentic AI features (such as natural-language interaction, context sensing, and adaptability) make systems easier to use and more responsive to personalized learning routes. The positive relationship between perceived agency and autonomy support also resonates strongly with Self-Determination Theory (SDT): when students perceive AI as an intelligent agent that honors their learning tempo, offers relevant feedback, and assists in decision-making, their self-determination and feelings of competence increase (Deci and Ryan, 2020). More recent research by Katsenou et al. (2025) and Hu et al. (2025) likewise underlines that agentic AI systems function as a "learning companion" that promotes and scaffolds autonomy while keeping learners engaged. In contrast to legacy learning technology, agentic AI systems allow adaptive interaction and co-agency, letting learners set learning targets, request customized guidance, and self-monitor their progress. Hence, H3, H4, and H5 were confirmed by the data, underlining the central role of AI agency in shaping students' perceptions of usefulness, ease of use, and autonomy in learning settings.

Results indicated that trust in AI (TAI) had no significant influence on AI-supported self-efficacy (β = −0.053, p = 0.378) but a significant positive impact on self-learning motivation (β = 0.181, p = 0.014). The non-significant relationship between trust and self-efficacy contrasts with several previous studies that identified trust as a primary predictor of confidence in AI interaction (Wong et al., 2024; Shao et al., 2025). One possible reason is that trust works more as a motivational and affective driver than a cognitive one: students may trust AI suggestions or feedback without necessarily feeling proficient in working with the system. In other words, trust does not translate into self-efficacy unless it is reinforced by user experience and a sense of control. The positive influence of trust on self-learning motivation, however, aligns with Cui et al. (2025) and Lin et al. (2023), who reported that learners' motivation to use AI-based systems is triggered by emotional trust, that is, the perception that AI will improve fairness, personalization, and learning efficiency. Trust alleviates the fear of automation and fosters the sense of safety and curiosity that underpins motivation. The findings therefore suggest a subtle interpretation: trust may not directly elevate self-efficacy, but it does stimulate motivational engagement. Hence, H6 was not supported while H7 was supported, implying that trust in AI affects the affective rather than the cognitive aspects of learning.

Both autonomy support (AS) (β = 0.192, p = 0.007) and AI-supported self-efficacy (AISS) (β = 0.356, p < 0.001) significantly predicted self-learning motivation. This aligns with Self-Determination Theory (Alwakid and Dahri, 2025), which holds that autonomy and competence are vital antecedents of intrinsic motivation. Students who perceive control over their learning processes and feel confident in interacting with AI are more likely to want to learn by themselves. This supports Almusharraf and Bailey (2025), who concluded that students' sense of autonomy and technological self-efficacy significantly predicted technology-enabled learning engagement in AI-based blended learning settings. Likewise, Miao and Ma (2023) found that supporting students' sense of autonomy increases their tendency to explore and self-manage, while self-efficacy bolsters persistence and resilience. Together, these facets create a psychological climate that supports self-learning. Therefore, H8 and H9 were supported, confirming that both control (autonomy) and competence (efficacy) play a central role in sustaining motivation in self-directed AI settings.

The final hypothesis showed that self-learning motivation (SLM) significantly and positively influenced self-learning behavior (SLB) (β = 0.504, p < 0.001). This result corresponds with expectancy–value theory (Amoozegar et al., 2024), which holds that motivated learners are likely to act, persist, and display stable learning behaviors. In AI-enabled learning, motivation acts as the link between cognitive belief and subsequent performance. Similar results appear in the work of Hu and Hui (2012) and Getenet et al. (2024), who found that motivation was a notable predictor of technology-enabled learning persistence and behavioral engagement.
This research adds to the literature by being the first to empirically verify that agentic AI, through its ability to create personalized, autonomous, and feedback-rich interactions, can generate motivation that translates into real learning behavior. Therefore, H10 was supported, confirming the sequence from perception and efficacy through motivation to behavior. The study contributes to both SDT and TAM by positioning agentic AI as a central antecedent that shapes learners' perceptions and motivational outcomes. The results suggest that perceived agency enhances the traditional TAM constructs (usefulness and ease of use) and carries over into motivational domains such as autonomy and self-learning behavior. In practice, educators and AI developers should create intelligent, adaptive, and autonomy-supportive AI systems to nurture confidence and intrinsic motivation. In addition, the non-significant findings for perceived usefulness and trust emphasize that the cognitive and emotional dimensions of acceptance may differ by context. Training agendas should emphasize AI literacy, ethical knowledge, and reflective interaction so that learners place appropriate trust in AI while understanding its weaknesses and possible biases. This research addressed five main research questions. The results show that learners' perceptions of agentic AI considerably influence ease of use, usefulness, and autonomy support (RQ1). Usefulness, ease of use, and trust were only partially validated as predictors of AI-supported self-efficacy (RQ2), with ease of use the strongest predictor. Autonomy support and self-efficacy significantly affect self-learning motivation (RQ3), and motivation significantly predicts self-learning behavior (RQ4). The indirect channels (RQ5) suggest that AI's agentic properties stimulate learning through motivational and self-efficacy-based processes. Taken together, these findings suggest that agentic AI systems are powerful catalysts of learner autonomy, confidence, and engagement, yet their effectiveness depends on usability, feelings of control, and emotional trust. The research thus provides an in-depth understanding of how perceptual, motivational, and behavioral domains converge to shape effective AI-enabled self-learning in higher education.
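To make the indirect channels behind RQ5 concrete: in a PLS path model, a specific indirect effect is the product of the standardized coefficients along the path. Using the estimates reported in Table 10, the mediated effect of perceived agency on self-learning motivation via autonomy support, for example, works out as follows (an illustration computed from the reported values, not a statistic reported by the study):

```latex
\beta_{\mathrm{PA} \to \mathrm{AS} \to \mathrm{SLM}}
  = \beta_{\mathrm{PA} \to \mathrm{AS}} \times \beta_{\mathrm{AS} \to \mathrm{SLM}}
  = 0.579 \times 0.192 \approx 0.111
```

Significance of such products is assessed with the same bootstrapping procedure used for the direct paths.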

5.1 Theoretical implications

This study brings together the Technology Acceptance Model (TAM), Social Cognitive Theory (SCT), and Self-Determination Theory (SDT) in a single model of AI-assisted learning. It extends TAM by proposing that usefulness and ease of use rest on perceived AI agency, so that learners accept AI tools on the basis of their utility, self-governance, and responsiveness; in this adaptation, the model updates TAM for adaptive, agentic systems that ease learning. Drawing on SCT, the study positions AI-supported self-efficacy as a mediator between cognitive antecedents (usefulness, ease of use, trust) and motivation, in line with Bandura's proposition that efficacy beliefs direct engagement, here operationalized in AI-assisted learning. The findings also support SDT by showing that motivation to self-learn rises with autonomy, with perceived control boosting intrinsic motivation. In synthesizing these theories, the study advances the understanding of human–AI learning interaction by establishing that perceived AI autonomy, usefulness, and trust jointly foster psychological empowerment and motivation, thereby extending existing technology acceptance and learning motivation models.

5.2 Practical implications

These findings have immediate applications for teachers, instructional designers, and education policymakers seeking to create more effective and comfortable learning environments. For designers of these tools, the lesson is clear: empower the learner and keep the tool simple. Because students' sense of control and self-confidence depends on how user-friendly a tool is, developers should build intuitive interfaces and systems that give gentle, explicit feedback. Students who feel in control will have greater confidence in their own capabilities.

At the classroom level, teachers play a decisive role in deploying these tools in ways that sustain motivation. By introducing adaptive tutoring or chat-based assistance, teachers create space for students to take greater control over where their learning goes. This serves the inherent need for autonomy and shifts participation from externally compelled to internally self-motivated.

All of this, however, is attainable only on a foundation of trust. Students must be able to trust the technology they use, which makes data security, clear ethical principles, and stable system functioning compulsory; users must feel safe while interacting with such platforms.

At a broader level, institutions and policymakers have a fundamental role to play. Embedding digital competence, and specific literacy in how such systems operate, into teaching agendas will prepare students to use them critically and efficiently. Universities and schools should also establish forms of assessment that determine whether these tools genuinely enhance self-directed learning, so that investments have a tangible impact. Ultimately, successful adoption relies on more than technical expertise: it requires designing systems that feel simple to learn and are ethically sound, thereby strengthening learners' sense of agency and creating a self-reinforcing loop of motivation and self-directed development.

6 Conclusion

The current research explored the effects of AI perceptions, namely perceived agency, usefulness, ease of use, and trust, on students' AI-supported self-efficacy, autonomy support, self-learning motivation, and, ultimately, self-learning behavior. Grounded in the Technology Acceptance Model (TAM), Social Cognitive Theory (SCT), and Self-Determination Theory (SDT), the research formalized and empirically validated a comprehensive structural model describing the psychological and behavioral mechanisms that govern AI-supported learning. SEM analysis in SmartPLS of data from 280 participants yields strong empirical evidence for the direct and mediated links among the constructs. Findings showed that perceived agency of AI increases the perceived usefulness and ease of use of AI, so intelligent system design matters to positive learner perceptions. Perceived ease of use significantly influences AI-supported self-efficacy, meaning that pleasant AI interfaces enhance learning confidence. Autonomy support and self-efficacy shape self-learning motivation and, in turn, predict self-learning behavior. However, usefulness and trust in AI do not directly enhance self-efficacy in the absence of experience-based involvement. This research complements TAM, SCT, and SDT by integrating technological, cognitive, and motivational accounts of human–AI interaction in learning. It informs teachers, developers, and policymakers on how to develop AI-enabled learning contexts that enhance autonomy, AI confidence, and empowerment. Future research should extend this model to varied cultures, investigate the long-term impact of AI-assisted motivation, and take variables such as ethical awareness and AI literacy into consideration. In conclusion, the research considerably advances knowledge on AI adoption in learning, illustrating how AI systems that enhance agency improve learners' self-regulation, motivation, and engagement.

6.1 Limitations and future considerations

Despite presenting rich results, this study is not flaw-free and hence offers opportunities for subsequent research. First, the results rest on a cross-sectional, self-report survey of a modest sample of 280 students in Saudi Arabia. While this sample provides an illuminating bird's-eye perspective on learning orientations toward AI, its size and single-nation basis mean that the results are likely to transfer only to a limited extent to other cultural or learning contexts. Research seeking to generalize these results to broader settings should recruit more inclusive and diverse samples spanning multiple nations and fields of study. Second, while the research integrated three central theoretical frameworks (the Technology Acceptance Model, Social Cognitive Theory, and Self-Determination Theory) to form a rich model, other theories could have added further depth. For example, Expectation-Confirmation Theory (ECT) or the Unified Theory of Acceptance and Use of Technology (UTAUT) might better explain learners' continued acceptance of AI tools. Likewise, rather than a single-moment questionnaire, longitudinal or mixed-methods studies could trace changes in learners' perceptions, confidence, and motivation across repeated exposure to AI-based systems. Third, while the structural equation modeling (PLS-SEM) procedure was useful for assessing the hypothesized relationships, it may miss richer, dynamic interactions among variables. Subsequent research could employ advanced analytical procedures, including multi-group analysis, artificial neural networks, or other machine learning methods, to identify non-linear patterns and latent relationships that the present model could not, as illustrated in the sketch that follows. Lastly, the research did not explicitly examine related contextual issues such as students' AI literacy, the ethical issues surrounding such technologies, or how cultural contexts shape conceptions of autonomy. Collaborative research in these domains is clearly needed; such an agenda could specify how AI-infused learning can be carefully crafted to cultivate responsibility, equity, and empowerment across a far-from-homogeneous global student population. These limitations in no way diminish the significance of the findings; rather, they provide a clear agenda for future research in this rapidly evolving field.
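As a pointer for the machine-learning extension suggested above, the sketch below shows one way follow-up work might probe non-linear relations between construct scores with a small neural network, comparing its cross-validated fit against a linear baseline. It is a hypothetical illustration: the synthetic data, the choice of constructs, and the use of scikit-learn are assumptions, not part of this study's analysis.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical construct scores: three predictors (e.g., PA, AS, AISS)
# and self-learning motivation with a deliberately non-linear component.
rng = np.random.default_rng(1)
X = rng.normal(size=(280, 3))
y = 0.3 * X[:, 0] + 0.4 * np.tanh(2 * X[:, 1]) + rng.normal(scale=0.5, size=280)

linear = LinearRegression()
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                 random_state=0))

# If the ANN's cross-validated R^2 clearly exceeds the linear model's,
# that hints at non-linear structure a linear SEM would miss.
print("linear R^2:", cross_val_score(linear, X, y, cv=5).mean())
print("ANN R^2:   ", cross_val_score(ann, X, y, cv=5).mean())
```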

Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

Ethics statement

Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent from the participants was not required to participate in this study in accordance with the national legislation and the institutional requirements.

Author contributions

JA: Software, Writing – review & editing, Project administration, Formal analysis, Supervision, Writing – original draft, Methodology, Resources, Investigation, Visualization, Conceptualization, Funding acquisition, Validation, Data curation.

Funding

The author(s) declared that financial support was not received for this work and/or its publication.

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that Generative AI was not used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Al Darayseh, A. (2023). Acceptance of artificial intelligence in teaching science: science teachers’ perspective. Comput. Educ. Artif. Intell. 4:100132. doi: 10.1016/j.caeai.2023.100132

Al-Abdullatif, A. M., and Alsubaie, M. A. (2024). ChatGPT in learning: assessing students’ use intentions through the lens of perceived value and the influence of AI literacy. Behav. Sci. (Basel). 14:845. doi: 10.3390/bs14090845,

Aldegether, R. (2020). Saudi Arabia’s vision 2030: approaches to multicultural education and training. Int. J. Innov. Creat. Chang. 12, 92–102.

Aldraiweesh, A. A., and Alturki, U. (2025). The influence of social support theory on AI acceptance: examining educational support and perceived usefulness using SEM analysis. IEEE Access 13, 18366–18385. doi: 10.1109/ACCESS.2025.3534099

Almogren, A. S., Al-Rahmi, W. M., and Dahri, N. A. (2024a). Integrated technological approaches to academic success: mobile learning, social media, and AI in higher education. IEEE Access 12, 175391–175413. doi: 10.1109/ACCESS.2024.3498047

Almogren, A. S., Al-Rahmi, W. M., and Dahri, N. A. (2024b). Exploring factors influencing the acceptance of ChatGPT in higher education: a smart education perspective. Heliyon 10:e31887. doi: 10.1016/j.heliyon.2024.e31887

Almusharraf, A., and Bailey, D. (2025). Predicting attitude, use, and future intentions with translation websites through the TAM framework: a multicultural study among Saudi and south Korean language learners. Comput. Assist. Lang. Learn. 38, 1249–1276. doi: 10.1080/09588221.2023.2275141

Alwakid, W. N., and Dahri, N. A. (2025). Harnessing AI capabilities and green entrepreneurial orientation for sustainable SME performance using SEM analysis approach. Technol. Soc. 22:103007. doi: 10.1016/j.techsoc.2025.103007

Amin, M., Rezaei, S., and Abolghasemi, M. (2014). User satisfaction with mobile websites: the impact of perceived usefulness (PU), perceived ease of use (PEOU) and trust. Nankai Bus. Rev. Int. 5, 258–274. doi: 10.1108/NBRI-01-2014-0005

Amoozegar, A., Abdelmagid, M., and Anjum, T. (2024). Course satisfaction and perceived learning among distance learners in Malaysian research universities: the impact of motivation, self-efficacy, self-regulated learning, and instructor immediacy behaviour. Open Learn. J. Open, Distance e-Learning 39, 387–413. doi: 10.1080/02680513.2022.2102417

Bandi, A., Kongari, B., Naguru, R., Pasnoor, S., and Vilipala, S. V. (2025). The rise of agentic AI: a review of definitions, frameworks, architectures, applications, evaluation metrics, and challenges. Futur. Internet 17:404. doi: 10.3390/fi17090404

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall.

Bandura, A., and Schunk, D. H. (1981). Cultivating competence, self-efficacy, and intrinsic interest through proximal self-motivation. J. Pers. Soc. Psychol. 41, 586–598. doi: 10.1037/0022-3514.41.3.586

Bedué, P., and Fritzsche, A. (2022). Can we trust AI? An empirical investigation of trust requirements and guide to successful AI adoption. J. Enterp. Inf. Manag. 35, 530–549. doi: 10.1108/JEIM-06-2020-0233

Bennett, D., Metatla, O., Roudaut, A., and Mekler, E. D. “How does HCI understand human agency and autonomy?,” in Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (2023) 1–18. doi: 10.1145/3544548.3580651

Bergdahl, N., and Sjöberg, J. (2025). Attitudes, perceptions and AI self-efficacy in K-12 education. Comput. Educ. Artif. Intell. 8:100358. doi: 10.1016/j.caeai.2024.100358

Boud, D., and Walker, D. (1998). Promoting reflection in professional courses: the challenge of context. Stud. High. Educ. 23, 191–206. doi: 10.1080/03075079812331380384

Bryman, A. (2016). Social research methods. New York, USA: Oxford University Press.

Calisir, F., and Calisir, F. (2004). The relation of interface usability characteristics, perceived usefulness, and perceived ease of use to end-user satisfaction with enterprise resource planning (ERP) systems. Comput. Human Behav. 20, 505–515. doi: 10.1016/j.chb.2003.10.004

Chang, P.-C., Zhang, W., Cai, Q., and Guo, H. (2024). Does AI-driven technostress promote or hinder employees’ artificial intelligence adoption intention? A moderated mediation model of affective reactions and technical self-efficacy. Psychol. Res. Behav. Manag. 17, 413–427. doi: 10.2147/PRBM.S441444,

Chen, J. (2022). The effectiveness of self-regulated learning (SRL) interventions on L2 learning achievement, strategy employment and self-efficacy: a meta-analytic study. Front. Psychol. 13:1021101.

Chen, X., Jiang, L., Zhou, Z., and Li, D. (2025). Impact of perceived ease of use and perceived usefulness of humanoid robots on students’ intention to use. Acta Psychol. 258:105217. doi: 10.1016/j.actpsy.2025.105217

Choung, H., David, P., and Ross, A. (2023). Trust in AI and its role in the acceptance of AI technologies. Int. J. Human–Computer Interact. 39, 1727–1739. doi: 10.1080/10447318.2022.2050543

Compeau, D. R., and Higgins, C. A. (1995). Computer self-efficacy: development of a measure and initial test. MIS Q. 19, 189–211. doi: 10.2307/249688

Creswell, J. W., and Hirose, M. (2019). Mixed methods and survey research in family medicine and community health. Fam. Med. Community Health 7:86. doi: 10.1136/fmch-2018-000086,

Cui, Y. L., Zeng, M. L., Du, X. K., and He, W. M. (2025). What shapes learners’ trust in AI? A meta-analytic review of its antecedents and consequences. IEEE Access 13, 164008–164025. doi: 10.1109/ACCESS.2025.3611367

Dahri, N. A., Yahaya, N., al-Rahmi, W. M., Vighio, M. S., Alblehai, F., Soomro, R. B., et al. (2024). Investigating AI-based academic support acceptance and its impact on students’ performance in Malaysian and Pakistani higher education institutions. Educ. Inf. Technol. 29, 18695–18744. doi: 10.1007/s10639-024-12599-x

Dahri, N. A., Yahaya, N., Al-Rahmi, W. M., Aldraiweesh, A., Alturki, U., Almutairy, S., et al. (2024). Extended TAM based acceptance of AI-powered ChatGPT for supporting metacognitive self-regulated learning in education: a mixed-methods study. Heliyon 10:e29317. doi: 10.1016/j.heliyon.2024.e29317,

Dahri, N. A., Al-Rahmi, W. M., Alhashmi, K. A., and Bashir, F. (2025). Enhancing mobile learning with AI-powered chatbots: investigating ChatGPT’s impact on student engagement and academic performance. Int. J. Interact. Mob. Technol. 19, 1–15. doi: 10.3991/ijim.v19i11.54643

Dahri, N. A., Dahri, F. H., Laghari, A. A., and Javed, M. (2025). Decoding ChatGPT’s impact on student satisfaction and performance: a multimodal machine learning and explainable AI approach. Complex Eng. Syst. 5:2025. doi: 10.20517/ces.2025.07

Dahri, N. A., Yahaya, N., Vighio, M. S., and Jumaat, N. F. (2025). Exploring the impact of ChatGPT on teaching performance: findings from SOR theory, SEM and IPMA analysis approach. Educ. Inf. Technol. 30, 18241–18276. doi: 10.1007/s10639-025-13539-z

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 13, 319–340. doi: 10.2307/249008

Davis, F. D., Bagozzi, R. P., and Warshaw, P. R. (1989). User acceptance of computer technology: a comparison of two theoretical models. Manag. Sci. 35, 982–1003. doi: 10.1287/mnsc.35.8.982

de Bruijn-Smolrs, M., Timmers, C. F., Gawke, J. C. L., Schoonman, W., and Born, M. P. (2016). Effective self-regulatory processes in higher education: research findings and future directions. A systematic review. Stud. High. Educ. 41, 139–158. doi: 10.1080/03075079.2014.915302

Deci, E. L., and Ryan, R. M. (2000). The "what" and "why" of goal pursuits: human needs and the self-determination of behavior. Psychol. Inq. 11, 227–268. doi: 10.1207/S15327965PLI1104_01

Diamantopoulos, A., and Siguaw, J. A. (2006). Formative versus reflective indicators in organizational measure development: a comparison and empirical illustration. Br. J. Manag. 17, 263–282. doi: 10.1111/j.1467-8551.2006.00500.x

Faas, C., Bergs, R., Sterz, S., Langer, M., and Feit, A. M. (2024). Give me a choice: the consequences of restricting choices through AI-support for perceived autonomy, motivational variables, and decision performance. arXiv preprint arXiv:2410.07728.

Falebita, O. S., and Kok, P. J. (2025). Artificial intelligence tools usage: a structural equation modeling of undergraduates’ technological readiness, self-efficacy and attitudes. J. STEM Educ. Res. 8, 257–282. doi: 10.1007/s41979-024-00132-1

Fan, J., and Zhang, Q. (2024). From literacy to learning: the sequential mediation of attitudes and enjoyment in AI-assisted EFL education. Heliyon 10:e37158. doi: 10.1016/j.heliyon.2024.e37158,

Flavián, C., and Guinalíu, M. (2006). Consumer trust, perceived security and privacy policy: three basic elements of loyalty to a web site. Ind. Manag. Data Syst. 106, 601–620. doi: 10.1108/02635570610666403

Fornell, C., and Larcker, D. F. (1981). Structural equation models with unobservable variables and measurement error: Algebra and statistics. Los Angeles, CA: Sage Publications.

Gersten, R., and Woodward, J. (1994). The language-minority student and special education: issues, trends, and paradoxes. Except. Child. 60, 310–322. doi: 10.1177/001440299406000403

Getenet, S., Cantle, R., Redmond, P., and Albion, P. (2024). Students’ digital technology attitude, literacy and self-efficacy and their effect on online learning engagement. Int. J. Educ. Technol. High. Educ. 21:3. doi: 10.1186/s41239-023-00437-y

Gkanatsiou, M. A., Triantari, S., Tzartzas, G., Kotopoulos, T., and Gkanatsios, S. (2025). Rewired leadership: integrating AI-powered mediation and decision-making in higher education institutions. Technologies 13:396. doi: 10.3390/technologies13090396

Glenn, I. (2018). The role of self-efficacy in self-regulation learning in online college courses. Scottsdale, AZ: Northcentral University.

Gu, F., Xu, H., and He, D. (2024). How does variation in AI performance affect trust in AI-infused systems: a case study with in-vehicle voice control systems. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 68, 1092–1097. doi: 10.1177/10711813241274423

Hair, J., and Alamer, A. (2022). Partial least squares structural equation modeling (PLS-SEM) in second language and education research: guidelines using an applied example. Res. Methods Appl. Linguist. 1:100027. doi: 10.1016/j.rmal.2022.100027

Hair, J. F. Jr., Hult, G. T. M., Ringle, C. M., Sarstedt, M., Danks, N. P., and Ray, S. (2021). Partial least squares structural equation modeling (PLS-SEM) using R: A workbook. Cham: Springer Nature.

Hair, J. F. Jr., Hult, G. T. M., Ringle, C. M., and Sarstedt, M. (2021). A primer on partial least squares structural equation modelling (PLS-SEM). London: Sage Publications.

Hair, J. F., Sarstedt, M., Ringle, C. M., and Mena, J. A. (2012). An assessment of the use of partial least squares structural equation modeling in marketing research. J. Acad. Mark. Sci. 40, 414–433. doi: 10.1007/s11747-011-0261-6

Hair, J. F., Risher, J. J., Sarstedt, M., and Ringle, C. M. (2019). When to use and how to report the results of PLS-SEM. Eur. Bus. Rev. 31, 2–24. doi: 10.1108/EBR-11-2018-0203

Han, J., and Ko, D. (2025). Consumer autonomy in generative AI services: the role of task difficulty and AI design elements in enhancing trust, satisfaction, and usage intention. Behav. Sci. (Basel). 15:534. doi: 10.3390/bs15040534,

Han, Y., Yang, S., Han, S., He, W., Bao, S., and Kong, J. (2025). Exploring the relationship among technology acceptance, learner engagement and critical thinking in the Chinese college-level EFL context. Educ. Inf. Technol. 30, 14761–14784. doi: 10.1007/s10639-025-13375-1

He, T., Huang, J., Li, Y., Wang, L., Liu, J., Zhang, F., et al. (2025). The mediation effect of AI self-efficacy between AI literacy and learning engagement in college nursing students: a cross-sectional study. Nurse Educ. Pract. 22:104499. doi: 10.1016/j.nepr.2025.104499

Henseler, J., Ringle, C. M., and Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Mark. Sci. 43, 115–135. doi: 10.1007/s11747-014-0403-8

Hidayat-ur-Rehman, I. (2024). Examining AI competence, chatbot use and perceived autonomy as drivers of students’ engagement in informal digital learning. J. Res. Innov. Teach. Learn. 17:136. doi: 10.1108/JRIT-05-2024-0136

Hosseini, S., and Seilani, H. (2025). The role of agentic AI in shaping a smart future: a systematic review. Array 22:100399. doi: 10.1016/j.array.2025.100399

Howard, J. L., Slemp, G. R., and Wang, X. (2025). Need support and need thwarting: a meta-analysis of autonomy, competence, and relatedness supportive and thwarting behaviors in student populations. Personal. Soc. Psychol. Bull. 51, 1552–1573. doi: 10.1177/01461672231225364,

Hu, P. J.-H., and Hui, W. (2012). Examining the role of learning engagement in technology-mediated learning and its effects on learning effectiveness and satisfaction. Decis. Support. Syst. 53, 782–792. doi: 10.1016/j.dss.2012.05.014

Hu, Y.-H., Yu, H.-Y., and Hsieh, C.-L. (2025). Can pedagogical agent-based scaffolding boost information problem-solving in one-on-one collaborative learning with a virtual learning companion? Educ. Inf. Technol. 22, 1–28. doi: 10.1007/s10639-025-13784-2

Hughes, L., Dwivedi, Y. K., Malik, T., Shawosh, M., Albashrawi, M. A., Jeon, I., et al. (2025). AI agents and agentic systems: a multi-expert analysis. J. Comput. Inf. Syst. 12, 1–29. doi: 10.1080/08874417.2025.2483832

Igbaria, M., and Iivari, J. (1995). The effects of self-efficacy on computer usage. Omega 23, 587–605. doi: 10.1016/0305-0483(95)00035-6

Jeilani, A., and Abubakar, S. (2025). Perceived institutional support and its effects on student perceptions of AI learning in higher education: the role of mediating perceived learning outcomes and moderating technology self-efficacy. Front. Educ. 10:1548900. doi: 10.3389/feduc.2025.1548900

Jia, D., Bhatti, A., and Nahavandi, S. (2014). The impact of self-efficacy and perceived system efficacy on effectiveness of virtual training systems. Behav. Inf. Technol. 33, 16–35. doi: 10.1080/0144929X.2012.681067

Kalantzis, M., and Cope, B. (2004). Designs for learning. E-Learn. Digit. Media 1, 38–93. doi: 10.2304/elea.2004.1.1.7

Katsenou, R., Kotsidis, K., Papadopoulou, A., Anastasiadis, P., and Deliyannis, I. (2025). Beyond assistance: embracing AI as a collaborative co-agent in education. Educ. Sci. 15:1006. doi: 10.3390/educsci15081006

Khan, N. A., Maialeh, R., Akhtar, M., and Ramzan, M. (2024). The role of AI self-efficacy in religious contexts in public sector: the social cognitive theory perspective. Public Organ. Rev. 24, 1015–1036. doi: 10.1007/s11115-024-00770-4

Kock, N. (2015). Common method bias in PLS-SEM: a full collinearity assessment approach. Int. J. e-Collabor. 11, 1–10. doi: 10.4018/ijec.2015100101

Lauer, T. W., and Deng, X. (2007). Building online trust through privacy practices. Int. J. Inf. Secur. 6, 323–331. doi: 10.1007/s10207-007-0028-8

Lin, H., and Chen, Q. (2024). Artificial intelligence (AI)-integrated educational applications and college students’ creativity and academic emotions: students and teachers’ perceptions and attitudes. BMC Psychol. 12:487. doi: 10.1186/s40359-024-01979-0,

Lin, C.-C., Huang, A. Y. Q., and Lu, O. H. T. (2023). Artificial intelligence in intelligent tutoring systems toward sustainable education: a systematic review. Smart Learn. Environ. 10:41. doi: 10.1186/s40561-023-00260-y

Liu, L. (2025). Impact of AI gamification on EFL learning outcomes and nonlinear dynamic motivation: comparing adaptive learning paths, conversational agents, and storytelling. Educ. Inf. Technol. 30, 11299–11338. doi: 10.1007/s10639-024-13296-5

Lyu, W., and Salam, Z. A. (2025). AI-powered personalized learning: enhancing self-efficacy, motivation, and digital literacy in adult education through expectancy-value theory. Learn. Motiv. 90:102129. doi: 10.1016/j.lmot.2025.102129

Massaty, M. H., Fahrurozi, S. K., and Budiyanto, C. W. (2024). The role of AI in fostering computational thinking and self-efficacy in educational settings: a systematic review. IJIE 8, 49–61. doi: 10.20961/ijie.v8i1.89596

Miao, J., and Ma, L. (2023). Teacher autonomy support influence on online learning engagement: the mediating roles of self-efficacy and self-regulated learning. SAGE Open 13:21582440231217736. doi: 10.1177/21582440231217737

Moore, M. E., Vega, D. M., Wiens, K. M., and Caporale, N. (2020). Connecting theory to practice: using self-determination theory to better understand inclusion in STEM. J. Microbiol. Biol. Educ. 21. doi: 10.1128/jmbe.v21i1.1955

Mun, Y. Y., Jackson, J. D., Park, J. S., and Probst, J. C. (2006). Understanding information technology acceptance by individual professionals: toward an integrative view. Inf. Manag. 43, 350–363. doi: 10.1016/j.im.2005.08.006

Munna, M. S. H., Hossain, M. R., and Saylo, K. R. (2024). Digital education revolution: evaluating LMS-based learning and traditional approaches. J. Innov. Technol. Converg. 6:22. doi: 10.69478/JITC2024v6n002a03

Murugesan, S. (2025). The rise of agentic AI: implications, concerns, and the path forward. IEEE Intell. Syst. 40, 8–14. doi: 10.1109/MIS.2025.3544940

Musyaffi, A. M., Adha, M. A., Mukhibad, H., and Oli, M. C. (2024). Improving students’ openness to artificial intelligence through risk awareness and digital literacy: evidence form a developing country. Soc. Sci. Humanit. Open 10:101168. doi: 10.1016/j.ssaho.2024.101168

Núñez, J. L., and León, J. (2015). Autonomy support in the classroom. Eur. Psychol. 20, 275–283. doi: 10.1027/1016-9040/a000234

Nunnally, J. C., and Bernstein, I. H. (1994). Psychometric theory. 3rd Edn. New York, NY: McGraw-Hill.

Ok, E. (2025). An evaluation of the learning management system for increasing student engagement in higher education: a review.

Pan, S., and Jordan-Marsh, M. (2010). Internet use intention and adoption among Chinese older adults: from the expanded technology acceptance model perspective. Comput. Human Behav. 26, 1111–1119. doi: 10.1016/j.chb.2010.03.015

Patall, E. A., Kennedy, A. A. U., Yates, N., Zambrano, J., Lee, D., and Vite, A. (2022). The relations between urban high school science students’ agentic mindset, agentic engagement, and perceived teacher autonomy support and control. Contemp. Educ. Psychol. 71:102097. doi: 10.1016/j.cedpsych.2022.102097

Pathak, A., and Bansal, V. (2024). AI as decision aid or delegated agent: the effects of trust dimensions on the adoption of AI digital agents. Comput. Hum. Behav. Artif. Humans 2:100094. doi: 10.1016/j.chbah.2024.100094

Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., and Podsakoff, N. P. (2003). Common method biases in behavioral research: a critical review of the literature and recommended remedies. J. Appl. Psychol. 88, 879–903. doi: 10.1037/0021-9010.88.5.879,

Polisetty, A., Chakraborty, D., Kar, A. K., and Pahari, S. (2024). What determines AI adoption in companies? Mixed-method evidence. J. Comput. Inf. Syst. 64, 370–387. doi: 10.1080/08874417.2023.2219668

Rasheed, H. M. W., Chen, Y., Khizar, H. M. U., and Safeer, A. A. (2023). Understanding the factors affecting AI services adoption in hospitality: the role of behavioral reasons and emotional intelligence. Heliyon 9:e16968. doi: 10.1016/j.heliyon.2023.e16968,

Reeve, J. (2013). How students create motivationally supportive learning environments for themselves: the concept of agentic engagement. J. Educ. Psychol. 105:579. doi: 10.1037/a0032690

Reeve, J., and Shin, S. H. (2020). How teachers can support students’ agentic engagement. Theory Pract. 59, 150–161. doi: 10.1080/00405841.2019.1702451

Rigby, C. S., and Ryan, R. M. (2018). Self-determination theory in human resource development: new directions and practical considerations. Adv. Dev. Hum. Resour. 20, 133–147. doi: 10.1177/1523422318756954

Roth, A., Ogrin, S., and Schmitz, B. (2016). Assessing self-regulated learning in higher education: a systematic literature review of self-report instruments. Educ. Assess. Eval. Account. 28, 225–250. doi: 10.1007/s11092-015-9229-2

Ryan, R. M., and Deci, E. L. (2020). Intrinsic and extrinsic motivation from a self-determination theory perspective: definitions, theory, practices, and future directions. Contemp. Educ. Psychol. 61:101860. doi: 10.1016/j.cedpsych.2020.101860

Salatino, A., Prével, A., Caspar, E., and Lo Bue, S. (2025). ‘Fire! Do not fire!’: Investigating the effects of autonomous systems on agency and moral decision-making. Acta Psychol. 260:105350. doi: 10.1016/j.actpsy.2025.105350

Saleh, E. H. (2025). Empowering learners: exploring teaching strategies, AI integration, and motivation tools for fostering autonomous learning in higher education. East J. Hum. Sci. 1, 14–34. doi: 10.63496/ejhs.Vol1.Iss6.182

Sarstedt, M., Ringle, C. M., and Hair, J. F. (2021). “Partial least squares structural equation modeling” in Handbook of market research. eds. C. Homburg, M. Klarmann, and A. Vomberg (Cham: Springer), 587–632.

Schunk, D. H. (2012). Social cognitive theory.

Shahzad, M. F., Xu, S., Lim, W. M., Yang, X., and Khan, Q. R. (2024). Artificial intelligence and social media on academic performance and mental well-being: student perceptions of positive impact in the age of smart learning. Heliyon 10:e29523. doi: 10.1016/j.heliyon.2024.e29523

Shao, C., Nah, S., Makady, H., and McNealy, J. (2025). Understanding user attitudes towards AI-enabled technologies: an integrated model of self-efficacy, TAM, and AI ethics. Int. J. Human–Computer Interact. 41, 3053–3065. doi: 10.1080/10447318.2024.2331858

Shi, S., and Zhang, H. (2025). EFL students’ motivation predicted by their self-efficacy and resilience in artificial intelligence (AI)-based context: from a self-determination theory perspective. Learn. Motiv. 91:102151. doi: 10.1016/j.lmot.2025.102151

Şimşek, A. S., Cengiz, G. Ş. T., and Bal, M. (2025). Extending the TAM framework: exploring learning motivation and agility in educational adoption of generative AI. Educ. Inf. Technol. 30, 20913–20942. doi: 10.1007/s10639-025-13591-9

Stajkovic, A. D., and Luthans, F. (1998). Social cognitive theory and self-efficacy: going beyond traditional motivational and behavioral approaches. Organ. Dyn. 26, 62–74. doi: 10.1016/S0090-2616(98)90006-7

Strielkowski, W., Grebennikova, V., Lisovskiy, A., Rakhimova, G., and Vasileva, T. (2025). AI-driven adaptive learning for sustainable educational transformation. Sustain. Dev. 33, 1921–1947. doi: 10.1002/sd.3221

Suriano, R., Plebe, A., Acciai, A., and Fabio, R. A. (2025). Student interaction with ChatGPT can promote complex critical thinking skills. Learn. Instr. 95:102011. doi: 10.1016/j.learninstruc.2024.102011

Tam, A. C. F. (2024). Interacting with ChatGPT for internal feedback and factors affecting feedback quality. Assess. Eval. High. Educ. 22, 1–17. doi: 10.1080/02602938.2024.2374485

Thabet, Z., Albashtawi, S., Ansari, H., Al-Emran, M., Al-Sharafi, M. A., and AlQudah, A. A. (2023). Exploring the factors affecting telemedicine adoption by integrating UTAUT2 and IS success model: a hybrid SEM–ANN approach. IEEE Trans. Eng. Manag. 71, 8938–8950. doi: 10.1109/TEM.2023.3296132

Tuffahati, N. N., and Nugraha, J. (2021). The effect of perceived usefulness and perceived ease of use on the Google classroom against learning motivation. J. TAM 12, 19–32. doi: 10.56327/jurnaltam.v12i1.1005

Tyler, N. C., Yzquierdo, Z., Lopez-Reyna, N., and Saunders Flippin, S. (2004). Cultural and linguistic diversity and the special education workforce: a critical overview. J. Spec. Educ. 38, 22–38. doi: 10.1177/00224669040380010301

Vanneste, B. S., and Puranam, P. (2024). Artificial intelligence, trust, and perceptions of agency. Acad. Manag. Rev. doi: 10.5465/amr.2022.0041

Vansteenkiste, M., Aelterman, N., De Muynck, G.-J., Haerens, L., Patall, E., and Reeve, J. (2018). Fostering personal meaning and self-relevance: a self-determination theory perspective on internalization. J. Exp. Educ. 86, 30–49. doi: 10.1080/00220973.2017.1381067

Venkatesh, V. (2000). Determinants of perceived ease of use: integrating control, intrinsic motivation, and emotion into the technology acceptance model. Inf. Syst. Res. 11, 342–365. doi: 10.1287/isre.11.4.342.11872

Venkatesh, V., and Davis, F. D. (2000). A theoretical extension of the technology acceptance model: four longitudinal field studies. Manag. Sci. 46, 186–204. doi: 10.1287/mnsc.46.2.186.11926

Venkatesh, V., Thong, J. Y. L., and Xu, X. (2012). Consumer acceptance and use of information technology: extending the unified theory of acceptance and use of technology. MIS Q. 36, 157–178. doi: 10.2307/41410412

Viberg, O., Cukurova, M., Feldman-Maggor, Y., Alexandron, G., Shirai, S., Kanemune, M., et al. (2024). What explains teachers’ trust in AI in education across six countries? Int. J. Artif. Intell. Educ. 22, 1–29. doi: 10.1007/s40593-024-00433-x

Wang, M. (2024). Modeling the contributions of perceived teacher autonomy support and school climate to Chinese EFL students’ learning engagement. Percept. Mot. Skills 131, 2008–2029. doi: 10.1177/00315125241272672,

Wang, C., Wang, H., Li, Y., Dai, J., Gu, X., and Yu, T. (2025). Factors influencing university students’ behavioral intention to use generative artificial intelligence: integrating the theory of planned behavior and AI literacy. Int. J. Human–Computer Interact. 41, 6649–6671. doi: 10.1080/10447318.2024.2383033

Wong, L.-W., Tan, G. W.-H., Ooi, K.-B., and Dwivedi, Y. (2024). The role of institutional and self in the formation of trust in artificial intelligence technologies. Internet Res. 34, 343–370. doi: 10.1108/INTR-07-2021-0446

Wu, D., Zhang, S., Ma, Z., Yue, X.-G., and Dong, R. K. (2024). Unlocking potential: key factors shaping undergraduate self-directed learning in AI-enhanced educational environments. Systems 12:332. doi: 10.3390/systems12090332

Xia, Q., Chiu, T. K. F., and Chai, C. S. (2023). The moderating effects of gender and need satisfaction on self-regulated learning through artificial intelligence (AI). Educ. Inf. Technol. 28, 8691–8713. doi: 10.1007/s10639-022-11547-x

Xia, L., An, X., Li, X., and Dong, Y. (2025). Perceptions of generative artificial intelligence (AI), behavioral intention, and use experience as predictors of university students' learning agency in generative AI-supported contexts. J. Educ. Comput. Res. doi: 10.1177/07356331251382853

Xu, Z., Zhao, Y., Zhang, B., Liew, J., and Kogut, A. (2023). A meta-analysis of the efficacy of self-regulated learning interventions on academic achievement in online and blended environments in K-12 and higher education. Behav. Inf. Technol. 42, 2911–2931. doi: 10.1080/0144929X.2022.2151935

Xu, Q., Liu, Y., and Li, X. (2025). Unlocking student potential: how AI-driven personalized feedback shapes goal achievement, self-efficacy, and learning engagement through a self-determination lens. Learn. Motiv. 91:102138. doi: 10.1016/j.lmot.2025.102138

Yang, M., Lovett, N., Li, B., and Hou, Z. (2025). Towards dynamic learner state: orchestrating AI agents and workplace performance via the model context protocol. Educ. Sci. 15:1004. doi: 10.3390/educsci15081004

Yao, N., and Wang, Q. (2024). Factors influencing pre-service special education teachers' intention toward AI in education: digital literacy, teacher self-efficacy, perceived ease of use, and perceived usefulness. Heliyon 10:e34894. doi: 10.1016/j.heliyon.2024.e34894

Yen, D. C., Wu, C.-S., Cheng, F.-F., and Huang, Y.-W. (2010). Determinants of users’ intention to adopt wireless technology: an empirical study by integrating TTF with TAM. Comput. Human Behav. 26, 906–915. doi: 10.1016/j.chb.2010.02.005

Yuan, J., and Kim, C. (2018). The effects of autonomy support on student engagement in peer assessment. Educ. Technol. Res. Dev. 66, 25–52. doi: 10.1007/s11423-017-9538-x

ZabihiAtergeleh, N., Ahmadian, M., and Karimi, S. N. (2025). The motivational paths to self-regulated language learning: a structural equation modeling approach. Stud. Self-Access Learn. J. 16:22. doi: 10.37237/202501

Zhang, C., Schießl, J., Plößl, L., Hofmann, F., and Gläser-Zikuda, M. (2023). Acceptance of artificial intelligence among pre-service teachers: a multigroup analysis. Int. J. Educ. Technol. High. Educ. 20:49. doi: 10.1186/s41239-023-00420-7

Zhao, H., Zhang, H., Li, J., and Liu, H. (2025). Performance motivation and emotion regulation as drivers of academic competence and problem-solving skills in AI-enhanced preschool education: a SEM study. Br. Educ. Res. J. doi: 10.1002/berj.4196

Zhou, C., Ren, T., and Lang, L. (2025). The impact of AI-based adaptive learning technologies on motivation and engagement of higher education students. Educ. Inf. Technol. 30, 22735–22752. doi: 10.1007/s10639-025-13646-x

Zimmerman, B. J. (2000). “Attaining self-regulation: a social cognitive perspective” in Handbook of self-regulation. eds. M. Boekaerts, M. Zeidner, and P. R. Pintrich (Elsevier), 13–39.

Keywords: agentic AI, artificial intelligence, autonomy support, higher education, self-efficacy, self-learning motivation

Citation: Alqurni J (2026) Exploring the role of agentic AI in fostering self-efficacy, autonomy support, and self-learning motivation in higher education. Front. Artif. Intell. 9:1738774. doi: 10.3389/frai.2026.1738774

Received: 03 November 2025; Revised: 23 December 2025; Accepted: 06 January 2026;
Published: 22 January 2026.

Edited by:

Leman Figen Gul, Istanbul Technical University, Türkiye

Reviewed by:

Dennis Arias-Chávez, Universidad Continental - Arequipa, Peru
Mazen Alzyoud, Al al-Bayt University, Jordan

Copyright © 2026 Alqurni. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Jehad Alqurni, jalqurni@iau.edu.sa
