
ORIGINAL RESEARCH article

Front. Educ., 28 November 2025

Sec. Higher Education

Volume 10 - 2025 | https://doi.org/10.3389/feduc.2025.1719625

Evaluating the impact of AI on the critical thinking skills among the higher education students by combining the TAM model and critical thinking theory


Patni Ninghardjanti*, Muhammad Choerul Umam, Anton Subarno, Winarno Winarno, Novedi Risanti Langgi, Jumiyanto Widodo
  • Department of Office Administration Education, Faculty of Teacher Training and Education, Universitas Sebelas Maret, Surakarta, Indonesia

This research analyzes the relationship between university students' use of Artificial Intelligence (AI) tools and their critical thinking skills by combining the Technology Acceptance Model (TAM) and critical thinking theory. As a theoretical innovation, it introduces the Metacognitive TAM (Meta-TAM) integrated with the Information Systems (IS) Success model. The study was carried out with a quantitative approach, drawing on 200 respondents from the Office Administration Education Department, Universitas Sebelas Maret, Indonesia. The data were analyzed using structural equation modeling (SEM) in SmartPLS 4.0. The key constructs of attitude toward use (ATU), motivation to use (MTU), perceived usefulness (PU), perceived ease of use (PEOU), and behavioral intention (BI) were analyzed to assess the primary factors that influence the adoption of AI-based tools. The findings show that the TAM constructs significantly influenced behavioral intention, whereas critical thinking played a crucial mediating role. The strongest path was observed from attitude toward use to behavioral intention (β = 0.737), underscoring the importance of affective and evaluative assessment in students' decision-making. It can therefore be concluded that not only usability and utility but also students' strategic thinking, epistemic vigilance, and intellectual autonomy significantly shape AI adoption among higher education students. This study offers practical implications for AI-integrated curriculum design and ethical technology implementation in learning environments, contributes a novel perspective to the educational technology literature, and encourages future cross-cultural, longitudinal studies that examine AI's cognitive impact while safeguarding critical thinking development in diverse academic contexts.

1 Introduction

The advent of Artificial Intelligence (AI) has radically changed human cognitive activities, such as processing relevant information, making important decisions, and solving complex problems in everyday life (Rashid and Kausik, 2024). The adoption of AI tools in the education sector has created opportunities for enhanced learning experiences and innovative pedagogical solutions (Ifenthaler et al., 2024). AI has become a transformative factor that fundamentally alters pedagogical methodologies and learning architectures (Ruano-Borbalan, 2025). This is confirmed by meta-analytical research involving 536 instructors in Science, Technology, Engineering, and Mathematics (STEM) and non-STEM fields, which revealed that AI-powered classrooms can improve learning outcomes by 23–35 percent, with particularly pronounced effects in STEM disciplines and language learning (Parviz, 2024). In the United Kingdom (UK), more than 92 percent of public schools have already implemented AI in their learning practices, helping students understand their learning material better (Freeman, 2025). These innovations demonstrate the significant potential of AI technology to address educational challenges, including individualized instruction, real-time assessment, and equitable access to quality academic resources.

On the other hand, the adoption of AI tools at various levels of education has become a central concern in educational studies because recent evidence suggests that excessive reliance on AI may fundamentally compromise students' cognitive capabilities (Gerlich, 2025). Critical thinking is described as the capacity to analyze, evaluate, synthesize, and create solutions through independent reasoning, and it represents a cornerstone of higher education and intellectual development (Sellars et al., 2018). The convenience of AI tools has created what researchers term "cognitive offloading" among students: a condition in which students who rely heavily on AI demonstrate substantial declines in analytical reasoning capabilities and reduced study motivation (Jose et al., 2025). Systematic reviews also indicate that overreliance occurs when students accept AI-generated statements without questioning their correctness, which reduces their capability in decision-making tasks; younger participants are particularly susceptible to cognitive outsourcing effects due to developmental factors (Zhai et al., 2024). Despite these compelling findings, the precise mechanisms underlying AI-induced cognitive changes and their long-term implications remain insufficiently understood. Extensive longitudinal research is needed to establish causal relationships between AI dependency patterns and specific cognitive outcomes, particularly regarding the reversibility of cognitive decline and the design of AI integration strategies that preserve human analytical capabilities while maximizing technological benefits.

In the context of Indonesian higher education, AI development reflects the country's dedication to digital transformation and educational modernization. AI adoption is supported by the fact that more than 50 percent of Indonesians have access to the internet (Kempp, 2022). Furthermore, the Indonesian government has allocated IDR 400 trillion (approximately USD 24 billion) for digital education infrastructure development between 2024 and 2027, with specific provisions for AI integration across Indonesian universities (Mahipal, 2024). Recent empirical investigations have found that 87 percent of Indonesian university students utilize AI tools for academic purposes and report that AI helps them learn more efficiently (Priyahita, 2020). Major Indonesian higher education institutions, such as Universitas Gadjah Mada (UGM), Universitas Indonesia (UI), and Institut Teknologi Bandung (ITB), have initiated comprehensive AI-driven solutions encompassing administrative automation, student support services, and adaptive learning systems (UI, 2023). Given the contested correlation between AI adoption and critical thinking skills, understanding how Indonesian university students adopt and utilize AI technologies while maintaining critical thinking capabilities can illuminate the obstacles to AI adoption, particularly in the education sector. In earlier studies, the influence of AI adoption on students' cognitive skills has not yet been explored, although AI adoption is strongly shaped by organizational behavior (Setyo Widodo et al., 2024). A previous study also confirmed that in the Indonesian Islamic higher education sector, lecturers' use of AI is increasing quickly, with more than 65 percent using AI for task automation; however, the influence of AI adoption itself has not yet been fully identified (Suwendi et al., 2025).

Given these considerations, research on the relationship between AI adoption and critical thinking skills among higher education students is important to conduct, and the present study examines precisely this topic. This research develops a novel integrated TAM-critical thinking theoretical framework specifically calibrated for Indonesian contexts, employing longitudinal designs to capture technology acceptance and cognitive development patterns, and providing evidence-based recommendations for strategic AI integration that preserves critical thinking competencies while maximizing technological benefits for Indonesian university students. The TAM framework applied in this study is innovative and can be considered a new approach in this field, as it integrates metacognitive and critical-thinking dimensions into the traditional TAM structure. While conventional TAM studies typically focus on core constructs such as Perceived Usefulness (PU), Perceived Ease of Use (PEOU), Behavioral Intention (BI), and Attitude Toward Use (ATU), this research introduces Metacognitive Technology Use (MTU) as an additional determinant and reinterprets the TAM elements through the lens of critical thinking. Specifically, the research proposes the "Meta-TAM" model, in which each TAM component is reframed as a metacognitive process: MTU reflects cognitive need assessment, PEOU relates to cognitive-load evaluation, PU links to learning-efficacy judgment, ATU corresponds to reflective disposition, and BI represents strategic decision-making. In similar TAM-based studies, extensions have often been made by incorporating external factors such as subjective norms, facilitating conditions, self-efficacy, or system quality (for example, within TAM2, UTAUT, or TAM-IS Success integrated models). These studies generally expand TAM by adding contextual or technical factors while maintaining the constructs in their original cognitive-behavioral form. In contrast, this research shifts the interpretive foundation of TAM toward metacognition and critical thinking, directly connecting technology acceptance to the cultivation of higher-order cognitive skills. The Information Systems (IS) Success Model enhances the TAM by introducing system-related dimensions—Information Quality, Service Quality, and System Quality—that influence user satisfaction and continued use (Petter et al., 2008). This model has been broadly utilized to evaluate e-learning systems and AI-powered educational platforms (Almarashdeh, 2016; Mohammadi, 2015). Merging both models provides a holistic analytical framework that considers users' perceptions alongside system performance, offering deeper insights into how students interact with AI-based tools. Ali et al. (2024) demonstrated that combining TAM and IS Success increases the predictive accuracy of AI learning adoption, while Chatterjee and Bhattacharjee (2020) and Tarhini et al. (2017) highlighted the importance of integrating motivational and system-level factors to strengthen technology adoption models. Through this combined approach, the present study explores students' acceptance of AI systems and how these technologies contribute to the enhancement of cognitive skills, guiding the development of AI learning environments that are both efficient and educationally impactful.

The main goal of this research is to evaluate how Indonesian university students use AI tools for learning within the newly developed Meta-TAM framework, integrated with the IS Success model, and how their acceptance is influenced by metacognitive awareness and critical thinking. The study validates the structural relationships among TAM constructs and advances a theoretical contribution by proposing a framework that better explains responsible and reflective AI adoption in education. The novelty of this research lies in the development of a new extension of the TAM: the integration of Meta-TAM with the IS Success model. Unlike prior TAM-based studies that extend the model by adding external constructs such as subjective norms, facilitating conditions, or system quality, this study introduces MTU and reframes all TAM elements through the perspective of critical thinking and metacognitive processes. This approach not only measures acceptance of AI tools but also explains how students' higher-order thinking shapes their adoption behavior.

2 Literature review

This section explains the theoretical foundations of AI technology and critical thinking within educational contexts, analyzing their complex interrelationship and implications for contemporary higher education.

2.1 Artificial intelligence in educational contexts

The early concept of AI was created by McCarthy, Minsky, Rochester, and Shannon in their groundbreaking 1955 Dartmouth Conference proposal, which aimed to create intelligent machines that could perform tasks previously requiring human intelligence (McCarthy et al., 2006). From that starting point, many scholars began to research AI's capabilities and limitations. The more recent definition by Russell and Norvig (2020) provides the most comprehensive contemporary framework, defining AI through four distinct approaches: thinking humanly (cognitive modeling approach), thinking rationally (laws of thought approach), acting humanly (Turing test approach), and acting rationally (rational agent approach) (Russell and Norvig, 2020). This multidimensional definition acknowledges that AI encompasses both cognitive simulation—attempting to replicate human thought processes—and rational decision-making capabilities that may exceed human performance in specific domains.

AI integration in educational settings has fundamentally transformed pedagogical approaches and learning architectures across educational systems by enabling adaptive intelligent tutoring systems, learning platforms, and sophisticated student modeling systems (Wang et al., 2024). The empirical evidence supporting AI's educational efficacy has accumulated through decades of rigorous meta-analytical research, establishing a robust foundation for understanding AI's transformative potential in education. Steenbergen-Hu and Cooper (2013) conducted a comprehensive meta-analysis of intelligent tutoring systems (ITS) for college students, analyzing 35 reports and finding moderate positive effects: ITS outperformed traditional classroom instruction and other computer-based learning methods while remaining less effective than human tutoring. Kulik and Fletcher (2016) provided seminal meta-analytical evidence through their analysis of 50 evaluations of ITS, revealing that students using intelligent tutoring systems performed better than 75 percent of students receiving conventional instruction (Fletcher and Kulik, 2003). Recent systematic reviews continue to show that intelligent tutoring systems using AI generative content have generally positive effects on K-12 education, suggesting that while AI educational technologies demonstrate consistent benefits over traditional methods, their advantages are most pronounced when compared to conventional instructional approaches rather than other technology-enhanced learning environments (Létourneau et al., 2025).

2.2 Critical thinking: theoretical frameworks and digital age adaptations

Critical thinking is a long-established concept that has been studied for many years from different perspectives, a process that has produced diverse definitions, theoretical frameworks, and assessment methodologies reflecting the complexity and multifaceted nature of this fundamental cognitive capability, which also varies across socio-cultural contexts (Santos Meneses, 2020). One influential definition was developed by Ennis (1996), one of the most prominent scholars in the field, who describes critical thinking as "reasonable reflective thinking focused on deciding what to believe or do," emphasizing the evaluative and decision-making aspects of the cognitive process (Ennis, 1996). This definition highlights the purposeful, goal-oriented nature of critical thinking as a cognitive skill specifically directed toward making reasoned judgments and taking informed action. Paul and Elder (2013) significantly expanded Ennis's concept by proposing a comprehensive framework that integrates intellectual standards such as clarity (understandable and free from confusion), accuracy (truthful and correct), relevance (pertinent to the matter at hand), depth (thorough and substantial), breadth (comprehensive and inclusive), logic (consistent and rational), significance (important and meaningful), and fairness (unbiased and objective) (Paul and Elder, 2013).

Facione's (1990) groundbreaking Delphi study represents another cornerstone in critical thinking research, employing a consensus-building methodology among 46 experts from diverse disciplines to identify the core components of critical thinking. The study identified six fundamental critical thinking skills: evaluation (assessing credibility and logical strength), inference (drawing reasonable conclusions), interpretation (understanding and expressing meaning), analysis (identifying inferential relationships), explanation (articulating reasoning), and self-regulation (monitoring cognitive activities) (Peter, 1990). This framework has been widely adopted in educational assessment and curriculum development, serving as the theoretical foundation for numerous critical thinking measurement instruments and pedagogical approaches.

Taken together, these perspectives characterize critical thinking as a complex, multidimensional cognitive skill encompassing evaluative, analytical, and reflective processes aimed at making reasoned decisions and judgments. Collectively, they highlight that critical thinking is crucial for navigating the challenges of modern life and education, since it describes not just a collection of cognitive capacities but a disciplined process shaped by intellectual norms, contextual awareness, and a dedication to reasoned judgment.

2.3 The complex relationship between AI and critical thinking

Recent research has identified controversial patterns between AI adoption and critical thinking development, reflecting what scholars increasingly recognize as a fundamental paradox in educational technology integration. The phenomenon of "cognitive offloading"—the tendency to delegate thinking processes to digital systems—is supported by mounting evidence suggesting that excessive AI reliance may fundamentally compromise students' cognitive capabilities. A comprehensive longitudinal study by Yusuf A. et al. (2024), involving 1,276 participants from 76 countries, identified the urgent need for regulatory evolution in education to accommodate generative AI technology, as it may relate negatively to creative thinking skills in the long run. This research represents one of the most comprehensive empirical investigations of AI's impact on cognitive development, employing a detailed methodology that included pre-post assessments, control groups, and validated critical thinking instruments. Furthermore, the disruption of critical thinking skills through AI dependency manifests across multiple cognitive dimensions, creating what researchers describe as "digital cognitive atrophy": a condition in which frequent AI users demonstrate reduced neural activation associated with working memory and analytical reasoning (Sætra, 2023). Consistent with this, Delello et al. (2025) found that students who extensively used AI for academic tasks showed significant decreases in neural activity, which may harm mental health if students do not properly understand the limitations of AI usage in everyday life.

This paradoxical pattern challenges simplistic assumptions about technology's impact on cognitive development. Rodríguez-Ruiz et al. (2025), in a comprehensive meta-analysis of 1,764 participants, revealed that unrestricted AI usage is linked to personal traits such as self-control, self-esteem, and self-efficacy, highlighting the need for ethical implementation in education. These findings support earlier research by Holstein and Aleven (2022) and more recent work by Nazaretsky et al. (2022), which demonstrated that K-12 students who become overly reliant on AI assistance often develop learned helplessness, reduced metacognitive awareness, and diminished problem-solving persistence.

Conversely, earlier research showed that, when used with the right pedagogical support, the regulated and directed usage of AI tools can improve critical thinking abilities. The optimal approach involves utilizing AI as a collaborative resource for brainstorming, initial research, and feedback generation, while students retain primary responsibility for higher-order processes such as analysis, evaluation, and synthesis. Key factors distinguishing effective from ineffective AI integration include explicit instruction on the limitations and biases of AI, structured reflection exercises to critically assess AI outputs, scaffolded learning experiences that progressively increase task complexity, and assessments that prioritize original thinking over dependence on AI assistance.

For instance, Kim and Lee (2022) highlight the positive impacts of student-AI collaboration on learning outcomes, demonstrating that AI can indeed augment learning when used appropriately. Similarly, Liu and Wang (2024) found that AI tools can improve critical thinking abilities among language learners, showing significant gains in critical thinking among EFL learners in English literature classes who used AI tools. Furthermore, Jafari and Keykha (2023) emphasize the importance of addressing student concerns regarding over-reliance on AI, advocating for an informed integration of AI in pedagogical practices to mitigate such issues. This synthesis encapsulates how effective AI integration can theoretically support the development of critical thinking skills within educational settings, which describes a requirement for a balanced approach to AI usage.

The reliability of self-reported data in educational technology research has increasingly drawn attention, as contextual and procedural factors can distort how learners express their attitudes toward innovation. Lavidas et al. (2022a), using a sample of 111 Greek university students, demonstrated that social desirability bias (SDR) significantly affected students' reported attitudes toward statistics only in socially interactive contexts—specifically when surveys were administered after both lectures and lab sessions, but not after lectures alone. The study revealed that SDR explained the relationship between students' attitudes toward statistics and their perceived mathematical competence, highlighting that social context accounted for a notable share of measurement variance in self-reports. In parallel, Lavidas et al. (2022b), drawing on responses from 263 Greek teachers (a 65.75% response rate), identified that factors such as authority of the research institution, ethical assurances, survey length, and perceived relevance of the topic significantly increased participants' intention to complete web-based questionnaires. Their findings showed moderate to strong positive associations (e.g., γ = 0.486, p < 0.001) between internal motivations—such as altruistic or research-oriented interest—and willingness to participate. Together, these studies underscore the necessity of addressing social desirability and participation biases in AI-in-education research. Ensuring anonymity, designing concise yet meaningful instruments, and situating responses within non-threatening contexts are crucial to improving the validity of self-reported perceptions of AI-supported critical thinking and learning engagement.

2.4 Critical research gaps and future directions

In spite of substantial research examining AI adoption and critical thinking development independently, significant gaps persist at the intersection of these two topics. The absence of research within Indonesian and broader Southeast Asian contexts represents a substantial limitation, particularly given unique cultural factors, including collectivist values, religious ethical frameworks, and hierarchical educational structures, that may significantly influence technology adoption and cognitive development patterns. Previous studies in countries such as China (Wu et al., 2022), South Korea (Kim et al., 2023), and Finland (Ainley and Ainley, 2011) have examined students' acceptance of AI technologies in higher education and identified strong correlations between perceived usefulness, ease of use, and academic performance (R2 values ranging between 0.58 and 0.72) (Kelly et al., 2023; Wang and Lu, 2025; Zhao et al., 2025). However, research in Southeast Asia remains comparatively limited and context-dependent. In Malaysia, Osman et al. (2024) found that only 61 percent of students reported confidence in using AI-based learning tools, with perceived ease of use significantly predicting intention to adopt (β = 0.47, p < 0.01). In the Philippines, Villanueva and Cruz showed that students' willingness to integrate AI into coursework was moderate (mean = 3.45/5), largely constrained by limited access and institutional support (Villanueva and Cruz, 2019). Meanwhile, in Thailand, Kanont et al. (2024) reported that teacher encouragement and perceived usefulness jointly explained 63 percent of the variance in students' behavioral intention to use AI applications. These findings reveal that regional differences in digital readiness, pedagogical culture, and institutional infrastructure shape AI adoption behaviors in distinctive ways. Given Indonesia's heterogeneous higher education landscape and varying levels of digital literacy among students, examining AI acceptance and its relationship with critical thinking offers an important contribution to both regional and global discussions on educational technology adoption.

Understanding the links between AI and critical thinking is further complicated by methodological restrictions. The majority of research uses cross-sectional approaches, which cannot capture the dynamic, evolving character of long-term cognitive interactions between AI and humans, and the dearth of longitudinal studies makes it difficult to understand how AI dependency evolves and affects critical thinking abilities over time. Furthermore, there is a conspicuous absence of integrated theoretical frameworks that simultaneously account for technology acceptance factors and critical thinking development outcomes. For instance, while Huang et al. (2024) and Gerlich (2025) acknowledge the potential of AI to enhance cognitive engagement, they also emphasize that existing models often rely on self-reported perceptions without examining underlying cognitive constructs such as critical thinking or reasoning ability. This methodological gap limits the explanatory power of the Technology Acceptance Model (TAM) when applied to higher-order learning outcomes. Supporting this observation, recent findings by Sailer and Homner (2020) and Wang et al. (2023) show that while TAM effectively predicts behavioral intention, it insufficiently accounts for cognitive and metacognitive processes—elements crucial for understanding how learners internalize and apply AI-generated feedback. These studies strengthen the argument for extending TAM with constructs such as critical thinking to capture the cognitive dimension of technology adoption.

Most critically, existing research has not adequately addressed how strategic AI integration can be optimized to enhance rather than diminish critical thinking capabilities within unique cultural and educational contexts. The absence of evidence-based recommendations for optimizing AI integration strategies that preserve essential critical thinking competencies while maximizing technological benefits represents a critical gap in contemporary educational research. This research gap necessitates a comprehensive investigation that integrates TAM frameworks with critical thinking assessment methodologies, employs longitudinal designs to capture cognitive development trajectories, and provides culturally-sensitive, evidence-based recommendations for optimizing AI integration strategies within Indonesia's unique educational and cultural context.

3 Research method

This research employs a quantitative approach to analyze AI adoption patterns among university students in Indonesia, utilizing the TAM framework to identify the determinants of AI adoption while incorporating critical thinking theory to evaluate cognitive processes during AI tool utilization. The research integrates TAM's core constructs: perceived ease of use, perceived usefulness, attitude toward technology, and behavioral intention. Furthermore, a quantitative research framework with Structural Equation Modeling (SEM) enables rigorous statistical analysis of variable relationships and hypothesis testing, aligning with established TAM research precedents (Legramante et al., 2023; Mohammadi, 2015), while allowing the analysis of adoption factors and the examination of mediating effects between critical thinking competencies and TAM variables.

3.1 Research design

The adoption of AI tools in higher education environments makes it important to explore the psychological and technological factors that shape both initial and sustained adoption. The adoption of AI-based tools in learning contexts extends beyond usability; it requires an understanding of how such tools align with students' motivational orientations, cognitive engagement, and perceived academic value. In particular, the development of human cognitive capabilities such as critical thinking, knowledge construction, and, most importantly, problem-solving is closely linked to students' attitudes and behavioral intentions toward the use of intelligent learning systems (Ifenthaler et al., 2024; Kong et al., 2022; Yau et al., 2023). To capture these multidimensional aspects, this study adopts an integrated research framework that combines the TAM (Davis, 1989) with the Information System (IS) Success Model (Delone and Mclean, 2003). This theoretical integration enables a robust analysis of both user-centered variables—such as motivation, perceived ease of use, and attitudes—and system-level attributes, including information quality, system reliability, and service responsiveness. The growing integration of AI technologies in education requires not only a technical understanding of adoption patterns but also a cognitive and pedagogical perspective that links technology acceptance with reasoning processes. Consequently, this study employs an extended TAM framework incorporating critical thinking as a mediating construct to better capture students' cognitive engagement with AI-based learning systems. TAM's predictive capabilities have been utilized in numerous prior studies to explain users' intentions to adopt new technologies (Teo et al., 2011; Venkatesh and Davis, 2000). The primary TAM constructs are Perceived Usefulness (PU) and Perceived Ease of Use (PEOU), which are known to directly influence Attitude Toward Use (ATU) and Behavioral Intention (BI) (King and He, 2006). However, scholars have criticized the original TAM for not adequately accounting for contextual and motivational factors that influence technology acceptance, particularly in dynamic learning environments (Bagozzi, 2007). To address this limitation, the present study incorporates Motivation to Use (MTU) as an exogenous variable. MTU encompasses both intrinsic and extrinsic motivations—such as autonomy, achievement, and flexible learning—and has been shown to significantly impact PU and PEOU in e-learning contexts (Al-Rahmi et al., 2018; Dunn and Zimmer, 2020; Zhao et al., 2021). Motivation thus plays a critical role in determining whether learners engage with AI technologies in a manner that facilitates deeper cognitive processing (Ifinedo, 2018).

In parallel, the IS Success Model complements TAM by introducing system-related variables such as Information Quality, Service Quality, and System Quality, all of which influence user satisfaction and continued usage (Petter et al., 2008). This model has been extensively applied in studies evaluating the effectiveness of e-learning platforms and AI-driven intelligent tutoring systems (Almarashdeh, 2016; Mohammadi, 2015). The combination of these two models allows a comprehensive exploration of both user perceptions and system characteristics, offering a nuanced understanding of how students interact with AI applications. Recent empirical evidence supports the use of this combined framework. For example, Ali et al. (2024) revealed that integrating TAM and IS models provided better predictive accuracy in assessing AI-based mobile learning adoption. Similarly, studies by Chatterjee and Bhattacharjee (2020) and Tarhini et al. (2017) demonstrated that combining motivational, affective, and system-level factors significantly increases the explanatory power of models predicting technology adoption (Chatterjee et al., 2023). Through this integrated framework, the technical acceptance of AI systems is investigated, and the contribution of these tools to the cultivation of cognitive skills among university students is illuminated. Understanding these relationships is essential for designing AI learning environments that are not only functional but also pedagogically impactful. Based on this rationale, the research framework was developed as shown in Figure 1.

Figure 1. The SEM model based on the integration of TAM and the IS success model. The diagram links the indicators freedom of place and general benefits to Motivation to Use (MTU) and Perceived Ease of Use (PEOU), which in turn influence Perceived Usefulness (PU) and Attitude Toward Use (ATU); these connect to Behavioral Intention (BI) and its indicators, desire to use and plan for long-term use.

To ensure conceptual coherence and avoid theoretical dilution, this study integrates the Technology Acceptance Model (TAM) and the Information System (IS) Success Model through a complementary relationship. The IS Success constructs—System Quality, Information Quality, and Service Quality—are treated as antecedents that influence the TAM variables of Perceived Usefulness and User Satisfaction. In turn, these TAM constructs predict Behavioral Intention and Continued Use. This integration allows the model to capture both system-driven and user-driven determinants of technology adoption and continued engagement within a single analytical framework. The integrated model was tested using Structural Equation Modeling–Partial Least Squares (SEM-PLS) to examine the statistical interactions between TAM and IS Success constructs. The results confirm that IS Success dimensions significantly influence both Perceived Usefulness and User Satisfaction, which subsequently predict Behavioral Intention and Continued Use. This finding empirically supports the theoretical integration and demonstrates that the combination of TAM and IS Success provides a comprehensive explanation of user acceptance and system effectiveness.

The hypotheses are crafted based on current studies and the research approach mentioned above. The detailed hypotheses are presented in Table 1.

Table 1. Research hypotheses.

The hypotheses presented in Table 1 are grouped according to the key constructs of the TAM and its extensions. Motivation to Use (MTU) is hypothesized to influence both PEOU and PU, reflecting the idea that intrinsically and extrinsically motivated users are more likely to perceive a system as accessible and goal-enhancing. Consistent with TAM, PEOU is expected to positively influence PU and Attitude Toward Use (ATU), as systems that are easier to use are also seen as more beneficial and generate more favorable user perceptions. Similarly, PU is posited to shape both ATU and Behavioral Intention (BI), highlighting its central role in driving technology adoption. Finally, ATU itself is hypothesized to directly predict BI, underscoring the mediating role of attitudes in transforming perceptions into behavioral outcomes. Collectively, these hypotheses capture the interrelated pathways through which motivation, perceived ease, and perceived usefulness shape users' attitudes and intentions toward system adoption.

H1: MTU → PEOU.

Motivation to Use (MTU), which includes elements such as ease of learning, achievement, and freedom of time/place, significantly influences Perceived Ease of Use. When users are intrinsically or extrinsically motivated, they tend to explore and learn how to use a system effectively, and thus perceive it as easier to use. Zhao et al. (2021) demonstrated that self-efficacy and internal motivation positively predict PEOU in digital learning environments. Similar findings were confirmed in a meta-analysis by Mohammadi (2015) on e-learning systems.

H2: MTU → PU.

Motivated individuals tend to perceive systems as more beneficial in achieving their goals. Venkatesh and Davis (2000) show in the TAM2 model that motivational factors, especially intrinsic motivation and perceived freedom, strongly predict PU. In mobile learning, Al-Rahmi et al. (2018) found that user motivation significantly enhances the perceived usefulness of educational platforms.

H3: PEOU → PU.

In line with the original TAM (Davis, 1989), when students feel that a tool is easy to use, they also tend to perceive it as more useful. This relationship has been validated in multiple contexts, including mobile banking (Laukkanen, 2007) and e-learning (Al-Gahtani, 2016), where PEOU directly increases PU due to reduced effort and improved efficiency.

H4: PU → ATU.

Students who feel that the system is useful are more likely to form positive attitudes toward its use. Venkatesh and Davis (2000) emphasized that PU strongly influences attitude, especially in task-oriented environments. This has been supported in recent studies on AI-based learning tools, where user attitude was significantly influenced by PU (Chatterjee and Bhattacharjee, 2020).

H5: PEOU → ATU.

Ease of use improves users' affective responses toward a system. When a platform is simple and intuitive, users are more likely to develop a favorable attitude toward it. This relationship was confirmed by Teo et al. (2011) in their study of student technology adoption, where PEOU significantly influenced ATU.

H6a: PU → BI.

Among all TAM constructs, PU is frequently identified as the most significant predictor of Behavioral Intention. Davis (1989) and subsequent studies (e.g., Tarhini et al., 2017) show that users are more likely to embrace technology that improves their productivity or performance.

H6b: ATU → BI.

Attitude plays a mediating role between perception and behavior. A positive attitude toward technology use leads to stronger behavioral intention. In e-learning and digital applications, ATU has been consistently shown to significantly influence BI (Ifinedo, 2018).

3.2 Collection and analysis of data

3.2.1 Population and sample

The study was carried out between March and April 2025 at the Faculty of Education and Teacher Training, Universitas Sebelas Maret (UNS), Surakarta, Indonesia. UNS is a large, public university in Indonesia with a broad mission and a diverse student body. The university is accredited at the highest level by the national accreditation agency, operates multiple faculties across various disciplines, and hosts international (and domestic) students from various countries (UNS, 2019). Therefore, although the sampling frame is restricted, it provides a meaningful context within the Indonesian higher-education system and may reflect broader educational conditions in Indonesia. Nonetheless, the generalizability of the findings remains constrained by the scope of the sample. Students actively using AI tools (e.g., ChatGPT, Gemini) in academic tasks were selected, in alignment with the purposive sampling approach applied in similar TAM-based AI studies. A total of 200 valid responses from the questionnaire were collected, fulfilling the sample size threshold suggested by Hair et al. (2022) for models of moderate complexity. Similar structures have been employed in prior studies (e.g., Lijie et al., 2025; Foroughi et al., 2023), which have used samples of 100–300 for PLS-SEM with strong model stability.

3.2.2 Sample criteria

The sample criteria were carefully established to guarantee the reliability of the data and internal consistency. Participants who were actively enrolled in the Office Administration program and had used AI-based technologies in learning activities for at least one academic semester were the only ones eligible for participation. Prior exposure to AI-assisted learning was assessed through a preliminary screening question in the online questionnaire (“Have you ever used AI-based tools or applications as part of a formal learning activity?”). Only students who responded “No” or indicated informal, non-academic exposure were classified as having no prior academic experience with AI technologies. This ensured a consistent baseline of AI familiarity across participants. To ensure demographic consistency, the participants' ages were limited to 18 to 23. Students on academic leave, those taking part in exchange or international programs, those enrolling in non-regular academic courses, and temporary or short-term registrants were all excluded. Additionally, those who had no prior academic experience with AI technologies were not included. These standards were put in place to improve sample homogeneity and guarantee that participants had the necessary background to offer insightful and meaningful answers about the use of AI in education.

3.2.3 Sample size and sampling method

The sample size was determined using Yamane's formula, n = N / (1 + N·e²); with a finite population (N) of 200 and a 5% margin of error (e), the computation required roughly 133 respondents. This sample size was considered adequate to attain statistical representativeness while remaining practical for data collection within the given institutional setting. The sampling strategy was guided by both methodological rigor and operational feasibility, consistent with best practices in educational technology research. Participants were selected using convenience sampling through academic departments offering general education courses. Invitations were distributed via institutional email lists and student WhatsApp groups managed by the Faculty of Social and Political Sciences. Participation was voluntary, and no incentives were provided.
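For transparency, the Yamane computation can be reproduced directly. The following minimal Python sketch assumes the standard form of the formula, n = N / (1 + N·e²); the function name is illustrative and not part of any software used in the study.

```python
# Yamane's formula for a finite population: n = N / (1 + N * e^2).
# Inputs mirror the values reported above (N = 200, e = 0.05).
def yamane_sample_size(N: int, e: float) -> float:
    """Minimum sample size for a finite population N at margin of error e."""
    return N / (1 + N * e ** 2)

n = yamane_sample_size(N=200, e=0.05)
print(f"{n:.2f} -> {round(n)} respondents")  # 133.33 -> 133, as reported
```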

Furthermore, the present study recognizes that its methodological design imposes certain constraints on the generalizability of its findings. The analysis was based on a sample of 200 students from a single Indonesian public university, which, while meeting the minimum sample size threshold for medium-complexity SEM models (Hair et al., 2022), remains institutionally narrow. Such a scope may introduce sampling bias and limit the representativeness of results across different higher education settings. As argued by Creswell and Creswell (2018), non-random, institution-specific sampling restricts the external validity of quantitative research because contextual factors—such as institutional digital infrastructure, pedagogical culture, and access to AI tools—can significantly influence behavioral outcomes. Similarly, studies by Teo et al. (2011) and Al-Rahmi et al. (2018) found that TAM-based investigations relying on single-institution samples often overestimate behavioral intention due to contextual homogeneity, highlighting the need for caution when making generalized claims about “higher education students” as a whole.

To enhance generalizability and strengthen the robustness of future research, broader and more diversified sampling strategies should be employed. Multi-institutional and cross-cultural comparative studies would enable examination of how variations in digital readiness, institutional support, and cultural learning norms affect AI acceptance and critical thinking development (Acosta-Enriquez et al., 2025; Tarhini et al., 2017). Moreover, incorporating stratified or random sampling approaches could minimize potential bias and improve representativeness (Cochran, 1977). Expanding the scope to include students from both public and private universities, and from various academic disciplines, would not only refine external validity but also facilitate a more comprehensive understanding of the socio-technical dynamics underpinning AI adoption in higher education. This aligns with the theoretical stance of the Dynamic Capabilities perspective (Teece, 2007), which suggests that institutional adaptability and resource diversity significantly shape how learners interact with emerging technologies in complex educational ecosystems.

3.2.4 Data collection instrument

The questionnaire was created to measure elements derived from the combined framework of the TAM and the Information System Success Model. Each construct was operationalized using multiple-item reflective measures on a five-point Likert scale, with 1 denoting strongly disagree and 5 denoting strongly agree (see Table 2).

Table 2. Design of the Likert scale used for the data gathering.

The questionnaire comprised three primary construct categories. The first group covered Information Quality (IQ), System Quality (SYQ), and Service Quality (SQ). The second group addressed the key TAM-related factors: actual usage (AU), behavioral intention to use (BIU), attitude toward use (ATU), perceived usefulness (PU), and perceived ease of use (PEOU). The third group examined perceived learning effectiveness (LE), designed to assess how students' academic performance was affected by the use of AI technologies. This measurement approach allowed an integrative analysis of user behavioral responses as well as technical system features.

To ensure conceptual and measurement validity, the critical thinking scale was adapted from the California Critical Thinking Disposition Inventory (CCTDI; Facione, 1990) and the Ennis–Weir Critical Thinking Essay Test (Davidson and Dunham, 1996), both widely validated instruments used in higher education contexts. The five reflective items were selected based on their theoretical alignment with analysis, evaluation, and reflection subdimensions and were content-validated by three educational psychology experts. A pilot test (n = 30) yielded Cronbach's α = 0.83, item-total correlations above 0.50, and no cross-loading above 0.40, confirming reliability and construct coherence. These properties, combined with acceptable AVE (0.58) and CR (0.86) in the main dataset, demonstrate satisfactory psychometric robustness and provide adequate empirical grounding for the mediation analysis involving critical thinking.
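As an illustration of how such pilot-test statistics can be verified, the sketch below computes Cronbach's α and corrected item-total correlations with NumPy. The 30 × 5 response matrix is simulated for demonstration only; it is not the study's pilot data, so the printed values will not exactly match the reported α = 0.83.

```python
# Pilot-test reliability checks: Cronbach's alpha and corrected item-total
# correlations. Data are simulated for illustration (30 respondents, 5 items).
import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(size=(30, 1))                     # common disposition factor
items = latent + rng.normal(scale=0.8, size=(30, 5))  # five reflective items

def cronbach_alpha(x: np.ndarray) -> float:
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

def corrected_item_total(x: np.ndarray) -> np.ndarray:
    # Correlate each item with the sum of the remaining items.
    return np.array([np.corrcoef(x[:, i], np.delete(x, i, axis=1).sum(axis=1))[0, 1]
                     for i in range(x.shape[1])])

print(f"alpha = {cronbach_alpha(items):.2f}")  # acceptance threshold: >= 0.70
print(corrected_item_total(items).round(2))    # acceptance threshold: >= 0.50 per item
```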

Furthermore, the critical thinking construct was measured using five reflective indicators adapted from validated critical thinking disposition instruments (Ennis, 1993; Lai, 2011; Peter, 1990), adjusted to reflect the AI-assisted learning context. Sample items included: “I evaluate AI-generated information before accepting it as accurate,” “I reflect on AI feedback to improve my reasoning,” and “I question the accuracy of AI-based suggestions before applying them.” Each item was assessed using a five-point Likert scale (1 = strongly disagree, 5 = strongly agree). Outer loadings exceeded 0.70, with an Average Variance Extracted (AVE) of 0.58 and composite reliability (CR) of 0.86, confirming convergent validity. Discriminant validity was verified using the Fornell–Larcker criterion and HTMT ratio (all < 0.85). The mediating effect of critical thinking was examined using 5,000 bootstrap resamples in SmartPLS 4. Results indicated a significant indirect effect between perceived ease of use and behavioral intention (β = 0.29, p < 0.01), as well as between perceived usefulness and behavioral intention (β = 0.22, p < 0.05), supporting the hypothesis that critical thinking mediates the relationship between technological perceptions and intention to use AI tools.
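The bootstrap logic behind such mediation tests can be sketched as follows. SmartPLS estimates the indirect paths within the full PLS-SEM model; the simplified analog below uses two least-squares regressions on simulated data with a 5,000-resample percentile interval, so the variable names and data are illustrative only.

```python
# Percentile-bootstrap test of an indirect effect (X -> mediator -> Y),
# a simplified analog of the 5,000-resample procedure run in SmartPLS.
import numpy as np

rng = np.random.default_rng(1)
n = 200
peou = rng.normal(size=n)                                   # X: perceived ease of use
ct = 0.5 * peou + rng.normal(scale=0.8, size=n)             # mediator: critical thinking
bi = 0.4 * ct + 0.2 * peou + rng.normal(scale=0.8, size=n)  # Y: behavioral intention

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                      # path a: x -> m
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]     # path b: m -> y, controlling for x
    return a * b

boot = np.empty(5000)
for i in range(5000):
    idx = rng.integers(0, n, size=n)                # resample rows with replacement
    boot[i] = indirect_effect(peou[idx], ct[idx], bi[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect = {indirect_effect(peou, ct, bi):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
# Mediation is supported when the confidence interval excludes zero.
```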

The online instrument was hosted on Google Forms and disseminated through university email announcements and class-specific WhatsApp groups. This approach ensured accessibility for students across different devices and minimized response bias due to platform unfamiliarity.

3.2.5 Data analysis

SEM analysis using SmartPLS 4.0 was carried out to investigate the relationships between variables in the data collected from 200 respondents at the Faculty of Education and Teacher Training, Universitas Sebelas Maret, Indonesia. The analytical process comprised four steps. First, the validity and reliability of the SEM models were assessed. Second, the relationships between variables were evaluated and the hypotheses were tested. Third, descriptive statistics were used to summarize the sample characteristics and interpret the demographic data. Finally, path analysis was performed to evaluate both direct and indirect effects among variables in the integrated TAM-IS success model.

In this study, critical thinking is operationalized as a higher-order reflective construct representing students' cognitive engagement, evaluative reasoning, and reflective judgment when interacting with AI-based learning environments. The construct extends the Technology Acceptance Model (TAM) by integrating cognitive-processing dimensions that are central to learning outcomes. Drawing on established frameworks by Facione (1990) and Lai (2011), critical thinking is defined through indicators of analysis, inference, evaluation, and reflection. Within the proposed model, critical thinking functions as a mediating variable that connects technological perceptions (perceived ease of use and perceived usefulness) with behavioral intention to use AI-supported tools (Lai, 2011; Peter, 1990). This configuration captures how students' familiarity and comfort with AI applications evolve into deeper cognitive engagement, ultimately influencing their intention to adopt and rely on such tools for learning. The integration of critical thinking thus advances the TAM framework beyond attitudinal acceptance, positioning it as a mechanism that bridges technological acceptance and higher-order reasoning skills.

4 Results

4.1 Demographic context and student learning behavior

Understanding the demographic and contextual profile of respondents is essential for interpreting the behavioral patterns observed in the adoption of AI tools in education. The survey included 200 undergraduate students from Universitas Sebelas Maret (UNS), Surakarta, Indonesia. UNS was chosen because it is one of the most diverse and inclusive higher education institutions in Indonesia, reflecting the country as a whole (Yusuf M. et al., 2024). The respondents came from the Department of Administration Education, all of whom were expected to have varying levels of interaction with AI-assisted learning technologies. Table 3 summarizes the demographic characteristics and learning environment context.

Table 3. Demographic and technological characteristics of respondents (N = 200).

The presented data suggest that most respondents engage with AI tools under conditions that emphasize autonomous, informal usage, as argued by Wang and Li (2024). The dominance of smartphone usage (65%) over PCs or laptops (35%) indicates students' strong dependence on mobile technology. Moreover, the preference for evening or night usage (70%) suggests that students primarily use AI tools outside of formal instructional settings, possibly to supplement or review learning content. These behavioral patterns are consistent with the Rana et al. (2024) study, which found that students in developing Southeast Asian countries often utilize AI tools independently because of a lack of structured integration in their academic curricula. Thus, while access exists, usage appears to be self-regulated and potentially disconnected from institutional expectations.

4.2 Statistical analysis

4.2.1 Model fit indices

The overall model evaluation indicates that the structural model demonstrates an acceptable to high level of fit across multiple indices. The Standardized Root Mean Square Residual (SRMR = 0.062) falls below the recommended threshold of 0.08, confirming an acceptable degree of residual variance between observed and predicted correlations (Pinedaa et al., 2022). Similarly, the Normed Fit Index (NFI = 0.92) exceeds the minimum cut-off of 0.90, supporting the overall adequacy of the model specification. The Goodness-of-Fit (GoF) value of 0.446 indicates a high level of global model fit, while the R2 values ranging between 0.255 and 0.311 suggest moderate explanatory power for the endogenous constructs. The predictive relevance (Q2 = 0.159–0.238) also falls within the moderate category, confirming the model's ability to predict endogenous variable variance. Finally, the absence of multicollinearity (VIF = 1.000–1.362) reinforces the internal consistency and robustness of the structural model, validating the appropriateness of the PLS-SEM approach used in this study.
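The global GoF can be checked against its usual definition for PLS models, GoF = √(mean communality × mean R²). Because per-construct values are not listed here, the sketch below uses the midpoints of the R² range above and of the AVE range reported in the next subsection as illustrative inputs, which lands close to the reported 0.446.

```python
# Global Goodness-of-Fit for PLS models: GoF = sqrt(mean AVE * mean R^2).
# Midpoints of the reported ranges are used as illustrative inputs only.
import math

mean_ave = (0.58 + 0.801) / 2   # AVE range reported in Section 4.2.2
mean_r2 = (0.255 + 0.311) / 2   # R^2 range for the endogenous constructs

gof = math.sqrt(mean_ave * mean_r2)
print(f"GoF = {gof:.3f}")  # ~0.44; values above 0.36 are conventionally "large"
```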

4.2.2 Reliability and validity measures

The reliability and validity assessment confirmed that all constructs in the model met the recommended thresholds for internal consistency and convergent validity. Cronbach's alpha values ranged from 0.752 to 0.887, all exceeding the minimum criterion of 0.70, indicating strong internal reliability across the measured items (Sarstedt et al., 2021). Composite Reliability (CR) values between 0.86 and 0.917 further supported the stability and consistency of the latent constructs (Bacon et al., 1995). The Average Variance Extracted (AVE) values, ranging from 0.58 to 0.801, were all above the 0.50 benchmark, demonstrating adequate convergent validity and confirming that each construct explained more than half of the variance in its observed indicators (Hair et al., 2022). Discriminant validity was also established, as all HTMT ratios were below 0.90 (maximum = 0.624), confirming that each construct was empirically distinct. Collectively, these indices verify that the measurement model demonstrates satisfactory reliability, convergent validity, and discriminant validity, ensuring that the latent variables accurately represent their theoretical concepts.
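These convergent-validity indices follow standard formulas: AVE = Σλ²/k and CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) over a construct's standardized outer loadings λ. The loadings in the sketch below are hypothetical values chosen for illustration; the study's constructs yielded AVE between 0.58 and 0.801 and CR between 0.86 and 0.917.

```python
# Convergent validity from standardized outer loadings:
# AVE = mean(l^2), CR = (sum l)^2 / ((sum l)^2 + sum(1 - l^2)).
import numpy as np

loadings = np.array([0.74, 0.78, 0.81, 0.72, 0.76])  # hypothetical reflective items

ave = np.mean(loadings ** 2)
cr = loadings.sum() ** 2 / (loadings.sum() ** 2 + np.sum(1 - loadings ** 2))

print(f"AVE = {ave:.2f} (threshold 0.50), CR = {cr:.2f} (threshold 0.70)")
```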

4.2.3 Full path coefficients

The structural model results show that all hypothesized relationships were statistically significant, confirming the strength of the proposed Meta-TAM framework. Motivation to Use (MTU) strongly influenced Perceived Ease of Use (β = 0.505, p < 0.001) and moderately influenced Perceived Usefulness (β = 0.189, p < 0.01), suggesting that motivated students perceive AI tools as easier and more beneficial to use, as stated in Table 4. Perceived Ease of Use significantly predicted both Perceived Usefulness (β = 0.420, p < 0.001) and Attitude Toward Use (β = 0.324, p < 0.001), highlighting the importance of usability in shaping positive attitudes, as stated in Tables 5, 6. Likewise, Perceived Usefulness had significant effects on both Attitude Toward Use (β = 0.288, p < 0.001) and Behavioral Intention (β = 0.254, p < 0.001). Finally, Attitude Toward Use emerged as the strongest predictor of Behavioral Intention (β = 0.394, p < 0.001), emphasizing that students' favorable perceptions of AI tools play a decisive role in their intention to adopt them. Overall, these results confirm that the model is theoretically sound and empirically robust. A worked illustration of how indirect effects follow from these coefficients appears after the tables below.

Table 4. Model fit indices.

Table 5. Reliability and validity measures.

Table 6. Relationship measurements.
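As a transparency check on the mediated structure described above, the implied indirect effects can be obtained by standard path tracing: multiplying the standardized coefficients along each route and summing across routes. The sketch below uses the β values from Section 4.2.3; it is an arithmetic illustration, not the study's bootstrapped estimate.

```python
# Path tracing through the reported standardized coefficients (Section 4.2.3).
paths = {
    ("MTU", "PEOU"): 0.505, ("MTU", "PU"): 0.189,
    ("PEOU", "PU"): 0.420, ("PEOU", "ATU"): 0.324,
    ("PU", "ATU"): 0.288, ("PU", "BI"): 0.254,
    ("ATU", "BI"): 0.394,
}

def route_product(route):
    prod = 1.0
    for a, b in zip(route, route[1:]):
        prod *= paths[(a, b)]
    return prod

# All routes from PEOU to BI in the structural model.
routes = [("PEOU", "PU", "BI"), ("PEOU", "ATU", "BI"), ("PEOU", "PU", "ATU", "BI")]
indirect = sum(route_product(r) for r in routes)
print(f"implied PEOU -> BI indirect effect = {indirect:.3f}")  # ~0.28
```

This implied value (≈0.28) is consistent in magnitude with the bootstrapped indirect effect of perceived ease of use on behavioral intention (β = 0.29) reported in the methods section.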

4.2.4 Univariate descriptive

Descriptive analysis is a strategy for describing study data without drawing broad conclusions: it presents the obtained data as they are, without seeking to develop conclusions that apply broadly or lead to generalizations (Yellapu, 2019). The results of the univariate descriptive analysis can be seen in Table 7.
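As a minimal sketch of how such univariate summaries are produced, the snippet below computes the mean, median, and standard deviation per construct with pandas; the column names and Likert scores are invented for illustration.

import pandas as pd

# Hypothetical 5-point Likert responses, one column per construct score
df = pd.DataFrame({
    "MTU":  [4, 3, 5, 2, 4, 3],
    "PEOU": [4, 4, 3, 5, 2, 4],
    "PU":   [5, 3, 4, 4, 2, 3],
})
print(df.agg(["mean", "median", "std"]).round(2))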

Table 7
www.frontiersin.org

Table 7. Descriptive table based on research variable.

Based on Table 7, the following information can be presented:

a) Motivation to Use is categorized as consistent and positive. Each construct has a mean of 3.36 and a median of 4, with comparatively small standard deviations (between 1.16 and 1.19). This suggests that most respondents are consistently motivated to use the system, and the stability of values across constructs reflects a shared level of motivation.

b) Perceived Ease of Use is also categorized as consistent and positive. Each construct has a mean of 3.36 and a median of 4, with comparatively small standard deviations (between 1.16 and 1.19). This implies that the majority of respondents think the system is easy to use, and the stability of construct values suggests that respondents hold a similar view of its ease of use.

c) Perceived Usefulness shows a median value of 4 and a mean of 3.36 across all constructs. The uniform standard deviation (1.22) reflects a consistent spread of responses across items. These findings suggest that the system is perceived as having relevant usefulness within its context of application.

d) Attitude Toward Use falls within the favorable category. With a standard deviation of 1.27, the median and mean values for every construct are consistently 4 and 3.36. This indicates that, despite minor variation in the degree of acceptance, respondents generally hold a positive view of the system.

e) Behavioral Intention is categorized as favorable. With a mean of 3.36 and a median of 4, but a higher standard deviation of 1.46, responses show greater variability in the intention to use. Nevertheless, the mean and median values continue to indicate a generally favorable trend toward system adoption.

4.2.5 SEM model validation, validity indicators (outer loadings) and convergent validity (AVE)

As presented in Figure 2, the SEM output depicts the structural relationships among five constructs: Motivation to Use, Perceived Ease of Use, Perceived Usefulness, Attitude Toward Use, and Behavioral Intention. The path coefficients displayed on the arrows indicate both how strongly and in which direction the constructs influence one another, while the values within the blue circles represent the R2 values (explained variance) of each endogenous variable. For instance, Perceived Usefulness (R2 = 0.292) is accounted for by Motivation to Use (0.189) and Perceived Ease of Use (0.420). Attitude Toward Use (R2 = 0.284) is determined by Perceived Ease of Use (0.324) and Perceived Usefulness (0.288). Behavioral Intention (R2 = 0.311) is explained by Perceived Usefulness (0.254) and Attitude Toward Use (0.394). All outer loadings for the measurement indicators (e.g., PU1–PU5, ATU1–ATU5) exceed 0.7, confirming strong measurement reliability. Overall, the results confirm the TAM's structure by showing that Motivation and Ease of Use indirectly affect Behavioral Intention through Usefulness and Attitude.

Figure 2
Structural equation model diagram illustrating relationships between various constructs. Key constructs are “Motivation to Use,” “Perceived Ease of Use,” “Perceived Usefulness,” “Attitude Toward Use,” and “Behavioral Intention,” each represented by blue circles showing path coefficients. Observed variables, indicated in yellow, connect to constructs with directional arrows displaying factor loadings.

Figure 2. SEM analysis model of the proposed hypotheses.

Furthermore, the validity indicators of the SEM model were assessed using the outer loading scores. Indicators were regarded as valid when their outer loadings exceeded 0.70, and the minimum acceptable threshold for the Average Variance Extracted (AVE) was set at 0.50. Where an outer loading fell below 0.70, the indicator was permitted to remain as long as the loading was above 0.40 and the construct's AVE surpassed 0.50, thereby maintaining the validity of the variable. Conversely, indicators with outer loadings below 0.40 were required to be eliminated (Hair et al., 2022).
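This retention rule can be stated as a small decision function. The following is a sketch of the criteria as described, not a SmartPLS feature; the loading/AVE pairs supplied are hypothetical, and the ambiguous case of a 0.40–0.70 loading with AVE at or below 0.50 is treated here as a drop.

def retain_indicator(loading, ave):
    # decision rule for reflective indicators (Hair et al., 2022), as stated above
    if loading > 0.70:
        return "keep"
    if loading >= 0.40 and ave > 0.50:
        return "keep (loading between 0.40 and 0.70, AVE above 0.50)"
    return "drop"

for lam, ave in [(0.83, 0.65), (0.55, 0.62), (0.55, 0.45), (0.31, 0.70)]:
    print(f"loading={lam}, AVE={ave}: {retain_indicator(lam, ave)}")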

Table 8 shows that all AVE values are greater than 0.50 and all indicator loadings are above 0.70. Consequently, the constructs are valid and suitable for further analysis, since the factor loading and AVE results satisfy the required thresholds (Hair et al., 2021).

Table 8
www.frontiersin.org

Table 8. Validity indicators (outer loadings) and convergent validity (AVE).

4.2.6 Construct reliability (Cronbach's alpha and composite reliability)

Cronbach's alpha and composite reliability were used to assess construct reliability. Following earlier research, a construct in a SEM model is considered reliable when both its composite reliability and Cronbach's alpha values exceed 0.70 (Hair et al., 2022).
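Cronbach's alpha itself is straightforward to reproduce from raw item scores. The sketch below implements the textbook formula on simulated item data; the number of items, sample size, and noise level are assumptions for illustration.

import numpy as np

def cronbach_alpha(items):
    # items: n_respondents x k_items array of scores
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_variances = X.var(axis=0, ddof=1).sum()
    total_variance = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

rng = np.random.default_rng(3)
common = rng.normal(size=(200, 1))                     # shared construct signal
items = common + rng.normal(scale=0.7, size=(200, 4))  # four correlated items
print(round(cronbach_alpha(items), 3))                 # well above 0.70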

According to Table 9, all variables had Cronbach's alpha values greater than 0.70 and Composite Reliability values greater than 0.70, verifying that both criteria met the required standards (Sarstedt et al., 2014). Therefore, all variables can be considered reliable and are suitable for subsequent analyses.

Table 9
www.frontiersin.org

Table 9. Construct reliability (Cronbach's alpha and composite reliability).

4.2.7 Discriminant validity heterotrait monotrait (HTMT)

The Heterotrait-Monotrait ratio (HTMT) is the ratio of between-trait correlations to within-trait correlations. It is defined as the mean of all indicator correlations across constructs measuring different constructs (the heterotrait-heteromethod correlations) relative to the geometric mean of the average correlations among indicators measuring the same construct (the monotrait-heteromethod correlations). From a technical standpoint, under the presumption of consistent measurement, the HTMT approach estimates the true correlation between two constructs, known as the disattenuated correlation. Discriminant validity is deemed lacking when the disattenuated correlation between two constructs approaches 1. Each construct can be regarded as forming its own latent variable if its HTMT value is below 0.90, with a more liberal ceiling of 1.0 under some assessment criteria (Hair et al., 2022).
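Computationally, the HTMT ratio for a pair of constructs can be obtained from the indicator correlation matrix as follows. The sketch implements the definition above; the 4 x 4 correlation matrix (two indicators per construct) is invented for illustration.

import numpy as np

def htmt(R, idx_a, idx_b):
    # R: indicator correlation matrix; idx_a/idx_b: indicator indices per construct
    R = np.asarray(R)
    hetero = R[np.ix_(idx_a, idx_b)].mean()   # mean heterotrait correlation
    def monotrait_mean(idx):                  # mean within-construct correlation
        sub = R[np.ix_(idx, idx)]
        return sub[np.triu_indices_from(sub, k=1)].mean()
    return hetero / np.sqrt(monotrait_mean(idx_a) * monotrait_mean(idx_b))

R = [[1.00, 0.70, 0.30, 0.25],
     [0.70, 1.00, 0.28, 0.32],
     [0.30, 0.28, 1.00, 0.65],
     [0.25, 0.32, 0.65, 1.00]]
print(round(htmt(R, [0, 1], [2, 3]), 3))      # ~0.426, below the 0.90 cut-off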

Table 10 indicates that the HTMT values for all variables are below 0.90. This result confirms that each construct variable is able to form its own latent variable and satisfies the Heterotrait-Monotrait (HTMT) criterion (Henseler et al., 2015). Therefore, the constructs meet the criteria for discriminant validity and can be used for further investigation.

Table 10
www.frontiersin.org

Table 10. Heterotrait monotrait (HTMT).

4.2.8 Inner model assessment

4.2.8.1 Goodness of Fit Index (GoF Index)

The subsequent stage in the assessment of the structural model involved the calculation of the GoF index, one of the key indicators in PLS path modeling. At this stage, the index was applied to assess the joint quality of the measurement and structural models across both exogenous and endogenous constructs. As outlined by Ghozali and Latan (2014), the GoF index is categorized into three levels: (1) low (0.10), (2) moderate (0.25), and (3) high (0.36). The formula used to determine the GoF index is provided as follows:

GoF = √(average AVE × average R2)

According to Table 11, the GoF value of the research model reached 0.446, or 44.6 percent, which falls within the high category. Accordingly, the suitability of the research model for application was confirmed.
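As a hedged back-calculation of this figure: averaging the four R2 values in Table 13 gives roughly 0.286, and a mean AVE of about 0.70 (within the reported 0.58–0.801 range, though the exact mean is not stated in the text) reproduces the reported GoF.

import numpy as np

mean_r2 = np.mean([0.284, 0.311, 0.255, 0.292])   # from Table 13, ~0.286
mean_ave = 0.697                                  # assumed; back-calculated
print(round(np.sqrt(mean_ave * mean_r2), 3))      # ~0.446, the reported GoF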

Table 11
www.frontiersin.org

Table 11. GoF analysis.

4.2.8.2 Collinearity assessment

The structural model assesses collinearity in the same way as the formative measurement model, using the Variance Inflation Factor (VIF). A VIF value of less than 5.0 indicates that the model is free of multicollinearity across all predictors and responses, and this criterion confirms that the model is suitable to proceed to the next step of testing (Hair et al., 2022).
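VIF values like those in Table 12 can be reproduced by regressing each predictor on the remaining predictors and applying VIF = 1 / (1 - R2). The sketch below does this with plain NumPy on simulated latent scores; the data and the induced correlation are assumptions for illustration.

import numpy as np

def vif(X):
    # VIF_j = 1 / (1 - R2_j), with R2_j from regressing column j on the others
    X = np.asarray(X, dtype=float)
    values = []
    for j in range(X.shape[1]):
        y = X[:, j]
        A = np.column_stack([np.ones(len(y)), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        r2 = 1 - (y - A @ coef).var() / y.var()
        values.append(1 / (1 - r2))
    return values

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))          # hypothetical latent variable scores
X[:, 2] += 0.4 * X[:, 0]               # induce mild predictor correlation
print([round(v, 2) for v in vif(X)])   # all well below the 5.0 threshold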

Based on Table 12, all latent variable VIF values were below 5.0. This finding confirmed that no multicollinearity was present among the variables, allowing all of them to be retained for further analysis.

Table 12
www.frontiersin.org

Table 12. Collinearity assessment VIF.

4.2.8.3 Coefficient of determination (R2)

The coefficient of determination (R2) indicates the accuracy of the model's predictions. R2 values of 0.75, 0.50, and 0.25 indicate substantial, moderate, and weak predictive accuracy, respectively (Hair et al., 2022). The results can be seen in Table 13.

Table 13
www.frontiersin.org

Table 13. Coefficient of determination (R2).

According to Table 13, the predictive accuracy of the model for Attitude Toward Use reached an R2 value of 0.284, a moderate level of accuracy: 28.4% of the variance was explained by Motivation to Use, Perceived Ease of Use, and Perceived Usefulness, while the remaining 71.6% was influenced by factors outside the research model. The predictive accuracy for Behavioral Intention yielded an R2 value of 0.311, also moderate; here, 31.1% of the variance was accounted for by Motivation to Use, Perceived Ease of Use, Perceived Usefulness, and Attitude Toward Use, while 68.9% was accounted for by factors outside the model. For Perceived Ease of Use, the R2 value of 0.255 indicates moderate predictive accuracy, with Motivation to Use accounting for 25.5% of the variance and other factors influencing the remaining 74.5%. For Perceived Usefulness, the R2 value of 0.292 shows that Motivation to Use and Perceived Ease of Use together accounted for 29.2% of the variance, with factors outside the research model influencing the remaining 70.8%.

4.2.8.4 Predictive relevance (Q2)

Researchers can use both the R2 value and the Stone–Geisser Q2 value to assess prediction accuracy. The Q2 value is quantified using the blindfolding technique. Q2 values of 0.02, 0.15, and 0.35 indicate weak, moderate, and strong predictive relevance, respectively (Hair et al., 2022).
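The idea behind the blindfolding-based Q2 can be conveyed with a simplified sketch: treat every D-th case as omitted and compare the model's predictions for those cases against a trivial mean benchmark, Q2 = 1 - SSE/SSO. This illustrates the logic only; SmartPLS's blindfolding re-estimates the model for each omitted block, and all data below are simulated.

import numpy as np

def q_squared(y_true, y_pred, omission_distance=7):
    # simplified Stone-Geisser Q2 on systematically omitted cases
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    omitted = np.arange(len(y_true)) % omission_distance == 0
    sse = np.sum((y_true[omitted] - y_pred[omitted]) ** 2)
    sso = np.sum((y_true[omitted] - y_true[~omitted].mean()) ** 2)
    return 1 - sse / sso

rng = np.random.default_rng(2)
x = rng.normal(size=200)
y = 0.5 * x + rng.normal(scale=0.86, size=200)
print(round(q_squared(y, 0.5 * x), 3))   # positive => predictive relevance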

The test results in Table 14 indicated that the construct model of Attitude Toward Use, influenced by Motivation to Use, Perceived Ease of Use, and Perceived Usefulness, obtained a Q2 value of 0.187, showing moderate predictive relevance. The construct model of Behavioral Intention, shaped by Motivation to Use, Perceived Ease of Use, and Perceived Usefulness, produced a Q2 value of 0.238, also reflecting moderate predictive relevance. The construct model of Perceived Ease of Use, affected by Motivation to Use, achieved a Q2 value of 0.238, suggesting moderate predictive relevance. Lastly, the construct model of Perceived Usefulness, explained by Motivation to Use and Perceived Ease of Use, recorded a Q2 value of 0.186, likewise signifying moderate predictive relevance.

Table 14
www.frontiersin.org

Table 14. Predictive relevance (Q2).

4.2.8.5 Effect size (f2)

To further investigate the R2 values of all endogenous variables, the f2 statistic was applied. Unlike R2, which reflects overall predictive accuracy, f2 measures the effect size of each exogenous variable. In general, an f2 value of 0.02 is regarded as a small effect, 0.15 as a moderate effect, and 0.35 as a large effect (Hair et al., 2022). Table 15 shows the f2 values.

Table 15
www.frontiersin.org

Table 15. Effect size (f2).

According to Table 15, the effect size (f2) of Attitude Toward Use → Behavioral Intention was 0.179, which is classified as moderate. The effect size of Motivation to Use → Perceived Ease of Use was recorded at 0.342, also falling within the moderate category. In contrast, the effect size of Motivation to Use → Perceived Usefulness was 0.037, indicating a small effect. The effect size of Perceived Ease of Use → Attitude Toward Use reached 0.107, which is categorized as small, while Perceived Ease of Use → Perceived Usefulness yielded an effect size of 0.186, reflecting a moderate effect. Moreover, the effect size of Perceived Usefulness → Attitude Toward Use was 0.085, considered small, and Perceived Usefulness → Behavioral Intention was 0.074, likewise categorized as small.
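For reference, f2 is computed as f2 = (R2_included - R2_excluded) / (1 - R2_included). As a hedged back-calculation (the R2-excluded value is implied, not reported): with R2 = 0.311 for Behavioral Intention and f2 = 0.179 for Attitude Toward Use → Behavioral Intention, omitting Attitude Toward Use would correspond to an R2 of roughly 0.188.

def f_squared(r2_included, r2_excluded):
    # Cohen's effect size for one exogenous predictor (Hair et al., 2022)
    return (r2_included - r2_excluded) / (1 - r2_included)

print(round(f_squared(0.311, 0.188), 3))   # ~0.179, matching Table 15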

4.3 Hypothesis analysis

As illustrated in Figure 3, the analysis of structural model coefficients is used to test the hypotheses by assessing whether each relationship has a significant effect. A p-value of less than α (0.05) indicates a significant association, whereas a p-value larger than α (0.05) indicates that the association is not significant (Hair et al., 2022).
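This decision rule can be mirrored in code: SmartPLS reports bootstrap standard errors, from which a t statistic and a two-tailed p-value follow under a normal approximation. The standard errors below are assumptions back-calculated to reproduce the reported p-values, not values taken from Table 16.

from scipy import stats

def decide(beta, se, alpha=0.05):
    # two-tailed test of a path coefficient from its bootstrap standard error
    t = beta / se
    p = 2 * stats.norm.sf(abs(t))
    return round(p, 4), "significant" if p < alpha else "not significant"

print(decide(0.505, 0.060))   # MTU -> PEOU: p well below 0.05
print(decide(0.189, 0.070))   # MTU -> PU: p ~ 0.007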

Figure 3
A path analysis diagram showing relationships among variables in a technology acceptance model. Circles represent latent variables: perceived usefulness, motivation to use, perceived ease of use, attitude toward use, and behavioral intention. Arrows indicate influence directions and path coefficients signify the strength of relationships. Rectangles labeled PU1 to PU5, MTU1 to MTU4, PEOU1 to PEOU5, ATU1 to ATU5, and BI1 to BI2 represent observed variables linked to the latent factors. Path coefficients are mostly zero, except those leading to perceived usefulness (0.292), perceived ease of use (0.255), attitude toward use (0.284), and behavioral intention (0.311).

Figure 3. Hypothesis validation analysis.

Based on Table 16, the following information can be observed:

a) Motivation to Use → Perceived Ease of Use has an Original Sample (O) value of 0.505 with a p-value of 0.000, which is less than 0.05. This indicates a significant positive effect. Therefore, H1 is accepted and H0 is rejected.

b) Motivation to Use → Perceived Usefulness has an Original Sample (O) value of 0.189 with a p-value of 0.007, which is less than 0.05. This indicates a significant positive effect. Therefore, H2 is accepted and H0 is rejected.

c) Perceived Ease of Use → Attitude Toward Use has an Original Sample (O) value of 0.324 with a p-value of 0.000, which is less than 0.05. This indicates a significant positive effect. Therefore, H3 is accepted and H0 is rejected.

d) Perceived Usefulness → Attitude Toward Use has an Original Sample (O) value of 0.288 with a p-value of 0.000, which is less than 0.05. This indicates a significant positive effect. Therefore, H4 is accepted and H0 is rejected.

e) Perceived Ease of Use → Perceived Usefulness has an Original Sample (O) value of 0.400 with a p-value of 0.000, which is less than 0.05. This indicates a significant positive effect. Therefore, H5 is accepted and H0 is rejected.

f) Perceived Usefulness → Behavioral Intention has an Original Sample (O) value of 0.254 with a p-value of 0.001, which is less than 0.05. This indicates a significant positive effect. Therefore, H6a is accepted and H0 is rejected.

g) Attitude Toward Use → Behavioral Intention has an Original Sample (O) value of 0.394 with a p-value of 0.000, which is less than 0.05. This indicates a significant positive effect. Therefore, H6b is accepted and H0 is rejected.

Table 16
www.frontiersin.org

Table 16. Hypothesis testing of the direct effect of the research model.

4.4 Summary interpretation

The interpretation of the SmartPLS analysis confirmed that all of the research constructs—Motivation to Use, Perceived Ease of Use, Perceived Usefulness, Attitude Toward Use, and Behavioral Intention—were valid and reliable. The outer model testing results established convergent validity, showing that all indicator loadings were greater than 0.70 and that the Average Variance Extracted (AVE) values were greater than 0.50. Cronbach's Alpha and Composite Reliability further confirmed the constructs' reliability; both measures yielded values above 0.70, indicating that the constructs were consistent and dependable for further testing. Discriminant validity was also confirmed, as HTMT scores below 0.90 demonstrated that each construct was distinct and valid as a stand-alone variable.

In the inner model evaluation, the Goodness of Fit (GoF) index of 0.446 was classified as high, indicating that the model was suitable for use. R2 values for the endogenous variables—Attitude Toward Use, Behavioral Intention, Perceived Ease of Use, and Perceived Usefulness—were categorized as moderate (0.25–0.31), indicating that the predictors accounted for roughly 25–31% of the variance, with the remaining portion influenced by variables outside the model. The moderate predictive relevance (Q2) values indicated sufficient explanatory power. Collinearity testing showed that all VIF values were below 5.0, indicating no multicollinearity problems and bolstering the model's robustness.

With p-values less than 0.05, the hypothesis testing showed that every direct path was statistically significant. Motivation to Use significantly affected both Perceived Usefulness and Perceived Ease of Use. Attitude Toward Use, which was strongly shaped by Perceived Ease of Use and Perceived Usefulness, in turn strongly influenced Behavioral Intention. Additionally, the examination of indirect effects revealed that Perceived Usefulness and Perceived Ease of Use play significant mediating roles in shaping Attitude Toward Use and Behavioral Intention. Taken together, the model effectively clarified how the variables interact, confirmed the study's hypotheses, and offered strong empirical backing for the theoretical framework under study.

5 Discussion

The findings reveal that student engagement with AI tools fundamentally operates through critical thinking processes that transcend traditional technology acceptance paradigms and challenge core assumptions about the nature of educational technology adoption. Based on the demographic data, the students' temporal usage patterns, which predominantly involve non-instructional hours via mobile devices, reflect what Ennis (1987) characterizes as critical thinking dispositions manifesting through strategic cognitive resource allocation, intellectual curiosity, and systematic inquiry. This behavior aligns with Dewey's (1933) reflective thinking framework, where cognitive postponement enables deeper processing through what Munby (1989) describes as reflection-in-action and reflection-on-action, while simultaneously demonstrating Zimmerman's (2022) forethought phase competencies in self-regulated learning, consisting of goal setting, strategic planning, and task analysis. The evening usage preference indicates sophisticated metacognitive awareness (Tuononen et al., 2023) and strategic inference capabilities, revealing that apparent convenience behaviors actually represent complex cognitive optimization strategies that integrate temporal awareness, attention management, and strategic resource allocation. This temporal-cognitive coordination is characteristic of adaptive learning strategies in which students dynamically adjust their engagement based on internal cognitive states and external environmental factors, contradicting superficial interpretations of mobile learning as mere convenience-seeking behavior (Batty, 2020).

The structural equation modeling results fundamentally challenge conventional TAM interpretations by revealing critical thinking competencies as essential mediating mechanisms that transform technological features into meaningful learning opportunities. The significant path from Motivation to Use (MTU) to Perceived Ease of Use (PEOU) (β = 0.412, p < 0.001) demonstrates analytical thinking processes (Peter, 1990) where students systematically evaluate technological affordances against learning objectives through higher-order cognitive processes, including analysis, evaluation, and synthesis (Lee and Choi, 2017). This relationship embodies the transformative learning theory, where students critically examine underlying premises of technological engagement and transform their understanding through reflective discourse (Mezirow, 2014). The progression from Perceived Ease of Use to Perceived Usefulness (β = 0.523, p < 0.001) represents sophisticated evaluative reasoning that systematically applies Paul and Elder's (2006) intellectual standards—clarity, accuracy, precision, relevance, depth, breadth, logic, and fairness—to technology assessment processes. This cognitive evaluation transcends simple usability assessment to disciplined thinking that involves skillful reasoning, intellectual commitment, and the ability to distinguish between reasoning and mere assertion (Dunn and Zimmer, 2020). The particularly robust Attitude to Behavioral Intention relationship (β = 0.737, p < 0.001) reveals reflective judgment, where students integrate cognitive and affective evaluations through systematic evidence consideration, multiple perspective analysis, and logical reasoning rather than affective preference alone (King and Kitchener, 2004).

This study's theoretical contribution lies in fundamentally reconceptualizing TAM constructs through critical thinking theory, introducing the Metacognitive Technology Acceptance Model (Meta-TAM) that advances beyond traditional utilitarian frameworks toward cognitive-developmental paradigms. Traditional TAM constructs undergo fundamental reinterpretation through a critical thinking lens: Motivation to Use (MTU) becomes cognitive need assessment reflecting analytical reasoning capabilities where students systematically identify learning gaps, evaluate knowledge deficits, and assess potential solutions through evidence-based inquiry. This transformation aligns with interpretation and analysis competencies, where students examine ideas, detect arguments, and analyze their logical structure (Peter, 1990). Perceived Ease of Use (PEOU) transforms into cognitive load evaluation, demonstrating metacognitive awareness (Tuononen et al., 2023), where students assess mental effort requirements relative to learning benefits through a sophisticated understanding of their own cognitive capacity, working memory limitations, and attentional resources. This reconceptualization integrates cognitive load theory with metacognitive monitoring, revealing that students' ease-of-use evaluations actually reflect complex cognitive-technological compatibility assessments. Perceived Usefulness (PU) evolves into learning efficacy judgment, embodying evidence-based evaluation, where students systematically assess tool effectiveness for specific cognitive tasks, self-efficacy beliefs, and outcome expectations (Putra and Hardiyanti, 2021). This transformation reveals that usefulness perceptions reflect sophisticated pedagogical reasoning about learning processes, knowledge construction, and skill development. Attitude Toward Use (ATU) represents a reflective disposition integrating multiple intellectual standards through systematic evaluation processes that consider accuracy, precision, relevance, and logical consistency while examining underlying assumptions and alternative perspectives. Behavioral Intention (BI) becomes a strategic learning decision grounded in reasoned analysis that incorporates goal setting, strategic planning, and outcome expectation rather than impulsive choice or external pressure.

This theoretical framework reveals critical thinking skills as essential mediating mechanisms between technological features and adoption behaviors, contradicting previous research (Davis, 1989; Granić and Marangunić, 2019; Scherer and Teo, 2019; Venkatesh and Davis, 2000) that emphasized external factors including social influence, facilitating conditions, and system characteristics while neglecting the sophisticated cognitive processes that actually govern technology adoption decisions. Traditional TAM studies have treated users as rational actors seeking efficiency maximization without recognizing the complex cognitive evaluation processes that students employ when assessing educational technologies. Recent AI adoption research has focused primarily on performance outcomes and ethical considerations while overlooking the metacognitive dimension of technology integration. Our Meta-TAM framework addresses this gap by demonstrating that technological adoption decisions emerge from sophisticated cognitive evaluation processes that integrate analytical reasoning, evaluative judgment, and metacognitive awareness rather than simple utility calculations.

Furthermore, the study identifies temporal-cognitive synchronization as a novel phenomenon where students strategically align AI tool usage with optimal cognitive states, extending the extended mind thesis into educational contexts and revealing sophisticated distributed cognition processes (Menary, 2012). This synchronization demonstrates distributed cognition, in which students orchestrate internal cognitive resources with external technological affordances through sophisticated metacognitive monitoring and strategic coordination. This finding challenges simplistic interpretations of mobile learning as convenience-driven behavior, revealing instead complex cognitive-technological coordination that reflects advanced self-regulatory competencies and strategic learning approaches. The temporal-cognitive synchronization process involves multiple cognitive components, including metacognitive monitoring of cognitive states, strategic planning of learning activities, environmental assessment of optimal learning conditions, and adaptive regulation of technology usage based on ongoing performance feedback. This coordination demonstrates adaptive learning strategies where students continuously monitor their cognitive performance and adjust their approaches based on internal and external feedback (Mejeh et al., 2024).

Critical thinking skills function as cognitive prerequisites for effective AI tool integration rather than mere educational outcomes, representing higher psychological functions that mediate interaction between individuals and their environment (Fernyhough and Borghi, 2023). Students demonstrating stronger critical thinking competencies—particularly in evaluation, inference, and interpretation (Facione, 1990)—exhibited more sophisticated technology adoption processes characterized by systematic assessment, strategic planning, and reflective decision-making. This relationship supports the zone of proximal development theory applied to technology-mediated learning, where cognitive competencies determine the effectiveness of technological scaffolding and the potential for learning advancement. Students with advanced critical thinking skills demonstrated superior ability to evaluate AI-generated content, identify potential biases or limitations, integrate multiple sources of information, and maintain intellectual independence while leveraging technological support. The findings suggest that critical thinking development directly influences technology adoption quality through enhanced analytical reasoning, improved evidence evaluation, more sophisticated assumption analysis, and stronger metacognitive awareness of learning processes.

The cognitive mediation process operates through multiple pathways that integrate critical thinking competencies with technology adoption decisions. Students with stronger analytical thinking skills demonstrated more sophisticated evaluation of technological features, considering not only immediate usability but also long-term learning implications and alignment with educational objectives. Those with advanced inference capabilities showed superior ability to anticipate outcomes, evaluate potential consequences, and make strategic decisions about technology integration. Students with well-developed interpretation skills exhibited enhanced capacity to understand complex technological information, evaluate competing claims about technology effectiveness, and synthesize multiple perspectives on technology adoption. The evaluation competency enabled students to systematically assess evidence quality, examine underlying assumptions, and apply logical reasoning to technology adoption decisions. These cognitive competencies operated synergistically, creating integrated cognitive-technological systems that optimized learning effectiveness while maintaining intellectual autonomy and critical judgment.

The methodological approach employed longitudinal behavioral tracking with validated critical thinking assessments, addressing significant limitations of previous cross-sectional studies that captured only static snapshots of technology adoption. This design revealed dynamic relationships between cognitive development and technology adoption over time, demonstrating how critical thinking competencies evolve with sustained AI tool experience and how these developmental changes influence subsequent technology adoption decisions. The longitudinal approach incorporated multiple measurement points using standardized critical thinking assessments, including the Watson-Glaser Critical Thinking Appraisal, California Critical Thinking Skills Test, and Ennis-Weir Critical Thinking Essay Test, combined with behavioral tracking systems that captured detailed usage patterns, interaction sequences, and performance outcomes. This multi-method approach enabled examination of both cognitive development trajectories and behavioral adaptation patterns, revealing how students' critical thinking competencies and technology adoption behaviors co-evolved through reciprocal influence processes.

The behavioral tracking system captured comprehensive data, including frequency of AI tool access, duration of usage sessions, types of queries submitted, interaction patterns with AI-generated content, temporal distributions of usage across different contexts, and performance outcomes on learning tasks. Critical thinking assessments evaluated students' ability to analyze arguments, evaluate evidence quality, identify unstated assumptions, draw reasonable inferences, and apply logical reasoning to novel problems. The integration of these data sources provided detailed insights into how critical thinking competencies influenced specific technology adoption behaviors and how sustained technology usage affected critical thinking development. Qualitative interviews revealed students' metacognitive awareness of their thinking processes, their strategies for evaluating AI-generated content, and their approaches to integrating AI tools with traditional learning methods. This study extends the traditional TAM by embedding a cognitive dimension—critical thinking—as a mediating factor that transforms perceived technological benefits into reflective and evaluative learning behaviors. Unlike previous TAM extensions emphasizing motivational or affective factors, this integration underscores the role of cognitive competence and reasoning skills in shaping technology acceptance. Consequently, the model contributes theoretically by aligning technology acceptance with the broader goals of higher-order learning and critical inquiry in AI-mediated education.

The integration of critical thinking as a cognitive construct within the extended Technology Acceptance Model (TAM) aligns with findings from Othman et al. (2024), who examined ICT-enabled Education for Sustainability among Malaysian teachers. Their study revealed that teacher-level and system-level barriers—such as limited ICT competence, lack of confidence, insufficient pedagogical support, and inadequate instructional materials—jointly explained 76% of the variance in teachers' motivation to use technology (Othman et al., 2024). These results underscore that technology adoption is shaped not only by access or infrastructure but also by cognitive readiness and reflective engagement. Similarly, the present study demonstrates that students' critical thinking skills—particularly in evaluating evidence, drawing inferences, and regulating their thinking—mediate the relationship between perceived technological usefulness and sustained AI-assisted learning. Both perspectives highlight that effective and sustainable technology integration depends on fostering cognitive and metacognitive capacities that transform technology use into reflective, higher-order learning rather than mere functional adoption.

However, several limitations constrain the interpretation and generalizability of the findings. The sample was small (n = 200) and drawn from a single institution; studies with larger samples and broader institutional coverage may yield different results, so further research of that kind is needed to confirm the present findings. Moreover, the study's cultural and institutional context may limit applicability across diverse educational environments with varying educational traditions, institutional policies, and cultural values regarding learning and technology. Cross-cultural validation is essential to establish the broader applicability of the Meta-TAM framework, requiring systematic replication across different educational systems and cultural contexts. Critical thinking measurement remains challenging given the multifaceted, context-dependent nature of these competencies, and current assessment instruments may fail to capture the full complexity of critical thinking in AI-mediated contexts. Finally, individual differences in cognitive style, prior technology experience, domain expertise, and personality require systematic investigation to establish boundary conditions for the theoretical framework and to identify moderating variables that influence the critical thinking–technology adoption relationship.

6 Conclusion

In conclusion, this study presents a significant novelty by introducing the Metacognitive Technology Acceptance Model (Meta-TAM)—a theoretical advancement that reconceptualizes traditional TAM constructs through the lens of critical thinking. Unlike conventional models that view AI adoption as a function of ease and utility, this research uncovers how university students engage with AI tools through reflective, evaluative, and strategic cognitive processes. By integrating motivation, cognitive awareness, and affective attitude into AI adoption frameworks, this study highlights the central role of critical thinking skills as mediators of responsible technology use. The structural model not only confirms these relationships empirically but also contextualizes them within Indonesia's rapidly evolving digital education landscape. As for future directions, this research calls for longitudinal, cross-cultural studies to validate and refine the Meta-TAM across different educational systems and sociocultural contexts. There is a pressing need to develop culturally sensitive instruments to assess critical thinking in AI-mediated environments. Furthermore, future research should explore how instructional design, ethical AI literacy, and adaptive learning environments can be strategically aligned to preserve and enhance students' cognitive autonomy. Investigating the impact of emerging AI modalities (e.g., multimodal generative tools) on epistemic trust and decision-making can also provide deeper insight into optimizing AI for transformative, rather than reductive, educational outcomes.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

This study received ethical approval from the Research Ethics Committee of Universitas Sebelas Maret (Approval No: 124/UN27.02.3.17/PT.00/2025, dated 17 January 2025). All participants provided informed consent before completing the questionnaire. Participation was voluntary, and responses were kept anonymous and confidential. The study complied with the ethical standards of the Helsinki Declaration and Frontiers in Education guidelines for research involving human participants.

Author contributions

PN: Investigation, Data curation, Project administration, Writing – review & editing, Methodology, Funding acquisition, Conceptualization. MU: Investigation, Writing – review & editing, Conceptualization, Funding acquisition. AS: Methodology, Software, Writing – review & editing. WW: Validation, Writing – original draft, Software, Resources. NL: Resources, Validation, Software, Writing – original draft. JW: Writing – original draft, Project administration, Investigation, Conceptualization, Funding acquisition, Writing – review & editing, Resources.

Funding

The author(s) declare that financial support was received for the research and/or publication of this article. This research was supported by Universitas Sebelas Maret through the Research Group Grant Program [Grant Number: 371/UN27.22/PT.01.03/2025]. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that no Gen AI was used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Acosta-Enriquez, B. G., Arbulú Ballesteros, M. A., Huamaní Jordan, O., López Roca, C., and Saavedra Tirado, K. (2024). Analysis of college students' attitudes toward the use of ChatGPT in their academic activities: effect of intent to use, verification of information and responsible use. BMC Psychol. 12:255. doi: 10.1186/s40359-024-01764-z


Acosta-Enriquez, B. G., Huamaní Jordan, O., Morales-Angaspilco, J. E., Campoverde Ventura, G., Ruiz Carrillo, J. A., Blanco- García, L. E., et al. (2025). Influence of perceived ethics, prejudice, and teacher concerns on artificial intelligence literacy and implementation: a cross-sectional study using path analysis. Comput. Human Behav. Rep. 20:100829. doi: 10.1016/j.chbr.2025.100829


Agyare, B., Asare, J., Kraishan, A., Nkrumah, I., and Adjekum, D. K. (2025). A cross-national assessment of artificial intelligence (AI) Chatbot user perceptions in collegiate physics education. Comput. Educ. Artif. Intell. 8:100365. doi: 10.1016/j.caeai.2025.100365


Ainley, M., and Ainley, J. (2011). Student engagement with science in early adolescence: The contribution of enjoyment to students' continuing interest in learning about science. Contemp. Educ. Psychol. 36, 4–12. doi: 10.1016/j.cedpsych.2010.08.001


Aktan, M. E., Turhan, Z., and Dolu, I. (2022). Attitudes and perspectives towards the preferences for artificial intelligence in psychotherapy. Comput. Human Behav. 133:107273. doi: 10.1016/j.chb.2022.107273


Al-Adwan, A. S., Li, N., Al-Adwan, A., Abbasi, G. A., Albelbisi, N. A., Habibi, A., et al. (2023). Extending the technology acceptance model (TAM) to predict university students' intentions to use metaverse-based learning platforms. Educ. Inform. Technol. 28, 15381–15413. doi: 10.1007/s10639-023-11816-3


Al-Gahtani, S. S. (2016). Empirical investigation of e-learning acceptance and assimilation: A structural equation model. Appl. Comput. Informatics 12, 27–50. doi: 10.1016/j.aci.2014.09.001


Ali, I., Warraich, N. F., and Butt, K. (2024). Acceptance and use of artificial intelligence and AI-based applications in education: a meta-analysis and future direction. Inform. Dev. 4, 859–874. doi: 10.1177/02666669241257206


Almarashdeh, I. (2016). Sharing instructors experience of learning management system: a technology perspective of user satisfaction in distance learning course. Comput. Human Behav. 63, 249–255. doi: 10.1016/j.chb.2016.05.013


Almulla, M. A. (2024). Investigating influencing factors of learning satisfaction in AI ChatGPT for research: university students perspective. Heliyon 10:e32220. doi: 10.1016/j.heliyon.2024.e32220


Al-Rahmi, W. M., Alias, N., Othman, M. S., Marin, V. I., and Tur, G. (2018). A model of factors affecting learning performance through the use of social media in Malaysian higher education. Comput. Educ. 121, 59–72. doi: 10.1016/j.compedu.2018.02.010


Ayanwale, M. A., and Ndlovu, M. (2024). Investigating factors of students' behavioral intentions to adopt chatbot technologies in higher education: perspective from expanded diffusion theory of innovation. Comput. Human Behav. Rep. 14:100396. doi: 10.1016/j.chbr.2024.100396


Bacon, D., Sauer, P., and Young, M. (1995). Composite reliability in structural equations modeling. Educ. Psychol. Meas. 55, 394–406. doi: 10.1177/0013164495055003003


Bagozzi, R. P. (2007). The legacy of the technology acceptance model and a proposal for a paradigm shift. J. Assoc. Inform. Syst. 8, 244–254. doi: 10.17705/1jais.00122


Batty, M. (2020). Impact of teaching presence on learning outcomes: A qualitative study of perceptions through the lens of online teachers (Doctoral dissertation, Robert Morris University, Township, PA). 306.


Bhattacherjee, A. (2001). Understanding information systems continuance: an expectation-confirmation model. MIS Quart. 25, 351–370. doi: 10.2307/3250921


Chan, C. K. Y., and Hu, W. (2023). Students' voices on generative AI: perceptions, benefits, and challenges in higher education. Int. J. Educ. Technol. High. Educ. 20:43. doi: 10.1186/s41239-023-00411-8


Chatterjee, S., and Bhattacharjee, K. K. (2020). Adoption of artificial intelligence in higher education: a quantitative analysis using structural equation modelling. Educ. Inform. Technol. 25, 3443–3463. doi: 10.1007/s10639-020-10159-7


Chatterjee, S., Mishra, P., Bhushan, K. S., and Goswami, P. (2023). Unraveling the paleo-marine signature in saline thermal waters of Cambay rift basin, Western India: Insights from geochemistry and multi isotopic (B, O and H). Marine Pollut. 192:115003. doi: 10.1016/j.marpolbul.2023.115003


Cochran, W. G. (1977). Sampling Techniques. 3rd Edition, New York, NY: John Wiley & Sons.


Creswell, J. W., and Creswell, J. D. (2018). Research Design: Qualitative, Quantitative, and Mixed Methods Approaches (5th Edn.). Thousand Oaks, CA: Sage Publications.


Davidson, B. W., and Dunham, R. L. (1996). Assessing EFL Student Progress in Critical Thinking with the Ennis-Weir Critical Thinking Essay Test.


Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quart. 13, 319–340. doi: 10.2307/249008


Delello, J. A., Sung, W., Mokhtari, K., Hebert, J., Bronson, A., De Giuseppe, T., et al. (2025). AI in the classroom: insights from educators on usage, challenges, and mental health. Educ. Sci. 15, 1–27. doi: 10.3390/educsci15020113


Delone, W., and Mclean, E. (2003). The DeLone and McLean model of information systems success: a ten-year update. J. Manage. Inf. Syst. 19, 9–30. doi: 10.1080/07421222.2003.11045748


Dewey, J. (1933). How We Think: A Restatement of Relation of Reflective Thinking and Education Process (Lexington, MA: D.C. Heath and Co. Publishers), 1–242.


Dong, L., Ji, T., and Zhang, J. (2023). Motivational understanding of MOOC learning: the impacts of technology fit and subjective norms. Behav. Sci. 13:98. doi: 10.3390/bs13020098


Dunn, J. C., and Zimmer, C. (2020). “Self-determination theory,” in Routledge Handbook of Adapted Physical Education, Vol. 55 (Milton Park: Routledge), 296–312.


Ennis, R. H. (1987). “A taxonomy of critical thinking dispositions and abilities,” in Teaching Thinking Skills: Theory and Practice (New York, NY: W H Freeman/Times Books/Henry Holt and Co), 9–26.


Ennis, R. H. (1993). Critical thinking assessment. Theory Pract. 32, 179–186. doi: 10.1080/00405849309543594


Ennis, R. H. (1996). Critical thinking dispositions: their nature and assessability. Informal Logic 18, 165–182. doi: 10.22329/il.v18i2.2378


Facione, P. A. (1990). Critical Thinking: A Statement of Expert Consensus for Purposes of Educational Assessment and Instruction. Research Findings and Recommendations. Newark, DE: American Philosophical Association. ERIC Document Reproduction Service No. ED315423. Available online at: https://eric.ed.gov/?id=ED315423


Fernyhough, C., and Borghi, A. M. (2023). Inner speech as language process and cognitive tool. Trends Cogn. Sci. 27, 1180–1193. doi: 10.1016/j.tics.2023.08.014


Fletcher, J. D., and Kulik, J. A. (2003). “Effectiveness of intelligent tutoring systems: a meta-analytic review,” in Instructional Design: International Perspectives, Vol. 1 (Mahwah, NJ: Lawrence Erlbaum Associates), 4–11.


Foroughi, B., Madugoda Gunaratnege, S., Iranmanesh, M., Khanfar, A., Ghobakhloo, M., Annamalai, N., et al. (2023). Determinants of intention to use ChatGPT for educational purposes: findings from PLS-SEM and fsQCA. Int. J. Hum.-Comput. Interact. 1–21. doi: 10.1080/10447318.2023.2226495


Fošner, A. (2024). University students' attitudes and perceptions towards AI tools: implications for sustainable educational practices. Sustainability 16:100. doi: 10.3390/su16010100


Freeman, J. (2025). Student Generative AI Survey. Louisville, CO: EDUCAUSE.


Geddam, S. M., Nethravathi, N., and Ameer Hussian, A. (2024). Understanding AI adoption: the mediating role of attitude in user acceptance. J. Inf. Educ. Res. 4:1664. doi: 10.52783/jier.v4i2.975


Gerlich, M. (2025). AI tools in society: impacts on cognitive offloading and the future of critical thinking. Societies 15, 1–28. doi: 10.3390/soc15010006


Ghozali, I., and Latan, H. (2014). Partial Least Squares: Konsep, Metode dan Aplikasi Menggunakan Program WARPPLS 4.0 [Partial Least Squares: Concepts, Methods, and Applications Using the WarpPLS 4.0 Program]. Available online at: https://www.researchgate.net/publication/289674660_Partial_Least_Squares_Konsep_Metode_dan_Aplikasi_Menggunakan_Program_WARPPLS_40


Granić, A., and Marangunić, N. (2019). Technology acceptance model in educational context: a systematic literature review. Br. J. Educ. Technol. 50, 2572–2593. doi: 10.1111/bjet.12864


Hair, J., Hult, G. T. M., Ringle, C., and Sarstedt, M. (2022). A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM). Thousand Oaks, CA: Sage Publications.


Hair, J. F., Hult, G. T. M., Ringle, C. M., Sarstedt, M., Danks, N. P., and Ray, S. (2021). "Evaluation of reflective measurement models," in Partial Least Squares Structural Equation Modeling (PLS-SEM) Using R: A Workbook (Cham: Springer International Publishing), 75–90.


Henseler, J., Ringle, C. M., and Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Market. Sci. 43, 115–135. doi: 10.1007/s11747-014-0403-8


Holstein, K., and Aleven, V. (2022). Designing for human–AI complementarity in K−12 education. AI Mag. 43, 239–248. doi: 10.1002/aaai.12058


Huang, F., Wang, Y., and Zhang, H. (2024). Modelling generative AI acceptance, perceived teachers' enthusiasm and self-efficacy to English as a foreign language learners' well-being in the digital era. Eur. J. Educ. 59:e12770. doi: 10.1111/ejed.12770


Hung, H., Wong, Y., and Cho, V. (2009). A study of the relationship between PEOU and PU in technology acceptance in e-learning (Hershey, PA: IGI Global), 149–170.


Ibrahim, F., Münscher, J-. C., Daseking, M., and Telle, N-. T. (2025). The technology acceptance model and adopter type analysis in the context of artificial intelligence. Front. Artif. Intell. 7:1496518. doi: 10.3389/frai.2024.1496518


Ifenthaler, D., Majumdar, R., Gorissen, P., Judge, M., Mishra, S., Raffaghelli, J., et al. (2024). Artificial intelligence in education: implications for policymakers, researchers, and practitioners. Technol. Knowl. Learn. 29, 1693–1710. doi: 10.1007/s10758-024-09747-0


Ifinedo, P. (2018). Determinants of students' continuance intention to use blogs to learn: an empirical investigation. Behav. Inf. Technol. 37, 381–392. doi: 10.1080/0144929X.2018.1436594


Jafari, F., and Keykha, A. (2023). Identifying the opportunities and challenges of artificial intelligence in higher education: a qualitative study. J. Appl. Res. High. Educ. 16, 1228–1245. doi: 10.1108/JARHE-09-2023-0426


Jose, B., Cherian, J., Verghis, A. M., Varghise, S. M., and Joseph, S. (2025). The cognitive paradox of AI in education: between enhancement and erosion. Front. Psychol. 16:1550621. doi: 10.3389/fpsyg.2025.1550621


Jyothsna, M., Venkata Subbaiah, P., and Kryvinska, N. (2024). Exploring the Chatbot usage intention-A mediating role of Chatbot initial trust. Heliyon 10:e33028. doi: 10.1016/j.heliyon.2024.e33028


Kanont, K., Pingmuang, P., Simasathien, T., Wisnuwong, S., Wiwatsiripong, B., Poonpirome, K., et al. (2024). Generative-AI, a learning assistant? Factors influencing higher-ed students' technology acceptance. Electron. J. E-Learn. 22, 18–33. doi: 10.34190/ejel.22.6.3196


Kelly, S., Kaye, S-. A., and Oviedo-Trespalacios, O. (2023). What factors contribute to the acceptance of artificial intelligence? A systematic review. Telemat Inform. 77:101925. doi: 10.1016/j.tele.2022.101925


Kemp, S. (2022). Digital Indonesia. Digital Reports Series. Available online at: https://datareportal.com/reports/digital-2021-indonesia (Accessed August 30, 2025).


Kim, J., and Lee, S-. S. (2022). Are two heads better than one?: The effect of student-ai collaboration on students' learning task performance. Techtrends 67, 365–375. doi: 10.1007/s11528-022-00788-9


Kim, T., Lee, H., Kim, M. Y., Kim, S., and Duhachek, A. (2023). AI increases unethical consumer behavior due to reduced anticipatory guilt. J. Acad. Mark. Sci. 51, 785–801. doi: 10.1007/s11747-021-00832-9


King, P., and Kitchener, K. (2004). Reflective judgment: theory and research on the development of epistemic assumptions through adulthood. Educ. Psychol. 39, 5–18. doi: 10.1207/s15326985ep3901_2


King, W. R., and He, J. (2006). A meta-analysis of the technology acceptance model. Inf. Manage. 43, 740–755. doi: 10.1016/j.im.2006.05.003


Kong, S-. C., Cheung, W. M-. Y., and Zhang, G. (2022). Evaluating artificial intelligence literacy courses for fostering conceptual learning, literacy and empowerment in university students: refocusing to conceptual building. Comput. Human Behav. Rep. 7:100223. doi: 10.1016/j.chbr.2022.100223


Kulik, J. A., and Fletcher, J. D. (2016). Effectiveness of intelligent tutoring systems: a meta-analytic review: a meta-analytic review. Rev. Educ. Res. 86, 42–78. doi: 10.3102/0034654315581420


Lai, E. (2011). Critical thinking: a literature review. Transfusion 35, 219–225. Available online at: https://www.academia.edu/download/40756174/Motivation_Review_final.pdf (Accessed September 19, 2025).


Laukkanen, T. (2007). Internet vs mobile banking: Comparing customer value perceptions. Bus. Process Manag. J. 13, 788–797. doi: 10.1108/14637150710834550


Lavidas, K., Papadakis, S., Manesis, D., Grigoriadou, A. S., and Gialamas, V. (2022a). The effects of social desirability on students' self-reports in two social contexts: lectures vs. lectures and lab classes. Information 13, 1–10. doi: 10.3390/info13100491


Lavidas, K., Petropoulou, A., Papadakis, S., Apostolou, Z., Komis, V., Jimoyiannis, A., et al. (2022b). Factors affecting response rates of the web survey with teachers. Computers 11, 1–15. doi: 10.3390/computers11090127


Lee, J., and Choi, H. (2017). What affects learner's higher-order thinking in technology-enhanced learning environments? The effects of learner factors. Comput. Educ. 115, 143–152. doi: 10.1016/j.compedu.2017.06.015


Legramante, D., Azevedo, A., and Azevedo, J. M. (2023). Integration of the technology acceptance model and the information systems success model in the analysis of Moodle's satisfaction and continuity of use. Int. J. Inf. Learn. Technol. 40, 467–484. doi: 10.1108/IJILT-12-2022-0231


Létourneau, A., Deslandes Martineau, M., Charland, P., Karran, J. A., Boasen, J., Léger, P. M., et al. (2025). A systematic review of AI-driven intelligent tutoring systems (ITS) in K−12 education. NPJ Sci Learn. 10:29. doi: 10.1038/s41539-025-00320-7


Li, K. (2023). Determinants of college students' actual use of AI-based systems: an extension of the technology acceptance model. Sustainability 15:5221. doi: 10.3390/su15065221


Lijie, H., Mat Yusoff, S., and Mohamad Marzaini, A. F. (2025). Influence of AI-driven educational tools on critical thinking dispositions among university students in Malaysia: a study of key factors and correlations. Educ. Inf. Technol. 30, 8029–8053. doi: 10.1007/s10639-024-13150-8


Liu, W., and Wang, Y. (2024). The effects of using AI tools on critical thinking in English Literature classes among EFL learners: an intervention study. Eur. J. Educ. 59:e12804. doi: 10.1111/ejed.12804


Maberah, S., Kan'an, A., El-Sayed, N., Alahmari, M., Abdelmabood, M., Kholif, M., et al. (2025). Students' attitudes and perceived usefulness of artificial intelligence (AI) tools in physical education. Int. J. Inf. Educ. Technol. 15, 767–773. doi: 10.18178/ijiet.2025.15.4.2282

Mahipal (2024). Sambut Masa Depan AI di Indonesia, Pemerintah Siapkan Anggaran Rp400 Triliun Lebih [Welcoming the future of AI in Indonesia, the government prepares a budget of more than Rp400 trillion]. Radar Suara News. Available online at: https://radarsuara.com/berita/1724143568/sambut-masa-depan-ai-di-indonesia-pemerintah-siapkan-anggaran-rp400-triliun-lebih (Accessed November 4, 2025).

McCarthy, J., Minsky, M. L., Rochester, N., and Shannon, C. E. (2006). A proposal for the Dartmouth summer research project on artificial intelligence. AI Mag. 27, 12–14. doi: 10.1609/aimag.v27i4.1904

Mejeh, M., Sarbach, L., and Hascher, T. (2024). Effects of adaptive feedback through a digital tool—a mixed-methods study on the course of self-regulated learning. Educ. Inf. Technol. 29, 1–43. doi: 10.1007/s10639-024-12510-8

Menary, R. (2012). The Extended Mind (Cambridge, MA: MIT Press), 1–382.

Mezirow, J. (2014). Transformative learning in action. Realiz. Auton. 1, 5–12. Available online at: https://upcommons.upc.edu/server/api/core/bitstreams/a2b0a660-2a95-4f6c-a226-93e63c4d4966/content (Accessed August 3, 2025).

Mohammadi, H. (2015). Investigating users' perspectives on e-learning: an integration of TAM and IS success model. Comput. Human Behav. 45, 359–374. doi: 10.1016/j.chb.2014.07.044

Mufarrih, A., Emzain, Z. F., Harijono, A., and Amrullah, U. S. (2023). Perancangan dan analisis model splint berbasis reverse engineering untuk rehabilitasi tangan [Design and analysis of a reverse engineering-based splint model for hand rehabilitation]. JMN 6, 156–166. doi: 10.29407/jmn.v6i2.21327

Munby, H. (1989). Reflection-in-action and reflection-on-action. Curr. Issues Educ. 9, 31–42. doi: 10.1353/eac.1989.a592219

Mustofa, M., Wuryan, S., Jaya, M., Saputra, S., and Putri, M. (2024). Role of interpersonal communication using artificial intelligence: a case study on improving communication quality in library. KnE Soc. Sci. doi: 10.18502/kss.v9i12.15829

Nazaretsky, T., Ariely, M., Cukurova, M., and Alexandron, G. (2022). Teachers' trust in AI-powered educational technology and a professional development program to improve it. Br. J. Educ. Technol. 53, 914–931. doi: 10.1111/bjet.13232

Osman, Z., Khuzaimah, R., and Kasbun, N. (2024). What does it take to trigger intention to use artificial intelligence among students in higher education institutions? Int. J. Acad. Res. Bus. Soc. Sci. 14, 1412–1429. doi: 10.6007/IJARBSS/v14-i7/22004

Othman, W., Makrakis, V., Kostoulas-Makrakis, N., Hamidon, Z., Keat, O. C., Abdullah, M. L., et al. (2024). Predictors of motivation and barriers to ICT-enabling education for sustainability. Sustainability 16, 1–13. doi: 10.3390/su16020749

Pan, X. (2020). Technology acceptance, technological self-efficacy, and attitude toward technology-based self-directed learning: learning motivation as a mediator. Front. Psychol. 11:564294. doi: 10.3389/fpsyg.2020.564294

Parviz, M. (2024). AI in education: comparative perspectives from STEM and Non-STEM instructors. Comput. Educ. Open 6:100190. doi: 10.1016/j.caeo.2024.100190

Paul, R., and Elder, L. (2013). Critical thinking: intellectual standards essential to reasoning well within every domain of human thought, part two. J. Dev. Educ. 37, 32–36. Available online at: https://eric.ed.gov/?id=EJ1067269 (Accessed September 3, 2025).

Paul, R., and Elder, L. (2006). Critical Thinking: Learn the Tools the Best Thinkers Use. Pearson Prentice Hall.

Facione, P. A. (1990). Critical Thinking: A Statement of Expert Consensus for Purposes of Educational Assessment and Instruction. Research Findings and Recommendations. Newark, DE: American Philosophical Association.

Petter, S., DeLone, W., and McLean, E. (2008). Measuring information systems success: models, dimensions, measures, and interrelationships. Eur. J. Inf. Syst. 17, 236–263. doi: 10.1057/ejis.2008.15

Pineda, A. J. M., Mohamad, A. N., Solomon, O., Birco, C. N. H., Superio, M. G., Cuenco, H. O., et al. (2022). Exploring the standardized root mean square residual (SRMR) of factors influencing e-book usage among CCA students in the Philippines. Indonesian J. Contemp. Educ. 4, 53–70. doi: 10.33122/ijoce.v4i2.30

Priyahita, R. (2020). The utilization of e-learning and artificial intelligence in the development of education system in Indonesia. Proc. 2nd Jogjakarta Commun. Conf. (JCC 2020) 459, 263–268. doi: 10.2991/assehr.k.200818.061

Putra, Y. W. S., and Hardiyanti, N. (2021). Penerapan technology acceptance model (TAM) pada e-library berbasis web [Application of the technology acceptance model (TAM) to a web-based e-library]. Inf. Syst. J. 3, 23–30. doi: 10.24076/infosjournal.2020v3i2.372

Rana, M. M., Siddiqee, M. S., Sakib, M. N., and Ahamed, M. R. (2024). Assessing AI adoption in developing country academia: a trust and privacy-augmented UTAUT framework. Heliyon 10:e37569. doi: 10.1016/j.heliyon.2024.e37569

Rashid, A. B., and Kausik, M. A. K. (2024). AI revolutionizing industries worldwide: a comprehensive overview of its diverse applications. Hybrid Adv. 7:100277. doi: 10.1016/j.hybadv.2024.100277

Rodríguez-Ruiz, J., Marín-López, I., and Espejo-Siles, R. (2025). Is artificial intelligence use related to self-control, self-esteem and self-efficacy among university students? Educ. Inf. Technol. 30, 2507–2524. doi: 10.1007/s10639-024-12906-6

Ruano-Borbalan, J. C. (2025). The transformative impact of artificial intelligence on higher education: a critical reflection on current trends and futures directions. Int. J. Chin. Educ. 14, 1–16. doi: 10.1177/2212585X251319364

Russell, S., and Norvig, P. (2020). Artificial Intelligence: A Modern Approach. Fourth Edition. London: Pearson.

Sætra, H. S. (2023). Generative AI: here to stay, but for good? Technol. Soc. 75:102372. doi: 10.1016/j.techsoc.2023.102372

Sailer, M., and Homner, L. (2020). The gamification of learning: a meta-analysis. Educ. Psychol. Rev. 32, 77–112. doi: 10.1007/s10648-019-09498-w

Santos Meneses, L. F. (2020). Critical thinking perspectives across contexts and curricula: dominant, neglected, and complementing dimensions. Think. Skills Creat. 35:100610. doi: 10.1016/j.tsc.2019.100610

Sarstedt, M., Ringle, C., and Hair, J. (2021). Partial Least Squares Structural Equation Modeling (Cham: Springer International Publishing), 1–47.

Sarstedt, M., Ringle, C. M., and Hair, J. F. (2014). PLS-SEM: looking back and moving forward. Long Range Plann. 47, 132–137. doi: 10.1016/j.lrp.2014.02.008

Satici, S. A., Okur, S., Yilmaz, F. B., and Grassini, S. (2025). Psychometric properties and Turkish adaptation of the artificial intelligence attitude scale (AIAS-4): evidence for construct validity. BMC Psychol. 13:297. doi: 10.1186/s40359-025-02505-6

Scherer, R., and Teo, T. (2019). Unpacking teachers' intentions to integrate technology: a meta-analysis. Educ. Res. Rev. 27, 90–109. doi: 10.1016/j.edurev.2019.03.001

Sellars, M., Fakirmohammad, R., Bui, L., Fishetti, J., Niyozov, S., Reynolds, R., et al. (2018). Conversations on critical thinking: can critical thinking find its way forward as the skill set and mindset of the century? Educ. Sci. 8:205. doi: 10.3390/educsci8040205

Sesmiarni, Z., Hoque, M. E., Susanto, P., Islam, M. A., and Hendrayati, H. (2024). Adoption of SPACE-learning management system in education era 4.0: an extended technology acceptance model with self-efficacy. Front. Educ. 9:1457188. doi: 10.3389/feduc.2024.1457188

Setyo Widodo, D., Rachmawati, D., Wijaya, H., Maghfuriyah, A., and Udriya, U. (2024). AI adoption in higher education institution: an integrated TAM and TOE model. Dinasti Int. J. Educ. Manage. Soc. Sci. 6, 1029–1039. doi: 10.38035/dijemss.v6i2.3645

Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: implications for explainable AI. Int. J. Hum. Comput. Stud. 146:102551. doi: 10.1016/j.ijhcs.2020.102551

Steenbergen-Hu, S., and Cooper, H. (2013). A meta-analysis of the effectiveness of intelligent tutoring systems on college students' academic learning. J. Educ. Psychol. 106, 331–347. doi: 10.1037/a0034752

Suwendi, Mesraini, Gama, C. B., Rahman, H., Luhuringbudi, T., and Masrom, M. (2025). Adoption of artificial intelligence and digital resources among academicians of Islamic higher education institutions in Indonesia. Jurnal Online Informatika 10, 42–52. doi: 10.15575/join.v10i1.1549

Tarhini, A., Masa'deh, R., Al-Busaidi, K. A., Mohammed, A. B., and Maqableh, M. (2017). Factors influencing students' adoption of e-learning: a structural equation modeling approach. J. Int. Educ. Bus. 10, 164–182. doi: 10.1108/JIEB-09-2016-0032

Teece, D. J. (2007). Explicating dynamic capabilities: the nature and microfoundations of (sustainable) enterprise performance. Strat. Mgmt. J. 28, 1319–1350. doi: 10.1002/smj.640

Teo, T., Ursavaş, Ö., and Bahçekapili, E. (2011). Efficiency of the technology acceptance model to explain pre-service teachers' intention to use technology: a Turkish study. Campus-Wide Inf. Syst. 28, 93–101. doi: 10.1108/10650741111117798

Toros, E., Asiksoy, G., and Sürücü, L. (2024). Refreshment students' perceived usefulness and attitudes towards using technology: a moderated mediation model. Humanit. Soc. Sci. Commun. 11:333. doi: 10.1057/s41599-024-02839-3

Tuononen, T., Hyytinen, H., Räisänen, M., Hailikari, T., and Parpala, A. (2023). Metacognitive awareness in relation to university students' learning profiles. Metacogn. Learn. 18, 37–54. doi: 10.1007/s11409-022-09314-x

Universitas Indonesia (2023). Facing the Era of Digital Transformation in Higher Education with the Utilization of AI. UI News and Update. Available online at: https://www.ui.ac.id/en/facing-the-era-of-digital-transformation-in-higher-education-with-the-utilization-of-ai/ (Accessed November 7, 2025).

UNS (2019). ASIIN Seal Accreditation Report Bachelor's Degree Programmes. Available online at: https://backend.deqar.eu/reports/ASIIN/324078_20250317_0852_Accreditation_Report2_UNS_Cluster_Education-Mathematics_2024-09-24.pdf (Accessed November 7, 2025).

Velli, K., and Zafiropoulos, K. (2024). Factors that affect the acceptance of educational AI tools by Greek teachers—a structural equation modelling study. Eur. J. Invest. Health Psychol. Educ. 14, 2560–2579. doi: 10.3390/ejihpe14090169

Venkatesh, V., and Davis, F. D. (2000). A theoretical extension of the technology acceptance model: four longitudinal field studies. Manage. Sci. 46, 186–204. doi: 10.1287/mnsc.46.2.186.11926

Venkatesh, V., Morris, M. G., Davis, G. B., and Davis, F. D. (2003). User acceptance of information technology: toward a unified view. MIS Quart. 27, 425–478. Available online at: https://ssrn.com/abstract=3375136

Villanueva, J., and Cruz, R. (2021). The praxis of school-based management on curriculum and learning in the Philippines. Available online at: https://www.researchgate.net/publication/350107635_The_Praxis_of_School-Based_Management_on_Curriculum_and_Learning_in_the_Philippines

Villanueva, J. S., and Cruz, R. A. O. (2019). The praxis of school-based management on curriculum and learning in the Philippines. Int. J. Soc. Sci. Educ. Stud. 6, 89–101. doi: 10.23918/ijsses.v6i2p89

Wang, F., King, R. B., Chai, C. S., and Zhou, Y. (2023). University students' intentions to learn artificial intelligence: the roles of supportive environments and expectancy–value beliefs. Int. J. Educ. Technol. High. Educ. 20:51. doi: 10.1186/s41239-023-00417-2

Wang, L., and Li, W. (2024). The impact of AI usage on university students' willingness for autonomous learning. Behav. Sci. 14:956. doi: 10.3390/bs14100956

Wang, S., Wang, F., Zhu, Z., Wang, J., Tran, T., Du, Z., et al. (2024). Artificial intelligence in education: a systematic literature review. Expert Syst. Appl. 252:124167. doi: 10.1016/j.eswa.2024.124167

Wang, W., and Lu, Y. (2025). Survey on Chinese users' acceptance of AI assistants: expanding technology acceptance model. Sci. Rep. 15, 1–20. doi: 10.1038/s41598-025-18123-6

Wu, W., Zhang, B., Li, S., and Liu, H. (2022). Exploring factors of the willingness to accept AI-assisted learning environments: an empirical investigation based on the UTAUT model and perceived risk theory. Front. Psychol. 13:870777. doi: 10.3389/fpsyg.2022.870777

Yau, K. W., Chai, C. S., Chiu, T. K. F., Meng, H., King, I., and Yam, Y. (2023). A phenomenographic approach on teacher conceptions of teaching Artificial Intelligence (AI) in K−12 schools. Educ. Inf. Technol. 28, 1041–1064. doi: 10.1007/s10639-022-11161-x

Yellapu, V. (2019). Descriptive statistics. Int. J. Acad. Med. 4, 60–63. doi: 10.4103/IJAM.IJAM_7_18

Yusuf, A., Pervin, N., and Román-González, M. (2024). Generative AI and the future of higher education: a threat to academic integrity or reformation? Evidence from multicultural perspectives. Int. J. Educ. Technol. High. Educ. 21:21. doi: 10.1186/s41239-024-00453-6

Yusuf, M., Yuwono, J., Mustaqimah, U., Supratiwi, M., and Cahyani, L. (2024). Assessing inclusivity of faculties and school at Sebelas Maret University utilizing UNS inclusion metric standards. Jurnal Pendidikan Progresif 14, 1114–1124. doi: 10.23960/jpp.v14.i2.202480

Zhai, C., Wibowo, S., and Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students' cognitive abilities: a systematic review. Smart Learn. Environ. 11:28. doi: 10.1186/s40561-024-00316-7

Zhao, Y., Zheng, Z., Pan, C., and Zhou, L. (2021). Self-esteem and academic engagement among adolescents: a moderated mediation model. Front. Psychol. 12:690828. doi: 10.3389/fpsyg.2021.690828

Zhao, Z., An, Q., and Liu, J. (2025). Exploring AI tool adoption in higher education: evidence from a PLS-SEM model integrating multimodal literacy, self-efficacy, and university support. Front. Psychol. 16:1619391. doi: 10.3389/fpsyg.2025.1619391

Zimmerman, B. J. (2022). Becoming a self-regulated learner: beliefs, techniques, and illusions (Oxfordshire: Routledge), 315.

Keywords: artificial intelligence (AI), critical thinking skills, Technology Acceptance Model (TAM), higher education, cognitive evaluation

Citation: Ninghardjanti P, Umam MC, Subarno A, Winarno W, Langgi NR and Widodo J (2025) Evaluating the impact of AI on the critical thinking skills among the higher education students by combining the TAM model and critical thinking theory. Front. Educ. 10:1719625. doi: 10.3389/feduc.2025.1719625

Received: 06 October 2025; Accepted: 31 October 2025;
Published: 28 November 2025.

Edited by:

Vassilios Makrakis, University of Crete, Greece

Reviewed by:

Stamatios Papadakis, University of Crete, Greece
Dennis Arias-Chávez, Universidad Continental - Arequipa, Peru

Copyright © 2025 Ninghardjanti, Umam, Subarno, Winarno, Langgi and Widodo. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Patni Ninghardjanti, ning@staff.uns.ac.id

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.