
SYSTEMATIC REVIEW article

Front. Educ., 02 February 2026

Sec. Digital Learning Innovations

Volume 10 - 2025 | https://doi.org/10.3389/feduc.2025.1648661

Artificial intelligence in higher education: a systematic review of its impact on student engagement and the mediating role of teaching methods

Dong Yu Long1,2, Shuai Wang3*, Sabariah Md Rashid2, Xiao Tao Lu4
  • 1Department of Student Affairs, Jiaying University, Meizhou, China
  • 2Faculty of Social Sciences and Liberal Arts, UCSI University, Kuala Lumpur, Malaysia
  • 3Shanxi Province Optoelectronic Information Science and Technology Laboratory, Yuncheng University, Yuncheng, China
  • 4Department of Foreign Languages, Jiaying University, Meizhou, China

Introduction: Artificial Intelligence (AI) is increasingly integrated into higher education to personalize instruction and support student engagement. However, the mediating role of teaching methods in this process remains underexplored.

Methods: This systematic review analyzed 73 peer-reviewed articles published between 2015 and early 2025, retrieved from Scopus and Web of Science, following PRISMA guidelines. Studies were screened based on predefined inclusion criteria and coded using a structured framework that examined AI types, engagement outcomes, and instructional strategies.

Results: Findings reveal that AI tools—such as chatbots, adaptive systems, and predictive analytics—enhance engagement most effectively when embedded within interactive pedagogies like flipped classrooms, project-based learning, and scaffolded feedback loops. To conceptualize this relationship, we introduce the PMAISE model (Pedagogical Mediation of AI for Student Engagement), which maps the alignment between AI technologies, pedagogical strategies, and the affective, behavioral, and cognitive dimensions of engagement. Concrete examples from recent studies demonstrate how teaching methods amplify or inhibit the effects of AI tools. The review also critically examines emerging concerns related to ethics, data privacy, and structural barriers to equitable AI adoption.

Discussion: This study offers a conceptual and practical framework for integrating AI into higher education in a context-sensitive, evidence-based, and pedagogically meaningful manner, highlighting the crucial role of thoughtful pedagogical mediation in maximizing AI’s educational benefits.

1 Introduction

The rapid evolution of artificial intelligence (AI) technologies is reshaping higher education by introducing novel tools and methodologies that redefine student engagement and pedagogical practices. Student engagement, a critical determinant of academic success, encompasses cognitive, behavioral, and emotional dimensions, yet traditional teaching methods often struggle to address these aspects holistically. Emerging AI technologies, such as chatbots, adaptive learning systems, and AI-driven analytics, are increasingly integrated into classrooms to personalize learning, enhance interactivity, and address diverse student needs. However, the extent to which these technologies influence engagement and the mediating role of teaching methods remain underexplored. This systematic review synthesizes findings from 73 studies to analyze the impact of AI technologies on student engagement in higher education and elucidate how teaching methods mediate these effects, thereby contributing actionable insights for educators and policymakers.

1.1 Background and rationale

1.1.1 Artificial intelligence in higher education

Artificial Intelligence in Higher Education (AIHE) refers to the application of AI technologies to support teaching, management, and learning, with the goal of improving personalized learning efficiency, optimizing resource allocation, and enhancing student engagement (Kuleto et al., 2021). To improve coherence and analytical clarity, we classify AIHE applications into five functional categories: (1) personalized learning systems, (2) real-time feedback and adaptive tutoring, (3) natural language processing (NLP) tools, (4) immersive learning environments, and (5) predictive analytics. This taxonomy provides a more systematic understanding of AI’s educational roles.

For instance, AI chatbots, categorized under NLP tools, have shown potential in fostering engagement by providing instant support and interactive learning experiences. In Legal English courses, chatbots improved student participation and text-student interaction, though concerns about critical thinking trade-offs were noted (Bretan, 2024). In a quasi-experimental study involving 250 undergraduates, Yang et al. (2023) found that the Gamified Artificial Intelligence Education Robot (GAIER) significantly enhanced students’ motivation and problem-solving skills in laboratory safety courses, compared to traditional instruction. Adaptive learning tools, such as Smart Sparrow’s interactive resources, were praised by 96% of students for boosting engagement and efficiency in anatomy courses (Linden et al., 2019). These examples underscore AI’s capacity to tailor learning experiences and address individual cognitive and emotional needs. Previous research on AI in education has focused primarily on technological effectiveness, including the improvement of learning outcomes, enhancement of cognitive processes, and promotion of knowledge retention (Aluko et al., 2025).

To guide our analysis and interpretation of these diverse AI applications, we adopted three key theoretical models: the Technology Acceptance Model (TAM) (Davis, 1989), Cognitive Load Theory (CLT) (Sweller, 1988), and Self-Determination Theory (SDT) (Ryan and Deci, 2000). TAM explains students’ acceptance of AI tools based on perceived usefulness and ease of use; CLT clarifies how adaptive AI systems help reduce cognitive overload by adjusting content complexity to learners’ abilities; and SDT supports the analysis of how AI-driven personalized feedback meets learners’ needs for autonomy, competence, and relatedness—key drivers of intrinsic motivation and well-being (Bannert, 2002; Gunasekare, 2016; Zufiyardi et al., 2022). By integrating these theoretical lenses, we aim to move beyond a purely descriptive account and provide a more robust conceptual understanding of AI’s impact.

1.1.2 AIHE for student engagement

Student engagement, a central construct in educational research, is commonly conceptualized in three dimensions: cognitive, behavioral, and emotional engagement (Yang and Ghislandi, 2024; Yi et al., 2024). Cognitive engagement refers to students’ psychological investment in learning, such as attention, deep learning strategies, and intellectual effort. Behavioral engagement involves participation in academic and social activities, including attendance, homework completion, and classroom interactions. Emotional engagement reflects learners’ affective responses, such as interest, enjoyment, or sense of belonging during learning tasks.

In the context of higher education, AI has been increasingly leveraged to support student engagement through distinct tools and mechanisms (Wang et al., 2024a; Wang et al., 2024c). For instance, intelligent tutoring systems and adaptive learning platforms personalize content delivery by adjusting instructional materials based on learners’ real-time performance, thereby enhancing cognitive engagement (Liu and Yushchik, 2024). These systems use machine learning to detect knowledge gaps and provide tailored feedback, fostering deeper comprehension and sustained intellectual involvement (Strielkowski et al., 2025).

NLP applications, such as AI-powered chatbots or automated writing evaluation tools, promote both behavioral and emotional engagement by facilitating interactive learning environments. For example, virtual simulation games used in entrepreneurship courses immersed students in decision-making scenarios, enhancing not only participation rates but also emotional identification with tasks (Sziegat, 2024).

Meanwhile, AI-based learning analytics platforms, such as i-Ntervene (Utamachant et al., 2023), utilize students’ digital footprints to proactively identify disengagement. These systems support behavioral engagement by initiating timely interventions, especially for at-risk learners, and emotional engagement through personalized encouragement or scaffolding.

However, engagement gains depend on implementation context. For example, while flipped classrooms using AI tools increased cognitive and emotional engagement, behavioral engagement showed no significant difference from traditional methods (Hava, 2021). This underscores the importance of disaggregating engagement types and avoiding conceptual conflation with academic performance, which refers to learning outcomes rather than processual involvement. Furthermore, the application of Self-Determination Theory (SDT) in AIHE remains underutilized. Future research should more rigorously apply SDT as an interpretive lens when examining how AI interventions modulate different forms of engagement.

1.1.3 Teaching methods between AI and student engagement

In educational contexts, pedagogical mediation refers to the processes through which teaching methods shape the way learners interact with tools, instructors, and content. Within AI-supported higher education, pedagogical mediation involves the alignment of AI functionalities with instructional goals—namely, enhancing engagement through tailored feedback, structured interaction, and learner-centered adaptation. In this review, pedagogy was treated not only as an analytical lens but also as a selection and coding criterion. Studies were classified based on whether and how AI systems were deployed within pedagogical strategies such as blended learning, flipped classrooms, inquiry-based learning, and peer-assisted activities.

AI-mediated teaching methods were found to influence student engagement via at least three recurring mechanisms: (1) structured feedback, which enhances metacognitive awareness and task clarity; (2) interactive scaffolding, which fosters peer and teacher interaction; and (3) personalization alignment, which adapts instructional pace and content to learner profiles. These mechanisms correspond, respectively, to the cognitive, emotional, and behavioral dimensions of engagement. Blended learning models, such as those integrating AI performance prediction with learning analytics, enhanced collaborative writing outcomes by 12–15% through structured feedback loops (Ouyang et al., 2023). Game-based approaches amplified AI’s impact: role-playing simulations with ChatGPT improved communication skills and theoretical application in cloud computing courses (Stampfl et al., 2024). Conversely, poorly aligned methods risk undermining AI’s potential. For instance, students in fully online courses prioritized synchronous interactions but faced engagement barriers due to technical constraints (Alghamdi et al., 2024). Hybrid frameworks, such as the BOPPPS virtual simulation model, succeeded by combining AI-driven content with scaffolded peer interactions, achieving 93.37% student approval in nursing informatics courses (He et al., 2024). These findings highlight the necessity of aligning AI tools with pedagogies that prioritize interaction, reflection, and scaffolded support.

While these examples suggest meaningful alignment between AI and pedagogy, ethical and practical challenges persist. Socioeconomically disadvantaged students are more vulnerable to infrastructural constraints such as unstable internet or limited device access (Cullinan et al., 2021). Faculty skepticism also affects implementation: 23% of instructors doubt AI’s ability to replicate human mentorship, citing concerns over feedback authenticity and classroom dynamics (Liu and Yushchik, 2024). In addition, privacy and autonomy issues, particularly those associated with generative AI tools such as ChatGPT, have raised concerns about potential overreliance and user surveillance (Husain, 2024; Wood and Moss, 2024). Successful AI integration therefore requires not only technological investment but also institutional commitment to inclusive digital pedagogy (Rahman, 2021).

1.2 Purpose of the review

This review aims to explore how AIHE influences student engagement through various teaching methods. To achieve this, the study is guided by a set of pre-defined research questions (RQs), which informed the processes of literature selection, thematic categorization, and analytical coding.

RQ1: What types of AI applications are used in higher education to support cognitive, emotional, and behavioral engagement among students?

RQ2: What types of teaching methods and pedagogical strategies are employed in AI-integrated educational settings, and how do they mediate different forms of student engagement?

RQ3: What mediating role do teaching methods play between AI technologies and student engagement?

To systematically answer the research questions, we inductively developed the PMAISE model (Pedagogical Mediation of AI for Student Engagement). The PMAISE model serves as an explanatory framework in this study, integrating the core elements of the research questions. It provides a theoretically informed perspective for understanding and presenting the complex interplay between AI, teaching methods, and student engagement.

The remainder of this paper is organized as follows. Section 2 outlines the methodological procedures. Section 3 presents the key findings. Section 4 discusses the results in relation to the three research questions. Section 5 concludes the study with a summary of key insights and implications.

2 Methods

2.1 Search strategy

This systematic review adopted the PRISMA 2020 framework to ensure methodological transparency and replicability. Two major academic databases—Web of Science (WOS) and Scopus—were selected for their comprehensive indexing of peer-reviewed literature in education, technology, and the social sciences. Search terms targeted four core themes: (1) artificial intelligence and related technologies, (2) higher education contexts, (3) pedagogical constructs, and (4) student engagement. Boolean operators (AND, OR) were used to combine search terms across these themes. The search strategy was applied to the titles, abstracts, and keywords in both databases. The exact search strings used in WOS and Scopus are presented below.

2.1.1 WOS

TS = (“artificial intelligence” OR “AI” OR “machine learning” OR “deep learning” OR “neural network” OR “reinforcement learning” OR “supervised learning” OR “unsupervised learning” OR “automatic” OR “Natural language processing” OR “Chatbot” OR “ChatGPT” OR “DeepSeek” OR “Large Language Model” OR “LLM” OR “AI-*” OR “digital technology” OR “digitalization” OR “informational” OR “informatization” OR “big data” OR “New Media” OR “Internet of Things” OR “Internet” OR “Smart”) AND TS = (“higher education” OR “university” OR “college” OR “tertiary education” OR “undergraduate” OR “bachelor” OR “post-secondary” OR “graduate*”) AND TS = (“Teach* Method*” OR “Instructional Design” OR “pedagogical strategies” OR “Pedagog* Approach*” OR “Blended Learn*”) AND TS = (“student engagement” OR “learner engagement” OR “academic participation” OR “Academic Burnout*” OR “Student Burnout*”).

2.1.2 Scopus

TITLE-ABS-KEY ((“artificial intelligence” OR “AI” OR “machine learning” OR “deep learning” OR “neural network” OR “reinforcement learning” OR “supervised learning” OR “unsupervised learning” OR “automatic” OR “Natural language processing” OR “Chatbot” OR “ChatGPT” OR “DeepSeek” OR “Large Language Model” OR “LLM” OR “AI-*” OR “digital technology” OR “digitalization” OR “informational” OR “informatization” OR “big data” OR “New Media” OR “Internet of Things” OR “Internet” OR “Smart”) AND (“higher education” OR “university” OR “college” OR “tertiary education” OR “undergraduate” OR “bachelor” OR “post-secondary” OR “graduate*”) AND (“Teach* Method*” OR “Instructional Design” OR “pedagogical strategies” OR “Pedagog* Approach*” OR “Blended Learn*”) AND (“student engagement” OR “learner engagement” OR “academic participation” OR “Academic Burnout*” OR “Student Burnout*”)).
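Because the same four-theme Boolean structure must stay consistent across the WOS and Scopus syntaxes, it can help to assemble the query programmatically. The sketch below is illustrative only: the theme lists are abbreviated, and `boolean_block` is a hypothetical helper, not part of either database's interface.

```python
def boolean_block(terms):
    """Quote each term and join with OR, wrapped in parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# Abbreviated versions of the four search themes used in this review
themes = {
    "ai": ["artificial intelligence", "AI", "machine learning"],
    "context": ["higher education", "university"],
    "pedagogy": ["Teach* Method*", "Blended Learn*"],
    "engagement": ["student engagement", "learner engagement"],
}

# AND-join the OR-blocks; prepend "TS = " per block for WOS,
# or wrap the whole string in TITLE-ABS-KEY(...) for Scopus.
query = " AND ".join(boolean_block(t) for t in themes.values())
print(query)
```

Generating both database variants from one term list avoids the themes silently drifting apart between syntaxes.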

2.2 Initial search and screening

The initial search retrieved 85 records from WOS and 277 from Scopus. Figure 1 presents a visual summary of AIHE-related publications across academic disciplines, revealing that the majority fall within the social sciences (29.4%), computer science (27.5%), and engineering (14.4%). Limiting the search to the period from 2015 to 2025 and to peer-reviewed articles published in English reduced the results to 74 records in WOS and 206 in Scopus.

Figure 1
Pie chart showing distribution of academic fields. Social Sciences: 29.4%, Computer Science: 27.5%, Engineering: 14.4%, Mathematics: 6.2%, Decision Science: 3.5%, Medicine: 3.3%, Business: 3.1%, Arts and Humanities: 1.6%, Energy: 1.4%, Psychology: 1.4%, Other: 8.2%.

Figure 1. The distribution of publications on AIHE across different academic disciplines.

2.3 Inclusion and exclusion criteria

The selection process prioritized open-access publications. A total of 109 full-text articles were collected (76 from Scopus and 33 from WOS) and subjected to bibliometric analysis. The article data were then exported to a CSV file for further processing, during which duplicates were identified and eliminated using Excel’s duplicate-removal tool. Papers unrelated to artificial intelligence, higher education, academic burnout, or teaching practices were excluded. Titles and abstracts of the remaining articles were independently screened by two authors to determine relevance, and discrepancies were resolved through consensus discussion to ensure selection validity. Following this screening, 73 articles were shortlisted for in-depth full-text review. Figure 2 presents the PRISMA workflow used for article selection in this study. Articles were excluded from full-text review based on the following criteria: (a) full text not available; (b) document type: book chapter, short survey, note, or data paper; (c) source type: book series or book; (d) subject area unrelated to social science, computer science, or engineering. Table 1 presents the inclusion and exclusion criteria for AIHE studies.
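The deduplication step described above was performed in Excel; for reproducibility, the same operation can be scripted. A minimal sketch, assuming the merged WOS and Scopus exports are parsed into dictionaries keyed by DOI and title; the field names and sample records are hypothetical, not the review's actual data.

```python
def dedupe_records(rows, key_fields=("doi", "title")):
    """Drop duplicate export records, keeping the first occurrence.

    Two records match when all key fields agree after lowercasing
    and whitespace stripping, which catches case-only variants that
    a naive exact comparison would miss.
    """
    seen, unique = set(), []
    for row in rows:
        key = tuple(str(row.get(f, "")).strip().lower() for f in key_fields)
        if key not in seen:
            seen.add(key)
            unique.append(row)
    return unique

# Hypothetical merged export: the second row duplicates the first
records = [
    {"doi": "10.1/abc", "title": "AI chatbots and engagement"},
    {"doi": "10.1/ABC", "title": "AI Chatbots and Engagement"},
    {"doi": "10.2/def", "title": "Adaptive learning systems"},
]
print(len(dedupe_records(records)))  # → 2
```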

Figure 2
Flowchart depicting a research article selection process. Two pathways show records from WOS (85) and Scopus (277) being screened, resulting in 74 and 206 records, respectively. Articles accessed are 33 from WOS and 76 from Scopus. Studies utilized are 32 from WOS and 67 from Scopus, totaling 73 articles. Criteria include year limits (2015-2025), article types, and language.

Figure 2. Workflow adopted for selecting AIHE-related articles in this research.

Table 1

Table 1. Inclusion and exclusion criteria for AIHE studies.

Figure 3 presents a word cloud of the most frequently used keywords in research on AI-driven educational technologies and student engagement. Dominant terms such as “Student,” “Learning,” “Teaching,” “Artificial Intelligence,” “Student Engagement,” “Burnout,” and “Education” highlight the central themes of this study: the role of AI in higher education and its influence on student engagement. Terms like “Academic Burnout” and “Personalized Learning” indicate a growing research focus on student well-being and performance in AI-enhanced learning environments, while “Blended Learning,” “E-learning,” and “Virtual Reality” suggest that technology-mediated teaching methods play a crucial role in shaping student learning experiences. Keywords such as “Active Learning,” “Federated Learning,” and “Machine Learning” emphasize the pedagogical and technological advances associated with AI in education, and the inclusion of “Curricula,” “Instruction,” and “Assessment” underscores AI’s integration across the educational framework. Overall, the word cloud reflects the intersection of AI technologies, teaching methods, and student engagement, aligning with this study’s objective of systematically reviewing how AI-driven educational tools affect student engagement and psychological well-being in higher education.

Figure 3
Word cloud featuring prominent terms related to education and technology.

Figure 3. Word cloud of the AIHE-related publications.

2.4 Quality assessment

Keyword co-occurrence network analysis offers valuable insights into prominent themes in AIHE research. Figure 4, generated using VOSviewer, visually maps keyword relationships to highlight key focus areas in the literature. In the network, larger nodes represent frequently used keywords, while thicker edges indicate stronger co-occurrence links, helping researchers quickly grasp topic relevance and connections.
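Tools such as VOSviewer build this kind of map from pairwise keyword counts: an edge weight is simply how often two keywords appear in the same article. A minimal sketch of that underlying tally, assuming each article's keywords are available as a list (the sample keyword lists are invented examples, not data from this review):

```python
from itertools import combinations
from collections import Counter

def cooccurrence(keyword_lists):
    """Count how often each keyword pair appears in the same article.

    Pairs are stored as sorted tuples so (a, b) and (b, a) land on
    the same counter key; set() deduplicates repeated keywords
    within a single article.
    """
    edges = Counter()
    for kws in keyword_lists:
        for a, b in combinations(sorted(set(kws)), 2):
            edges[(a, b)] += 1
    return edges

# Hypothetical keyword lists for three articles
articles = [
    ["students", "artificial intelligence", "engagement"],
    ["students", "engagement", "burnout"],
    ["artificial intelligence", "e-learning"],
]
edges = cooccurrence(articles)
print(edges[("engagement", "students")])  # → 2
```

Thresholding these counts (e.g., keeping only pairs that co-occur at least twice) yields the node and edge lists that network-visualization tools render.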

Figure 4
Network graph illustrating interconnected concepts related to education. Central nodes include students, teaching, student engagement, and human. Blue, green, and red clusters represent themes like artificial intelligence, e-learning, and human-related topics. Lines indicate relationships between concepts.

Figure 4. Keyword co-occurrence network in AIHE-related studies.

Figure 4 highlights key themes and interconnections within the research landscape of AI-driven educational technologies, higher education, academic burnout, and teaching methods. The visualization reveals three major clusters, each representing a crucial aspect of this field.

The blue cluster primarily focuses on AI applications in education, emphasizing concepts such as student engagement, federated learning, educational technology, and contrastive learning. This suggests that AI-powered tools and methodologies are increasingly being integrated into higher education to enhance student experiences, promote adaptive learning, and support institutional development.

The green cluster centers on teaching methodologies and e-learning, with terms such as “students,” “teaching,” “education computing,” and “curricula” emerging as focal points. The presence of pedagogical approaches like flipped classrooms and deep learning techniques indicates that teaching strategies play a mediating role in AI-enhanced learning environments. This aligns closely with our study’s focus on how teaching methods shape the relationship between AI-driven education and academic burnout, emphasizing the need to evaluate their effectiveness in mitigating stress and improving learning outcomes.

The red cluster is primarily concerned with human and psychological factors in education, linking keywords such as “academic performance,” “burnout,” “motivation,” and “cognitive load.” The presence of “COVID-19” and “distance learning” suggests a growing body of research on the challenges of remote education and its impact on student well-being. The strong connections among instructional design, educational technology, and academic burnout reinforce the relevance of our systematic review, which critically assesses how AI-driven teaching methods influence student engagement, stress levels, and overall academic success in higher education.

2.5 Data analysis and coding scheme

To examine AIHE across the included studies, we employed a hybrid coding scheme that combined both deductive and inductive approaches. The deductive coding categories were aligned with the predefined research questions and included general bibliometric information such as article type, publication source, year of publication, institutional affiliation, and country of origin. These statistics provided an overview of the research landscape (see Table 2).

Table 2

Table 2. Coding scheme for AIHE studies.

In addition, we coded key analytical dimensions including educational setting, data sources, AI technologies employed, research methods, types of student engagement, and teaching strategies. Through thematic synthesis, the inductive analysis identified recurring relational patterns among AI tools, instructional methods, and student engagement outcomes. From these patterns, we developed the PMAISE model, which systematizes how instructional strategies mediate engagement in AI-enhanced settings.

The PMAISE framework emphasizes the importance of aligning AI technologies with pedagogically sound designs to maximize their cognitive, emotional, and behavioral impact on students. Figure 5 visualizes this model, illustrating how AI can be effectively integrated into higher education through thoughtful pedagogical mediation.

Figure 5
Diagram of the PMAISE (Pedagogical Mediation of AI for Student Engagement) model.

Figure 5. Pedagogical mediation of AI for student engagement model.

3 Results

This section adopted a hybrid analytical approach, integrating deductive categorization with inductive thematic synthesis. The deductive coding framework was structured around seven predefined thematic domains (as outlined in Table 2): (a) basic information, (b) educational setting, (c) data sources, (d) AI technologies, (e) research methods, (f) student engagement, and (g) teaching strategies. The synthesis of these dimensions is presented in Table 3. The analytical process was carried out in three methodologically rigorous phases. First, a quantitative profiling of methodological distributions across the selected studies was performed using frequency mapping. Second, causal mechanisms linking AI technologies, pedagogical practices, and student engagement were identified through systematic pathway analysis. Finally, the findings were synthesized into a cohesive theoretical model—PMAISE—reflecting the pedagogical mediation of AI for student engagement. Intercoder reliability was assessed using Cohen’s kappa, yielding a high level of agreement (κ = 0.82, p < 0.001).
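Cohen's kappa adjusts raw intercoder agreement for the agreement expected by chance given each coder's label frequencies. A minimal sketch of the computation; the coder labels shown are hypothetical, not the review's actual codes.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labelling the same items."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed proportion of items on which the coders agree.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement under independence, from marginal label counts.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical engagement codes assigned by two reviewers
a = ["cog", "beh", "emo", "cog", "beh", "cog", "emo", "beh"]
b = ["cog", "beh", "emo", "cog", "emo", "cog", "emo", "beh"]
print(round(cohens_kappa(a, b), 2))  # → 0.81
```

Values above 0.80 are conventionally read as almost-perfect agreement, which is the interpretation applied to the κ = 0.82 reported here.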

Table 3

Table 3. The characteristics of AIHE studies.

3.1 Publications by year

Figure 6 presents the distribution of publications from 2016 to 2025 related to AIHE and its impact on student engagement. Overall, research in this field has shown a gradual upward trend over the past decade. A notable increase in publication volume is observed from 2021 onwards, reaching a peak in 2024. This reflects growing academic interest in the potential of AI to enhance student engagement, particularly in the context of its expanding application in higher education. The apparent decline in 2025 is primarily due to the data collection cut-off point of this systematic review, which was set in February 2025 and therefore does not include the full year’s output. This temporal distribution highlights the evolving scholarly attention toward AIHE and underscores 2024 as a pivotal year in the development of this research domain.

Figure 6
Line graph showing the number of publications from 2016 to 2025. The count is steady from 2016 to 2020, increases in 2021, peaks at 2024 with 30 publications, and drops sharply in 2025.

Figure 6. The distribution of publications from 2016 to 2025 related to AIHE.

3.2 Article types

The majority of the studies were published in journals, including BMC Medical Education (n = 4) and the International Journal of Educational Technology in Higher Education (n = 3). Given these journals’ rigorous technical publication requirements, it can be inferred that a large proportion of researchers focusing on AIHE have a computer science background.

3.3 Countries/regions

Figure 7 presents the geographical distribution of publications on AIHE studies. China emerges as the most active contributor with 10 publications, followed by Australia with 7. Canada and the United States each account for 6 publications, while Thailand and the United Kingdom contribute 5 apiece. Ireland has a smaller yet notable presence, with 3 publications. This distribution indicates that scholarly interest in AIHE is predominantly concentrated in Asia-Pacific and Anglophone regions. Such patterns may reflect disparities in institutional priorities, technological infrastructure, and research funding across different countries and regions.

Figure 7
Pie chart showing the percentage distribution across different countries. Others account for 20.5%, China 13.7%, Australia 9.6%, Canada 8.2%, United States 8.2%, Thailand 6.8%, United Kingdom 6.8%, Ireland 4.1%, and South Korea, Saudi Arabia, Russian Federation, Peru, Malaysia, Germany, Finland, and Belgium each at 2.7%.

Figure 7. The geographical distribution of publications on AIHE studies.

3.4 Educational setting

A mapping between year of publication and formal/informal environment was also performed. Twelve articles were set in informal environments (extracurricular activities, self-directed learning, unstructured online communities, and non-traditional teaching during the emergency transitions of the COVID-19 pandemic), 53 articles integrated artificial intelligence into regular teaching (university courses, structured classrooms, classroom components of blended learning, credit courses, and formal assessment), and 8 articles did not specify the teaching environment. Overall, the application of artificial intelligence in higher education is dominated by formal learning environments, mainly serving university courses, blended teaching, and structured assessment, and optimizing teaching methods (flipped classrooms and gamified design) through AI technologies (virtual simulation and intelligent grading) to enhance student engagement.

3.5 Research methods

As shown in Figure 8, there were 2 conceptual articles and 71 empirical studies. Regarding research methods, most of the empirical studies adopted quantitative (n = 26) or qualitative methods (n = 9) to examine students’ engagement in AI-supported higher education environments and its dynamic interaction with teaching strategies. A further 36 articles adopted a mixed-methods approach, collecting information from multiple data sources, including questionnaires, learning behavior logs, classroom observation records, and teacher interview transcripts. The 2 conceptual articles adopted an exploratory approach, discussing how to construct a pedagogical mediation mechanism to optimize AI-driven student engagement from the perspectives of technical characteristics, AI tool adaptability, and educational theory.

Figure 8
Bar chart showing the number of publications by type: Quantitative (26), Qualitative (9), Mixed research (36), and Conceptual articles (2). Mixed research has the highest number of publications.

Figure 8. Research methods of publications in AIHE studies.

3.6 Data form

Table 4 provides an overview of the data sources and data formats employed in the studies included in this review. As illustrated in Figure 9, the majority of the studies utilized multimodal data, combining two or more formats to enhance the richness and depth of analysis. Common combinations included video and audio (n = 4), video and text (n = 2), and audio and text (n = 3), reflecting a trend toward integrated data collection approaches in AI-supported educational research. In contrast, a smaller number of studies relied on single-modal data, with video-only being the most frequently used format in this category (n = 4). This suggests a growing emphasis on capturing diverse aspects of classroom interactions and learner behavior through multimodal means.

Table 4

Table 4. Data forms employed in the studies included in this review.

Figure 9
Bar chart showing the number of publications by media type. Video leads with eight, followed by Video + Audio with six. Audio + Text has three, while Video + Text and Video + Audio + Text both have two.

Figure 9. Data form used in AIHE studies.

3.7 Synthesis of results

Out of the 73 studies, 42 (57.5%) explicitly examined student engagement as a core variable. Among these, cognitive engagement was the most frequently investigated (n = 31), followed by behavioral (n = 25) and emotional engagement (n = 21). A total of 48 studies (65.8%) explored how teaching strategies mediated the influence of AI on engagement, with techniques such as flipped classrooms, gamified instruction, and scaffolded feedback commonly employed.

The PMAISE model (see Figure 5) was inductively developed through cross-case synthesis, reflecting the triangulation of AI functions, pedagogical strategies, and engagement outcomes. By systematically analyzing how different instructional approaches shaped the effects of various AI applications on cognitive, behavioral, and emotional engagement, we identified three recurrent mechanisms: structured feedback, interactive scaffolding, and personalization alignment. These mechanisms constitute the core pathways through which pedagogical mediation operates in AI-supported higher education, forming the conceptual foundation of the PMAISE framework.

4 Discussion

This section integrates findings from the 73 included studies to address the research questions, with explicit links to data patterns and theoretical implications.

4.1 RQ1: what types of AI applications are used in higher education to support cognitive, emotional, and behavioral engagement among students?

Table 5 lists the application type of AIHE reviews. The systematic analysis of 73 studies reveals that AI applications in higher education manifest through six primary forms, each leveraging distinct technological tools to enhance pedagogical outcomes. These forms, supported by empirical evidence, demonstrate AI’s transformative potential across diverse educational contexts while highlighting critical limitations requiring scholarly attention.


Table 5. Application type of AIHE reviews.

4.1.1 Intelligent tutoring and adaptive learning systems

AI-driven adaptive systems employ machine learning algorithms to deliver personalized learning experiences. The IoT-SDNCT framework (Cao, 2023) exemplifies this approach by integrating IoT sensors and reinforcement learning to optimize blended teaching through real-time engagement monitoring (accuracy: 98.2% on the StuEmo24 dataset). Similarly, Smart Sparrow implementations (Linden et al., 2019) demonstrated 96% student satisfaction through dynamically adjusted anatomy modules, though reliance on self-reported data warrants cautious interpretation. Chen et al. (2024) reported that the integration of the BOPPPS model with virtual simulation significantly enhanced students’ motivation for deep learning (p < 0.05); however, the approach encountered instructor resistance due to its high technical demands.

4.1.2 Natural language processing applications

Generative AI tools such as ChatGPT have demonstrated substantial impacts on language acquisition, with structural equation modeling revealing direct effects on learner engagement (β = 0.433), language proficiency (β = 0.698), and perceived usability (β = 0.775) in Arabic EFL contexts (Zakarneh et al., 2025). Automated essay scoring experiments also identified GPT-4’s tendency toward score inflation, with AI–human rating discrepancies exceeding 1.5 standard deviations in 38% of graduate-level bibliographies (Manning et al., 2025). While these tools enhance the efficiency of formative feedback, ethical concerns regarding academic integrity remain unaddressed in 89% of the surveyed studies.

4.1.3 Computer vision and behavioral analytics

Facial emotion recognition systems integrate transfer learning with traditional feature extraction techniques to assess classroom engagement. Rajae et al.'s (2024) hybrid MobileNet-LBP model achieved state-of-the-art accuracy (98.2%) in binary engagement classification; however, its performance declined under suboptimal lighting conditions (Δaccuracy = 12.7%). Predictive analytics based on LMS digital traces (Pecuchova and Drlik, 2024) identified at-risk students using BIRCH clustering (F1-score = 0.87), although the approach lacked validation through real-world intervention trials. While these non-invasive monitoring tools offer promising insights, they raise critical privacy concerns that remain largely unaddressed in current implementation frameworks.

4.1.4 Immersive learning integrations

AI-enhanced Virtual Reality/Augmented Reality (VR/AR) systems bridge theoretical and experiential learning. Anatomage virtual dissection (Koney et al., 2024) increased medical student engagement by 34% compared with cadaver-based practice, though adoption was constrained by equipment costs exceeding $72,000 per station. Yang et al.'s (2023) gamified Artificially Intelligent Educational Robots (AIERs) system elevated flow experiences (d = 1.21) and problem-solving skills in laboratory safety training, though the small sample size (N = 53) limits generalizability. Notably, 67% of VR studies (Peisachovich et al., 2021) neglected longitudinal effectiveness assessments beyond 8-week intervals.

4.1.5 Predictive analytics and educational data mining

Learning analytics platforms like i-Ntervene (Utamachant et al., 2023) demonstrated 83% behavioral improvement through automated programming feedback, yet struggled with complex algorithm comprehension (success rate = 41%). The Data-Driven Decision-Making (DDDM) model (Kaspi and Venkatraman, 2023) enabled assessment transformation with a 29% reduction in exam anxiety, though single-institution implementation restricts policy implications. Cross-study analysis reveals that predictive models achieve an average precision of 0.79 for dropout prediction but frequently omit socioeconomic moderators (present in only 11% of studies).
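To make the reported metric concrete, the sketch below shows how a precision figure for dropout prediction is computed from a confusion matrix. It is illustrative only: the confusion counts are invented, not drawn from any reviewed study, and were chosen so the precision matches the cross-study average cited above.

```python
# Hypothetical confusion counts for one cohort's dropout predictions.
# The numbers are invented for illustration; only the formulas are standard.
tp = 79   # students flagged at-risk who did drop out (true positives)
fp = 21   # students flagged at-risk who did not drop out (false positives)
fn = 15   # dropouts the model missed (false negatives)

precision = tp / (tp + fp)   # fraction of at-risk flags that were correct
recall = tp / (tp + fn)      # fraction of actual dropouts that were flagged

print(f"precision={precision:.2f}, recall={recall:.2f}")
```

A model can reach high precision while still missing many dropouts (low recall), which is one reason the review stresses validation through real-world intervention trials rather than precision alone.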

4.1.6 Infrastructure optimization systems

Network optimization algorithms enhance educational IoT ecosystems. Xue and Yi's (2022) Improved Energy Efficient Scalable Routing Algorithm (IEESRA) protocol improved Physical Education (PE) class data collection efficiency by 58% through adaptive routing, yet faced scalability challenges beyond 200-node deployments. Bryceson's (2019) agricultural Internet of Things (IoT) network achieved 92% crop monitoring accuracy but required a $145,000 initial investment, highlighting adoption barriers in resource-constrained institutions.

The reviewed studies confirm the diversification of AIHE, with adaptive learning systems, NLP, and immersive simulations being the most prominent. However, their engagement effects vary depending on pedagogical design and contextual constraints. Adaptive systems report high satisfaction levels, yet their generalizability is often limited by single-institution deployments and a lack of longitudinal validation. Moreover, emotional engagement is unevenly addressed. While facial recognition systems and sentiment-aware chatbots show promise in capturing affective responses, only 18% of studies evaluate sustained emotional or motivational outcomes. This reflects a methodological bias in current research: behavioral traces are over-represented, while subjective experiences are under-explored. Importantly, equity issues—such as data privacy, cultural appropriateness, and accessibility for students with disabilities—are rarely discussed, suggesting a need to embed ethical inquiry into future AI adoption studies.

Intelligent tutoring systems showed the strongest cognitive engagement gains, but required consistent use to achieve significant effects. Generative AI tools improved behavioral engagement in language learning but raised concerns about critical thinking suppression in graduate-level writing tasks. Immersive systems enhanced emotional engagement but faced scalability barriers due to equipment costs. Notably, contextual variability emerged: AI chatbots were more effective in blended classrooms than in fully online settings, while predictive analytics achieved higher accuracy in STEM disciplines than in the social sciences. These disparities highlight the need for discipline-specific AI integration strategies.

This study finds that different types of AI technologies influence student engagement through distinct mechanisms. For instance, intelligent tutoring systems enhance cognitive and behavioral engagement through personalized learning paths and real-time feedback (4.1.1). In contrast, natural language processing tools (4.1.2) are particularly effective in fostering interaction, generating content, and delivering feedback, though they may also raise concerns related to academic integrity. Immersive learning technologies (4.1.4) are especially effective in promoting emotional and experiential engagement. While AI demonstrates general potential to enhance student engagement, its specific applications and impacts vary significantly across academic disciplines (4.1.5). Moreover, the mode and effectiveness of AI integration may differ depending on students’ educational levels (4.1.6). Therefore, the selection and implementation of AI tools must be carefully aligned with their technical affordances and the targeted learning objectives.

4.2 RQ2: how do AI-driven technologies influence student engagement?

AI-driven technologies influence student engagement through multiple pedagogical mechanisms, including personalized learning, real-time feedback, gamification, resource accessibility, affective comfort, and contextual adaptability. Table 6 lists the pedagogical strategies in AIHE studies.

Table 6
www.frontiersin.org

Table 6. Pedagogical strategies in AIHE.

4.2.1 Personalized learning and adaptive systems

AI facilitates personalized learning by analyzing students’ behavioral and cognitive data to recommend individualized learning paths. These systems significantly improve motivation and active participation. For example, Shang (2024) implemented a cloud-based K-means clustering system for Chinese language instruction, resulting in increased student engagement. Similarly, Liu and Yushchik (2024) employed a Spark-based recommendation model to align content delivery with learner needs, enhancing both participation and knowledge acquisition. Adaptive platforms such as Smart Sparrow led to a 96% improvement in student engagement (Linden et al., 2019), while predictive models integrated with learning analytics further promoted collaboration in online environments (Ouyang et al., 2023).
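As a rough illustration of the clustering logic these systems share, the minimal sketch below groups learners by engagement features and maps each cluster to a learning path. This is not code from Shang (2024) or any reviewed platform; the two features (weekly logins, mean quiz score), the toy data, and the path labels are all hypothetical.

```python
# Minimal k-means sketch: group learners by two illustrative engagement
# features, then attach a learning-path recommendation to each cluster.
# All feature names and numbers are invented for illustration.

def kmeans(points, k, iters=20):
    centers = [list(p) for p in points[:k]]  # naive init: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each learner to the nearest center (squared distance)
            nearest = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])),
            )
            clusters[nearest].append(p)
        # recompute each center as the mean of its cluster
        centers = [
            [sum(dim) / len(c) for dim in zip(*c)] if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters

# (weekly_logins, mean_quiz_score) per student -- toy data
students = [(2, 55), (3, 60), (9, 88), (10, 92), (1, 40), (8, 85)]
centers, clusters = kmeans(students, k=2)
for center, members in zip(centers, clusters):
    path = "remedial scaffolding" if center[1] < 70 else "enrichment tasks"
    print(f"center={center} -> recommend {path} for {len(members)} students")
```

Real systems cluster far richer behavioral traces and revise assignments continuously; the point is only that path recommendation reduces to assigning each learner to a profile and mapping profiles to instructional treatments.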

4.2.2 Generative AI for interaction and feedback

The deployment of generative AI tools, particularly ChatGPT and DeepSeek, has significantly improved classroom interactivity and feedback dynamics. Role-play simulations embedded in curricula enhanced critical thinking and classroom participation (Stampfl et al., 2024), while case-based integration in legal English courses increased student confidence and fostered autonomous learning (Bretan, 2024). The i-Ntervene platform exemplifies real-time adaptive support, diagnosing learner difficulties and providing timely interventions (Utamachant et al., 2023).

4.2.3 Gamification and immersive technologies

AI-driven gamified learning environments, such as the Gamified Artificial Intelligence Educational Robot (GAIER) system built on the theory-driven gamification design model GAFCC (Goal, Access, Feedback, Challenge, Collaboration), enhanced student motivation and flow in safety education (Yang et al., 2023). Immersive experiences via virtual and augmented reality—ranging from entrepreneurship simulations (Sziegat, 2024) to virtual field explorations in science education (Loizzo et al., 2019)—increased student engagement by promoting authentic, experiential learning.

4.2.4 Immediate resource access and supplementation

AI tools streamline access to educational resources, particularly in digital and hybrid learning environments. ChatGPT, for instance, complements traditional instruction by offering immediate information retrieval, especially valued in asynchronous settings (Bretan, 2024; Yan and Fan, 2022). Interactive platforms like Nextbook support structured peer engagement (De Laet, 2022), and domain-specific digital tools (e.g., molecular modeling software) reduce cognitive and technological barriers to participation (Schuessler et al., 2024).

4.2.5 Affective factors: ethics, comfort, and acceptance

Beyond technical features, affective elements such as ethical awareness and learner comfort shape engagement outcomes. The use of ChatGPT has been associated with heightened ethical understanding, indirectly reducing academic dishonesty concerns and fostering greater learning initiative (Wood and Moss, 2024). Similarly, students using AI-powered diagnostic tools—such as Traditional Chinese Medicine (TCM)-identification chatbots—reported a 93.37% satisfaction rate, positively correlating with increased learning investment (He et al., 2024).

4.2.6 Contextual adaptation in special learning scenarios

AI technologies have proven essential in remote and special education contexts. During the COVID-19 pandemic, AI-enabled distance learning platforms maintained engagement in the absence of face-to-face instruction (Salloum et al., 2024). AI-facilitated microlearning (e.g., short videos, podcasts) also supported learning flexibility (Salas-Díaz and González-Bello, 2023). Moreover, adaptive systems optimized cognitive load for students with special education needs, enhancing the overall learning experience (Khasawneh and Khasawneh, 2024).

Despite these benefits, some studies caution against overreliance on AI tools, which may suppress creativity and critical thinking (Bretan, 2024). Structural limitations such as inconsistent internet connectivity (Cullinan et al., 2021) and unequal access to AI tools across regions (Rahman, 2021) highlight the need for equitable policy and infrastructure support.

This review confirms that AI influences all three engagement domains—cognitive, emotional, and behavioral—yet the strength and direction of influence are contingent upon the type of AI and measurement tools. Cognitive engagement is most consistently reported, often operationalized through academic performance, task completion rates, or pre/post-test scores. Emotional engagement is more challenging to quantify. While NLP-based sentiment tracking or self-report scales are used, their reliability remains uncertain due to social desirability bias and tool opacity. Behavioral engagement is typically captured through digital activity logs, which may conflate task compliance with genuine participation. Notably, only 14% of studies employed multimodal engagement frameworks, limiting the analytical granularity needed to validate engagement claims across diverse educational settings.
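The log-based operationalization described above can be pictured as a weighted event count. The sketch below is hypothetical: the event names and weights are invented, and the studies reviewed weight and normalize log data in widely varying ways.

```python
# Hypothetical behavioral-engagement index from LMS activity logs.
# Event names and weights are invented for illustration.
from collections import Counter

WEIGHTS = {"login": 1, "resource_view": 2, "forum_post": 5, "quiz_attempt": 4}

def engagement_score(events):
    """Weighted sum of logged events; a crude compliance proxy."""
    counts = Counter(events)
    return sum(WEIGHTS.get(e, 0) * n for e, n in counts.items())

log = ["login", "resource_view", "resource_view", "forum_post", "quiz_attempt"]
print(engagement_score(log))  # 1 + 2*2 + 5 + 4 = 14
```

Such an index captures task compliance rather than genuine participation, which is exactly the conflation the review cautions against when activity logs stand in for behavioral engagement.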

4.3 RQ3: what role do teaching methods play between AI technologies and student engagement?

4.3.1 Alignment of pedagogical design with AI affordances

The degree to which teaching methods are aligned with the affordances of AI technologies is a key determinant of student engagement. The effectiveness of AI tools is moderated by instructional design along three principal pathways: interaction amplification, feedback loop optimization, and differentiated scaffolding. These pathways explain how pedagogical choices transform AI affordances into learner-centered engagement mechanisms. For instance, He et al. (2024) demonstrated that incorporating a TCM constitution recognition robot into nursing curricula enhanced both student participation and self-efficacy. Similarly, Bretan (2024) reported that legal English writing tasks mediated by ChatGPT, when embedded in English for Specific Academic Purposes (ESAP) courses, fostered collaborative learning through group-based verification of legal terms and template generation, thereby increasing classroom interactivity.

Moreover, AI-generated outputs are most effective when coupled with reflective or collaborative pedagogical elements. Mende et al. (2024) emphasized that generative AI tools used in tasks such as code debugging or writing improvement require structured peer discussion and instructor feedback to prevent superficial learning and dependency on automation. Additionally, the effectiveness of personalized learning through AI hinges on the implementation of differentiated teaching strategies. Liu and Yushchik (2024) found that AI-driven learning pathways tailored by student personality profiles led to higher academic performance. However, excessive reliance on AI monitoring weakened teachers’ real-time perception of engagement, as indicated by lower instructor ratings (3.8/5) compared with student self-assessments (4.5/5). This is echoed in Khasawneh and Khasawneh's (2024) work on special education, which argues that adaptive AI technologies must be embedded within tiered instruction frameworks to prevent cognitive overload.

4.3.2 Modulation of AI application and ethical boundaries

Teaching methods serve a regulatory function by modulating when, how, and to what extent AI tools are utilized in classroom settings. Blended learning environments, in particular, exemplify the necessity of pedagogical control in integrating AI. Tian and Song (2024) demonstrated that students’ engagement in blended courses was significantly influenced by teachers’ preparedness and attitudinal alignment with AI tools. Effective use of AI required integration into specific interactional stages—such as AI-assisted pre-class preparation followed by in-class discussion.

The design-ethics-support (DES) model proposed by Aldhafeeri and Alotaibi (2023) further underscores that AI should be positioned to support either collaboration (e.g., virtual simulations) or self-directed exploration (e.g., personalized content curation), rather than dominate the learning process. Bretan (2024) further noted that without pedagogical scaffolding—such as using AI to support structured legal debates—students might revert to superficial information retrieval. Similarly, Wood and Moss (2024) called for embedding ethical inquiry into AI-supported instruction by encouraging learners to compare AI-generated texts with peer-reviewed sources, thus promoting critical awareness and reducing the risk of misuse.

4.3.3 Pedagogical innovation as a catalyst for effective AI integration

While current innovations focus on embedding AI into existing instructional models, emerging trajectories suggest a paradigm shift wherein AI may assume co-agency roles in the instructional process. This entails moving beyond the use of AI as a support tool toward reconfiguring teaching itself—where AI systems dynamically adapt, deliver, assess, and even co-design content alongside human educators. The impact of AI on student engagement is ultimately constrained or enhanced by the innovative capacity of teaching methods and educator competencies. Abildinova et al. (2024) found that educators trained in active pedagogies such as CBL and PBL were better positioned to utilize AI in designing collaborative and analytical learning tasks, with 99.05% of surveyed teachers affirming these methods’ efficacy in promoting engagement. Likewise, Sekli et al. (2024) emphasized the importance of teachers’ “AI literacy”—including skills in prompt engineering and output evaluation—as a precondition for transforming AI tools into pedagogically meaningful resources.

Furthermore, innovative instructional settings can significantly broaden the applicability of AI in the classroom. Stampfl et al. (2024) illustrated how role-playing games integrating ChatGPT as a simulated business consultant enhanced students’ strategic thinking. In a similar vein, Salloum et al. (2024) reported that combining inquiry-based instruction with 3D holographic visualizations heightened students’ sense of immersion in science education. These examples highlight the centrality of pedagogical creativity in realizing the transformative potential of AI technologies. In the long term, AI technologies may not merely support pedagogy, but co-define the epistemic structures of learning. A shift toward AI-human co-agency teaching models is foreseeable, where algorithmic systems personalize content flows in real time, evaluate student emotional and cognitive states, and initiate context-aware scaffolding—leaving human instructors to focus on ethical guidance, value formation, and social–emotional mentoring.

4.3.4 Challenges and limitations in pedagogical mediation of AI

Despite the promising synergy between AI technologies and pedagogy, several challenges persist. One critical issue is the risk of cognitive overload and pedagogical detachment when digital tasks are poorly scaffolded. Schuessler et al. (2024) warned that without phased instructional guidance, the cognitive demands imposed by AI-supported activities may exceed learners’ processing capacity. Zhao et al. (2024) found that unstructured use of generative AI tools such as ChatGPT led to decreased levels of active thinking, with students prioritizing convenience over cognitive engagement.

In addition, structural barriers, particularly those related to infrastructure and teacher training, continue to limit the pedagogical integration of AI. Rahman (2021) and Cullinan et al. (2021) both emphasized the persistent gap in access to reliable internet, appropriate devices, and ongoing professional development. These limitations inhibit the full realization of AI’s benefits, reinforcing the need for systemic investment and capacity-building initiatives.

Table 7 reveals that 86% of AI-based teaching methods enhance the learning experience through two primary approaches: first, by tailoring learning paths based on individual student profiles (such as through automated learning data analysis), and second, by creating immersive learning environments using technologies like virtual reality. These AI-supported pedagogies mainly operate through three mechanisms: 37% reduce cognitive load by breaking down complex content with intelligent systems; 29% enhance the authenticity of teacher-student interaction, for example through AI-simulated online discussions; and 34% offer immediate feedback, helping students identify strengths and areas for improvement.


Table 7. Teaching methods in AIHE.

Notably, after 2023, the use of generative tools such as ChatGPT surged by 42%, transforming three critical aspects of teaching: automated generation of assessment tasks, AI-mediated communication via chatbots, and flexible student evaluation using intelligent systems. These findings suggest that teaching methods function as “converters,” translating the technical advantages of AI into tangible learning outcomes. At the same time, they underscore the importance of adapting pedagogical strategies to align with the evolving capabilities of next-generation AI tools.

4.4 Limitations and research gaps

Despite the growing body of literature on AIHE, several methodological, pedagogical, and infrastructural limitations persist, which constrain the field’s empirical robustness and practical applicability. Current studies exhibit methodological fragmentation, particularly in evaluating AI-driven pedagogical interventions. While innovations like production-oriented approach (POA)-based blended learning demonstrate short-term improvements in writing performance and classroom participation (Zhao et al., 2024), the absence of standardized frameworks for longitudinal assessment undermines claims about sustained engagement. Most research relies on narrow quantitative metrics (e.g., test scores) or self-reported data, neglecting multidimensional analyses of cognitive, affective, and behavioral engagement (Khasawneh and Khasawneh, 2024; Yang et al., 2023). Geographical and cultural concentration also weakens the generalizability of findings. Over 75% of AIHE studies focus on single-region contexts, leaving open questions about the cross-cultural validity of engagement strategies (Alghamdi et al., 2024; Liu and Yushchik, 2024). Consequently, cross-cultural applicability remains underexplored. For instance, ethical concerns around algorithmic bias in adaptive learning tools remain disproportionately examined in Western contexts, with minimal attention to Global South challenges like digital infrastructure disparities (Stampfl et al., 2024).

Pedagogically, the mediating role of teaching methods in AI-enhanced engagement remains underexplored. Although blended learning models show promise when combined with AI tools like ChatGPT (Zhao et al., 2024), few studies dissect how instructor-led scaffolding interacts with AI-generated feedback to modulate learning outcomes (Manning et al., 2025). Similarly, while adaptive simulations improve STEM engagement through immersive experiences (Koney et al., 2024), their integration with collaborative pedagogies (e.g., peer quizzing) remains empirically untested (Kiron et al., 2024). Technical and infrastructural challenges continue to obstruct widespread adoption. Barriers include limited bandwidth, non-integrated learning management systems, and significant variability in educators’ AI literacy (Aldhafeeri and Alotaibi, 2023; Latygina et al., 2024). These challenges disproportionately affect marginalized groups, as only 12% of AIHE trials include accommodations for neurodiverse learners or students with disabilities (Khasawneh and Khasawneh, 2024). Finally, longitudinal insights remain particularly scarce, with 90% of studies adopting cross-sectional designs that fail to capture AI’s long-term impacts on burnout, career readiness, or self-regulated learning (Tuma et al., 2021; Yan and Fan, 2022). This significantly limits the field’s ability to inform sustainable educational policy and practice.

4.5 Future research

To address these gaps, future research must prioritize transdisciplinary frameworks that bridge AI innovation with pedagogical theory. Mixed-methods approaches—combining multimodal learning analytics (e.g., eye-tracking, sentiment analysis) with qualitative narratives—could unravel the complex interplay between AI tools, teaching methods, and engagement outcomes (McGuinness and Fulton, 2019). Rigorous experimental designs, including randomized controlled trials comparing AI-enhanced flipped classrooms against traditional instruction, are essential to isolate the mediating effects of pedagogical strategies (Ouyang et al., 2023). Cross-cultural collaborations should be prioritized to examine how collectivist versus individualist cultural norms shape AI adoption in diverse educational ecosystems (Suriagiri et al., 2022). For example, comparative studies of ChatGPT usage in Asian versus European universities could inform culturally responsive AI design principles.

Infrastructure development must parallel pedagogical innovation. Open-source AI platforms compatible with low-bandwidth environments could democratize access to engagement-enhancing tools while addressing equity concerns (Cao, 2023). Longitudinal studies tracking underrepresented student cohorts (e.g., rural learners, non-traditional adults) would clarify AI’s role in mitigating achievement gaps. Industry partnerships may accelerate the translation of emerging technologies—such as 3D holograms for science education—into classroom-ready solutions (Salloum et al., 2024). Concurrently, teacher training programs should emphasize AI co-design principles, empowering educators to align tools with curricular goals rather than retrofitting pedagogy to technological constraints (Abildinova et al., 2024).

Ethical and policy considerations must underpin all advancements. Multistakeholder efforts to establish global standards for AI transparency, data governance, and algorithmic accountability are critical to prevent the entrenchment of biases (Ilieva et al., 2023). Policymakers should incentivize institutional investments in AI literacy programs while mandating inclusive design practices for educational technologies (Sekli et al., 2024).

The realization of AIHE’s transformative potential demands a paradigm shift from fragmented, tool-centric experimentation to holistic, pedagogy-driven innovation (Wang et al., 2025). By embedding AI within robust theoretical frameworks, fostering cross-sector collaborations, and centering equity in both research and implementation, scholars can ensure these technologies augment—rather than replace—the human elements of teaching and learning. Future progress hinges on balancing technological possibilities with ethical imperatives, ultimately creating AI-enhanced ecosystems that prioritize sustainable engagement, inclusivity, and pedagogical integrity (Wang et al., 2024b).

To move toward globally relevant AI pedagogy, we recommend that future research adopt a cross-cultural comparative lens, integrating insights from education systems with varied pedagogical traditions and technological capacities. As education systems worldwide grapple with the integration of generative AI, the future of teaching may hinge not on whether AI can replace teachers, but on how humans and machines can co-construct meaningful learning experiences.

4.6 Contributions

This study draws on the Technology Acceptance Model (TAM), Cognitive Load Theory (CLT), and Self-Determination Theory (SDT) to analyze how AI technologies affect student engagement. TAM explains how perceived ease of use influences the adoption of AI tools in higher education; CLT focuses on how AI reduces cognitive overload by optimizing content delivery and learning pathways; and SDT interprets how AI supports intrinsic motivation and engagement by addressing basic psychological needs. However, applying these theories to AI integration also reveals certain tensions. Through the pedagogical mediation mechanisms proposed in the PMAISE model, this study provides empirical insights into how these theories can interact and complement one another. For instance, structured feedback offers clear guidance and immediate response, which not only supports CLT’s emphasis on effective information processing but also enhances students’ sense of competence, as highlighted in SDT. Likewise, interactive scaffolding, by encouraging peer collaboration and teacher–learner dialog, helps distribute cognitive load (CLT) and enhances relatedness, as emphasized in SDT. These findings suggest that AI adoption (TAM) is only a starting point—its real value lies in how pedagogical design leverages AI to optimize learning efficiency (CLT) and meet students’ deeper psychological needs (SDT), thereby sustaining engagement. If AI tools are not effectively integrated with pedagogical mediation, their long-term impact on engagement may be limited, regardless of how easy they are to adopt. This highlights the importance of moving beyond a technology-centric lens toward a more integrative perspective—one that centers on pedagogy as the key bridge between technological adoption and meaningful learning outcomes.

5 Conclusion

This systematic review critically examined the role of artificial intelligence in higher education by synthesizing empirical evidence from 73 peer-reviewed studies published over the past decade. Following the PRISMA framework, the review identified and analyzed literature sourced from Scopus and WOS using defined search criteria centered on AI, higher education, teaching methods, and student engagement. A key contribution of this review is the inductive development of the PMAISE model. This model emerged from an iterative thematic synthesis of findings across the 73 studies, combining deductive coding with inductively identified engagement mechanisms. PMAISE frames pedagogical strategies as active mediators that translate AI affordances into student engagement outcomes through instructional scaffolding, interactional feedback, differentiation, and ethical control. This model addresses conceptual fragmentation in current AIHE research by offering an integrated framework that links AI technologies, teaching methods, and engagement pathways.

The findings consistently indicate that AI technologies are most effective in enhancing student engagement when integrated with well-designed, interactive instructional strategies such as flipped classrooms and project-based learning. However, the effectiveness of AI is not determined solely by the technology itself; it is strongly contingent upon pedagogical design, teacher competence, and institutional support.

This review offers three principal contributions. First, it synthesizes a decade of AIHE literature to map technological trends and pedagogical patterns, offering researchers and practitioners a consolidated knowledge base. Second, it introduces the PMAISE framework as a theoretically informed model to guide AI-pedagogy integration in diverse contexts. Third, the review identifies actionable implications: educators are encouraged to align AI tools with active learning strategies; institutional leaders should invest in AI literacy and infrastructure; and researchers should prioritize mixed-methods evaluations and longitudinal studies to assess sustained engagement outcomes.

In conclusion, this review foregrounds the central role of pedagogy in realizing the engagement potential of AI in higher education. It cautions against a tool-driven logic that marginalizes instructional design, teacher agency, and student diversity. As AI adoption accelerates, issues such as algorithmic bias, surveillance risks, and unequal access to infrastructure must be addressed through inclusive policies and ethical safeguards. The future of AIHE lies not in replacing educators, but in fostering human–AI partnerships that center equity, interactivity, and pedagogical integrity.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.

Author contributions

DYL: Project administration, Investigation, Methodology, Visualization, Writing – original draft, Resources, Writing – review & editing, Conceptualization, Validation, Formal analysis. SW: Writing – original draft, Investigation, Resources, Writing – review & editing, Visualization, Software, Formal analysis, Project administration, Conceptualization, Methodology, Supervision. XTL: Investigation, Funding acquisition, Writing – review & editing, Formal analysis, Methodology. SMR: Writing – original draft, Writing – review & editing, Methodology, Supervision.

Funding

The author(s) declare that financial support was received for the research and/or publication of this article. This study received financial support from the 2024 Humanities and Social Sciences Program at Jiaying University (Project number: 2024SZY01), titled “The Mechanism and Empirical Study of the Impact of Proactive Personality on College Students’ National Identity.”

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The authors declare that no generative AI was used in the creation of this manuscript.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.


Keywords: artificial intelligence, higher education, student engagement, teaching methods, pedagogical mediation, education quality

Citation: Long DY, Wang S, Md Rashid S and Lu XT (2026) Artificial intelligence in higher education: a systematic review of its impact on student engagement and the mediating role of teaching methods. Front. Educ. 10:1648661. doi: 10.3389/feduc.2025.1648661

Received: 20 June 2025; Accepted: 10 September 2025;
Published: 02 February 2026.

Edited by:

Nisar Ahmed Dahri, University of Technology Malaysia, Malaysia

Reviewed by:

Cai Lianyu, Zhejiang Normal University, China
Aniella Vieriu, University Politehnica Bucharest, Romania

Copyright © 2026 Long, Wang, Md Rashid and Lu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Shuai Wang, 18035908868@163.com