Abstract
AI-driven personalised learning is increasingly shaping mathematics education, yet evidence remains fragmented regarding its role in developing learners’ mathematical problem-solving skills. This systematic review examined how AI-driven personalised learning influences students’ mathematical problem-solving skills. A structured search of recent empirical studies (2019–2025) identified 20 eligible investigations, which were analysed thematically. Findings show that AI tools, such as adaptive learning systems, intelligent tutoring systems, and chatbots, can enhance mathematical problem solving by providing tailored feedback, adaptive challenges, and scaffolded support aligned with learners’ needs. These benefits were observed across primary, secondary, and tertiary settings, contributing to enhanced conceptual understanding, improved strategic reasoning, and increased learner engagement. At the same time, the review highlights notable variation in effectiveness: some studies reported over-reliance on AI hints, misaligned adaptivity, platform complexity, and limited teacher readiness, all of which constrained learners’ development of independent problem-solving skills. Infrastructure disparities and data privacy concerns also emerged as persistent challenges. Despite the growing number of studies on AI in mathematics education, limited systematic evidence focuses specifically on AI-driven personalised learning and its influence on mathematical problem-solving processes. This review addresses that gap by synthesising recent empirical studies and identifying the key mechanisms through which AI personalisation supports or constrains learners’ problem-solving development. Overall, the review suggests that AI-driven personalised learning holds meaningful potential for strengthening mathematics instruction when grounded in sound pedagogy and supported by adequate technological and instructional resources.
This synthesis contributes evidence-based insights for educators and policymakers aiming to integrate AI responsibly and effectively in mathematics education.
Introduction
Mathematics remains a foundational discipline for academic progress, informed decision-making, and participation in a technologically advanced society (Opesemowo and Ndlovu, 2024). Yet persistent challenges in learners’ mathematical problem-solving abilities continue to undermine achievement across educational levels (Pane et al., 2017). In this review, mathematical problem solving refers to learners’ ability to understand mathematical tasks, select and apply appropriate strategies, monitor progress, and reflect on solutions. It includes both cognitive processes (conceptual understanding, reasoning, strategy selection) and metacognitive processes (planning, monitoring, evaluating), consistent with Polya’s and Schoenfeld’s models (Polya, 1945; Schoenfeld, 1985). One-size-fits-all instructional models often fail to accommodate individual differences in prior knowledge, strategy use, learning pace, and affective factors, which can contribute to disengagement, mathematics anxiety, and conceptual gaps (Hu, 2024; D’Mello et al., 2012). These enduring challenges have prompted growing interest in artificial intelligence (AI) as a pedagogical tool capable of offering personalised, adaptive, and responsive mathematical learning environments.
AI systems, including intelligent tutoring systems, chatbots, learning analytics dashboards, and adaptive learning platforms, are increasingly recognised as tools with the potential to deliver customised feedback, scaffold learners within productive cognitive zones, and support strategy development during problem-solving (Hwang and Tu, 2021; Hidayat et al., 2022). Empirical evidence suggests that AI-driven learning can enhance students’ persistence, conceptual understanding, and overall mathematical performance (Li et al., 2024; Holmes et al., 2019). In this review, personalised learning refers to instructional approaches that adapt learning experiences to learners’ needs, pace, and progress. AI-driven personalised learning refers specifically to personalisation enabled by artificial intelligence techniques such as machine learning, intelligent tutoring algorithms, natural language processing, and large language models (LLMs) that dynamically generate feedback, adapt task difficulty, or provide interactive guidance. In contrast, non-AI personalised learning may rely on rule-based branching systems or static learning analytics dashboards that do not autonomously adapt based on intelligent algorithms. This distinction is important because AI-driven systems may influence learners’ cognitive and metacognitive problem-solving processes differently than traditional personalisation models. However, to understand the state of knowledge, it is essential to consider what existing systematic reviews and meta-analyses have already established.
Several high-quality reviews have synthesised broad trends in AI-enhanced education. For example, Chen et al. (2020) identified conceptual and theoretical gaps in early AI-in-education research, while Zawacki-Richter et al. (2019) examined the scope of AI applications in higher education. Li et al. (2024) reviewed AI learning task design, emphasising opportunities and challenges in K–12 AI instruction. In mathematics education specifically, Hidayat et al. (2022) conducted a systematic review of AI use in mathematics instruction, and Hwang and Tu (2021) provided a bibliometric and systematic overview of research trends. In addition, meta-analyses such as Ma et al. (2014) demonstrated the effectiveness of intelligent tutoring systems, and Fryer et al. (2020) synthesised the impact of AI chatbots across educational contexts. Reviews of adaptive learning systems (Xie et al., 2019) and emerging analyses of AI in STEM education (Kasneci et al., 2023) further highlight AI’s instructional potential.
However, these reviews have several limitations in relation to the present study. Chen et al. (2020) primarily examined conceptual and theoretical gaps in AI-in-education research but did not focus on mathematics-specific problem-solving outcomes. Zawacki-Richter et al. (2019) concentrated on AI applications in higher education broadly, with limited attention to personalised learning mechanisms and their cognitive influence on mathematical reasoning. Similarly, Hidayat et al. (2022) and Hwang and Tu (2021) addressed AI trends in mathematics education but focused mainly on tool classification and research patterns rather than analysing how personalisation features such as scaffolding, adaptivity, and feedback shape learners’ problem-solving processes. Meta-analyses such as Ma et al. (2014) and Fryer et al. (2020) confirmed the effectiveness of intelligent tutoring systems and chatbots but did not isolate AI-driven personalised learning systems or explore their impact on problem-solving autonomy and cognitive development. These gaps justify the need for a focused synthesis that links AI personalisation mechanisms directly to mathematical problem-solving development.
While these reviews contribute significantly to understanding AI’s capabilities, none directly examine the specific relationship between AI-driven personalised learning and the development of mathematical problem-solving skills. Existing reviews tend to (a) focus broadly on AI in education, (b) emphasise general mathematics learning outcomes rather than problem-solving processes, (c) concentrate on technological classifications rather than cognitive mechanisms, or (d) explore AI tools without analysing how personalisation features such as adaptivity, scaffolding, feedback type, and metacognitive support influence mathematical reasoning and strategy use.
Thus, a clear gap remains concerning how AI-driven personalised learning environments support, constrain, or transform mathematical problem-solving. This area requires synthesis across empirical studies grounded in cognitive, metacognitive, and pedagogical perspectives. Addressing this gap is particularly important because problem-solving in mathematics involves complex cognitive processes, including conceptual restructuring, strategic planning, monitoring, and reflection, which may be differentially supported by AI systems (Kasneci et al., 2023; Xie et al., 2019). Given this landscape, the present systematic review synthesises empirical evidence on the role of AI-driven personalised learning in enhancing students’ mathematical problem-solving skills. Specifically, it examines how personalised AI systems provide adaptive feedback, scaffold learning processes, and offer tailored instructional pathways to support diverse learners. It also evaluates which AI design features, such as the level of adaptivity, scaffolding mechanisms, task sequencing, and feedback modalities, most effectively influence mathematical problem-solving outcomes across educational contexts. In doing so, the review seeks to answer the following key questions:
What is the impact of AI-driven personalized learning on students’ mathematical problem-solving skills?
How do AI-personalized learning environments affect different aspects of problem-solving ability in mathematics?
What AI-driven personalized learning features influence the effectiveness of improving mathematics problem-solving skills?
Theoretical framework
This review adopts a unified theoretical framework that integrates established learning theories with mathematics-specific problem-solving models to conceptualise how AI-driven personalised learning supports, shapes, or constrains mathematical problem-solving. While Constructivist Learning Theory (Piaget, 1936), Vygotsky’s Zone of Proximal Development (Vygotsky, 1978), and Adaptive Learning Theory (emerging from educational technology research in the 1990s–2000s) offer foundational insights into how learners acquire knowledge through exploration, scaffolding, and personalised instruction, these theories alone are insufficient for analysing the cognitive and metacognitive complexities of mathematical problem-solving. Therefore, this framework is strengthened by incorporating discipline-specific models—namely Polya’s problem-solving heuristics (Polya, 1945), Schoenfeld’s cognitive-metacognitive problem-solving theory (Schoenfeld, 1985), and APOS Theory developed by Dubinsky and colleagues in the 1990s—which together provide a coherent and field-appropriate conceptual lens.
Constructivist Learning Theory (Piaget, 1936) frames mathematical problem-solving as an active, meaning-making process in which learners construct understanding through engagement with tasks, representations, and feedback. AI-based learning environments that offer interactive content, exploratory tasks, and opportunities for iterative refinement align with this tradition by allowing learners to construct knowledge through purposeful engagement. Vygotsky’s (1978) ZPD further explains how learners progress when provided with scaffolded support that enables them to perform tasks beyond their independent abilities. In AI-driven contexts, scaffolding appears through real-time hints, adaptive prompts, worked examples, and automated difficulty adjustments. Adaptive Learning Theory, although not attributed to a single theorist, evolved through the work of educational technologists such as Corbett and Anderson (1995) and Shute and Zapata-Rivera (2012), emphasising how technology personalises instruction using learner data. These theories collectively justify the pedagogical value of AI while also illuminating risks such as overscaffolding, shallow engagement, misaligned feedback, and cognitive dependence on AI systems.
In practical AI-driven learning contexts, these risks are increasingly visible in generative AI systems such as ChatGPT, where learners may receive complete solution pathways that reduce cognitive effort and productive struggle. For instance, within Piaget’s constructivist view, AI tools should support exploration and concept formation; however, when students rely on AI-generated answers, they may bypass active construction of mathematical meaning. Similarly, within Vygotsky’s ZPD, AI-generated hints and prompts can function as scaffolding, but excessive prompting may prevent learners from internalising strategies independently. From Schoenfeld’s perspective, AI tools should strengthen learners’ metacognitive monitoring and strategic decision-making, yet generative AI may unintentionally encourage shortcut-solving behaviours if not moderated. Therefore, AI pedagogy must be intentionally designed to ensure that systems like chatbots foster reasoning, reflection, and autonomy rather than dependency.
To capture the specialised demands of mathematical problem-solving, this study draws on additional models. Polya’s (1945) four-stage model (understanding the problem, devising a plan, carrying out the plan, and reflecting) explains the heuristic processes involved in solving mathematical tasks. Schoenfeld (1985) identifies the importance of strategic decision-making, control, beliefs, and metacognition in learners’ problem-solving behaviour, offering constructs for analysing whether AI tools genuinely foster deeper reasoning or merely procedural success. APOS Theory, developed by Dubinsky and colleagues in the 1990s, explains how mathematical concepts evolve from actions to processes, objects, and structured schemas. APOS provides a way of interpreting whether AI systems support (or impede) conceptual progression in mathematics.
Integrating these theories yields a unified conceptual lens that views AI-driven personalised learning as a mediated learning space where learners interact with dynamic, data-informed support systems designed to optimize problem-solving. Through this lens, AI tools are analysed based on their ability to: (a) facilitate exploratory learning and meaning-making, (b) deliver appropriate scaffolding aligned with learners’ developmental zones, (c) adjust instruction responsively through adaptivity, (d) strengthen heuristic and strategic processes, and (e) support conceptual reorganisation and schema building.
This integrated framework explains not only why AI-based learning environments may improve mathematical problem-solving but also why they may produce mixed or negative outcomes. For instance, excessive scaffolding may undermine strategic autonomy (Schoenfeld), poorly calibrated adaptivity may frustrate learners or distort progression (Shute; Anderson), and AI-generated explanations may inhibit conceptual transitions (APOS Theory). Thus, the framework explicitly accommodates interpretations of both constructive and problematic learning effects.
Importantly, this theoretical integration informed the interpretation of findings across the included studies. The framework provided a conceptual lens for analysing how AI-driven personalised learning environments support mathematical problem-solving. It guided the discussion of patterns related to scaffolding, adaptivity, feedback mechanisms, learner autonomy, and metacognitive engagement. This ensured strong theoretical coherence between the conceptual framework and the interpretation of the synthesised evidence.
Methodology
This study adopted a systematic review approach to identify, evaluate, and synthesise research evidence on the effectiveness of AI-driven personalised learning in enhancing students’ mathematical problem-solving skills, adhering to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to ensure methodological transparency and rigor (Aromataris and Munn, 2020).
Search strategy
A systematic search was conducted in Scopus and Web of Science to identify relevant empirical studies published between January 2019 and June 2025. The database search was conducted on 29 June 2025. Scopus and Web of Science were selected because they are widely recognised as comprehensive citation databases with strong peer-reviewed journal coverage and robust indexing standards, making them suitable sources for systematic review searches (Gusenbauer and Haddaway, 2020; Mongeon and Paul-Hus, 2016). Although other specialised databases such as ERIC, PsycINFO, and IEEE Xplore may contain relevant literature in education, psychology, and computer science, Scopus and Web of Science were selected because of their broad interdisciplinary coverage and strong indexing of high-impact journals across these fields. Using these two comprehensive databases allowed the review to capture peer-reviewed studies from multiple disciplinary perspectives while maintaining a transparent and replicable search strategy.
Search terms were applied to the title, abstract, and keywords fields using Boolean operators. To enhance transparency and reproducibility of the search process, the complete database-specific search strings used in Scopus and Web of Science are presented in Table 1. The search process was conducted in two stages: an initial search was performed to identify relevant studies, followed by refinement of keywords and filters to improve precision and remove irrelevant results. Searches were limited to peer-reviewed journal articles written in English and published between 2019 and 2025. Database-specific filters and advanced search functions were used to enhance relevance and ensure that only studies directly addressing AI-driven personalised learning in mathematics were retrieved.
Table 1
| Database | Search string |
|---|---|
| Scopus | TITLE-ABS-KEY (“artificial intelligence” OR “AI” OR “intelligent tutoring system” OR “ITS” OR “adaptive learning” OR “machine learning” OR “learning analytics” OR “generative AI” OR “large language model” OR “LLM” OR “ChatGPT” OR “AI chatbot” OR “AI tutoring system”) AND TITLE-ABS-KEY (“mathematics education” OR “math learning” OR “mathematics learning” OR “math problem solving” OR “mathematical problem solving” OR “math education” OR “mathematics teaching”) AND TITLE-ABS-KEY (“problem-solving skills” OR “problem solving ability” OR “student achievement” OR “academic performance” OR “learning outcomes” OR “mathematics achievement” OR “problem-solving competence”) |
| Web of Science | TS = (“artificial intelligence” OR “AI” OR “intelligent tutoring system” OR “ITS” OR “adaptive learning” OR “machine learning” OR “learning analytics” OR “generative AI” OR “large language model” OR “LLM” OR “ChatGPT” OR “AI chatbot” OR “AI tutoring system”) AND TS = (“mathematics education” OR “math learning” OR “mathematics learning” OR “math problem solving” OR “mathematical problem solving” OR “math education” OR “mathematics teaching”) AND TS = (“problem-solving skills” OR “problem solving ability” OR “student achievement” OR “academic performance” OR “learning outcomes” OR “mathematics achievement” OR “problem-solving competence”) |
Database search strings used in the systematic review.
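The Boolean structure of these strings can be made explicit programmatically. The following sketch is illustrative only (no such script is described in the review itself): it rebuilds the Scopus string from the three concept groups, which may help readers adapt the search to other databases.

```python
# Illustrative sketch: rebuild the Scopus search string of Table 1 from its
# three concept groups (AI tools, mathematics context, problem-solving outcomes).
# This script is not part of the original review methodology.

AI_TERMS = [
    "artificial intelligence", "AI", "intelligent tutoring system", "ITS",
    "adaptive learning", "machine learning", "learning analytics",
    "generative AI", "large language model", "LLM", "ChatGPT",
    "AI chatbot", "AI tutoring system",
]
MATH_TERMS = [
    "mathematics education", "math learning", "mathematics learning",
    "math problem solving", "mathematical problem solving",
    "math education", "mathematics teaching",
]
OUTCOME_TERMS = [
    "problem-solving skills", "problem solving ability", "student achievement",
    "academic performance", "learning outcomes", "mathematics achievement",
    "problem-solving competence",
]

def or_block(terms):
    # Quote each phrase and join with OR, as in the Scopus advanced search.
    return " OR ".join(f'"{t}"' for t in terms)

def scopus_query():
    # Combine the three OR-blocks with AND across title/abstract/keyword fields.
    groups = (AI_TERMS, MATH_TERMS, OUTCOME_TERMS)
    return " AND ".join(f"TITLE-ABS-KEY ({or_block(g)})" for g in groups)

print(scopus_query())
```

The same three term lists can be reused for the Web of Science string by swapping the `TITLE-ABS-KEY (…)` wrapper for `TS = (…)`.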
Eligibility criteria
This review included empirical studies published in peer-reviewed journals from 2019 to 2025 that investigated AI-driven personalised learning interventions specifically within mathematics education. Eligible studies reported outcomes related to students’ mathematical problem-solving skills, achievement, or engagement and involved learners across any educational level, from primary to tertiary. Only articles published in English were considered. Studies were excluded if they were conference abstracts, dissertations, editorials, opinion pieces, or other non-peer-reviewed works. Additionally, studies were excluded if they did not specifically address AI personalisation in mathematics education or problem-solving skills, were not available in full text, or focused solely on general AI applications in education without direct reference to mathematics problem-solving.
Study selection
A total of 761 records were identified through database searches: 614 from Scopus and 147 from Web of Science. The records were exported and merged using the Bibliometrix package in RStudio, which facilitated systematic duplicate detection and removal (Aria and Cuccurullo, 2017). After 123 duplicate records were removed, 638 records remained for title and abstract screening, during which 50 records were excluded as irrelevant, leaving 588 reports sought for retrieval. Of these, 292 reports could not be retrieved because full texts were unavailable through institutional database access, publisher restrictions, or incomplete indexing records at the time of retrieval, resulting in 296 reports assessed for eligibility. Following full-text screening, 276 studies were excluded because they did not meet the inclusion criteria: 120 did not focus on AI-driven interventions in mathematics education, and 156 did not address AI-driven personalised learning and mathematical problem-solving. Ultimately, 20 studies met the eligibility criteria and were included in the systematic review (see Table 2). The selection process is summarised in the PRISMA flow diagram (Figure 1).
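The screening arithmetic reported above can be tallied in a few lines. This sketch simply re-expresses the PRISMA counts given in the text and is not part of the review’s workflow (the actual deduplication was performed with Bibliometrix in R):

```python
# PRISMA flow tally using the counts reported in the text (illustrative only).
identified = {"Scopus": 614, "Web of Science": 147}

total_identified = sum(identified.values())   # 761 records identified
after_dedup = total_identified - 123          # 638 screened after duplicate removal
sought = after_dedup - 50                     # 588 reports sought for retrieval
assessed = sought - 292                       # 296 full texts assessed for eligibility
excluded_full_text = 120 + 156                # 276 excluded at full-text screening
included = assessed - excluded_full_text      # 20 studies included in the review

print(f"identified={total_identified}, screened={after_dedup}, "
      f"sought={sought}, assessed={assessed}, included={included}")
```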
Table 2
| Author | Country | Study focus/objectives | Study design | Sample/educational context | AI intervention tool | AI effectiveness | Data collection tool | Key findings | Recommendations | Challenges and limitations |
|---|---|---|---|---|---|---|---|---|---|---|
| Lintner (2024) | USA | Critical thinking and AI in middle school mathematics | Case study | 30 middle school students | AI-based critical thinking tools | Improved critical thinking and problem-solving | Observations, interviews | AI modules fostered higher-order thinking | Integrate AI modules in curriculum | Small sample size, context-specific |
| Opesemowo and Ndlovu (2024) | South Africa | AI in mathematics education: benefits and drawbacks | Autoethnographic qualitative research | NR | AI-based personalized learning platform | Enhanced engagement, some equity issues | Autoethnographic reflection | AI can personalize learning but may widen gaps | Teacher training, equity focus | Access disparity, teacher readiness |
| Suparatulatorn et al. (2023) | Thailand | Technology and realistic mathematics education for calculus | Quasi-experimental | 103 university calculus students | Tech-enhanced RME + AI | Improved problem-solving, conceptual understanding | Observation notes and interviews | AI and RME boost problem-solving skills | Broader implementation | NR |
| Filiz and Gür (2025) | Turkey | Metacognitive awareness in problem-solving with ChatGPT | Survey and experimental | 82 university students | ChatGPT | Enhanced metacognitive awareness | Metacognitive awareness inventory | ChatGPT supports metacognitive strategies | Use AI for metacognitive training | Self-report bias |
| Chau et al. (2025) | Vietnam | Personalized math teaching with AI chatbots | Quasi-experimental | 100 high school students | AI chatbots | Significant improvement in problem-solving | Surveys, performance data | AI chatbots personalize support, boost competence | Broader AI integration | Limited generalizability |
| Daher and Gierdien (2024) | South Africa | Generative AI (ChatGPT) in math problem-solving | Quasi-experimental | NR | ChatGPT | Improved language use, solution strategies | Task analysis, interviews | AI clarifies language, supports reasoning | Scaffold AI use in classrooms | Language barriers, over-reliance |
| Zhan and Qiao (2022) | China | Diagnostic analysis of problem-solving via process data | Correlational design | 250 secondary students | AI-based diagnostic system | Accurate identification of skill gaps | Process data analysis | AI pinpoints weaknesses for targeted support | Use diagnostic AI for remediation | Data privacy, technical complexity |
| del Olmo-Muñoz et al. (2022) | Spain | Intelligent tutoring systems for word problems (COVID-19) | True experimental design | 60 primary students | AI-based ITS | Improved problem-solving during remote learning | Usage logs, interviews | ITS helped maintain learning during disruptions | Expand ITS access | Tech access, teacher training |
| Dignam et al. (2025) | UK | Robotics and emotional AI in STEM education | Embedded mixed methods | 80 secondary STEM students | AI-driven robotics | Improved engagement, collaboration | Observations, focus groups | AEI enhances motivation, problem-solving, and collaboration | Integrate emotional AI | Cost, teacher expertise, data privacy, inaccuracies in emotion recognition, tech access |
| Yunianto et al. (2024) | Indonesia | ChatGPT for GeoGebra-based math and computational thinking tasks | Action research | 38 secondary students | ChatGPT, GeoGebra | Supported computational thinking, problem-solving | Task analysis | AI aids in complex geometry tasks | Combine AI with dynamic tools | Limited to specific topics |
| Lee et al. (2024) | South Korea | Teaching math for AI with ChatGPT and GPT-4 | Design-based research | 40 high school classes | ChatGPT, GPT-4 Omni | Enhanced AI literacy, problem-solving | Classroom observations, tests | AI tools fostered AI/math integration | Develop AI-focused curricula | NR |
| Yohannes and Chen (2024) | Ethiopia/China | Flipped RME on achievement, self-efficacy, critical thinking | Quasi-experimental | 120 secondary students | Flipped RME with AI support | Improved achievement, self-efficacy | Questionnaires, interviews, classroom discussions | AI-supported RME boosts outcomes | Broaden flipped AI models | NR |
| Gutierrez et al. (2025) | Ecuador | Adaptive AI learning in technical education | Quasi-experimental | 200 technical college students | Adaptive AI system integrated into Moodle (via LAI API) | Significant gains in problem-solving | System logs, user experience questionnaire, surveys | AI adapts to learner needs, improves skills | Implement adaptive AI widely | Platform complexity |
| Fang et al. (2025) | China | Parasocial interaction theory with AI agents in STEAM | Experimental | 90 STEAM students | AI pedagogical agent | Improved collaboration, problem-solving | Performance tasks, surveys | AI agents foster teamwork, problem-solving | Design AI for social learning | NR |
| Liu et al. (2024) | China | Personalized federated learning in math | Experimental | NR | Federated AI learning platform | Enhanced personalization, privacy | Learning analytics | AI balances personalization and privacy | Broader adoption of federated AI | Technical barriers |
| Wijaya et al. (2024) | Indonesia/China | AI literacy, trust, and dependency in math teachers | Cross-sectional | 215 secondary mathematics teachers | AI literacy programs | NR | Surveys, profile analysis | AI literacy relates to 21st-century skills | Foster AI trust and literacy | NR |
| Song et al. (2025) | USA | Explore perceptions of AI in elementary math classes | Quasi-experimental design | 15 teachers and 180 elementary school students | AI-based tools (e.g., generative AI like ChatGPT, intelligent tutoring systems) | Improved engagement, learning, and problem-solving ability | Surveys, interviews, classroom observations, tests | AI supported learning; teachers unsure about integration | Train teachers, improve support | Teacher preparedness, technical infrastructure, data privacy concerns |
| Mukuka (2024) | Zambia | Assess mathematics teacher educators’ proficiency and willingness to use tech | Cross-sectional survey with structural equation modeling (PLS-SEM) | 104 mathematics teacher educators from universities and colleges | General digital tools (e.g., math software) | Proficiency affects willingness to integrate tech | Online questionnaire, SEM analysis | Low–moderate proficiency, high perceived usefulness | Improve tech training and infrastructure | Limited resources, unclear tools |
| Pop et al. (2025) | Romania | Explore how Agentic AI supports flexibility and readiness in STEM | Case study / exploratory empirical analysis | NR | Agentic AI tools (e.g., Copilot, MATHia) | Improved cognitive flexibility and job readiness | Literature review, usage data, case examples | AI enhanced adaptability and reduced cognitive load | Promote AI use in STEM; teach ethics and equity | Access gaps, ethical concerns, infrastructure needs |
| Torres-Peña et al. (2024) | Colombia and Spain | Use AI tools to improve calculus teaching | Action research | 50 university engineering students | ChatGPT, MathGPT, Gemini, Wolfram Alpha | Improved accuracy, engagement, conceptual clarity | Observations, transcripts, questionnaires | AI helped with understanding derivatives and rates of change | Integrate AI tools and teach prompt design | Tech access, teacher training, prompt limitations |
Summary of reviewed studies on AI-driven personalised learning in mathematics education.
NR indicates information that was not reported in the original study.
Figure 1
PRISMA flow diagram of the study selection process.
Data extraction
A standardized data extraction form was used to collect key information from each included study, including author(s), publication year, study context, sample characteristics, AI intervention, research design, outcome measures, key findings, and reported limitations. Data extraction was conducted by two reviewers to enhance accuracy and consistency.
Quality assessment
The methodological quality of the included studies was appraised using the Joanna Briggs Institute (JBI) Critical Appraisal Checklists, with the appropriate checklist applied according to each study design. The appraisal considered criteria such as clarity of research objectives, appropriateness of the research methodology, validity of measurement instruments, adequacy of sample description, and rigor of data analysis. Based on the extent to which the studies met the relevant JBI criteria, each study was classified as high, moderate, or low methodological quality. Studies meeting most appraisal criteria were categorised as high quality, while studies with partially met or unclear criteria were classified as moderate quality. Studies demonstrating multiple methodological limitations, such as insufficient methodological reporting or unclear data collection procedures, were categorised as low quality. Of the 20 studies included in the review, 8 were rated as high methodological quality, 10 as moderate quality, and 2 as low quality based on the JBI appraisal criteria (see Table 3). Although all studies were published in peer-reviewed journals indexed in Scopus or Web of Science, variations in methodological rigor and reporting quality were observed across the studies. Quality appraisal informed the interpretation of findings during the narrative synthesis; however, no studies were excluded solely on the basis of quality assessment.
Table 3
| Quality level | Number of studies |
|---|---|
| High quality | 8 |
| Moderate quality | 10 |
| Low quality | 2 |
| Total | 20 |
Summary of methodological quality of included studies (JBI appraisal).
Data synthesis
Due to heterogeneity in study designs, participant groups, and reported outcomes, a narrative synthesis approach was adopted. Findings were thematically grouped to illustrate how AI-driven personalised learning influences students’ mathematical problem-solving abilities, engagement, and achievement. To address the methodological and contextual heterogeneity of the included studies, the synthesis organized findings into thematic categories reflecting (a) the impact of AI-driven personalised learning on mathematical problem-solving performance, (b) the influence of AI-supported learning environments on problem-solving components such as strategy use, metacognitive awareness, and confidence, and (c) key technological and contextual features influencing the effectiveness of AI-driven personalised learning. The thematic organisation of the findings was informed by the study’s integrated theoretical framework, which incorporates Constructivist Learning Theory, Vygotsky’s Zone of Proximal Development, Adaptive Learning Theory, and mathematics-specific models such as Polya’s problem-solving heuristics, Schoenfeld’s metacognitive framework, and APOS theory. This framework guided the interpretation of patterns related to scaffolding, adaptivity, feedback mechanisms, learner autonomy, and conceptual development in AI-supported mathematics learning environments. A summary of the included studies and their key characteristics is presented in Table 2.
Results
The results of this systematic review are organized according to the research questions (RQs) guiding this study.
RQ 1: What is the impact of AI-driven personalized learning in improving students’ mathematical problem-solving skills?
Across the 20 included studies, AI-driven personalised learning generally showed positive effects on students’ mathematical problem-solving abilities. Learners demonstrated improved conceptual understanding, enhanced critical thinking, and increased accuracy and persistence in solving mathematical tasks. Studies such as Lintner (2024), Chau et al. (2025), and del Olmo-Muñoz et al. (2022) consistently reported gains in problem-solving competence supported by adaptive feedback, personalised task sequencing, and interactive learning environments. However, the evidence is not uniformly positive. Several studies have highlighted the conditions under which AI-based interventions produce limited, inconsistent, or even counterproductive outcomes. For example, Daher and Gierdien (2024) found that students sometimes relied excessively on AI-generated hints and language scaffolds, which reduced opportunities for independent reasoning. Opesemowo and Ndlovu (2024) reported that AI tools could inadvertently widen achievement gaps when access to digital infrastructure was unequal. Additionally, Gutierrez et al. (2025) observed that some learners struggled with the complexity of adaptive AI platforms, which hindered rather than supported engagement. These findings suggest that contextual factors, including access, digital proficiency, task design, and the quality of scaffolding provided, moderate the impact of AI-driven personalised learning.
Importantly, the challenges reported across studies raise ethical and pedagogical concerns. Over-reliance on AI-generated hints may weaken learners’ cognitive autonomy by reducing opportunities for independent reasoning and strategic planning, which are central to mathematical problem solving. Misaligned adaptivity may also lead to inappropriate task sequencing that either discourages learners through excessive difficulty or limits growth through overly simplified tasks. Such issues highlight the risk that AI-driven systems may unintentionally substitute genuine reasoning with automated guidance. In addition, the use of AI systems that rely on learner data introduces privacy risks and accountability concerns, particularly when deployed in contexts with limited regulatory oversight.
RQ 2: How do AI-personalized learning environments affect different aspects of problem-solving ability in mathematics?
AI-personalised environments influenced multiple dimensions of mathematical problem-solving, including metacognition, strategy use, conceptual development, and learner motivation, reflecting key elements of Schoenfeld’s metacognitive framework of mathematical problem solving. Studies such as those by Filiz and Gür (2025) and Zhan and Qiao (2022) have demonstrated that AI tools can support diagnostic analysis, promote reflective thinking, and help learners identify specific areas of difficulty. Emotional AI systems and collaborative robotic tools (Dignam et al., 2025) also contributed to increased motivation and engagement, which are essential for sustained problem-solving effort. Yet, several studies documented contradictory effects. For instance, Song et al. (2025) found that teachers often lacked confidence in integrating AI tools effectively, which limited their pedagogical value. In some cases, the adaptivity mechanisms did not align well with learners’ needs, leading to confusion or disengagement. This was also noted by Liu et al. (2024) and Gutierrez et al. (2025), who observed that AI systems sometimes misdiagnosed learner performance or provided feedback that was either too advanced or insufficiently challenging. These mismatches reduced opportunities for productive struggle and hindered deeper problem-solving processes. Thus, while AI personalisation can enhance several components of mathematical problem-solving, its effectiveness depends on the appropriateness of adaptive algorithms, the clarity of feedback, and the presence of teacher mediation.
RQ 3: What AI-driven personalized learning features influence the effectiveness of improving mathematics problem-solving skills?
Several AI design features emerged as particularly influential in shaping the outcomes of mathematical problem-solving. Adaptivity played a central role, with systems that adjusted tasks to learners’ proficiency levels generally supporting more consistent gains, consistent with the principles of Adaptive Learning Theory. However, studies have also shown that when adaptive algorithms are poorly calibrated, they can occasionally produce frustration or conceptual errors by presenting tasks that are either misaligned with learners’ abilities or sequenced inappropriately (Gutierrez et al., 2025; Liu et al., 2024). Scaffolding was another key feature: real-time hints, stepwise prompts, and guided explanations often helped maintain learners within their Zone of Proximal Development, supporting more effective engagement with problem-solving processes in line with Vygotsky’s scaffolding principles. Yet, excessive scaffolding sometimes resulted in learner dependency or superficial engagement, limiting opportunities for independent reasoning and deeper cognitive effort (Daher and Gierdien, 2024). This suggests that the effectiveness of AI personalisation depends not only on providing support, but also on how adaptivity is calibrated to gradually fade scaffolding as learners gain competence. Effective AI systems should therefore promote productive struggle and independent reasoning, ensuring that support mechanisms function as temporary learning aids rather than permanent cognitive substitutes.
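The calibrated fading of scaffolds described above can be sketched as a minimal support policy. Everything in this sketch is hypothetical and offered only to make the principle concrete: the function, the mastery thresholds, and the hint levels are illustrative, not features of any reviewed system.

```python
def next_hint_level(mastery: float, current_level: int) -> int:
    """Fade scaffolding as estimated mastery grows.

    mastery: estimated probability (0-1) that the learner can solve
    this task type unaided (e.g. derived from recent accuracy).
    Hint levels: 0 = no hint, 1 = strategic prompt, 2 = worked step.
    All thresholds are hypothetical illustrations of fading.
    """
    if mastery >= 0.8:
        return 0                      # withdraw support: promote independence
    if mastery >= 0.5:
        return min(current_level, 1)  # cap support at a strategic prompt
    return 2                          # full scaffolding while struggling

# As a learner's mastery estimate rises, support fades rather than persists:
fading = [next_hint_level(m, 2) for m in (0.3, 0.6, 0.9)]
```

The design point is the monotone cap: support can only shrink as competence grows, so hints act as temporary aids rather than permanent cognitive substitutes.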
The type and quality of feedback also shaped learning outcomes. Context-aware feedback tailored to learners’ misconceptions enhanced strategy use and conceptual clarity, whereas generic or poorly timed feedback tended to reduce problem-solving efficiency and hinder learners’ ability to reflect productively on their approaches (Song et al., 2025). Finally, issues of accessibility and usability significantly impacted the overall effectiveness of AI systems. In contexts with uneven digital infrastructure or limited access to devices, the benefits of otherwise effective AI tools were diminished, reinforcing concerns about equity and technological readiness (Opesemowo and Ndlovu, 2024; Pop et al., 2025). Together, these findings suggest that the success of AI-driven personalised learning depends heavily on design quality, user readiness, and contextual implementation factors. When well-calibrated and accessible, AI systems can enhance mathematical reasoning; when poorly designed or implemented under inequitable conditions, they may hinder learning.
Summary of findings
Although AI-driven personalised learning shows considerable promise, several recurring limitations emerged across the reviewed studies. In some cases, learners became overly reliant on AI-generated hints, reducing opportunities for independent cognitive effort (Daher and Gierdien, 2024). Teacher uncertainty or resistance also limited effective integration (Song et al., 2025). Persistent equity and access disparities constrained AI’s potential in contexts with limited infrastructure (Opesemowo and Ndlovu, 2024; Pop et al., 2025). Some learners were overwhelmed by the complexity of AI platforms (Gutierrez et al., 2025), while inaccurate adaptivity at times led to misaligned feedback and poor task sequencing (Liu et al., 2024). Ethical and data privacy concerns were also reported, especially with emotional AI and diagnostic systems (Dignam et al., 2025). Collectively, these findings underscore the importance of context-sensitive implementation, teacher training, equitable access, and the careful calibration of adaptive algorithms.
Discussion
This systematic review examined how AI-driven personalised learning influences students’ mathematical problem-solving skills. The discussion interprets the findings through an integrated theoretical framework (Constructivism, Vygotsky’s ZPD, Adaptive Learning Theory, and mathematics-specific models such as Polya, Schoenfeld, and APOS) to explain when and why AI supports or constrains learners’ mathematical reasoning. By linking the thematic results to these theoretical constructs, the discussion provides a balanced account of the benefits, limitations, and contextual factors shaping the effectiveness of AI-driven personalisation.
Impact of AI-driven personalized learning on mathematical problem-solving skills
The review indicates generally positive trends in the impact of AI-driven personalised learning on students’ mathematical problem-solving abilities across different educational levels and contexts, although the strength of these outcomes varies depending on study design, intervention type, and implementation context. Importantly, the interpretation of these findings considered the methodological quality of the included studies, with greater emphasis placed on evidence derived from studies rated as high or moderate quality in the JBI appraisal. Several studies, including Lintner (2024) and Chau et al. (2025), reported improvements in problem-solving accuracy, conceptual understanding, and persistence when students engaged with AI tools such as chatbots and intelligent tutoring systems. Adaptive platforms, like those investigated by Gutierrez et al. (2025), facilitated tailored learning experiences that responded to individual strengths and weaknesses, leading to enhanced performance. These findings can be understood through the lens of Constructivist Learning Theory, which emphasizes active knowledge construction through meaningful problem-solving tasks (Piaget, 1936; Grubaugh et al., 2023). AI technologies empower students to engage in self-directed exploration, manipulating mathematical concepts at their own pace, which supports deeper learning. Moreover, the adaptability and responsiveness of AI tools create environments where students can repeatedly practice and refine problem-solving strategies, consistent with constructivist ideals.
Additionally, the benefits reported align with Vygotsky’s ZPD by providing scaffolding through real-time feedback and adaptive difficulty (Orhani, 2024). AI systems that monitor learner progress and adjust support help students tackle problems slightly beyond their current ability, promoting growth. For example, Intelligent Tutoring Systems assessed in this review simulate scaffolding by offering hints, prompts, and tailored challenges, keeping learners within their ZPD. Finally, the findings echo Adaptive Learning Theory’s principles, wherein AI-driven platforms dynamically adjust content and learning paths based on student interactions and performance data (Grájeda et al., 2024). This real-time personalization not only improves cognitive outcomes but also sustains learner motivation by ensuring tasks are neither too easy nor overwhelmingly difficult.
However, several studies also revealed that AI-driven personalisation did not always strengthen learners’ problem-solving abilities. In some cases, frequent access to AI hints reduced opportunities for productive struggle, a process essential for developing strategic control as described by Schoenfeld’s metacognitive model (Daher and Gierdien, 2024). When learners relied heavily on automated prompts, they engaged less with Polya’s early problem-solving stages of understanding and planning. Additionally, miscalibrated adaptive systems occasionally presented tasks that were too advanced or insufficiently challenging, disrupting the intended progression from action to process and schema as conceptualised in APOS theory (Liu et al., 2024). These findings suggest that AI enhances problem-solving most effectively when scaffolds support, rather than replace, learners’ strategic reasoning.
Furthermore, cognitive overload emerged as a practical implementation barrier. Several AI platforms require learners to navigate complex interfaces, interpret automated feedback, and manage multiple representations simultaneously. For students with limited digital literacy, these demands may increase extraneous cognitive load, reducing their capacity to engage in deep mathematical reasoning. Thus, even when AI systems are theoretically aligned with adaptive learning principles, poor usability and design complexity may undermine learning outcomes.
Influence of AI-personalized learning environments on problem-solving components
The review highlighted that AI-personalized environments affect various facets of mathematical problem-solving, including strategy use, accuracy, metacognitive awareness, and confidence. Filiz and Gür (2025) found that AI tools like ChatGPT enhance students’ metacognitive skills, which are crucial for effective problem-solving. Similarly, Zhan and Qiao (2022) demonstrated that diagnostic AI systems accurately identified learners’ skill gaps, enabling targeted interventions that improved strategy use and accuracy. Such multifaceted improvements align well with the constructivist approach, which views problem-solving as an active, reflective process requiring strategic thinking and self-monitoring (Grubaugh et al., 2023). AI systems facilitate this by embedding metacognitive prompts and scaffolding that encourage students to plan, monitor, and evaluate their problem-solving approaches, thus fostering deeper engagement.
Moreover, the role of scaffolding emphasized in Vygotsky’s ZPD is evident here; AI tools provide tailored feedback and adjustable challenges, enabling learners to develop problem-solving strategies progressively. The integration of emotional AI, as explored by Dignam et al. (2025), further supports confidence and engagement, illustrating the importance of affective factors in learning within the ZPD framework. From the perspective of Adaptive Learning Theory, these environments’ capacity to respond dynamically to learner needs is critical. AI’s ability to analyse diverse data points (accuracy, response time, engagement) and customize tasks supports the development of precise problem-solving skills and confidence by adapting feedback style and difficulty level in real time (Grájeda et al., 2024).
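The multi-signal adaptivity described here can be sketched as a simple decision rule. The signals mirror those named above (accuracy, response time, engagement), but the function, thresholds, and weighting are hypothetical; real platforms typically rely on richer learner models such as knowledge tracing (Corbett and Anderson, 1995).

```python
def adjust_difficulty(accuracy: float, avg_response_s: float,
                      engagement: float, difficulty: int) -> int:
    """Adapt task difficulty from simple learner signals.

    accuracy and engagement are proportions in [0, 1]; avg_response_s
    is mean response time in seconds. All thresholds are hypothetical
    illustrations of real-time personalisation, not a reviewed system.
    """
    if accuracy > 0.85 and avg_response_s < 30 and engagement > 0.6:
        return difficulty + 1          # fluent and engaged: raise the challenge
    if accuracy < 0.5 or engagement < 0.3:
        return max(1, difficulty - 1)  # struggling or disengaging: ease off
    return difficulty                  # otherwise hold level: productive struggle
```

The middle branch is the pedagogically important one: an adaptive system that only ever raises or lowers difficulty leaves no room for the sustained effort at a fixed level that deeper problem solving requires.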
Despite these advantages, some AI-personalised environments introduced challenges that affected learners’ problem-solving development. For instance, when scaffolded prompts were overly directive, learners engaged superficially with tasks rather than developing metacognitive control over planning, monitoring, and evaluating their strategies, components central to Schoenfeld’s framework. Similarly, platform complexity occasionally hindered learners’ cognitive engagement, particularly when unfamiliar interfaces increased extraneous cognitive load (Gutierrez et al., 2025). These patterns indicate that the effectiveness of AI scaffolding depends on preserving cognitive balance: providing support within the learner’s ZPD while still requiring sufficient autonomy to internalise problem-solving processes.
Features of AI-driven personalized learning influencing effectiveness
The review identified several AI features that notably influence the effectiveness of personalized learning in mathematics. Adaptivity emerged as a key feature, where systems modify content pacing, difficulty, and feedback based on learner data, as seen in Gutierrez et al. (2025) and Liu et al. (2024). Feedback type, particularly real-time, context-aware feedback delivered via intelligent tutoring systems or chatbots, was also critical for improving problem-solving skills (Chau et al., 2025; Lee et al., 2024). The moderating role of learner characteristics such as age and educational context was apparent, with effective AI applications tailored to developmental levels and curricular demands. This finding resonates with Vygotsky’s notion of the ZPD, which highlights the importance of providing appropriately challenging tasks relative to learners’ developmental stages (Orhani, 2024). AI’s capacity to scaffold learners individually reflects this principle in practice.
Furthermore, AI’s integration of metacognitive scaffolding and inquiry-based learning, as demonstrated by platforms like ChatGPT and Mathigon (Filiz and Gür, 2025), exemplifies the convergence of Constructivist Learning Theory and Adaptive Learning Theory. These systems embody the constructivist emphasis on student-centred exploration while adaptively responding to learner inputs to optimize instructional delivery. However, the review also surfaced challenges such as access inequities, teacher readiness, and technical limitations, which may moderate AI’s effectiveness. Addressing these factors is essential for realizing AI’s full potential in diverse educational settings.
At the same time, several features constrained the effectiveness of AI-driven personalised learning. Inaccurate adaptivity sometimes produced misaligned sequencing or feedback, preventing learners from progressing through Polya’s problem-solving stages or consolidating conceptual structures described in APOS theory (Liu et al., 2024). Teacher readiness emerged as a critical moderating factor; where teachers were uncertain about interpreting AI-generated insights, the tools were less effectively integrated into classroom routines (Song et al., 2025). Furthermore, inequitable access to devices and reliable connectivity limited the benefits of AI systems in several contexts (Opesemowo and Ndlovu, 2024). That said, this finding originates from a study rated as low methodological quality in the JBI appraisal and should therefore be interpreted with caution. These challenges emphasise that AI features must be embedded within supportive instructional and infrastructural conditions to fully realise their potential in fostering mathematical problem-solving. Overall, these findings suggest that the effectiveness of AI-driven personalised learning in mathematics depends not only on the technological capabilities of AI systems but also on pedagogical design, teacher mediation, and equitable access to digital resources.
Conclusion
The findings of this study revealed that AI tools can meaningfully support conceptual understanding, strategic reasoning, and metacognitive development when adaptivity, feedback, and scaffolding are well aligned with learners’ needs. At the same time, the review revealed that these benefits are not universal, with effectiveness moderated by factors such as scaffold quality, adaptivity accuracy, teacher readiness, platform usability, and technological access. These insights highlight that the promise of AI lies not in the technology alone but in how well its design and implementation support the cognitive, metacognitive, and contextual demands of mathematical problem solving. In conclusion, AI-driven personalised learning offers substantial potential to enhance mathematics instruction, provided it is implemented thoughtfully, equitably, and in alignment with sound pedagogical and theoretical principles.
Educational implications
The findings of this review highlight significant opportunities for transforming mathematics education through AI-driven personalized learning. For educators, AI tools offer the ability to tailor instruction dynamically to individual student needs, moving beyond traditional, uniform teaching methods. By providing real-time feedback, adaptive challenges, and scaffolded support, AI fosters deeper conceptual understanding and sustained engagement, which are critical for developing robust problem-solving skills. This shift necessitates rethinking instructional design and classroom practice. Teachers are encouraged to integrate AI technologies as complementary resources that support constructivist and inquiry-based pedagogies, enabling learners to actively explore mathematical concepts at their own pace within their Zone of Proximal Development. Such integration could enhance differentiated instruction, allowing educators to better address diverse learner profiles and close achievement gaps. In addition, the review highlights the need for teachers to mediate AI feedback and scaffolding: learners benefit most when AI tools complement, rather than replace, explicit instruction and strategic guidance. Teachers therefore remain central as facilitators who help students interpret AI-generated hints, transfer those insights into independent problem-solving practice, and engage in reasoning rather than copying automated solutions. Without strong teacher mediation, AI-driven personalisation may increase dependency and reduce the development of independent problem solving.
Practically, teachers can integrate AI-driven personalised learning into mathematics lessons by using AI tools to provide adaptive practice tasks, diagnostic feedback, and scaffolded problem-solving support during classroom activities. For example, teachers may use AI-based tutoring systems or chatbots to generate step-by-step hints when students encounter difficulty while solving complex problems, while still encouraging learners to explain their reasoning before consulting AI suggestions. AI analytics dashboards can also help teachers identify misconceptions in real time, allowing them to adjust instruction or provide targeted support to specific learners. In addition, teachers can design inquiry-based problem-solving activities in which students use AI tools as exploratory aids rather than answer-generating systems, ensuring that learners engage in reasoning, reflection, and strategy development.
For curriculum developers and policymakers, these findings underscore the importance of embedding AI literacy and digital competence within teacher education programmes. Equipping educators with the skills to effectively use AI tools is essential for maximizing their pedagogical potential. Furthermore, policies could prioritize equitable access to AI infrastructure and resources to ensure all students benefit regardless of socioeconomic status. Ultimately, embracing AI-driven personalized learning holds promise not only for improving problem-solving outcomes but also for fostering 21st-century skills such as critical thinking, self-regulation, and learner autonomy. These implications call for systemic support, ongoing professional development, and thoughtful integration strategies to realize AI’s transformative potential in mathematics education.
Limitations
This systematic review is limited by the small number of high-quality studies focused specifically on AI-driven personalized learning in mathematics, with considerable variation in study designs and contexts. Most studies examined short-term effects, limiting insight into long-term impacts. The review included only English-language peer-reviewed articles, which may have excluded relevant studies published in other languages, particularly from non-English speaking regions where AI innovations in education may be emerging. This language restriction may therefore limit the global representativeness and applicability of the findings. Although grey literature (e.g., dissertations and technical reports) may contain emerging evidence, this review focused on peer-reviewed journal articles indexed in Scopus and Web of Science to strengthen methodological credibility and ensure quality assurance of included studies. However, the exclusion of specialised databases such as ERIC, PsycINFO, and IEEE Xplore may have resulted in the omission of relevant studies published in education-focused or technical AI outlets. Additionally, broader factors such as technological access and teacher readiness, though important, were not fully addressed in the reviewed studies. Furthermore, the variability in AI system design, adaptivity mechanisms, and feedback structures across studies made it challenging to compare outcomes directly. In addition, a few included studies implemented AI functionality within broader technology-enhanced instructional approaches (e.g., AI-supported flipped or realistic mathematics education models). While these studies incorporated AI-based analytics or adaptive components, the degree of AI integration varied, reflecting the evolving and hybrid nature of AI-supported mathematics learning environments. This diversity highlights the need for clearer categorisation of AI personalisation models in future research.
Recommendations
Based on the findings of this systematic review, educators should be actively supported in integrating AI-driven personalized learning tools into mathematics instruction. Professional development and teacher training programmes should prioritize building AI literacy, enabling teachers to interpret learner data, implement adaptive instructional strategies, and provide appropriate scaffolding tailored to individual student needs. Without adequate training, the potential of AI to enhance problem-solving skills may not be fully realized. Future development of AI tools should prioritise transparency in how adaptivity decisions are made, ensuring that teachers understand the basis for AI suggestions and can adjust them when necessary. Such transparency would strengthen teacher agency and prevent over-reliance on automated recommendations. Equitable access to AI technologies should also be a priority to prevent exacerbating existing educational inequalities. Policymakers and education authorities should ensure that schools have the necessary infrastructure, devices, and reliable internet connectivity to support AI-based learning environments. Addressing digital divides would help guarantee that all students, regardless of their socioeconomic background, can benefit from personalized AI interventions.
Moreover, AI developers should work closely with educators to design tools that are both culturally responsive and pedagogically sound. Such collaboration would ensure that AI systems are relevant to diverse learners, aligned with curriculum standards, and capable of supporting meaningful, inquiry-based learning experiences. This alignment would maximize the educational impact of AI and foster student engagement and motivation. Future research should focus on conducting large-scale, longitudinal studies to evaluate the long-term effectiveness and sustainability of AI-driven personalized learning in mathematics. Such studies are crucial to understand how these tools influence not only immediate problem-solving skills but also learner motivation, knowledge retention, and the transfer of skills to new contexts. Additionally, investigating the scalability of AI interventions across different educational settings would provide insights necessary for broader implementation. Also, AI tools should be integrated within constructivist and collaborative pedagogical frameworks. Rather than replacing human instruction, AI should serve as a complementary resource that supports inquiry-based learning, peer collaboration, and the development of critical thinking skills. Finally, future research should examine how AI-driven personalisation supports the transition from scaffolded to independent problem solving, a key feature of mathematical proficiency that remains underexplored in current literature.
Statements
Data availability statement
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.
Author contributions
NE: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Writing – original draft, Writing – review & editing. MM: Conceptualization, Funding acquisition, Project administration, Resources, Supervision, Validation, Writing – original draft, Writing – review & editing. FE: Conceptualization, Formal analysis, Investigation, Methodology, Supervision, Visualization, Writing – original draft, Writing – review & editing.
Funding
The author(s) declared that financial support was not received for this work and/or its publication.
Acknowledgments
The authors acknowledge the support of the ETDP SETA Research Chair in Mathematics Education for their support in carrying out this research. The authors further appreciate the contributions of scholars whose published studies formed the basis of this systematic review.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that Generative AI was used in the creation of this manuscript. The authors verify and take full responsibility for the use of generative AI in the preparation of this manuscript. Generative AI was used for language editing and proofreading purposes only. Specifically, tools such as ChatGPT and Grammarly were used to improve grammar, clarity, sentence structure, and overall readability. No generative AI tools were used to generate research data, conduct the systematic search, extract data, perform analysis, interpret findings, or develop the study’s conclusions. All content was reviewed, verified, and finalised by the authors, who remain fully accountable for the accuracy, originality, and integrity of the manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
1. Aria, M., and Cuccurullo, C. (2017). Bibliometrix: an R-tool for comprehensive science mapping analysis. J. Informetr. 11, 959–975. doi: 10.1016/j.joi.2017.08.007
2. Aromataris, E., and Munn, Z. (2020). JBI Manual for Evidence Synthesis. Adelaide: JBI.
3. Chau, D. B., Luong, V. T., Long, T. T., and Thao, T. (2025). Personalized mathematics teaching with the support of AI chatbots to improve mathematical problem-solving competence for high school students in Vietnam. Eur. J. Educ. Res. 14, 323–333. doi: 10.12973/eu-jer.14.1.323
4. Chen, X., Xie, H., Zou, D., and Hwang, G.-J. (2020). Application and theory gaps during the rise of artificial intelligence in education. Comput. Educ. Artif. Intell. 1:100002. doi: 10.1016/j.caeai.2020.100002
5. Corbett, A. T., and Anderson, J. R. (1995). Knowledge tracing: modeling the acquisition of procedural knowledge. User Model. User-Adapt. Interact. 4, 253–278. doi: 10.1007/BF01099821
6. D’Mello, S., Olney, A., Williams, C., and Hays, P. (2012). Gaze tutor: a gaze-reactive intelligent tutoring system. Int. J. Hum.-Comput. Stud. 70, 377–398. doi: 10.1016/j.ijhcs.2012.01.004
7. Daher, W., and Gierdien, F. (2024). Use of language by generative AI tools in mathematical problem solving: the case of ChatGPT. Afr. J. Res. Math. Sci. Technol. Educ. 28, 222–235. doi: 10.1080/18117295.2024.2384676
8. del Olmo-Muñoz, J., González-Calero, J. A., Diago, P. D., Arnau, D., and Arevalillo-Herráez, M. (2022). Intelligent tutoring systems for word problem solving in COVID-19 days: could they have been (part of) the solution? ZDM 55, 35–48. doi: 10.1007/s11858-022-01396-w
9. Dignam, C., Smith, C. M., and Kelly, A. L. (2025). The heart and art of robotics: from AI to artificial emotional intelligence in STEM education. J. Educ. Sci. Environ. Health 11, 151–169. doi: 10.55549/jeseh.813
10. Fang, J., Guo, X., Meng, X., Hwang, G., and Tu, Y. (2025). Realization of parasocial interaction theory in STEAM education with an AI pedagogical agent: insights from learning performance, collaboration tendency, and problem-solving tendency. Interact. Learn. Environ., 1–24. doi: 10.1080/10494820.2025.2508326
11. Filiz, A., and Gür, H. (2025). Students’ perceptions and applications of metacognitive awareness levels in problem solving with ChatGPT. Educ. Process Int. J. 14:e2025063. doi: 10.22521/edupij.2025.14.63
12. Fryer, L. K., Ainley, M., Thompson, A., Gibson, A., and Sherlock, Z. (2020). Chatbots in education: a systematic review and meta-analysis. Educ. Psychol. Rev. 32, 957–986. doi: 10.1007/s10648-020-09508-8
13. Grájeda, A., Martínez, L., and López, R. (2024). Assessing student-perceived impact of using artificial intelligence tools: construction of a synthetic index of application in higher education. Cogent Educ. 11:2287917. doi: 10.1080/2331186x.2023.2287917
14. Grubaugh, S., Levitt, G., and Deever, D. (2023). Harnessing AI to power constructivist learning: an evolution in educational methodologies. EIKI J. Effect. Teach. Methods 1, 81–83. doi: 10.59652/jetm.v1i3.43
15. Gusenbauer, M., and Haddaway, N. R. (2020). Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar, PubMed, and 26 other resources. Res. Synth. Methods 11, 181–217. doi: 10.1002/jrsm.1378
16. Gutierrez, R., Eduardo Villegas-Ch, W., Maldonado Navarro, A., and Luján-Mora, S. (2025). Optimizing problem-solving in technical education: an adaptive learning system based on artificial intelligence. IEEE Access 13, 61350–61367. doi: 10.1109/access.2025.3557281
17. Hidayat, R., Mohamed, M. Z. B., Suhaizi, N. N. B., Sabri, N. B. M., Mahmud, M. K. H. B., and Baharuddin, S. N. B. (2022). Artificial intelligence in mathematics education: a systematic literature review. Int. Electron. J. Math. Educ. 17:0694. doi: 10.29333/iejme/12132
18. Holmes, W., Bialik, M., and Fadel, C. (2019). Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. Center for Curriculum Redesign.
19. Hu, J. (2024). The challenge of traditional teaching approach: a study on the path to improve classroom teaching effectiveness based on secondary school students’ psychology. Lecture Notes Educ. Psychol. Public Med. 50, 213–219. doi: 10.54254/2753-7048/50/20240945
20. Hwang, G.-J., and Tu, Y.-F. (2021). Roles and research trends of artificial intelligence in mathematics education: a bibliometric mapping analysis and systematic review. Mathematics 9:584. doi: 10.3390/math9060584
21. Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Guggemos, J., et al. (2023). ChatGPT and artificial intelligence in STEM education: a systematic review. Comput. Educ. Artif. Intell. 4:100121. doi: 10.1016/j.caeai.2023.100121
22. Lee, S.-G., Lee, J. H., Lim, D. S., and Park, D. (2024). Teaching mathematics for AI with ChatGPT and GPT-4 Omni. Math. Educ. Res. Practice 27, 449–466. doi: 10.7468/jksmed.2024.27.4.449
23. Lintner, A. (2024). A case study on critical thinking and artificial intelligence in middle school. Turkish Online J. Educ. Technol. 23, 1–7. Available at: https://files.eric.ed.gov/fulltext/EJ1444543.pdf
24. Liu, W., Mao, X., Zhang, X., and Zhang, X. (2024). Robust personalized federated learning with sparse penalization. J. Am. Stat. Assoc. 120, 266–277. doi: 10.1080/01621459.2024.2321652
25. Li, L., Yu, F., and Zhang, E. (2024). A systematic review of learning task design for K-12 AI education: trends, challenges, and opportunities. Comput. Educ. Artif. Intell. 6:100217. doi: 10.1016/j.caeai.2024.100217
26. Ma, W., Adesope, O. O., Nesbit, J. C., and Liu, Q. (2014). Intelligent tutoring systems and learning outcomes: a meta-analysis. J. Educ. Psychol. 106, 901–918. doi: 10.1037/a0037123
27. Mongeon, P., and Paul-Hus, A. (2016). The journal coverage of Web of Science and Scopus: a comparative analysis. Scientometrics 106, 213–228. doi: 10.1007/s11192-015-1765-5
28. Mukuka, A. (2024). Data on mathematics teacher educators’ proficiency and willingness to use technology: a structural equation modelling analysis. Data Brief 54:110307. doi: 10.1016/j.dib.2024.110307
29. Opesemowo, O. A. G., and Ndlovu, M. (2024). Artificial intelligence in mathematics education: the good, the bad, and the ugly. J. Pedagog. Res. 8, 333–346. doi: 10.33902/jpr.202426428
30. Orhani, S. (2024). Use of mathematical models in epidemiology to predict infectious. Partners Univ. Multidiscip. Res. J. 1, 96–111. doi: 10.5281/zenodo.14208781
31. Pane, J. F., Steiner, E. D., Baird, M. D., and Hamilton, L. S. (2017). Informing Progress: Insights on Personalized Learning Implementation and Effects. Santa Monica, CA: RAND Corporation.
32. Piaget, J. (1936). Origins of Intelligence in the Child. London: Routledge.
33. Polya, G. (1945). How to Solve It: A New Aspect of Mathematical Method. Princeton, NJ: Princeton University Press.
34. Pop, M. V., Tonț, G., Flonta, F., and Flore, M. (2025). Agentic AI in STEM education: enhancing cognitive flexibility and workforce readiness. Brain Broad Res. Artif. Intell. Neurosci. 16:239. doi: 10.70594/brain/16.s1/20
35. Schoenfeld, A. H. (1985). Mathematical Problem Solving. Cambridge, MA: Academic Press.
36. Shute, V. J., and Zapata-Rivera, D. (2012). “Adaptive educational systems,” in Adaptive Technologies for Training and Education, eds. P. J. Durlach and A. M. Lesgold (Cambridge: Cambridge University Press).
37. Song, X., Mak, J., and Chen, H. (2025). Teachers and learners’ perceptions about implementation of AI tools in elementary mathematics classes. SAGE Open 15:21582440251334545. doi: 10.1177/21582440251334545
38. Suparatulatorn, R., Jun-on, N., Hong, Y.-Y., Intaros, P., and Suwannaut, S. (2023). Exploring problem-solving through the intervention of technology and realistic mathematics education in the calculus content course. J. Math. Educ. 14, 103–128. doi: 10.22342/jme.v14i1.pp103-128
39. Torres-Peña, R. C., Peña-González, D., Chacuto-López, E., Ariza, E. A., and Vergara, D. (2024). Updating calculus teaching with AI: a classroom experience. Educ. Sci. 14:1019. doi: 10.3390/educsci14091019
40. Vygotsky, L. (1978). Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA: Harvard University Press.
41. Wijaya, T. T., Yu, Q., Cao, Y., He, Y., and Frederick, K. S. L. (2024). Latent profile analysis of AI literacy and trust in mathematics teachers and their relations with AI dependency and 21st-century skills. Behav. Sci. 14:1008. doi: 10.3390/bs14111008
42. Xie, H., Chu, H.-C., Hwang, G.-J., and Wang, C.-C. (2019). Trends and development in technology-enhanced adaptive/personalized learning: a systematic review of journal publications from 2007 to 2017. Comput. Educ. 140:103599. doi: 10.1016/j.compedu.2019.103599
43. Yohannes, A., and Chen, H.-L. (2024). The effect of flipped realistic mathematics education on students’ achievement, mathematics self-efficacy and critical thinking tendency. Educ. Inf. Technol. 29, 16177–16203. doi: 10.1007/s10639-024-12502-8
44. Yunianto, W., Lavicza, Z., Kastner-Hauler, O., and Houghton, T. (2024). Investigating the use of ChatGPT to solve a GeoGebra-based mathematics computational thinking task in a geometry topic. J. Math. Educ. 15, 1027–1052. doi: 10.22342/jme.v15i3.pp1027-1052
45. Zawacki-Richter, O., Marín, V. I., Bond, M., and Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education. Int. J. Educ. Technol. High. Educ. 16:39. doi: 10.1186/s41239-019-0171-0
46. Zhan, P., and Qiao, X. (2022). Diagnostic classification analysis of problem-solving competence using process data: an item expansion method. Psychometrika 87, 1529–1547. doi: 10.1007/s11336-022-09855-9
Keywords
adaptive learning systems, AI-driven personalized learning, constructivist pedagogy, mathematics education, problem-solving skills
Citation
Eti N, Mosia M and Egara FO (2026) The role of AI-driven personalised learning in enhancing mathematics problem-solving skills: a systematic review. Front. Comput. Sci. 8:1813431. doi: 10.3389/fcomp.2026.1813431
Received
18 February 2026
Revised
09 March 2026
Accepted
12 March 2026
Published
25 March 2026
Volume
8 - 2026
Edited by
Edgar R. Eslit, St. Michael’s College (Iligan), Philippines
Reviewed by
Naveen Kumar, Chandigarh University, India
Selim Yavuz, Indiana University, United States
Allan Mesa Canonigo, University of the Philippines Diliman, Philippines
Copyright
© 2026 Eti, Mosia and Egara.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Felix O. Egara, felix.egara@unn.edu.ng
Disclaimer
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.