EDITORIAL article

Front. Educ.

Sec. Assessment, Testing and Applied Measurement

Volume 10 - 2025 | doi: 10.3389/feduc.2025.1612274

This article is part of the Research Topic "Educational Evaluation in the Age of Artificial Intelligence: Challenges and Innovations".

Editorial: Educational Evaluation in the Age of Artificial Intelligence: Challenges and Innovations

Provisionally accepted
  • 1University of the Western Cape, Bellville, South Africa
  • 2University of South Africa, Pretoria, South Africa
  • 3University of British Columbia, Vancouver, British Columbia, Canada
  • 4University of Bergen, Bergen, Hordaland, Norway


The rapid evolution of artificial intelligence in education is reshaping how learning is measured and redefining the theoretical and methodological foundations of assessment. This Research Topic creates a dedicated forum for rigorous academic debate on the transformative potential of AI in educational evaluation. By examining innovations in automated scoring, dynamic assessment creation, data-driven analytics, and early detection, the contributions in this collection illuminate both the promise and the challenges inherent in this digital transformation.

Building on the theme of automation, Khanyisile Twabu and Mathabo Nakene-Mginqi present an innovative design thinking approach to developing an AI-driven auto-marking and grading system tailored for higher education in South Africa. Their iterative, user-centred methodology integrates principles from design thinking with advanced machine-learning techniques to streamline the grading process. By reducing lecturer workload and delivering consistent, bias-mitigated feedback, their system exemplifies how digital transformation can enhance operational efficiency while maintaining academic rigour.

Saleem Hamady, Khaleel Mershad, and Bilal Jabakhanji contribute a dynamic perspective by integrating GeoGebra and Moodle to create multi-version interactive assessments. Their approach leverages the interactive capabilities of GeoGebra to generate animated, unique test items while using Moodle's robust learning management framework to administer assessments securely. This method not only combats academic dishonesty through randomized test forms but also enriches the student experience by offering varied, context-sensitive problem sets. The study highlights the potential of interactive tools to elevate the engagement and validity of assessments.

Amirreza Mehrabi, Jason W. Morphew, and Breejha S. Quezada further advance our understanding of AI-enhanced evaluation by refining performance factor analysis. Their research introduces an attention mechanism that integrates detailed skill profiles and item similarity, providing a nuanced perspective on student learning trajectories. The authors' methodology underscores the importance of capturing the interdependencies between emergent and internalized skills, offering an analytical model that could improve the predictive accuracy of educational outcomes and inform personalized learning strategies.

Complementing these approaches, Mahdi-Reza Borna, Hanan Saadat, Aref Tavassoli Hojjati, and Elham Akbari explore the use of clickstream data to predict student performance. By applying advanced machine-learning models to digital interaction data, they demonstrate how behavioural analytics can serve as an early-warning system. Their work provides valuable insights into the correlations between online engagement and academic success, suggesting that data-driven interventions could significantly enhance the responsiveness of educational environments.

Njål Foldnes, Per Henning Uppstad, Steen Grønneberg, and Jenny M. Thomson address the crucial issue of early detection with their study on identifying struggling readers using gameplay data and machine learning. The authors use a rich dataset derived from a pedagogical literacy game to develop predictive models that pinpoint students at risk of reading difficulties at school entry. Their research highlights the potential for timely, non-intrusive diagnostics that can trigger early interventions, thereby mitigating long-term academic challenges and supporting equitable learning outcomes.

Rounding out the collection, Ali Ateeq, Mohammed Alzoraiki, Marwan Milhem, and Ranyia Ali Ateeq critically examine the broader implications of AI in education by focusing on academic integrity and the shift toward holistic assessment. Their work interrogates the ethical and theoretical underpinnings of AI-driven evaluation, advocating for assessment practices that balance automated efficiency with human oversight. By considering issues such as algorithmic bias, data privacy, and the erosion of traditional evaluative measures, their study calls for robust policy frameworks and comprehensive reforms to ensure reliability and fairness in the digital age.

In conclusion, the studies presented in this Research Topic collectively chart a transformative course for educational evaluation. Through innovative methodologies and data-driven insights, they illustrate how AI can automate and enrich assessment practices while raising critical questions about validity, reliability, and ethical implementation. Future research must delve deeper into these challenges: developing more nuanced theoretical frameworks, ensuring equitable access, and continuously evaluating the long-term impacts of digital transformation on education. By addressing these challenges, the field can harness AI's full potential to create robust, adaptive, and just educational environments for the digital age. Together, these contributions provide a comprehensive vision of how AI is poised to reshape educational evaluation, urging researchers and practitioners alike to engage with both its transformative benefits and the imperative for thoughtful, critical oversight.

Keywords: educational evaluation, artificial intelligence, challenges, innovations, assessment

Received: 15 Apr 2025; Accepted: 16 Apr 2025.

Copyright: © 2025 Archer, Young, Grover and Khalil. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Kelly Anne Young, University of South Africa, Pretoria, South Africa

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.