
REVIEW article

Front. Educ.

Sec. Assessment, Testing and Applied Measurement

Volume 10 - 2025 | doi: 10.3389/feduc.2025.1710992

This article is part of the Research Topic: Assessment with Artificial Intelligence in Ibero-American Higher Education: Innovations, Challenges, and Ethical Perspectives

Human-in-the-Loop Assessment with AI: Implications for Teacher Education in Ibero-American Universities

Provisionally accepted
Diana Carolina Fajardo-Ramos1, Andrés Chiappe1*, Javier Mella-Norembuena2
  • 1Universidad de La Sabana, Chia, Colombia
  • 2Universidad de Las Americas, Santiago, Chile

The final, formatted version of the article will be published soon.

This scoping review examines how artificial intelligence (AI) reshapes assessment in Ibero-American higher education and specifies the teacher-training capacities and ethical safeguards needed for responsible adoption. Guided by PRISMA procedures and an eligibility scheme based on PPCDO (Population–Phenomenon–Context–Design–Outcomes), we searched Scopus and screened records (2015–2025; English/Spanish/Portuguese), yielding 76 peer-reviewed studies. Synthesis combined qualitative thematic analysis with quantitative descriptors and an exploratory correlational analysis of tool–outcome pairings. Rather than listing generic ICT, we propose a function-by-purpose taxonomy of assessment technologies that distinguishes pre-AI baselines from AI-specific mechanisms: generativity, adaptivity, and algorithmic feedback/analytics. Read through this lens, AI's value emerges when benefits are paired with conditions of use: explainability practices, data stewardship, audit trails, and clearly communicated assistance limits. The review translates these insights into a decision-oriented agenda for teacher education, specifying five competency clusters: (1) feedback literacy with AI (criterion-anchored prompting, sampling and audits, revision-based workflows); (2) rubric/item validation and traceability; (3) data interpretation and fairness; (4) integrity and transparency in AI-involved assessment; and (5) orchestration of platforms and moderation/double-marking when AI assists scoring. Exploratory correlations reinforce these priorities, signalling where training should concentrate. We conclude that Ibero-American systems are technically ready yet pedagogically under-specified: progress depends less on adding tools and more on professionalising human-in-the-loop assessment within robust governance. The article offers a replicable taxonomy, actionable training targets, and a research agenda on enabling conditions for trustworthy, AI-enhanced assessment.

Keywords: artificial intelligence in education, teacher training, ICT integration, educational technology, digital competence, pedagogical innovation

Received: 23 Sep 2025; Accepted: 10 Oct 2025.

Copyright: © 2025 Fajardo-Ramos, Chiappe and Mella-Norembuena. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Andrés Chiappe, andres.chiappe@unisabana.edu.co

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.