
Editorial

Front. Educ.

Sec. Digital Learning Innovations

This article is part of the Research Topic "Harnessing Generative AI for Inclusive Education: Opportunities and Challenges".

Connecting Dots in GenAI for Inclusive Education: Opportunities and Challenges

Provisionally accepted
  • 1University of L'Aquila, L'Aquila, Italy
  • 2University of Warwick, Coventry, United Kingdom
  • 3Libera Università di Bolzano, Bolzano, Italy

The final, formatted version of the article will be published soon.

The integration of Generative Artificial Intelligence (GenAI) into education has shifted decisively from speculative disruption to operational reality. Yet this integration remains uneven, often deepening the very inequities it promised to address, a pattern increasingly documented in comparative studies across global contexts. The research in this collection interrogates a central paradox: can technologies built on statistical normalization and English-centric data truly serve learners who exist, by definition, outside the norm?

A striking theme across the contributing research is the persistent perception gap between educators and students, a gap that extends far beyond different adoption rates to reveal a fundamental divergence in how these stakeholders understand AI's ethical and pedagogical implications. In research conducted at a Peruvian university by Reina Martin et al., stark contrasts emerged: 73.1% of faculty expressed doubt regarding AI's effectiveness in teaching, and 65.9% evaluated its overall learning impact unfavorably. In sharp contrast, 84% of students reported high confidence in the ethical handling of data by these same tools. This gap creates what we might call a "shadow curriculum," in which students engage with AI covertly, often bypassing faculty guidance entirely. The dynamic is particularly problematic in inclusive education, where students with disabilities depend on faculty advocacy and informed guidance. When educators view AI primarily as a threat to academic integrity rather than as a pedagogically grounded assistive tool, they abdicate their responsibility to model the critical literacies that learners need to engage with these technologies safely and effectively.

This disconnect persists, strikingly, among pre-service teachers: the very professionals who will shape educational practice in the coming years. Research with pre-service teachers at the University of Latvia by Kalinina et al. revealed a troubling paradox: while 75% recognized that AI could meaningfully support students navigating language barriers, only a minority had translated this theoretical recognition into practice. Fewer than half reported using AI tools in their own studies, and only about a quarter had engaged with ChatGPT. Even more revealing, when pre-service teachers did use AI, roughly half engaged with it primarily "to have a friendly chat," suggesting that for many emerging educators the perceived value of AI lies as much in social-emotional companionship as in cognitive or pedagogical support. This gap between belief and practice underscores a critical need: professional preparation programmes must bridge theory and lived experience, helping future educators develop not just awareness of AI's potential but genuine literacies and competencies in its pedagogical deployment.

Perhaps the most profound challenge to inclusion is the epistemological structure of Large Language Models (LLMs) themselves. In "How Inclusive Large Language Models Can Be? The Curious Case of Pragmatics," Manna, Cominetti, and Eradze argue that LLMs, by design, favor "standard" language use. These probabilistic engines predict the next most likely token, effectively flattening linguistic diversity. For neurodivergent students, particularly those with autism who may employ non-standard pragmatics, irony, or unique socio-linguistic scripts, this normalization is exclusionary. If an AI "corrects" a student's unique voice to match a corporate or academic standard, it performs a type of erasure, reinforcing a "normative" way of thinking and communicating.
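To make this mechanism concrete, consider a minimal formalization (the notation is ours, and it deliberately simplifies: deployed systems typically sample from the model's distribution rather than maximizing it outright). Under greedy decoding, the token emitted at step $t$ is

\[
  w_t \;=\; \operatorname*{arg\,max}_{w \in V} \; P_{\theta}\bigl(w \mid w_{1}, \dots, w_{t-1}\bigr),
\]

where $V$ is the model's vocabulary and $P_{\theta}$ its learned next-token distribution. Under any such probability-maximizing rule, a continuation that is rare in the training corpus, however communicatively apt for the writer, is systematically passed over in favor of the statistical mode.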
True inclusion in an AI-mediated educational landscape requires not tools that enforce homogeneity, but systems designed to recognize, respect, and translate across diverse pragmatic and linguistic repertoires. This is not a minor technical refinement but a fundamental ethical imperative for the field.

Gogh and Kovari's contribution, "Homework in the AI era: cheating, challenge, or change?", dismantles the traditional "product-oriented" assessment model. They argue that if an assignment can be fully automated by a chatbot, the flaw lies in the assignment, not the student. If these tools are bringing down the walls of traditional assessment, the pedagogy must shift. For students with Special Educational Needs and Disabilities (SEND), the shift must be towards process and metacognition. The authors propose a "dialogic" approach in which students might use AI to generate a solution and then critique it, or use tools like Wolfram Alpha to verify manual work. This moves assessment away from the final output, which can be hampered by dysgraphia or processing-speed issues, and towards the student's reasoning and critique.

Simultaneously, the systematic review by Mukhtarkyzy et al. reminds us that "inclusive AI" is often synonymous with "visual AI." Their analysis of assistive technologies from 2012 to 2023 highlights Augmented Reality (AR) as a dominant tool for visualizing complex scientific concepts, particularly for students with Autism Spectrum Disorder (ASD). AR allows students to manipulate virtual objects, bypassing fine-motor constraints and providing a safe "sandbox" for experimentation.

The articles in this Research Topic collectively point toward an urgent agenda: the need for pedagogically grounded, ethically informed approaches to AI integration that center the voices and needs of historically marginalized learners. This requires moving beyond enthusiasm for technological capability toward genuine partnership between technologists, educators, and learners in shaping AI systems that enhance rather than diminish human communicative and cognitive potential.

The research collected in this topic suggests that the future of inclusive education depends on a "Human-in-the-Loop" framework. We must move beyond the "medical model" of using AI to "fix" the student and toward a "social model" in which AI does not deepen exclusion. This requires faculty to bridge the perception gap, designers to address algorithmic bias, and policymakers to ensure that the "efficiency" of AI does not automate away the humanity of care. As we look toward the future, the mandate for educators is clear: we must harness Generative AI not to standardize our students, but to emancipate them.

Keywords: GenAI, generative artificial intelligence, inclusive education, language learning, pragmatics, teacher education

Received: 11 Dec 2025; Accepted: 16 Dec 2025.

Copyright: © 2025 Eradze, Manna, Sunar, Dovigo and Ianes. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Maka Eradze

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.