OPINION article

Front. Educ., 09 January 2026

Sec. Digital Education

Volume 10 - 2025 | https://doi.org/10.3389/feduc.2025.1681836

This article is part of the Research Topic: The Transformative Impact of Digital Tools on Quality Education and Sustainable Development.

Artificial intelligence in education: applications and limitations for teachers in low- and middle-income countries

Miguel Landa-Blanco*

  • School of Psychological Sciences, Faculty of Social Sciences, National Autonomous University of Honduras, Tegucigalpa, Honduras

1 Introduction

The advent of generative artificial intelligence (AI)—advanced machine learning systems capable of autonomously producing coherent, contextually relevant text, imagery, and multimedia content (Hashmi and Bal, 2024)—has rapidly transformed numerous sectors, including education (Memarian and Doleck, 2023; Mohd Amin et al., 2025). However, the available literature on AI largely neglects low- and middle-income countries (LMICs), with scholarly attention predominantly directed toward advanced economies; as a result, data on AI adoption in LMICs remain limited (Khan et al., 2024). In LMICs, where educational systems are frequently constrained by resource limitations, infrastructure deficits, and pedagogical challenges (Betthäuser et al., 2023; Delprato and Antequera, 2021; Little, 2006; Naparan and Alinsug, 2021), generative AI offers an unprecedented opportunity to reimagine teaching and learning (Díaz and Nussbaum, 2024). Yet, this technological promise is accompanied by complex socio-technical and ethical dilemmas that require careful navigation if the potential benefits are to be realized equitably and sustainably (Adel et al., 2024). This paper provides a brief overview of the pedagogical applications of AI for teachers in LMICs, with a particular focus on how these tools can both support and potentially constrain educational outcomes. Building on these considerations, it argues that while generative AI offers meaningful pedagogical benefits for teachers in resource-constrained settings, including improved linguistic access, support for instructional design, and more effective differentiation, its equitable adoption in LMICs ultimately depends on resolving key infrastructural, contextual, and governance challenges.

2 Expanding educational access: overcoming the language barrier

Scientific discourse is predominantly disseminated through English-language publications (Márquez and Porras, 2020). The hegemony of English in global scientific discourse constitutes a profound structural barrier for scholars in LMICs (Ramírez-Castañeda, 2020), impacting both the production and consumption of knowledge. On the production side, limited access to advanced English-language education and editorial resources disproportionately hinders researchers' ability to contribute to high-impact publications (Di Bitetti and Ferreras, 2017), secure international funding, and participate in transnational academic networks, conditions that reinforce systemic disparities in scholarly visibility and influence. Equally consequential is the effect on scientific consumption: the overwhelming prevalence of English-language literature restricts the accessibility of cutting-edge research for non-English speaking practitioners, educators, and policymakers (Amano et al., 2016; Arenas-Castro et al., 2024), thus affecting resource availability for instruction. This linguistic asymmetry not only curtails the integration of global evidence into local contexts but also perpetuates epistemic hierarchies that marginalize alternative knowledge systems and local epistemologies. In this way, the monolingual orientation of contemporary science operates as a mechanism of exclusion, entrenching global inequities in the circulation, recognition, and application of knowledge (Polanco and Mayorga, 2025; Scientific publishing has a language problem, 2023).

However, advances in AI offer promising avenues for mitigating the linguistic barriers that have long constrained the equitable participation of LMIC scholars in global scientific discourse (Salani and Tapfuma, 2025). High-quality, context-sensitive translation tools powered by AI can facilitate both the production and consumption of academic knowledge across linguistic divides (Gulati et al., 2024), enabling researchers and teachers to draft, revise, and disseminate manuscripts in English with greater precision and efficiency, while also enhancing access to English-language publications for non-Anglophone readers. The use of AI tools to enhance grammatical accuracy, improve textual readability, and facilitate effective language translation is generally regarded as an ethically appropriate practice in scholarly environments (Cheng et al., 2025). Moreover, AI-driven platforms can support real-time multilingual collaboration. Such capacity enables the localization of curricula, fostering inclusivity and engagement by embedding cultural relevance and context within learning materials.
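
As an illustration, a minimal sketch of how a teacher or researcher might script this kind of translation support in Python, assuming the openai client library (v1.x) and an API key are available; the model name, prompt wording, and the simplify_for_classroom helper are illustrative assumptions rather than a prescribed workflow:

from openai import OpenAI  # assumes openai>=1.0 is installed and OPENAI_API_KEY is set

client = OpenAI()

def simplify_for_classroom(abstract_en: str, grade: int = 8) -> str:
    # Ask the model to translate an English abstract into plain Spanish
    # pitched at a given grade level; the output still requires teacher review.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You translate academic English into clear, plain Spanish for teachers."},
            {"role": "user",
             "content": f"Translate and simplify for grade {grade} students:\n\n{abstract_en}"},
        ],
    )
    return response.choices[0].message.content

As with any machine translation, such output is a draft: teachers and authors remain responsible for checking terminology, accuracy, and cultural fit.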

3 AI-enabled support across the teaching continuum: from planning to assessment

A recent systematic review synthesizes empirical evidence on the pedagogical affordances of AI for teachers, identifying key advantages across three interrelated domains: instructional planning, classroom implementation, and assessment (Celik et al., 2022). In the planning phase, AI systems facilitate data-informed decision-making by providing insights into students' socio-academic backgrounds and supporting the selection of instructional content based on readability or learning needs. During implementation, AI technologies enable real-time monitoring of student engagement and cognitive states, allowing for timely, adaptive interventions that not only enhance instructional responsiveness but also support personalized learning. These systems can also enrich the instructional experience by promoting more dynamic teacher–student interactions and fostering greater instructional enjoyment. In the domain of assessment, AI contributes to automating evaluative processes, such as essay scoring and plagiarism detection, arguably improving both the efficiency and objectivity of student evaluation. The automation of labor-intensive tasks, such as grading and lesson plan generation (Gurl et al., 2025; Jukiewicz, 2024; Li et al., 2025; Liu et al., 2022), reallocates teachers' time toward more meaningful pedagogical engagement, individualized instruction, and mentorship.
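
To make the notion of readability-based content selection concrete, the following Python sketch computes the standard English Flesch-Kincaid grade level and filters passages against a target level; the syllable heuristic is deliberately crude, the function names and threshold are illustrative, and Spanish-language materials would require a language-specific readability index instead.

import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels (minimum one syllable).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # Flesch-Kincaid grade level: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

def select_passages(passages: list[str], max_grade: float = 8.0) -> list[str]:
    # Keep only passages at or below the target reading level.
    return [p for p in passages if flesch_kincaid_grade(p) <= max_grade]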

AI can simplify complex concepts by adapting explanations to match students' varying age groups and comprehension levels. Through advanced natural language processing and pedagogical modeling, AI can break down intricate ideas into more accessible content based on lexical, syntactical, and discourse-level simplification (Smirnova et al., 2025). This customization often involves generating simplifications that distill key information into digestible portions, as well as crafting analogies that relate new concepts to familiar experiences or everyday objects. AI can be used to tailor such analogies based on the student's cultural context and cognitive demands (Cao et al., 2024). Thus, AI supports differentiated and inclusive learning (Kooli and Chakraoui, 2025), making challenging material more engaging and understandable for diverse student populations, thereby enhancing overall educational effectiveness.

Table 1 illustrates how generative AI can provide teachers in LMICs with accessible, bilingual explanations of complex scientific ideas, offering a practical use case that aligns with resource-limited classrooms rather than the high-tech scenarios often emphasized in current debates. By connecting abstract concepts to everyday experiences, these examples help students grasp difficult material without relying on expensive resources or specialized equipment. In settings where class sizes are large and teacher workloads substantial (Organisation for Economic Cooperation and Development, 2024), this type of support can meaningfully enhance instructional quality. However, the example in Table 1 also has limitations, since AI outputs may contain inaccuracies or culturally narrow references, making teacher verification essential.

Table 1. A ChatGPT-generated example of age-appropriate analogies and stories to teach key concepts of evolution in English and Spanish.
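
A prompt of the kind that could produce output like Table 1 might be parameterized so teachers can vary the concept, the student's age, and the everyday context; the Python template below is a hypothetical sketch, not the prompt actually used to generate the table.

def analogy_prompt(concept: str, age: int, everyday_context: str) -> str:
    # Build a bilingual request for an age-appropriate analogy grounded in a local context.
    return (
        f"Explain the concept of '{concept}' to a {age}-year-old student "
        f"using an analogy drawn from {everyday_context}. "
        "Give the explanation first in English and then in Spanish, "
        "and keep each version under 120 words."
    )

# Example: a locally grounded analogy for a key concept of evolution.
print(analogy_prompt("natural selection", 10, "a rural Honduran market"))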

4 Challenges of implementing generative AI in LMICs

Despite its transformative potential, the deployment of generative AI in LMIC education systems is shaped and often constrained by persistent infrastructural and systemic barriers (Farooqi et al., 2024). Access to reliable electricity, stable internet connectivity, and affordable digital devices remains uneven, especially in rural and marginalized communities. Without substantial investment in digital infrastructure, AI-driven educational interventions risk exacerbating existing inequities rather than alleviating them (Khan et al., 2024).

Additionally, generative AI models are fundamentally data-driven, relying on extensive corpora predominantly sourced from Western contexts and major world languages. This epistemic foundation introduces the risk of reproducing and amplifying cultural biases (Foka and Griffin, 2024; Tao et al., 2024). In LMICs, where cultural heterogeneity and historical legacies of colonialism shape social realities, AI-generated materials that lack local contextualization may inadvertently perpetuate stereotypes, misrepresent histories, or convey inappropriate normative assumptions (Ferrara, 2023; van Kolfschooten and Pilottin, 2024).

Beyond cultural bias, AI-generated content may also be imprecise, distorted, or entirely fabricated, posing risks to the accuracy and integrity of academic work (Sun et al., 2024). As such, both teachers and students should approach AI outputs critically (Ng et al., 2021), not as definitive sources, but as preliminary drafts or scaffolds that require careful evaluation, verification, and refinement (Salido et al., 2025). Integrating AI into the learning process should thus emphasize human oversight and intellectual engagement, rather than passive dependence (Yue Yim, 2024). Learners should therefore be taught to interrogate, verify, and revise AI outputs, using them as springboards for deeper inquiry rather than endpoints. This pedagogical stance not only guards against misinformation and bias but also reinforces essential cognitive skills such as evaluation, synthesis, and argumentation. As AI becomes increasingly embedded in classrooms and academic workflows, it is imperative to reframe its role, not as a substitute for human judgment, but as a dynamic partner in the co-construction of knowledge (Kohnke et al., 2025). For LMICs, this shift presents a unique opportunity: to build more inclusive, adaptable, and resilient education systems that are both technologically empowered and pedagogically grounded.

From a pedagogical standpoint, overreliance on AI-generated content and responses could attenuate critical thinking and deep cognitive engagement among learners if not carefully scaffolded (Gerlich, 2025; Zhai et al., 2024). Indeed, a recent study found that using AI for essay writing led to reduced brain connectivity, lower cognitive engagement, and weaker performance compared to using search engines or writing without tools (Kosmyna et al., 2025). Compounding this issue, many public educational institutions in LMICs may lack both clear policies and the technological infrastructure needed to detect AI-generated content. Without institutional guidelines or access to reliable detection tools, teachers are left to rely on intuition or inconsistent methods to identify work produced by AI, which can lead to confusion, mistrust, and uneven enforcement of academic integrity. This regulatory gap not only undermines fair assessment practices but also makes it more challenging to cultivate a shared understanding among students and educators about the appropriate and ethical use of AI in learning environments (Azevedo et al., 2024; Chan, 2023). In the absence of proactive policy frameworks, AI use in classrooms risks evolving in ad hoc, unmonitored ways that may further entrench existing educational inequities. Ultimately, generative AI should be understood not as a panacea but as a pedagogical aid, one that requires thoughtful integration, sustained teacher training, and robust human oversight to ensure its benefits are equitably and ethically realized (Kohnke et al., 2025; Wiese et al., 2025).

5 Discussion

The integration of generative AI into educational systems presents a critical challenge for low- and middle-income countries: how can this technology be meaningfully adopted in contexts marked by limited infrastructure, scarce funding, and uneven digital literacy? Although AI holds promise for instructional planning, language accessibility, and classroom support, its broader implications for educators require scrutiny. Generative tools can personalize learning (Merino-Campos, 2025), translate complex content, and increase teacher efficiency (Tan et al., 2025), advantages that are particularly appealing in resource-constrained environments. However, these benefits must be weighed against the socio-technical, infrastructural, and epistemic conditions that shape educational practice. Without serious efforts to localize content, mitigate algorithmic bias, and invest in digital infrastructure, AI may exacerbate rather than alleviate existing inequalities. AI should no longer be treated as a peripheral innovation (Ma et al., 2025); its responsible integration requires deliberate strategies that center both educators and learners.

At the same time, educators in these contexts face long-standing structural and economic challenges. Teachers often work under intense pressure, managing overcrowded classrooms, limited materials, and competing demands on their time (Delprato and Antequera, 2021; Little, 2006; Naparan and Alinsug, 2021). The arrival of AI does not automatically reduce these burdens; in fact, it often introduces new technical and ethical complexities (Eyal, 2025; Nguyen et al., 2023). For AI to be usable and beneficial, teachers need more than digital tools; they need training that supports their ability to engage critically and confidently with AI in the classroom. Building AI literacy requires targeted professional development (Pei et al., 2025). This involves not only technical instruction but also the cultivation of ethical and contextual awareness so that educators can evaluate and adapt AI for their specific teaching environments (Gouseti et al., 2025). To understand what such integration demands in practice, it is helpful to examine the concrete conditions under which many schools in LMICs operate.

In this sense, the case of Honduras may serve to exemplify a broader reality in many LMICs, where schools face severe infrastructural challenges that hinder both learning environments and the effective adoption of educational technologies. The Honduran public education system currently lacks a formal regulatory framework for the use of AI. Yet, there are promising early efforts underway. Initiatives such as teacher symposiums and short-term training programs aim to empower educators and promote the responsible integration of AI into the classroom (Secretaría de Educación de Honduras [Honduran Ministry of Education], 2023). However, 60% of schools in the country require roof repairs, 50% have damaged walls, and 12% do not have electricity. Water supply is often irregular, and many schools lack proper sanitation infrastructure. Only half have sinks, 70% of toilets need repair, and 9% of schools do not have waste disposal systems. Furniture shortages are also widespread, with 60% lacking sufficient chairs (FONAC, 2024). A similar pattern is evident in Sub-Saharan Africa, where studies highlight that limited electrification, deteriorated school buildings, and shortages of basic furniture constrain both teaching quality and student learning (Hassan et al., 2022). These physical deficits severely limit the feasibility of implementing digital tools. Without reliable electricity, safe facilities, and adequate learning environments, the adoption of AI risks remaining a distant aspiration rather than a practical solution. Any meaningful implementation must be accompanied by investment in foundational infrastructure to ensure that technology addresses, rather than deepens, educational inequality.

In conclusion, AI must be reimagined as a tool for equity (Garcia Ramos and Wilson-Kennedy, 2024; Kohnke and Zaugg, 2025), not just efficiency. This means positioning teachers as co-designers and informed decision makers, rather than passive adopters. Policies should prioritize locally developed, adapted, or curated AI systems (Hsu et al., 2022), training programs that build critical digital capacity, and implementation strategies rooted in the material and cultural realities of schools in low- and middle-income countries. Beyond technical training, educators should be supported to question whose knowledge is embedded in AI systems, and whose is omitted. These questions are essential to prevent AI from reinforcing marginalization. The goal is not to replicate the technological trajectories of high-income countries, but to carve out educational futures that are contextually responsive, culturally grounded, and led by those who understand the classroom from within. For teachers in LMICs, this means participating actively in shaping technological change, not just responding to it. However, LMICs are not a uniform group; their economic, social, and political conditions differ significantly, leading to varied educational challenges and outcomes (Local Burden of Disease Educational Attainment Collaborators, 2020). Effective policies must therefore be tailored to each country's specific context and developmental priorities.

A coherent policy agenda for LMICs should focus on three interconnected priorities. First, investment in foundational infrastructure is essential, since reliable electricity, safe facilities, and adequate learning environments determine whether AI tools can be used at all. Second, sustained teacher training is needed to build both technical competence and the critical capacity to evaluate and adapt AI for local pedagogical purposes. Third, clear governance frameworks must guide procurement, data use, and algorithmic accountability so that AI adoption aligns with national educational goals and avoids reinforcing existing inequities.

Generative AI's role in education must also be examined through the lens of systemic readiness and institutional governance. Sustainable integration in LMICs depends on policy coherence, ethical regulation, and the establishment of data governance frameworks that ensure privacy and accountability (Papagiannidis et al., 2025). Governments must prioritize building AI governance capacities aligned with educational goals, including developing national strategies to guide procurement, data use, and algorithmic transparency. At the institutional level, universities and teacher-training colleges can function as key incubators for AI literacy, embedding critical and context-sensitive use of AI tools into teacher education programs (Daher, 2025; Kelley and Wenzel, 2025). Equally important is fostering South–South collaboration, allowing LMICs to share regionally relevant best practices and locally developed tools rather than relying solely on imported technologies shaped by external pedagogical assumptions. When aligned with inclusive policy design and institutional capacity building, AI can move beyond pilot interventions to become an embedded component of equitable educational reform, strengthening local ownership and long-term sustainability (Khan et al., 2024).

Author contributions

ML-B: Writing – original draft, Writing – review & editing.

Funding

The author(s) declared that financial support was received for this work and/or its publication. Funding for the article processing charge (APC) was provided by the National Autonomous University of Honduras.

Conflict of interest

The author declares that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that generative AI was used in the creation of this manuscript. The initial English translation of the manuscript was generated using ChatGPT. Subsequently, Grammarly Premium was used to refine grammar and style. The author then reviewed and revised the translation by comparing it with the original version in Spanish to ensure accuracy and clarity. Additionally, Table 1 was created with the assistance of ChatGPT as an example of a teaching exercise involving generative AI.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Adel, A., Ahsan, A., and Davison, C. (2024). ChatGPT Promises and challenges in education: computational and ethical perspectives. Educ. Sci. 14:814. doi: 10.3390/educsci14080814

Amano, T., González-Varo, J. P., and Sutherland, W. J. (2016). Languages are still a major barrier to global science. PLoS Biol. 14:e2000933. doi: 10.1371/journal.pbio.2000933

Arenas-Castro, H., Berdejo-Espinola, V., Chowdhury, S., Rodríguez-Contreras, A., James, A. R. M., Raja, N. B., et al. (2024). Academic publishing requires linguistically inclusive policies. Proc. R. Soc. B Biol. Sci. 291. doi: 10.32942/X2NS3K

Azevedo, L., Mallinson, D. J., Wang, J., Robles, P., and Best, E. (2024). AI policies, equity, and morality and the implications for faculty in higher education. Public Integrity 1–16. doi: 10.1080/10999922.2024.2414957

Betthäuser, B. A., Bach-Mortensen, A. M., and Engzell, P. (2023). A systematic review and meta-analysis of the evidence on learning during the COVID-19 pandemic. Nat. Human Behav. 7, 375–385. doi: 10.1038/s41562-022-01506-4

Cao, C. C., Chen, E., Fang, Z., Cao, L. Y., Lin, J., and Li, R. (2024). “LLM-generated personalized analogies to foster AI literacy in adult novices,” in International Conference on Computers in Education (Manila: Asia-Pacific Society for Computers in Education). doi: 10.58459/icce.2024.4809

Celik, I., Dindar, M., Muukkonen, H., and Järvelä, S. (2022). The promises and challenges of artificial intelligence for teachers: a systematic review of research. TechTrends 66, 616–630. doi: 10.1007/s11528-022-00715-y

Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. Int. J. Educ. Technol. Higher Educ. 20:38. doi: 10.1186/s41239-023-00408-3

Cheng, A., Calhoun, A., and Reedy, G. (2025). Artificial intelligence-assisted academic writing: recommendations for ethical use. Adv. Simul. 10:22. doi: 10.1186/s41077-025-00350-6

Daher, R. (2025). Integrating AI literacy into teacher education: a critical perspective paper. Disc. Artif. Intell. 5:217. doi: 10.1007/s44163-025-00475-7

Delprato, M., and Antequera, G. (2021). School efficiency in low and middle income countries: an analysis based on PISA for development learning survey. Int. J. Educ. Dev., 80:102296. doi: 10.1016/j.ijedudev.2020.102296

Di Bitetti, M. S., and Ferreras, J. A. (2017). Publish (in English) or perish: the effect on citation rate of using languages other than English in scientific publications. Ambio 46, 121–127. doi: 10.1007/s13280-016-0820-7

Díaz, B., and Nussbaum, M. (2024). Artificial intelligence for teaching and learning in schools: the need for pedagogical intelligence. Comput. Educ. 217:105071. doi: 10.1016/j.compedu.2024.105071

Eyal, L. (2025). Rethinking artificial-intelligence literacy through the lens of teacher educators: the adaptive AI model. Comput. Educ. Open 9:100291. doi: 10.1016/j.caeo.2025.100291

Farooqi, M. T. K., Amanat, I., and Awan, S. M. (2024). Ethical considerations and challenges in the integration of artificial intelligence in education: a systematic review. J. Excell. Manag. Sci. 3, 35–50. doi: 10.69565/jems.v3i4.314

Ferrara, E. (2023). Fairness and bias in artificial intelligence: a brief survey of sources, impacts, and mitigation strategies. Science 6:3. doi: 10.3390/sci6010003

Foka, A., and Griffin, G. (2024). AI cultural heritage, and bias: some key queries that arise from the use of GenAI. Heritage 7, 6125–6136. doi: 10.3390/heritage7110287

FONAC (2024). Informe de Veeduría Social a la Infraestructura Educativa 2022: Resumen Ejecutivo. Available online at: https://fonac.hn/informe-de-veeduria-social-a-la-infraestructura-educativa-2022-resumen-ejecutivo-2/ (accessed on July 02, 2025).

Garcia Ramos, J., and Wilson-Kennedy, Z. (2024). Promoting equity and addressing concerns in teaching and learning with artificial intelligence. Front. Educ. 9. doi: 10.3389/feduc.2024.1487882

Gerlich, M. (2025). AI tools in society: impacts on cognitive offloading and the future of critical thinking. Societies 15:6. doi: 10.3390/soc15010006

Gouseti, A., James, F., Fallin, L., and Burden, K. (2025). The ethics of using AI in K-12 education: a systematic literature review. Technol. Pedag. Educ. 34, 161–182. doi: 10.1080/1475939X.2024.2428601

Gulati, V., Roy, S. G., Moawad, A., Garcia, D., Babu, A., Poot, J. D., and Teytelboym, O. M. (2024). Transcending language barriers: can ChatGPT be the key to enhancing multilingual accessibility in health care? J. Am. Coll. Radiol. 21, 1888–1895. doi: 10.1016/j.jacr.2024.05.009

Gurl, T. J., Markinson, M. P., and Artzt, A. F. (2025). Using ChatGPT as a lesson planning assistant with preservice secondary mathematics teachers. Digit. Exp. Math. Educ. 11, 114–139. doi: 10.1007/s40751-024-00162-9

Hashmi, N., and Bal, A. S. (2024). Generative AI in higher education and beyond. Bus. Horiz. 67, 607–614. doi: 10.1016/j.bushor.2024.05.005

Hassan, E., Groot, W., and Volante, L. (2022). Education funding and learning outcomes in Sub-Saharan Africa: a review of reviews. Int. J. Educ. Res. Open 3:100181. doi: 10.1016/j.ijedro.2022.100181

Hsu, Y.-C., ‘Kenneth' Huang, T.-H., Verma, H., Mauri, A., Nourbakhsh, I., and Bozzon, A. (2022). Empowering local communities using artificial intelligence. Patterns 3:100449. doi: 10.1016/j.patter.2022.100449

Jukiewicz, M. (2024). The future of grading programming assignments in education: the role of ChatGPT in automating the assessment and feedback process. Think. Skills Creativity 52:101522. doi: 10.1016/j.tsc.2024.101522

Kelley, M., and Wenzel, T. (2025). Advancing artificial intelligence literacy in teacher education through professional partnership inquiry. Educ. Sci. 15:659. doi: 10.3390/educsci15060659

Khan, M. S., Umer, H., and Faruqe, F. (2024). Artificial intelligence for low income countries. Humanit. Soc. Sci. Commun. 11:1422. doi: 10.1057/s41599-024-03947-w

Kohnke, L., Zou, D., Ou, A. W., and Gu, M. M. (2025). Preparing future educators for AI-enhanced classrooms: insights into AI literacy and integration. Comput. Educ. Artif. Intell. 8:100398. doi: 10.1016/j.caeai.2025.100398

Kohnke, S., and Zaugg, T. (2025). Artificial intelligence: an untapped opportunity for equity and access in STEM education. Educ. Sci. 15:68. doi: 10.3390/educsci15010068

Kooli, C., and Chakraoui, R. (2025). AI-driven assistive technologies in inclusive education: benefits, challenges, and policy recommendations. Sustain. Fut. 10:101042. doi: 10.1016/j.sftr.2025.101042

Kosmyna, N., Hauptmann, E., Tong Yuan, Y., Situ, J., Liao, X.-H., Vivian Beresnitzky, A., Braunstein, I., and Maes, P. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. Available online at: https://www.media.mit.edu/publications/your-brain-on-chatgpt/ (accessed June 24, 2025)

Li, Y., Raković, M., Srivastava, N., Li, X., Guan, Q., Gašević, D., and Chen, G. (2025). Can AI support human grading? Examining machine attention and confidence in short answer scoring. Comput. Educ. 228:105244. doi: 10.1016/j.compedu.2025.105244

Little, A. W. (2006). “Multigrade lessons for EFA: a synthesis,” in Education for All and Multigrade Teaching (Dordrecht: Springer Netherlands), 301–348. doi: 10.1007/1-4020-4591-3_14

Liu, Y., Chen, L., and Yao, Z. (2022). The application of artificial intelligence assistant to deep learning in teachers' teaching and students' learning processes. Front. Psychol. 13:929175. doi: 10.3389/fpsyg.2022.929175

Local Burden of Disease Educational Attainment Collaborators (2020). Mapping disparities in education across low- and middle-income countries. Nature 577, 235–238. doi: 10.1038/s41586-019-1872-1

Ma, M., Ng, D. T. K., Liu, Z., and Wong, G. K. W. (2025). Fostering responsible AI literacy: a systematic review of K-12 AI ethics education. Comput. Educ. Artif. Intell. 8:100422. doi: 10.1016/j.caeai.2025.100422

Márquez, M. C., and Porras, A. M. (2020). Science communication in multiple languages is critical to its effectiveness. Front. Commun. 5:00031. doi: 10.3389/fcomm.2020.00031

Memarian, B., and Doleck, T. (2023). ChatGPT in education: methods, potentials, and limitations. Comput. Human Behav. Artif. Humans 1:100022. doi: 10.1016/j.chbah.2023.100022

Merino-Campos, C. (2025). The impact of artificial intelligence on personalized learning in higher education: a systematic review. Trends Higher Educ. 4:17. doi: 10.3390/higheredu4020017

Mohd Amin, M. R., Ismail, I., and Sivakumaran, V. M. (2025). Revolutionizing education with artificial intelligence (AI)? Challenges, and implications for open and distance learning (ODL). Soc. Sci. Humanit. Open 11:101308. doi: 10.1016/j.ssaho.2025.101308

Naparan, G. B., and Alinsug, V. G. (2021). Classroom strategies of multigrade teachers. Soc. Sci. Humanit. Open 3:100109. doi: 10.1016/j.ssaho.2021.100109

Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., and Qiao, M. S. (2021). Conceptualizing AI literacy: an exploratory review. Comput. Educ. Artif. Intell. 2:100041. doi: 10.1016/j.caeai.2021.100041

Nguyen, A., Ngo, H. N., Hong, Y., Dang, B., and Nguyen, B.-P. T. (2023). Ethical principles for artificial intelligence in education. Educ. Inf. Technol. 28, 4221–4241. doi: 10.1007/s10639-022-11316-w

Organisation for Economic Cooperation and Development (2024). Education at a Glance 2024: OECD Indicators. Paris: OECD Publishing.

Papagiannidis, E., Mikalef, P., and Conboy, K. (2025). Responsible artificial intelligence governance: a review and research framework. J. Strat. Inf. Syst. 34:101885. doi: 10.1016/j.jsis.2024.101885

Pei, B., Lu, J., and Jing, X. (2025). Empowering preservice teachers' AI literacy: current understanding, influential factors, and strategies for improvement. Comput. Educ. Artif. Intell. 8:100406. doi: 10.1016/j.caeai.2025.100406

Polanco, M. I. F., and Mayorga, C. A. E. (2025). Scientific Production in Central America (1996–2023): bibliometric analysis of regional trends, collaboration, and research impact. Publications 13:44. doi: 10.3390/publications13030044

Ramírez-Castañeda, V. (2020). Disadvantages in preparing and publishing scientific papers caused by the dominance of the English language in science: the case of Colombian researchers in biological sciences. PLoS ONE 15:e0238372. doi: 10.1371/journal.pone.0238372

Salani, J., and Tapfuma, M. M. (2025). Artificial intelligence transforming the publishing industry: a case of the book sector in Africa. Front. Res. Metrics Anal. 10:1504415. doi: 10.3389/frma.2025.1504415

Salido, A., Syarif, I., Sitepu, M. S., Suparjan Wana, P. R., Taufika, R., and Melisa, R. (2025). Integrating critical thinking and artificial intelligence in higher education: a bibliometric and systematic review of skills and strategies. Soc. Sci. Human. Open 12:101924. doi: 10.1016/j.ssaho.2025.101924

Scientific publishing has a language problem (2023). Nat. Hum. Behav. 7, 1019–1020. doi: 10.1038/s41562-023-01679-6

Secretaría de Educación de Honduras [Honduran Ministry of Education] (2023). Lanzan Proyecto Sistema de Tecnologías WOLFRAM Honduras. Available online at: https://www.se.gob.hn/detalle-articulo/1975/ (accessed July 03, 2025).

Smirnova, A., Chun, K. B., Rothman, W. L., and Sarma, S. (2025). “Text simplification for children: evaluating LLMs vis-à-vis human experts,” in Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1–10. doi: 10.1145/3706599.3719889

Sun, Y., Sheng, D., Zhou, Z., and Wu, Y. (2024). AI hallucination: towards a comprehensive classification of distorted information in artificial intelligence-generated content. Human. Soc. Sci. Commun. 11:1278. doi: 10.1057/s41599-024-03811-x

Tan, X., Cheng, G., and Ling, M. H. (2025). Artificial intelligence in teaching and teacher professional development: a systematic review. Comput. Educ. Artif. Intell. 8:100355. doi: 10.1016/j.caeai.2024.100355

Tao, Y., Viberg, O., Baker, R. S., and Kizilcec, R. F. (2024). Cultural bias and cultural alignment of large language models. PNAS Nexus 3:346. doi: 10.1093/pnasnexus/pgae346

van Kolfschooten, H., and Pilottin, A. (2024). Reinforcing stereotypes in health care through artificial intelligence—generated images: a call for regulation. Mayo Clin. Proc. Digit. Health 2, 335–341. doi: 10.1016/j.mcpdig.2024.05.004

Wiese, L. J., Patil, I., Schiff, D. S., and Magana, A. J. (2025). AI ethics education: a systematic literature review. Comput. Educ. Artif. Intell. 8:100405. doi: 10.1016/j.caeai.2025.100405

Yue Yim, I. H. (2024). A critical review of teaching and learning artificial intelligence (AI) literacy: developing an intelligence-based AI literacy framework for primary school education. Comput. Educ. Artif. Intell. 7:100319. doi: 10.1016/j.caeai.2024.100319

Zhai, C., Wibowo, S., and Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students' cognitive abilities: a systematic review. Smart Learn. Environ. 11:28. doi: 10.1186/s40561-024-00316-7

Keywords: artificial intelligence, ChatGPT, education, GenAI, low and middle income countries, teachers

Citation: Landa-Blanco M (2026) Artificial intelligence in education: applications and limitations for teachers in low- and middle-income countries. Front. Educ. 10:1681836. doi: 10.3389/feduc.2025.1681836

Received: 07 August 2025; Revised: 10 December 2025;
Accepted: 15 December 2025; Published: 09 January 2026.

Edited by:

Emine Kılavuz, Nuh Naci Yazgan University, Türkiye

Reviewed by:

Ranilson Oscar Araújo Paiva, Federal University of Alagoas, Brazil
Burcu Oralhan, Nuh Naci Yazgan University, Türkiye

Copyright © 2026 Landa-Blanco. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Miguel Landa-Blanco, miguel.landa@unah.edu.hn
