REVIEW article

Front. Comput. Sci., 05 February 2026

Sec. Human-Media Interaction

Volume 8 - 2026 | https://doi.org/10.3389/fcomp.2026.1759027

AI and the digital divide in education

  • Department of Business Management, University of Limpopo, Polokwane, South Africa

Abstract

Artificial intelligence (AI) is an indispensable tool transforming education systems worldwide through complex intelligent tutoring systems and automated, prompt-driven administrative operations. Yet while AI appears to make learning easier, it can also create unfair advantages for some learners or students, owing to the languages AI tools use, cultural mismatches between AI developers and users, and algorithmic bias. We conducted a focused narrative and comparative review of case studies from countries with similar socio-economic statuses and a visible divide between urban and rural areas/regions. From the analysed case studies, we argue that despite its benefits, AI also contributes to a digital divide, whether intentionally or unintentionally, for the reasons above. We conclude by proposing a practical path toward inclusivity: integrating local languages, cultures, and communities into the development and translation of AI tools; making the algorithms used in AI tools inclusive; designing AI tools participatorily; making AI multilingual; and, importantly, training teachers in AI.

Introduction

Coined in 1955, the term artificial intelligence (AI) denotes the science and engineering of making intelligent machines (McCarthy et al., 2006). AI is a prominent technology that has changed the way people learn over the past decade by drastically changing learning pedagogies, styles, and techniques (Garzón et al., 2025; Krause et al., 2024; Vieriu and Petrea, 2025). In this context, AI means a computer system that enables learning by making education more adaptive, accessible, and efficient for students and educators. It involves tools like adaptive learning platforms, intelligent tutoring systems, and automated grading, all designed to support diverse students’ needs and improve engagement (McCarthy et al., 2006; Shannon, 1950). AI is defined as complex computing systems that carry out tasks ordinarily requiring human intelligence, including pattern recognition, decision-making, natural language processing, and problem-solving (Baidoo-Anu and Owusu Ansah, 2023; McCarthy et al., 2006; Shannon, 1950). AI algorithms developed directly from large datasets help such systems not only to better match learner needs and automate repetitive tasks (Hargittai, 2007a, 2007b, 2018), but also to provide insights relevant to pedagogical decision-making in any discipline for a given student (Sallam et al., 2023). Bahroun et al. (2023) found that OpenAI’s ChatGPT is one of the best-known Generative Pre-trained Transformers (GPTs), and it has shaped and continues to shape education in the global arena. The use of AI in a myriad of tools and applications is omnipresent in today’s classrooms and organizations. Intelligent tutoring systems (ITS), such as Carnegie Learning or Squirrel AI, track learners’ performance and offer personalised explanations and activities (Al-Zahrani, 2023).

Adaptive learning platforms, such as but not limited to Knewton and Smart Sparrow, vary the complexity or type of content according to a learner’s engagement, leading to individualized learning journeys (Jiang and Pang, 2023). AI-powered learning management systems (LMS), such as Canvas and Moodle plugins, can alert teachers to students likely to disengage so that they can act promptly. On the other hand, students use various natural language processing tools, such as Grammarly, Quill, and AI writing assistants, to receive constructive feedback on their writing as they revise. Additionally, automated administrative tools, which can feature student support chatbots and predictive analytics scheduling systems, can help diminish the institution’s workload and enhance its communication systems (Bahroun et al., 2023; Laupichler et al., 2022; MacNeil et al., 2022). However, these predictions might be biased: some learners are flagged based on their language or cultural nuances rather than actual poor performance, and learners may likewise be failed or considered incompetent for similarly discriminatory reasons. As a result, negative implications of AI implementation for ethics and academic integrity have been found (Bahroun et al., 2023; Kooli, 2023; Rahman and Watanobe, 2023).

AI should change learning for all learners across the globe (Luckin et al., 2016; Selwyn and Facer, 2013); however, that is not always the case (Garzón et al., 2025; Krause et al., 2024). AI in higher education brings with it several concerns, specifically issues of access, algorithmic bias and discrimination (Al-Zahrani, 2024b; Bigman et al., 2022; Elliott and Soifer, 2022; Hargittai, 2007a, 2007b, 2018; Johnson, 2021; Wang et al., 2022), and ethical considerations (Resseguier and Rodrigues, 2020; Stahl et al., 2021), all of which, if not addressed, lead to the digital divide (Van Dijk, 2017). According to Van Dijk (2017), the digital divide is the gap between people who do and do not have access to forms of information and communication. This gap appears to be widest between learners from rich countries and regions and those from poor countries and regions in terms of access to AI technologies that enable learning (Cotilla Conceição and van der Stappen, 2025; Druga et al., 2022). However, it should be noted that a lack of access to AI tools is not AI’s fault but rather the result of other systemic, socio-economic, infrastructural, or political issues (Bahroun et al., 2023). The gist of this review rests on the notion that even where everyone has equal access to AI tools, the tools can still discriminate against some users linguistically, culturally, and through algorithmic bias. AI integration in education is not a matter of neutrality or equal benefit (Filippucci, 2024; O’Neil, 2016; Vieriu and Petrea, 2025); equal benefit can only be realised in the absence of such discriminatory issues.
However, if such discrimination issues persist, learners from rich countries and regions (where these tools are developed and whose languages they use) are more likely to benefit from AI in their education than those from marginalised, poor regions, an injustice rooted in decades-old structural injustices (Cotilla Conceição and van der Stappen, 2025; Druga et al., 2022). The inevitable inclusion of AI in the learning or education system does not wholly benefit everyone as intended (Mokoena and Seeletse, 2025). The digital divide that leads to unequal access has been experienced across the globe, and this review argues that if AI benefits only a few privileged learners or students, its efficacy is compromised (Mokoena and Seeletse, 2025). Using critical, theoretical, and conceptual lenses, this review aims to argue that AI might bring more harm than benefit through its biased algorithms and cultural and linguistic insensitivity. This perpetuates a lack of equity, social injustices, and questionable ethical issues around AI, despite Pedro et al. (2019) and Sato et al. (2024) arguing that AI in education can be used to improve educational equity and quality. According to Mac Fadden et al. (2024), AI should provide more equitable and inclusive educational experiences for all, which can be achieved by considering the above matters when adopting AI.

AI-enabled education benefits

AI in education offers significant potential for personalized learning (McCarthy et al., 2006; Shannon, 1950; Van Dijk, 2017). Specifically, AI-powered education leads to improved academic outcomes and enhanced student engagement (Hennekeuser et al., 2024; Vieriu and Petrea, 2025). The potential benefits of AI for learners are extensive, and they depend on the accuracy of the AI algorithm and its cultural and linguistic sensitivity (Filippucci, 2024; O’Neil, 2016; Vieriu and Petrea, 2025). AI in education influences students’ academic development by offering a mix of opportunities and challenges (Edtech, 2020). It also provides tailored guidance, support, and feedback based on individual learning patterns and knowledge levels (Hwang et al., 2020). AI has the potential to revolutionize education (Holmes et al., 2019) and address the diverse needs of learners (Vieriu and Petrea, 2025; Al-Sowaidi and Clarke, 2025). More importantly, AI in education improves learners’ self-efficacy and fosters a more positive attitude toward their education (Johnson and Smith, 2019). Fitas et al. (2025) noted that while AI can be detrimental through language barriers, cultural mismatches, and algorithmic bias, it is nonetheless possible to benefit from it in education.

Types of digital divides in AI-powered education

Digital inequalities do not derive simply from the presence or absence of technology, as many studies and researchers have noted (Bigman et al., 2022; Johnson, 2021; Wang et al., 2022). Instead, they spring from deeper language barriers and cultural and algorithmic biases that shape the interaction between learners and the AI tool (Mac Fadden et al., 2024). While cultural bias refers to interpreting phenomena based on the norms of one’s own culture, algorithmic bias occurs when a computer system systematically produces unfair outcomes, often as a result of embedding human cultural biases into the system’s design and training data (Fitas et al., 2025; Seaver, 2017). To assess how AI can either shrink or widen existing educational gaps, it is essential to understand these dynamics.

Language and cultural mismatches

AI educational technologies are predominantly designed for English or other major international languages, with limited accommodation for multilingual or Indigenous linguistic and cultural contexts. This linguistic and cultural bias reduces the relevance and pedagogical effectiveness of AI systems for diverse learner populations, thereby reproducing exclusion through design rather than access alone (Mac Fadden et al., 2024). When AI tools are developed in a language unfamiliar to learners, those learners are bound to struggle compared with learners whose language was used to develop the tools. One solution is to translate AI tools accurately into local languages, creating a level playing field for everyone.

Algorithmic bias and unequal outcomes

AI systems trained on datasets that underrepresent specific populations generate biased predictions, provide less appropriate feedback, and misinterpret students’ work, thereby producing what is conceptualised as a “third-level digital divide,” one concerned not with access, but with who ultimately benefits from AI (Hargittai, 2007a, 2007b, 2018). In educational contexts, these unequal outcomes manifest through differential quality of feedback and recommendations, with students from lower-income communities more likely to receive less accurate or less supportive guidance, reinforcing disadvantage rather than ameliorating it (Bulathwela et al., 2024). Empirical research demonstrates that such outcomes are not incidental but structurally embedded in the design and governance of AI systems. Algorithmic bias operates across multiple stages, including data collection, feature selection, algorithm design, implementation, and institutional policy environments. When these processes lack ethical oversight and accountability, AI systems actively shape educational trajectories by introducing new, opaque barriers to assessment, progression, and opportunity (Baker and Hawn, 2022; Boateng and Boateng, 2025). Scholars have further identified distinct forms of algorithmic discrimination, such as proxy bias, disparate impact, and biased targeting, that systematically disadvantage marginalised groups defined by race, gender, disability, socioeconomic status, and geographic location (Wang et al., 2024; Baker and Hawn, 2022). Importantly, bias can also be reinforced through patterns of user interaction, allowing inequities to compound over time rather than dissipate (Chinta et al., 2024). Taken together, this evidence challenges the assumption that AI is a neutral educational tool. Instead, without deliberate commitments to inclusive design, transparency, and governance, AI risks reproducing and intensifying existing social hierarchies. 
Algorithmic bias thus constitutes a structural threat to educational equity, requiring systemic intervention if AI is to function as a tool for inclusion rather than exclusion. Including local communities in the development of AI tools can help minimise algorithmic bias.
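The mechanism described above can be made concrete with a small, self-contained Python simulation. This is our own illustrative sketch, not drawn from any cited study: the group labels, sample sizes, and the assumed score shift (e.g., a language-familiarity effect on test scores) are all hypothetical. A simple threshold “model” fitted to training data that under-represents one group learns the majority group’s score scale and then produces systematically less accurate predictions for the under-represented group.

```python
import random

random.seed(0)

def make_students(n, group, score_shift):
    # Synthetic students: latent 'ability' determines passing, but the
    # observed 'score' is shifted for one group (an illustrative stand-in
    # for linguistic/cultural mismatch in the assessment instrument).
    data = []
    for _ in range(n):
        ability = random.uniform(0, 1)
        score = ability + score_shift + random.gauss(0, 0.05)
        data.append({"group": group, "score": score, "passed": ability > 0.5})
    return data

# Training data under-represents group B (90% A, 10% B).
train = make_students(900, "A", 0.0) + make_students(100, "B", -0.3)

def error(th, data):
    # Fraction of students whose pass/fail prediction (score > threshold)
    # disagrees with their true outcome.
    return sum((d["score"] > th) != d["passed"] for d in data) / len(data)

# "Model": pick the score threshold that minimises training error.
# Because group A dominates the data, the learned threshold fits A's scale.
candidates = [i / 100 for i in range(-50, 151)]
threshold = min(candidates, key=lambda th: error(th, train))

# Evaluate on balanced test sets: the under-represented group receives
# systematically less accurate predictions.
acc_a = 1 - error(threshold, make_students(1000, "A", 0.0))
acc_b = 1 - error(threshold, make_students(1000, "B", -0.3))
print(f"threshold={threshold:.2f}  accuracy A={acc_a:.2f}  accuracy B={acc_b:.2f}")
```

Running the sketch shows a markedly lower accuracy for the under-represented group B, mirroring the differential quality of feedback discussed above: capable group-B students are misclassified as failing simply because the model never learned their score scale.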

Based on the above section, we developed the conceptual framework below to depict how the digital divide in AI occurs. The conceptual framework also draws on the theoretical framework, which is discussed first below.

Theoretical framework

AI adoption in education is shaped by three interrelated dimensions. The digital access and skills divide emphasizes that mere access to devices and connectivity is insufficient; learners and educators also require digital literacy and institutional support to engage meaningfully with AI (van Dijk, 2006, 2020; Hargittai, 2007a, 2007b, 2018; Selwyn, 2016). The algorithmic divide highlights that AI systems can reproduce social inequities due to biases in non-representative datasets and design choices, disproportionately affecting marginalized learners (Buolamwini and Gebru, 2018; Barocas and Selbst, 2016). Finally, socio-cultural and institutional mediation shape how AI interacts with pedagogy, curricula, and community contexts. Equitable AI integration requires simultaneous attention to access, algorithmic fairness, and human/institutional capacity. When aligned, these dimensions promote inclusive learning; when neglected, they risk reinforcing existing educational and socio-economic disparities.

Conceptual framework: AI-related digital divides in education

Below is a concise conceptual framework synthesizing the linguistic and cultural divide and the algorithmic divide, as seen in Figure 1.

Figure 1

AI tools often lack support for local languages and culturally relevant content, which limits inclusivity and engagement for multilingual and Indigenous communities and reduces the pedagogical relevance of AI-enabled learning (Mac Fadden et al., 2024). Consequently, learners from underrepresented linguistic and cultural groups are excluded from the benefits of AI, perpetuating systemic inequities. The algorithmic divide implies that biases embedded in AI systems arise from non-representative datasets and algorithm design. These biases produce skewed predictions, unequal feedback, and differential learning support, disproportionately affecting marginalized students (Bulathwela et al., 2024), reinforcing pre-existing social hierarchies and creating inequitable educational outcomes. Overall, the conceptual framework aims to show that the linguistic/cultural and algorithmic divides are interconnected: an AI tool that fails to recognize local languages or contexts may simultaneously rely on biased datasets, compounding exclusion. Students from marginalized communities face a “double barrier”: limited relevance of content and biased feedback or recommendations, which together exacerbate educational inequalities. This framework emphasizes that equitable AI integration in education depends on simultaneous attention to cultural, linguistic, and algorithmic factors. Without addressing these dimensions in tandem, AI risks reinforcing rather than alleviating structural inequalities.

Although research on artificial intelligence in education is expanding, existing studies largely emphasise technological access and innovation, with limited theoretical integration. Digital divide theories are frequently cited but rarely applied systematically to analyse AI-related educational inequalities. In particular, second- and third-level digital divides, relating to skills, meaningful use, and unequal educational outcomes, remain underexplored. Moreover, algorithmic, linguistic, cultural, and institutional dimensions of inequality are often examined in isolation, limiting cross-contextual comparison and policy relevance. This study addresses these gaps by applying a multi-level digital divide framework consistently across case studies, enabling a theory-driven and outcome-focused synthesis of how AI both reproduces and reshapes educational inequalities.

Methodology

This study adopted a focused narrative review to examine AI and the digital divide in education, concentrating on countries with similar socio-economic statuses and a clear divide between rural and urban regions, such as South Africa, China, Bangladesh, and others. According to Sukhera (2022) and Greenhalgh et al. (2018), a narrative review can include a wide variety of studies and provide an overall summary, with interpretation and critique, unlike the systematic reviews that focus on a narrow question in a specific context, with a prespecified method to synthesize findings from similar studies. This review adopted a structured narrative approach to examine the role of artificial intelligence (AI) in shaping digital divides within educational contexts. Although the review does not claim full systematic review status, established review principles were applied to enhance transparency, rigor, and reproducibility.

Database selection

The literature was sourced from three major academic databases: Scopus, Web of Science, and Google Scholar. These databases were selected due to their comprehensive coverage of peer-reviewed research in education, technology, and the social sciences, as well as their capacity to capture interdisciplinary and emerging scholarship on AI in education, as advised by Gusenbauer and Gauster (2025).

Search strategy

A keyword-based search strategy was employed using combinations of terms related to AI, education, and digital inequality. Core search terms included artificial intelligence, AI in education, generative AI, ChatGPT, digital divide, digital inequality, algorithmic bias, linguistic divide, and cultural bias. Boolean operators were used to refine searches (e.g., “AI in education” AND “digital divide”). Searches were limited to English-language publications published between 2016 and 2025 and included peer-reviewed journal articles and policy-relevant reports (Gusenbauer and Gauster, 2025; Shaheen et al., 2023).

Screening and eligibility

Retrieved records were screened in stages. Titles and abstracts were first reviewed to assess relevance to AI applications in education and their relationship to equity, access, or inclusion. Full texts of potentially relevant studies were then examined to confirm alignment with the review focus as advised by Shaheen et al. (2023). Studies that addressed only technical aspects of AI without educational or equity implications were excluded.

Case study inclusion criteria

Case studies were selected based on their explicit engagement with AI use in educational settings and their analysis of digital divide dimensions, including access, linguistic exclusion, cultural bias, algorithmic bias, and institutional capacity. Preference was given to cases drawn from contexts characterised by structural inequality, particularly in the Global South, to enable meaningful cross-contextual comparison (Gusenbauer and Gauster, 2025; Munn et al., 2014; Shaheen et al., 2023).

Data synthesis

The final set of studies and cases was synthesised using thematic narrative analysis, allowing for the identification of recurring patterns and divergences across contexts (Munn et al., 2014). The review process followed a PRISMA-informed flow (identification, screening, eligibility, and inclusion) to support clarity and methodological transparency.

Although this review does not claim full PRISMA compliance due to its narrative design, the database selection, search strategy, and inclusion criteria were documented to enhance transparency and reproducibility (Shaheen et al., 2023), as shown in Table 1.

Table 1

Review stage | Description
Identification | Records identified through database searching (Scopus, Web of Science, Google Scholar)
Screening | Titles and abstracts screened for relevance to AI in education and digital divides
Eligibility | Full-text articles assessed for alignment with equity, access, and inclusion criteria
Exclusion | Studies focusing solely on technical AI development or lacking educational relevance
Inclusion | Peer-reviewed articles and policy-relevant case studies (2016–2025) included in synthesis

PRISMA-informed flow of literature selection.

Table 1 summarises the PRISMA-informed screening process used to identify and select literature for this narrative review. It outlines the sequential stages of identification, screening, eligibility assessment, and inclusion, together with approximate record counts at each stage. The table illustrates how an initially broad pool of studies retrieved from multiple databases was progressively refined through title and abstract screening and full-text assessment, resulting in a final set of studies and case examples included in the narrative synthesis. The use of approximate counts reflects the iterative and interpretive nature of a narrative review, while still providing transparency and enhancing the reproducibility of the literature selection process.

Methodological limitations

Narrative reviews, while valuable for providing broad overviews of a topic, have inherent limitations that must be addressed to enhance their reliability. One key concern is the potential for author bias in the selection of studies, alongside a degree of subjectivity that can affect the interpretation of results. Additionally, the absence of a standardized methodology often leads to inconsistencies, which can make these reviews less reproducible and harder to evaluate critically than systematic reviews, sometimes resulting in uncertain conclusions or an overstated presentation of findings (Gusenbauer and Gauster, 2025; Shaheen et al., 2023). To mitigate these issues, we adopted several strategies: establishing clear, explicitly stated criteria for study selection to reduce bias; incorporating a more structured approach to data synthesis, which, even in a narrative format, provides a more comprehensive overview and minimizes the risk of overlooking key studies; and balancing qualitative insights with a robust examination of the available evidence, so that the review contributes meaningful perspectives while respecting the complexities of the topic at hand (Greenhalgh et al., 2018; Sukhera, 2022).

Global cases on the digital divide of education

AI for education: adoption in Brazil (cultural misfits)

A recent study on AI adoption and the digital divide in education found that AI enhances accessibility and efficiency in the Brazilian educational sector (Samuel-Okon and Abejide, 2024). AI was found to improve educational outcomes by providing personalized learning experiences. However, deploying these technologies faces substantial challenges in the country due to, amongst others, cultural resistance (Samuel-Okon and Abejide, 2024).

AI4D Maseno University (Kenya) (language translation)

The tool developed by Maseno University to translate between English and Kenyan Sign Language exemplifies the potential of AI to foster inclusive education through community-driven design. By directly involving deaf communities in the development process, the initiative ensures that the technology is both contextually appropriate and responsive to users’ actual needs. Beyond facilitating communication between Deaf and hearing students, the project addresses systemic barriers in classroom participation, enabling equitable access to learning. This approach underscores a critical principle in AI for education: co-design and participatory methodologies are essential to avoid solutions that inadvertently reinforce marginalization. It also illustrates how AI can act as an enabler of social inclusion when applied with deliberate attention to user diversity and cultural context.

RobotsMali (Mali) (algorithm bias removed)

RobotsMali’s production of over 180 children’s books in Bambara demonstrates the transformative potential of AI in supporting local language literacy and preserving cultural heritage. By combining AI-assisted content generation, machine translation, and human editing, the project drastically reduced production costs and timelines while ensuring cultural relevance. This initiative addresses a dual challenge prevalent in many African contexts: the scarcity of educational materials in indigenous languages and the risk of linguistic homogenization in education systems dominated by colonial or global languages. By leveraging AI to produce accessible, culturally appropriate content (unbiased algorithm), RobotsMali exemplifies how AI can bridge resource gaps and support both educational and cultural sustainability.

GPE KIX STEPS project (Benin, Cameroon, DRC) (culturally sensitive pedagogy)

The project integrated AI with Open Educational Resources to produce culturally relevant STEM textbooks for primary schools. AI assisted in drafting, localizing, and translating content aligned with national curricula, enhancing engagement and learning outcomes while demonstrating the importance of pairing AI with pedagogical strategies and teacher capacity-building. This approach highlights how AI can support scalable content creation while enhancing contextual relevance, improving both student engagement and learning outcomes. Furthermore, by embedding AI into teacher and student resources, the project demonstrates the importance of capacity-building alongside technological innovation, ensuring that AI is not merely a tool but part of a broader strategy to enhance pedagogy and educational quality.

Primary education in urban vs. rural schools

A comparative study of primary mathematics teachers in post-pandemic China found a “TPACK divide” in Technological Pedagogical Content Knowledge and attitudes toward technology between urban and rural teachers (Ahiaku et al., 2025). Rural teachers generally did not feel comfortable using digital technology in their teaching owing to limited professional development and opportunity (Knowles et al., 2023; Mateko et al., 2025; Mudhau and Sikhosana, 2023). Teacher capacity is a significant component of the digital divide: it is not only students’ access that matters, but also whether teachers can intentionally use technology for pedagogical benefit (Faloye and Ajayi, 2022; Nyahodza and Higgs, 2017). In this Chinese case study, the problem was neither access nor infrastructure readiness, but the lack of professional empowerment to teach learners using digital platforms. This pertinent issue is currently neglected in many poor regions and countries.

Higher education in Latin America during COVID-19

An examination of Latin American universities during the pandemic found large digital inequalities in the use of technology: frequency, satisfaction, and perceived capability and access to digital tools among university students (Díez and Gajardo, 2020; García-Martín and García-Sánchez, 2022). The transition to online learning exacerbated existing gaps (Expósito and Marsolier, 2020). Students of lower socio-economic status reported less secure access, less confidence in digital skills, and less satisfaction with digital learning, suggesting that digital inequity is not just about access but also about knowledge and meaningful use (Blackman et al., 2020; Gonzales, 2017). Just as in the China case study above, Latin America experienced not only a rapid COVID-19 transition without adequate infrastructure but also a lack of knowledge and use of the AI systems, leading to dissatisfied learners and further digital inequities.

STEM education in BRICS countries

A policy-relevant case in the BRICS context also highlighted that a high proportion of students in Brazil, India, South Africa, and other BRICS states lack the skills relevant to the digital economy, in STEM and digital literacy respectively (Maisiri and Madzikanda, 2024). The economic legacy of educational divides in the digital age has longer-term effects: failure to connect digital literacy and STEM education leaves emerging economies’ students behind in global digital labour markets (Qureshi and Qureshi, 2021). Lack of access and exposure to high-tech STEM machinery and AI technologies, for both learners and teachers, disadvantages learners in STEM; hence, BRICS countries (Brazil, India, South Africa) experience a shortage of STEM students compared with other countries. This shortage is caused by the digital divide, not by the mental capacities of the students in these countries (Table 2).

Table 2

Case study/context: AI4D – Maseno University (Kenya)
Level of education: Higher education
Core AI/digital issue: Language translation (English ↔ Kenyan Sign Language)
Type of divide addressed: Linguistic and cultural divide
Key intervention/approach: Community co-design with deaf users; participatory AI development
Outcomes: Inclusive classroom participation; equitable access for deaf learners
Key contrast: The AI4D case study demonstrates inclusive AI design when users are involved; involving the community and local languages reduced bias in higher education.

Case study/context: RobotsMali (Mali)
Level of education: Early childhood and primary
Core AI/digital issue: AI-assisted content creation in Bambara
Type of divide addressed: Linguistic divide and algorithmic bias
Key intervention/approach: AI + human editing; culturally grounded datasets
Outcomes: Reduced cost/time; culturally relevant local-language books
Key contrast: The RobotsMali case study in early childhood development shows that AI can become a crucial cultural-preservation tool when bias is mitigated.

Case study/context: GPE KIX STEPS (Benin, Cameroon, DRC)
Level of education: Primary education (STEM)
Core AI/digital issue: AI-supported OER development
Type of divide addressed: Cultural and pedagogical divide
Key intervention/approach: AI localisation + teacher capacity-building
Outcomes: Improved engagement and curriculum alignment
Key contrast: By training teachers on AI tools and localisation, improved student engagement and curriculum alignment through AI can be achieved.

Case study/context: Urban vs. rural primary education (China)
Level of education: Primary education (mathematics)
Core AI/digital issue: Digital pedagogy and AI use
Type of divide addressed: Capacity and skills divide (TPACK)
Key intervention/approach: Limited teacher professional development
Outcomes: Underuse of technology in rural schools
Key contrast: Limited teacher capacity created inequity in AI-enabled education.

Case study/context: Higher education during COVID-19 (Latin America)
Level of education: Higher education
Core AI/digital issue: Online learning adoption
Type of divide addressed: Socio-economic and skills divide
Key intervention/approach: Rapid transition without adequate preparation
Outcomes: Lower satisfaction and confidence among disadvantaged students
Key contrast: AI tools were available, but a lack of knowledge of and confidence in their use led to inequity.

Case study/context: STEM education in BRICS countries
Level of education: Secondary and higher education
Core AI/digital issue: AI, STEM, and digital literacy
Type of divide addressed: Skills and economic divide
Key intervention/approach: Policy focus on STEM without adequate access/training
Outcomes: STEM shortages; labour market exclusion
Key contrast: Political influence or poorly focused AI policies lead to digital divides.

Comparative table: AI, digital divides, and educational inequities across contexts.

A cross-case analysis reveals both convergences and contradictions in how AI interventions address digital divides across educational contexts, as well as several persistent gaps in theory, practice, and outcomes, as seen in Table 2.

Key contradictions

First, there is a clear contradiction between participatory and top-down AI design approaches. The AI4D (Kenya) and RobotsMali cases demonstrate that community co-design and local language integration can significantly reduce linguistic and cultural bias, leading to inclusive educational outcomes. In contrast, the urban–rural primary education case in China and the higher education cases during COVID-19 in Latin America illustrate that AI and digital tools introduced without adequate teacher preparation or contextual adaptation result in underuse, learner dissatisfaction, and widened inequalities. This contrast highlights that AI effectiveness is contingent on human and institutional capacity, rather than technological availability alone.

Second, the cases reveal a tension between short-term efficiency gains and long-term equity outcomes. RobotsMali shows that AI-assisted content creation can reduce costs and production time while preserving cultural relevance. However, the BRICS STEM education case demonstrates that policy-driven AI and STEM expansion without parallel investment in access and training leads to skills shortages and labour market exclusion. This contradiction suggests that efficiency-focused AI deployment does not automatically translate into equitable educational or economic outcomes. Third, while teacher capacity-building is shown to be critical in the GPE KIX STEPS case, this emphasis is not consistently present across other interventions. The absence of sustained professional development in the China and Latin America cases contrasts sharply with the positive outcomes observed where AI is integrated with pedagogy, indicating a fragmented application of second-level digital divide principles.

Identified gaps

Despite valuable insights, several gaps persist across the cases, as seen in Table 3. First, third-level digital divides (unequal learning outcomes, academic progression, and long-term opportunities) are only partially examined. Most cases report immediate improvements in access or engagement, but longitudinal evidence on how AI affects sustained educational trajectories remains limited. Second, institutional and governance dimensions of AI adoption are underdeveloped. While community- and teacher-level interventions are highlighted, few cases systematically examine ethical frameworks, data governance, or national AI education policies, leaving unanswered questions about scalability and sustainability.

Table 3

Theoretical framework | Core construct | Evidence from case studies (as reported in the manuscript) | Analytical interpretation
First-level digital divide | Differential access to AI infrastructure | Case studies show uneven availability of devices, internet connectivity, and AI-enabled platforms across educational institutions | Structural inequalities in infrastructure limit initial exposure to AI tools, reinforcing pre-existing educational disparities
Second-level digital divide | Digital and AI-related skills | Evidence highlights gaps in teacher training, learner digital literacy, and pedagogical capacity to integrate AI effectively | Access alone is insufficient; limited skills constrain meaningful engagement with AI in teaching and learning
Third-level digital divide | Unequal educational outcomes | Case studies demonstrate varied learning gains, assessment advantages, and academic opportunities resulting from AI use | AI amplifies advantage for already-resourced groups, producing unequal outcomes despite nominal access
Linguistic digital divide | Language bias embedded in AI systems | Empirical examples reveal reduced accuracy and relevance of AI tools for learners using non-dominant or indigenous languages | Linguistic bias limits inclusivity and diminishes the pedagogical value of AI for marginalised language communities
Cultural digital divide | Cultural misalignment of AI-generated content | Case studies report curriculum–AI mismatches and culturally inappropriate content | AI systems reproduce dominant cultural norms, reducing contextual relevance and learner engagement
Institutional digital divide | Governance, policy, and organisational readiness | Across cases, weak institutional policies and limited ethical guidance shape uneven AI adoption | Institutional capacity mediates how AI either mitigates or exacerbates digital inequality

Theory-to-evidence matrix linking digital divide frameworks to case study findings.

Third, there is a gap in comparative measurement. Outcomes are reported using context-specific indicators, making it difficult to directly compare effectiveness across regions and education levels. This limits the ability to generalise findings or translate them into global policy guidance. Finally, although linguistic and cultural divides are well addressed in some Global South contexts, algorithmic bias in commercial or large-scale AI systems remains insufficiently examined, particularly in middle- and high-income settings where such systems are increasingly adopted.

Taken together, these contradictions and gaps indicate that AI does not inherently reduce digital divides. Instead, equitable outcomes depend on participatory design, sustained capacity-building, and institutional readiness. The uneven application of digital divide theory across cases underscores the need for a theory-driven, multi-level analytical framework, a gap that this study explicitly addresses.

To ensure theoretical consistency, a theory-to-evidence matrix was used to interpret the case studies (Noyes et al., 2019), explicitly linking digital divide constructs to observed patterns of AI access, use, and educational outcomes (see Table 3). The matrix links digital divide theoretical constructs to specific case study evidence, making explicit how theory informs interpretation. The first- and second-level divides capture structural and skills-based barriers, highlighting that mere access to AI tools does not ensure meaningful use. The third-level digital divide illustrates how unequal outcomes emerge even when access exists, particularly disadvantaging marginalized learners. The linguistic and cultural divides show how AI’s language and content design can unintentionally reproduce systemic inequities, while the institutional divide emphasizes the mediating role of governance, policy, and teacher readiness in determining AI’s impact. By aligning each theoretical construct with empirical examples from the manuscript, the matrix demonstrates that AI is not inherently equitable and that educational outcomes depend on the intersection of access, skills, cultural relevance, and institutional capacity. This approach provides a clear, theory-informed lens for interpreting diverse case studies and supports actionable recommendations for inclusive AI integration in education.

A proposed, practical solution or framework for an inclusive AI training

Based on the analysis of the case studies and empirical literature, we propose the following practical solutions for inclusive AI tool development and training, aimed at reducing the digital divide beyond questions of access alone.

Development of AI tools with multilingual and culturally relevant content

In line with the conceptual framework, the following examples illustrate practical solution 1.

Linguistic and cultural divide

(Framework component: linguistic and cultural exclusion → reduced relevance and engagement)

Practical examples

(a) Multilingual intelligent tutoring systems (ITS)

An AI mathematics tutor deployed in rural South African schools is designed to operate in Xitsonga, Sepedi, isiZulu, and English. Learners can switch languages at any stage of the lesson, while examples use locally familiar contexts, such as livestock counting, market trading, or water collection. Mathematical problems are thus cognitively accessible without the added burden of language translation, supporting conceptual understanding rather than rote memorisation.
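The language-switching behaviour described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not code from any deployed tutor: the item bank, language codes, and session object are all invented for the example, and real translations would be supplied by local educators rather than hard-coded.

```python
from dataclasses import dataclass

# Hypothetical language codes: English, isiZulu, Sepedi, Xitsonga.
SUPPORTED = {"en", "zu", "nso", "ts"}

@dataclass
class TutorSession:
    """Tracks a learner's current language; switching is allowed at any lesson stage."""
    language: str = "en"

    def switch_language(self, code: str) -> None:
        if code not in SUPPORTED:
            raise ValueError(f"unsupported language: {code}")
        self.language = code

# Each item stores one prompt per language. Placeholders mark where
# educator-supplied translations (not machine output alone) would go.
ITEM_BANK = {
    "counting_01": {
        "en": "A herder counts 12 goats in the morning and 9 at dusk. How many are missing?",
        "zu": "[isiZulu version supplied by local educators]",
        "nso": "[Sepedi version supplied by local educators]",
        "ts": "[Xitsonga version supplied by local educators]",
    },
}

def render_item(item_id: str, session: TutorSession) -> str:
    """Return the prompt in the learner's current language, falling back to English."""
    versions = ITEM_BANK[item_id]
    return versions.get(session.language, versions["en"])
```

The design point is that the mathematics (the item identifier and its logic) is stored once, while surface language is a per-learner session property that can change mid-lesson without restarting the exercise.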

(b) Local-language AI writing assistants

A writing-support AI is developed to assist learners in drafting essays first in their home language (e.g., Setswana or Bambara) before gradually scaffolding translation into English or French. Instead of penalising non-standard grammar, the tool provides feedback on structure, argument coherence, and idea development, aligning with culturally grounded narrative styles. This approach validates linguistic identity while strengthening academic literacy.

(c) Culturally contextualised AI chatbots

An AI chatbot used for homework support is trained on national curricula and culturally relevant examples, such as indigenous agricultural practices or local history. When learners ask questions, the chatbot responds using familiar analogies and culturally appropriate metaphors, improving comprehension and learner engagement.

These examples demonstrate that multilingual and culturally responsive AI reduces cognitive exclusion, enhances engagement, and promotes epistemic justice by recognising learners’ lived experiences (Al-Zahrani, 2024b; Bigman et al., 2022; Bulathwela et al., 2024; Elliott and Soifer, 2022; Hargittai, 2007a, 2007b, 2018).

Use of diverse, representative datasets to mitigate algorithmic bias

Algorithmic divide

(Framework component: biased algorithms → unequal feedback and learning opportunities)

Practical examples

(a) Fair adaptive learning algorithms

An adaptive learning platform used in STEM education is trained on datasets that include urban and rural learners, multilingual users, and students with varying levels of digital literacy. Performance benchmarks are disaggregated by language proficiency and schooling context, ensuring that learners are not incorrectly classified as “low-performing” due to language or access constraints rather than actual ability.
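Disaggregating performance benchmarks, as described above, is straightforward to operationalise. The sketch below is a simplified audit with invented subgroup labels and an arbitrary tolerance: it computes accuracy per schooling context and flags subgroup pairs whose gap exceeds the tolerance, one simple way to surface misclassification driven by language or access rather than ability.

```python
from collections import defaultdict

def disaggregated_accuracy(records):
    """Accuracy per subgroup; records are (subgroup, predicted, actual) triples."""
    totals, correct = defaultdict(int), defaultdict(int)
    for group, pred, actual in records:
        totals[group] += 1
        correct[group] += int(pred == actual)
    return {g: correct[g] / totals[g] for g in totals}

def flag_gaps(acc_by_group, max_gap=0.10):
    """List subgroup pairs whose accuracy differs by more than max_gap."""
    groups = sorted(acc_by_group)
    return [
        (a, b, round(abs(acc_by_group[a] - acc_by_group[b]), 3))
        for i, a in enumerate(groups)
        for b in groups[i + 1:]
        if abs(acc_by_group[a] - acc_by_group[b]) > max_gap
    ]
```

If, say, the platform's classifier is far less accurate for rural multilingual learners than for urban peers of equal ability, this kind of audit exposes the gap before automated placements are acted on.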

(b) Bias-aware language models

A language-learning AI incorporates text data from local textbooks, indigenous literature, oral histories, and community-generated content, rather than relying solely on global or Western corpora. Human reviewers from local communities validate the outputs to ensure cultural accuracy and to avoid stereotypes or cultural misrepresentation.

(c) Participatory dataset curation

Teachers and community members contribute anonymised learner data, examples, and feedback into the AI system, allowing datasets to evolve. This human-in-the-loop approach ensures continuous correction of biased outputs and promotes ethical accountability.
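A human-in-the-loop curation pipeline of this kind can be modelled as a simple review queue. The class and field names below are illustrative assumptions, not a real system: contributions wait in a pending state, and only items approved by a human reviewer enter the training dataset.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Contribution:
    text: str
    contributor: str         # anonymised upstream, e.g. "teacher_01"
    status: str = "pending"  # pending -> approved | rejected

@dataclass
class CurationQueue:
    pending: List[Contribution] = field(default_factory=list)
    dataset: List[str] = field(default_factory=list)

    def submit(self, text: str, contributor: str) -> Contribution:
        """A teacher or community member proposes an example or correction."""
        c = Contribution(text, contributor)
        self.pending.append(c)
        return c

    def review(self, contribution: Contribution, approve: bool) -> None:
        """A human reviewer decides; only approved text reaches the training set."""
        contribution.status = "approved" if approve else "rejected"
        if approve:
            self.dataset.append(contribution.text)
        self.pending.remove(contribution)
```

The explicit review step is what makes the loop accountable: every rejected item leaves an audit trail, and the dataset can only grow through a human decision.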

By diversifying datasets and embedding human oversight, AI systems become more equitable, transparent, and responsive to underrepresented learners, preventing the reproduction of historical and structural biases (Baidoo-Anu and Owusu Ansah, 2023; Chakraoui and Kooli, 2025; Mac Fadden et al., 2024).

Capacity-building for educators and learners to engage critically with AI tools

Capacity and institutional readiness

(Framework component: moderating factor → human and institutional capacity)

Practical examples

(a) Teacher professional development in AI pedagogy

Teachers participate in short, modular training programmes that focus not only on how to use AI tools, but also on how to question and evaluate them. For example, educators learn how to:

  • Identify algorithmic bias in adaptive learning systems

  • Interpret AI-generated feedback critically

  • Integrate AI into lesson plans aligned with curriculum outcomes

This empowers teachers to use AI as a pedagogical aid rather than a replacement for professional judgement.

(b) Learner AI literacy workshops

Students are trained to understand:

  • What AI can and cannot do

  • How AI systems generate responses

  • Ethical issues such as plagiarism, data privacy, and over-reliance

For instance, learners practise comparing AI-generated answers with textbook explanations, encouraging critical thinking and metacognitive awareness rather than passive consumption.

(c) Community-based train-the-trainer models

Local educators and youth leaders receive advanced AI training and then mentor peers within their schools or communities. This model promotes local ownership, reduces dependency on external experts (Ahiaku et al., 2025), and ensures sustainability, especially in rural or under-resourced contexts.

Capacity-building transforms AI from a technical intervention into a socially embedded educational practice, ensuring that both educators and learners can engage with AI ethically, critically, and meaningfully (Bahroun et al., 2023; Laupichler et al., 2022; MacNeil et al., 2022).

Conclusion

AI’s contribution to education is multidimensional; in this study it was addressed through the lens of AI tool design, which tends to marginalise students from other countries and regions through language, cultural, and algorithmic biases overlooked by designers. The issue is not merely infrastructure and access to AI tools, but the format and design of tools that fail to consider local nuances (van Dijk, 2006, 2020; Hargittai, 2007a, 2007b, 2018; Selwyn, 2016), thereby perpetuating social inequities (Buolamwini and Gebru, 2018; Barocas and Selbst, 2016). AI presents considerable potential for advancing educational equity, but this promise is only realized when its implementation is carefully aligned with local contexts and the capacities of individuals. To truly harness AI for educational justice and social inclusion, it is essential to prioritize participatory design (Baidoo-Anu and Owusu Ansah, 2023; Chakraoui and Kooli, 2025; Mac Fadden et al., 2024), which ensures that the voices of local communities are heard and integrated into the development of AI tools. Additionally, robust teacher training programs must be established to empower educators with the skills and confidence needed to use AI effectively in their classrooms (Bahroun et al., 2023; Laupichler et al., 2022; MacNeil et al., 2022). Emphasizing multilingual support (Al-Zahrani, 2024b; Bigman et al., 2022; Bulathwela et al., 2024; Elliott and Soifer, 2022; Hargittai, 2007a, 2007b, 2018) will further enhance accessibility and engagement, allowing diverse learners to benefit fully from AI innovations.
By focusing on these elements (participatory design, comprehensive teacher training, and multilingual resources), future initiatives can avoid the risk of deepening educational stratification and instead foster an equitable environment where all students can thrive through personalized learning (McCarthy et al., 2006; Shannon, 1950; Van Dijk, 2017), improved academic outcomes, and enhanced student engagement (Hennekeuser et al., 2024; Vieriu and Petrea, 2025) made possible by AI. Future research and policy initiatives must shift toward a context-sensitive and human-centred approach to AI integration in education.

Statements

Author contributions

MoM: Writing – review & editing, Writing – original draft, Conceptualization. AN: Writing – review & editing, Conceptualization, Writing – original draft. MaM: Writing – original draft, Writing – review & editing, Conceptualization.

Funding

The author(s) declared that financial support was not received for this work and/or its publication.

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that Generative AI was used in the creation of this manuscript. ChatGPT and Grammarly were used, but all output was rephrased by the authors; the document is 0% AI-detectable, and the similarity index is below 10%.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

  • 1

    Ahiaku, P. K. A., Uleanya, C., and Muyambi, G. C. (2025). Rural schools and tech use for sustainability: the challenge of disconnection. Educ. Inf. Technol. 30, 12557–12571. doi: 10.1007/s10639-024-13311-9

  • 2

    Al-Sowaidi, B., and Clarke, A. (2025). AI-digital divide in Yemeni and South African higher education: towards an inclusive policy-oriented approach. IntechOpen, 1–39. doi: 10.5772/intechopen.1012099

  • 3

    Al-Zahrani, A. M. (2023). The impact of generative AI tools on researchers and research: implications for academia in higher education. Innov. Educ. Teach. Int. doi: 10.1080/14703297.2023.2271445

  • 4

    Al-Zahrani, A. M. (2024b). Unveiling the shadows: beyond the hype of AI in education. Heliyon 10:e30696. doi: 10.1016/j.heliyon.2024.e30696

  • 5

    Bahroun, Z., Anane, C., Ahmed, V., and Zacca, A. (2023). Transforming education: a comprehensive review of generative artificial intelligence in educational settings through bibliometric and content analysis. Sustainability 15:12983. doi: 10.3390/su151712983

  • 6

    Baidoo-Anu, D., and Owusu Ansah, L. (2023). Education in the era of generative artificial intelligence (AI): understanding the potential benefits of ChatGPT in promoting teaching and learning. SSRN 7, 52–62. Available online at: https://ssrn.com/abstract=4337484 (Accessed December 29, 2025).

  • 7

    Baker, R. S., and Hawn, A. (2022). Algorithmic bias in education. Int. J. Artif. Intell. Educ. 32, 1052–1092. doi: 10.1007/s40593-021-00285-9

  • 8

    Barocas, S., and Selbst, A. D. (2016). Big data’s disparate impact. Calif. Law Rev. 104, 671–732. doi: 10.2139/ssrn.2477899

  • 9

    Bigman, Y. E., Wilson, D., Arnestad, M. N., Waytz, A., and Gray, K. (2022). Algorithmic discrimination causes less moral outrage than human discrimination. J. Exp. Psychol. Gen. 152, 4–27. doi: 10.1037/xge0001250

  • 10

    Blackman, A. M., Ibáñez, A. M., Izquierdo, A., Keefer, P., Moreira, M. M., Schady, N., et al. (2020). La política pública frente al COVID-19: recomendaciones para América Latina y el Caribe [Public policy in response to COVID-19: recommendations for Latin America and the Caribbean]. Inter-American Development Bank.

  • 11

    Boateng, O., and Boateng, B. (2025). Algorithmic bias in educational systems: examining the impact of AI-driven decision making in modern education. World J. Adv. Res. Rev. 25, 2012–2017. doi: 10.30574/wjarr.2025.25.1.0253

  • 12

    Bulathwela, S., Pérez-Ortiz, M., Holloway, C., Cukurova, M., and Shawe-Taylor, J. (2024). Artificial intelligence alone will not democratise education: on educational inequality, techno-solutionism and inclusive tools. Sustainability 16:781. doi: 10.3390/su16020781

  • 13

    Buolamwini, J., and Gebru, T. (2018). Gender shades: intersectional accuracy disparities in commercial gender classification. Proc. Mach. Learn. Res. 81, 77–91.

  • 14

    Chakraoui, R., and Kooli, C. (2025). AI-driven assistive technologies in inclusive education: benefits, challenges, and policy recommendations. Sustain. Futures 10:101042. doi: 10.1016/j.sftr.2025.101042

  • 15

    Chinta, S. V., Wang, Z., Palikhe, A., Zhang, X., Kashif, A., Smith, M. A., et al. (2024). AI-driven healthcare: a review on ensuring fairness and mitigating bias. arXiv preprint arXiv:2407.19655. doi: 10.48550/arXiv.2407.19655

  • 16

    Cotilla Conceição, J. M., and van der Stappen, E. (2025). The impact of AI on inclusivity in higher education: a rapid review. Educ. Sci. 15:1255. doi: 10.3390/educsci15091255

  • 17

    Díez, E. J., and Gajardo, K. (2020). Educar y evaluar en tiempos de coronavirus: la situación en España [Educating and assessing in times of coronavirus: the situation in Spain]. Multidiscip. J. Educ. Res. 10, 102–134. doi: 10.17583/remie.2020.5604

  • 18

    Druga, S., Otero, N., and Ko, A. J. (2022). “The landscape of teaching resources for AI education,” in Proceedings of the 27th ACM Conference on Innovation and Technology in Computer Science Education, Vol. 1, 96–102.

  • 19

    EdTech (2020). Successful AI examples in higher education that can inspire our future. EdTech Mag. Available online at: https://edtechmagazine.com/higher/article/2020/01/successful-ai-examples-higher-education-can-inspire-our-future?utm_source=chatgpt.com (Accessed January 9, 2026).

  • 20

    Elliott, D., and Soifer, E. (2022). AI technologies, privacy, and security. Front. Artif. Intell. 5:826737. doi: 10.3389/frai.2022.826737

  • 21

    Expósito, E., and Marsolier, R. (2020). Virtualidad y educación en tiempos de COVID-19: un estudio empírico en Argentina [Virtuality and education in times of COVID-19: an empirical study in Argentina]. Educ. Humanismo 22, 1–22. doi: 10.17081/eduhum.22.39.4214

  • 22

    Faloye, S. T., and Ajayi, N. (2022). Understanding the impact of the digital divide on South African students in higher educational institutions. Afr. J. Sci. Technol. Innov. Dev. 14, 1734–1744. doi: 10.1080/20421338.2021.1983118

  • 23

    Filippucci, F. (2024). “The impact of artificial intelligence on productivity, distribution and growth: key mechanisms, initial evidence and policy challenges,” OECD Artificial Intelligence Papers. Paris: OECD Publishing. doi: 10.1787/8d900037-en

  • 24

    Fitas, R., Ghosh, K., and Maity, S. (2025). “Leveraging AI in education: benefits, responsibilities, and trends,” in AI Roles and Responsibilities in Education (Cham: Springer Nature Switzerland), 129–169.

  • 25

    García-Martín, J., and García-Sánchez, J. N. (2022). The digital divide of know-how and use of digital technologies in higher education: the case of a college in Latin America in the COVID-19 era. Int. J. Environ. Res. Public Health 19:3358. doi: 10.3390/ijerph19063358

  • 26

    Garzón, J., Patiño, E., and Marulanda, C. (2025). Systematic review of artificial intelligence in education: trends, benefits, and challenges. Multimodal Technol. Interact. 9:84. doi: 10.3390/mti9080084

  • 27

    Gonzales, A. L. (2017). Disadvantaged minorities’ use of the internet to expand their social networks. Commun. Res. 44, 467–486. doi: 10.1177/0093650214565925

  • 28

    Greenhalgh, T., Thorne, S., and Malterud, K. (2018). Time to challenge the spurious hierarchy of systematic over narrative reviews? Eur. J. Clin. Investig. 48:e12931. doi: 10.1111/eci.12931

  • 29

    Gusenbauer, M., and Gauster, S. P. (2025). How to search for literature in systematic reviews and meta-analyses: a comprehensive step-by-step guide. Technol. Forecast. Soc. Change 212:123833. doi: 10.1016/j.techfore.2024.123833

  • 30

    Hargittai, E. (2007a). Whose space? Differences among users and non-users of social network sites. J. Comput.-Mediat. Commun. 13, 276–297. doi: 10.1111/j.1083-6101.2007.00396.x

  • 31

    Hargittai, E. (2007b). Whose space? Differences in user-generated content by social class and internet skills. First Monday 12. doi: 10.5210/fm.v12i1.1728

  • 32

    Hargittai, E. (2018). “Digital inequality,” in The Oxford Handbook of Internet Studies. ed. W. Dutton (Oxford University Press).

  • 33

    Hargittai, E. (2018). Potential biases in big data: omitted voices on social media. Soc. Sci. Comput. Rev. 38, 10–24. doi: 10.1177/0894439318788322

  • 34

    Hennekeuser, D., Vaziri, D. D., Golchinfar, D., Schreiber, D., and Stevens, G. (2024). Enlarged education: exploring the use of generative AI to support lecturing in higher education. Int. J. Artif. Intell. Educ. 35, 1096–1128. doi: 10.1007/s40593-024-00424-y

  • 35

    Holmes, W., Bialik, M., and Fadel, C. (2019). Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. Center for Curriculum Redesign, 137.

  • 36

    Hwang, G. J., Xie, H., Wah, B. W., and Gasevic, D. (2020). Vision, challenges, roles, and research issues of artificial intelligence in education. Comput. Educ. Artif. Intell. 1:100001. doi: 10.1016/j.caeai.2020.100001

  • 37

    Jiang, C., and Pang, Y. (2023). Enhancing design thinking in engineering students with project-based learning. Comput. Appl. Eng. Educ. doi: 10.1002/cae.22608

  • 38

    Johnson, A., and Smith, B. (2019). The impact of personalized learning on student attitudes and self-efficacy in mathematics. Educ. Technol. Res. Dev. 38, 201–218.

  • 39

    Johnson, G. M. (2021). Algorithmic bias: on the implicit biases of social technology. Synthese 198, 9941–9961. doi: 10.1007/s11229-020-02696-y

  • 40

    Knowles, C., James, A., Khoza, L., Mtwa, Z., Roboji, M., and Shivambu, M. (2023). Problematising South African higher education inequalities exposed during COVID-19: students’ perspectives. Crit. Stud. Teach. Learn. 11, 1–21. doi: 10.14426/cristal.v11i1.668

  • 41

    Kooli, C. (2023). Chatbots in education and research: a critical examination of ethical implications and solutions. Sustainability 15:5614. doi: 10.3390/su15175614

  • 42

    Krause, S., Panchal, B. H., and Ubhe, N. (2024). The evolution of learning: assessing the transformative impact of generative AI on higher education. arXiv preprint arXiv:2404.10551. doi: 10.1007/s44366-025-0058-7

  • 43

    Laupichler, M. C., Aster, A., Schirch, J., and Raupach, T. (2022). Artificial intelligence literacy in higher and adult education: a scoping review. Comput. Educ. Artif. Intell. 3:100101. doi: 10.1016/j.caeai.2022.100101

  • 44

    Luckin, R., Holmes, W., Griffiths, M., and Forcier, L. B. (2016). Intelligence Unleashed: An Argument for AI in Education, 18.

  • 45

    Mac Fadden, I., García-Alonso, E.-M., and López-Meneses, E. (2024). Science mapping of AI as an educational tool exploring digital inequalities: a sociological perspective. Multimodal Technol. Interact. 8:106. doi: 10.3390/mti8120106

  • 46

    MacNeil, S., Tran, A., Mogil, D., Bernstein, S., Ross, E., and Huang, Z. (2022). “Generating diverse code explanations using the GPT-3 large language model,” in Proceedings of the 2022 ACM Conference on International Computing Education Research (Lugano, Switzerland: ACM), 37–39. doi: 10.1145/3501709.3544280

  • 47

    Maisiri, J., and Madzikanda, T. S. (2024). Bridging the digital divide: fostering STEM education for digital economy leadership. J. BRICS Stud. 3, 61–67. doi: 10.36615/gdw31696

  • 48

    Mateko, F. M., Dowelani, M., and Sinamano, R. (2025). Digital inequality and transformation in South African higher education during COVID-19: a comparative analysis of historically disadvantaged and historically advantaged universities. High. Educ. Policy, 1–19. doi: 10.1057/s41307-025-00416-0

  • 49

    McCarthy, J., Minsky, M. L., Rochester, N., and Shannon, C. E. (2006). A proposal for the Dartmouth summer research project on artificial intelligence. AI Mag. 27, 12. Available online at: http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html

  • 50

    Mokoena, O. P., and Seeletse, S. M. (2025). AI in rural classrooms: challenges and perspectives from South African educators. Int. J. Curr. Educ. Stud. 4, 30–52. doi: 10.46328/ijces.199

  • 51

    Mudhau, A., and Sikhosana, L. (2023). Technology integration during teaching and learning: the COVID-19 context. ResearchGate. Available online at: https://www.researchgate.net/publication/374101655

  • 52

    Munn, Z., Tufanaru, C., and Aromataris, E. (2014). JBI’s systematic reviews: data extraction and synthesis. Am. J. Nurs. 114, 49–54. doi: 10.1097/01.NAJ.0000451683.66447.89

  • 53

    Noyes, J., Booth, A., Moore, G., Flemming, K., Tunçalp, Ö., and Shakibazadeh, E. (2019). Synthesising quantitative and qualitative evidence to inform guidelines on complex interventions: clarifying the purposes, designs and outlining some methods. BMJ Glob. Health 4:e000893. doi: 10.1136/bmjgh-2018-000893

  • 54

    Nyahodza, L., and Higgs, R. (2017). Towards bridging the digital divide in post-apartheid South Africa: a case of a historically disadvantaged university in Cape Town. S. Afr. J. Libr. Inf. Sci. 83, 39–48. doi: 10.7553/83-1-1645

  • 55

    O’Neill, E. (2016). How is the accountancy and finance world using artificial intelligence? Available online at: https://www.icas.com/ca-today-news/how-accountancy-and-finance-are-using-artificial-intelligence

  • 56

    Pedro, F., Subosa, M., Rivas, A., and Valverde, P. (2019). Artificial intelligence in education: challenges and opportunities for sustainable development. UNESCO. Available online at: https://unesdoc.unesco.org/ark:/48223/pf0000366994

  • 57

    Qureshi, A., and Qureshi, N. (2021). Challenges and issues of STEM education. Adv. Mob. Learn. Educ. Res. 1, 146–161. doi: 10.25082/amler.2021.02.009

  • 58

    Rahman, M. M., and Watanobe, Y. (2023). ChatGPT for education and research: opportunities, threats, and strategies. Appl. Sci. 13:5783. doi: 10.3390/app13115783

  • 59

    Resseguier, A., and Rodrigues, R. (2020). AI ethics should not remain toothless! A call to bring back the teeth of ethics. Big Data Soc. 7:205395172094254. doi: 10.1177/2053951720942541

  • 60

    Sallam, M., Salim, N., Barakat, M., and Al-Tammemi, A. (2023). ChatGPT applications in medical, dental, pharmacy, and public health education: a descriptive study. Narra J. 3:e103.

  • 61

    Samuel-Okon, A. D., and Abejide, O. (2024). Bridging the digital divide: exploring the role of artificial intelligence and automation in enhancing connectivity in developing nations. 26, 165–177. doi: 10.2139/ssrn.4839447

  • 62

    Sato, E., Shyyan, V., Chauhan, S., and Christensen, L. (2024). Putting AI in fair: a framework for equity in AI-driven learner models and inclusive assessments. J. Meas. Eval. Educ. Psychol. 15, 263–281. doi: 10.21031/epod.1526527

  • 63

    Seaver, N. (2017). Algorithms as culture: some tactics for the ethnography of algorithmic systems. Big Data Soc. 4:2053951717738104. doi: 10.1177/2053951717738104

  • 64

    Selwyn, N. (2016). Education and Technology: Key Issues and Debates. London, UK: Bloomsbury.

  • 65

    Selwyn, N., and Facer, K. (2013). The Politics of Education and Technology: Conflicts, Controversies, and Connections. New York: Palgrave Macmillan. doi: 10.1057/9781137031983

  • 66

    Shaheen, N., Shaheen, A., Ramadan, A., Hefnawy, M. T., Ramadan, A., Ibrahim, I. A., et al. (2023). Appraising systematic reviews: a comprehensive guide to ensuring validity and reliability. Front. Res. Metr. Anal. 8:1268045. doi: 10.3389/frma.2023.1268045

  • 67

    Shannon, C. E. (1950). Programming a computer for playing chess. Philos. Mag. 41, 256–275.

  • 68

    Stahl, B. C., Antoniou, J., Ryan, M., Macnish, K., and Jiya, T. (2021). Organisational responses to the ethical issues of artificial intelligence. AI Soc. 37, 23–37. doi: 10.1007/s00146-021-01148-6

  • 69

    Sukhera, J. (2022). Narrative reviews in medical education: key steps for researchers. J. Grad. Med. Educ. 14, 418–419. doi: 10.4300/JGME-D-22-00481.1

  • 70

    van Dijk, J. (2006). Digital divide research, achievements, and shortcomings. Poetics 34, 221–235. doi: 10.1016/j.poetic.2006.05.004

  • 71

    Van Dijk, J. (2017). “Digital divide: impact of access,” in The International Encyclopedia of Media Effects (Wiley Online), 1–11.

  • 72

    van Dijk, J. (2020). The Digital Divide. Cambridge, UK: Polity Press.

  • 73

    Vieriu, A. M., and Petrea, G. (2025). The impact of artificial intelligence (AI) on students’ academic development. Educ. Sci. 15:343. doi: 10.3390/educsci15030343

  • 74

    Wang, C., Wang, K., Bian, A. A., Islam, M. R., Keya, K. N., Foulds, J. R., et al. (2022). “Do humans prefer debiased AI algorithms?” in Proceedings of the ACM Conference, 1–10. doi: 10.1145/3490099.3511108

  • 75

    Wang, S., Wang, F., Zhu, Z., Wang, J., Tran, T., and Du, Z. (2024). Artificial intelligence in education: a systematic literature review. Expert Syst. Appl. 252:124167. doi: 10.1016/j.eswa.2024.124167


Keywords

algorithm bias, artificial intelligence (AI), cultural bias, digital divide, education, inclusivity in education, language barriers

Citation

Matjie MA, Nethavhani A and Matlakala M (2026) AI and the digital divide in education. Front. Comput. Sci. 8:1759027. doi: 10.3389/fcomp.2026.1759027

Received

02 December 2025

Revised

09 January 2026

Accepted

12 January 2026

Published

05 February 2026

Volume

8 - 2026

Edited by

Sergio Luján-Mora, University of Alicante, Spain

Reviewed by

Musawer Hakimi, Osmania University, India

Andreea Dragomir, Lucian Blaga University of Sibiu, Romania

Dech-siri Nopas, Kasetsart University, Thailand


Copyright

*Correspondence: Mokgata Alleen Matjie,

