OPINION article

Front. Educ., 08 December 2025

Sec. Teacher Education

Volume 10 - 2025 | https://doi.org/10.3389/feduc.2025.1720706

This article is part of the Research Topic: Navigating Artificial Intelligence (AI) and Digital Learning Innovation: Transforming Pedagogy in Social Sciences and Humanities.

Learning, meaning, and teaching in the age of AI: communication development checkpoints

  • 1Faculty of Medicine, Ain Shams University, Cairo, Egypt
  • 2Academic Department, Rabdan Academy, Abu Dhabi, United Arab Emirates

1 Introduction

Generative AI tools increasingly reshape how communication skills are learned, practiced, and assessed. Processes such as drafting, paraphrasing, summarizing, translation, and feedback—once central to students' cognitive and disciplinary development—are now widely automated. These shifts influence not only academic habits but also the epistemic norms that govern knowledge production in higher education.

Recent research highlights both the possibilities and risks of AI-mediated communication learning. Students report improved efficiency yet reduced cognitive engagement when interacting with text-generating systems (Park et al., 2023; Cotos et al., 2023). Parallel studies raise concerns about the ethical, cultural, and epistemic implications of language-model-based tools in educational ecosystems (van Dis et al., 2023).

This commentary introduces four communication development checkpoints. These checkpoints are theoretically anchored in Vygotsky's sociocultural theory, Bruner's meaning-making traditions, dialogic pedagogy, literacy development research, and the technological–pedagogical–content knowledge (TPACK) framework (Mishra and Koehler, 2006). Examples are deliberately drawn from multilingual and Global South contexts, where AI's impact is magnified by linguistic diversity, uneven digital literacies, and the dominance of Western-trained language models (Khanna et al., 2023; Bali et al., 2024).

This clarification is important in light of recent editorial feedback suggesting the need for a clearer stance. The stance is this: communication development is a human process, and AI should be integrated in ways that preserve, not erode, the cognitive, relational, and ethical work of meaning-making. Rather than adopting a binary 'good vs. bad' narrative, the commentary argues for balanced appropriation, where AI is a tool to enrich learning but never the site where learning happens.

This argument holds particular relevance for educational contexts in the Global South, including Egypt, where disparities in digital access, linguistic diversity, and infrastructural constraints uniquely shape how AI tools are appropriated in classrooms (Zawacki-Richter et al., 2019). Egyptian universities—like many across the Middle East and Africa—are witnessing accelerated AI adoption without parallel growth in critical AI literacy or institutional policy. These pressures make it crucial for educators to have conceptual tools to guide judgment rather than blanket encouragement or prohibition.

This commentary, therefore, speaks directly to educators, curriculum designers, and policy makers who seek practical orientation rather than ideological debate. The checkpoints introduced below are intended to support concrete pedagogical decisions about AI-mediated teaching and learning.

2 Communication development checkpoints

2.1 Foundational literacy

Foundational literacy depends on decoding, vocabulary growth, syntactic awareness, and the gradual construction of semantic networks. Developmental literacy frameworks emphasize the importance of effortful cognitive engagement during reading and meaning-making. Although AI tools can scaffold early literacy through leveled texts and adaptive vocabulary modeling, recent evidence suggests that frequent reliance on AI paraphrasing or summarization can reduce syntactic processing, weaken inference-making, and diminish deeper semantic engagement (Park et al., 2023; Khanna et al., 2023).

In multilingual learning environments, AI's normalization of dominant English varieties may also dilute learners' use of local linguistic repertoires and translanguaging practices, which are essential for cognitive flexibility. Effective pedagogy therefore requires positioning AI as a supplement that enhances exposure and differentiation rather than as a substitute for the cognitive work of literacy development.

2.2 Dialogic discussion

Dialogic learning involves reciprocal, unpredictable exchanges that require shared grounding and emotional attunement. AI chatbots can reduce performance anxiety and promote rehearsal of conversational patterns. For instance, Lau and Tsai (2025) show that chatbot-mediated rehearsal reduces cognitive load and increases readiness for participation. Complementary findings show that conversational agents can stimulate dialogic preparation but cannot replicate relational dynamics or genuine co-construction of meaning (Wang et al., 2024; Seth and Anwardeen, 2024).

Students often treat AI-mediated interaction as transactional rather than relational. Educators should thus frame AI-based dialogue as preparatory and reflective—an opportunity for analyzing tone, turn-taking, communicative pragmatics, and cultural nuance before engaging in human-to-human discussion.

2.3 Academic writing

Academic writing requires recursive drafting, argumentation, metacognitive monitoring, and engagement with disciplinary discourse. While AI systems offer support for structure, coherence, and precision, empirical evidence indicates that premature reliance on AI-generated text compresses the cognitive labor required for revision and idea development (Kim et al., 2024). Students may conflate surface fluency with conceptual depth, weakening their ability to construct arguments independently.

This checkpoint emphasizes using AI outputs as objects of critique rather than as substitutes for original writing. A first-year writing programme, for example, used AI-generated drafts to teach rhetorical analysis and strengthen feedback literacy (Park et al., 2023; Cotos et al., 2023). This approach aligns with composition theory, which supports metacognition and intellectual ownership.

2.4 Professional and intercultural communication

Professional communication requires intercultural awareness, rhetorical adaptability, and collaborative digital literacy. AI translation, tone-modeling, and summarization tools are particularly attractive in multilingual and Global South contexts, where linguistic diversity is high. Yet studies show that AI systems often homogenize language and flatten culturally embedded meaning, resulting in tonal mismatches or misrepresentations of local pragmatics (Khanna et al., 2023; Bali et al., 2024).

To support communicative adaptability, educators can design tasks where students critique AI translations, identify cultural distortions, and recontextualize messages for diverse audiences. These practices foster intercultural competence rather than dependence on automated correctness.

3 Risks and tensions

The uneven integration of AI tools risks widening disparities between students with strong digital literacies and those with limited access, particularly in Global South institutions (Khanna et al., 2023). Large language models trained predominantly on Western datasets may reinforce epistemic hierarchies and marginalize non-Western linguistic forms (Bali et al., 2024; Williamson and Kizilcec, 2023). At the same time, prohibiting AI entirely may reduce opportunities for students to develop critical AI literacy, a capability increasingly essential for academic and professional environments.

A balanced approach to AI governance is therefore needed—one that maintains core communication processes while enabling guided, reflective engagement.

4 Ethical concerns and pedagogical implications

4.1 Algorithmic bias

AI tools embody algorithmic biases grounded in their training data. These biases can manifest in differential error rates, tonal misalignments, cultural misinterpretations, and skewed evaluative feedback (Williamson and Kizilcec, 2023; Seth and Anwardeen, 2024). In multilingual classrooms, these biases may disadvantage learners whose linguistic identities fall outside high-resource language norms.

4.2 Data privacy

Educational use of external AI platforms raises concerns regarding data sovereignty, consent, long-term storage, and secondary use of student-generated text. Systematic reviews highlight substantial privacy vulnerabilities and insufficient institutional protections for educational data (Zawacki-Richter et al., 2019).

4.3 Teacher agency

Teacher agency remains central to mediating AI's instructional role. Over-automation risks shifting cognitive authority to AI tools, eroding professional judgment and pedagogical autonomy. Recent research shows that AI adoption can subtly reshape teacher identity and classroom power dynamics (Colman and Koole, 2025). Educators need targeted professional development to critically evaluate AI, diagnose its limitations, and design learning experiences that foreground reasoning, creativity, and relational communication.

5 Recommendations

To preserve developmental integrity while leveraging AI's affordances, institutions should:

1. Integrate dual-literacy frameworks combining communication development with AI literacy across curricula.

2. Implement reflective writing tasks in which students critique and revise AI-generated outputs.

3. Strengthen teacher preparation on ethical integration, algorithmic bias, culturally responsive AI mediation, and process-oriented assessment.

4. Develop institutional policies clarifying transparency expectations, authorship norms, and data privacy protections.

6 Conclusion

AI's expanding presence in higher education demands more than enthusiasm or caution—it requires developmental judgment. This commentary advances a clear and actionable position: AI should be integrated only in ways that strengthen, rather than replace, the human processes of meaning-making at the heart of communication development. The four communication checkpoints proposed—foundational literacy, dialogic discussion, academic writing, and professional/intercultural communication—offer educators a practical framework for evaluating when AI enriches learning and when it risks undermining it.

For educators working in Egypt, the Arab region, and the wider Global South, the developmental perspective is especially critical. These contexts face distinctive challenges: multilingual classrooms, infrastructural disparities, uneven AI literacy, and persistent algorithmic bias against non-Western linguistic and cultural norms. By embedding these realities into the analysis, this commentary positions the checkpoints as context-sensitive tools that help educators navigate AI adoption in environments where both opportunities and vulnerabilities are amplified.

The aim has been to influence how educators think and act—not by advocating technological optimism or resistance, but by equipping teachers, curriculum designers, and policy makers with a framework that preserves the reflective, dialogic, and ethical dimensions of learning. Teaching in the age of AI requires renewed attention to the human work of communication: interpretation, negotiation, and relational meaning-making. When AI is integrated transparently, intentionally, and developmentally, it can support these aims without displacing them.

In moving forward, institutions should adopt policies and pedagogies that elevate teacher agency, protect student data, reduce algorithmic bias, and cultivate dual literacy—communication literacy and AI literacy—as essential graduate outcomes. If this balance is achieved, AI will not redefine education; rather, educators will define how AI is meaningfully and ethically woven into educational practice.

Author contributions

SA: Writing – original draft, Conceptualization, Methodology, Investigation, Writing – review & editing.

Funding

The author(s) declare that no financial support was received for the research and/or publication of this article.

Conflict of interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The author(s) declared that they were an editorial board member of Frontiers, at the time of submission. This had no impact on the peer review process and the final decision.

Generative AI statement

The author(s) declare that generative AI was used in the creation of this manuscript. ChatGPT 5.1 was used for language revision and for academic revision of the references.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Bali, M., Sharma, S., and Osman, R. (2024). AI and epistemic justice in higher education: implications for the Global South. High. Educ. Res. Dev. 8, 1–3. doi: 10.1080/07294360.2024.2401120

Colman, A., and Koole, M. (2025). Teacher agency in AI-mediated classrooms: a sociotechnical lens on professional identity. Teach. Teach. Educ. 139:104506. doi: 10.1016/j.tate.2024.104506

Cotos, E., Huffman, S., and Link, S. (2023). Learners' critical engagement with AI writing assistants: implications for feedback literacy. J. Second Lang. Writ. 62:101023. doi: 10.1016/j.jslw.2023.101023

Khanna, P., Singh, R., and Varma, A. (2023). AI-mediated learning in multilingual classrooms: a critical review from the Global South. Int. J. Educ. Dev. 100:102812. doi: 10.1016/j.ijedudev.2023.102812

Kim, Y., Huang, J., and Wang, X. (2024). AI-mediated writing feedback in higher education: efficacy and ethics. Read. Res. Q. 59, 412–430. doi: 10.1002/rrq.521

Lau, W., and Tsai, M. (2025). Dialogic learning with chatbots: cognitive load and engagement. Comput. Educ. 207:105155. doi: 10.1016/j.compedu.2025.105155

Mishra, P., and Koehler, M. J. (2006). Technological pedagogical content knowledge: a framework for teacher knowledge. Teach. Coll. Rec. 108, 1017–1054. doi: 10.1111/j.1467-9620.2006.00684.x

Park, J., Zhang, Y., and Warschauer, M. (2023). Twenty-first century writing with AI: student use and perceptions of generative language tools. Comput. Educ. 205:104561. doi: 10.1016/j.compedu.2023.104561

Seth, M., and Anwardeen, M. (2024). The role of AI chatbots in multilingual classrooms: opportunities and constraints. Lang. Learn. Technol. 28, 45–63. doi: 10.10125/73404

van Dis, E. A. M., Bollen, J., Zuidema, W., van Rooij, R., and Wieling, M. (2023). The ethics of using AI language models in education. Nat. Mach. Intell. 5, 476–483. doi: 10.1038/s42256-023-00656-7

Wang, Y., Xie, Q., and Luo, T. (2024). AI-driven conversational agents in higher education: impacts on student engagement and dialogic thinking. Br. J. Educ. Technol. 55, 164–184. doi: 10.1111/bjet.13391

Williamson, B., and Kizilcec, R. (2023). Algorithmic bias and the new politics of AI in education. Learn. Media Technol. 48, 323–337. doi: 10.1080/17439884.2023.2220183

Zawacki-Richter, O., Marín, V. I., Bond, M., and Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education. Int. J. Educ. Technol. High Educ. 16:39. doi: 10.1186/s41239-019-0171-0

Keywords: checkpoints, assessments, artificial intelligence, curriculum, outcome-based instruments

Citation: Ahmed SA (2025) Learning, meaning, and teaching in the age of AI: communication development checkpoints. Front. Educ. 10:1720706. doi: 10.3389/feduc.2025.1720706

Received: 09 October 2025; Revised: 19 November 2025;
Accepted: 21 November 2025; Published: 08 December 2025.

Edited by:

Rajeev Kamal Kumar, Anugrah Narayan Sinha Institute of Social Studies, India

Reviewed by:

Sanjay Kumar, Patna University, India
Abhijit Ghosh, Mahatma Gandhi College, India

Copyright © 2025 Ahmed. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Samar A. Ahmed, samar@med.asu.edu.eg
