OPINION article
Front. Educ.
Sec. Teacher Education
This article is part of the Research Topic: Navigating Artificial Intelligence (AI) and Digital Learning Innovation: Transforming Pedagogy in Social Sciences and Humanities
Learning, Meaning, and Teaching in the Age of AI: Communication Development Checkpoints
Provisionally accepted
1 Ain Shams University, Cairo, Egypt
2 Rabdan Academy, Abu Dhabi, United Arab Emirates
Generative AI tools are increasingly reshaping how communication skills are learned, practised, and assessed. Processes such as drafting, paraphrasing, summarizing, translation, and feedback, once central to students' cognitive and disciplinary development, are now widely automated. These shifts influence not only academic habits but also the epistemic norms that govern knowledge production in higher education.

Recent research highlights both the possibilities and the risks of AI-mediated communication learning. Students report improved efficiency yet reduced cognitive engagement when interacting with text-generating systems (Park et al., 2023; Cotos et al., 2023). Parallel studies raise concerns about the ethical, cultural, and epistemic implications of language-model-based tools in educational ecosystems (van Dis et al., 2023).

This commentary introduces four communication development checkpoints, theoretically anchored in Vygotsky's sociocultural theory, Bruner's meaning-making tradition, dialogic pedagogy, literacy development research, and the technological-pedagogical-content knowledge (TPACK) framework (Mishra and Koehler, 2006). Examples are deliberately drawn from multilingual and Global South contexts, where AI's impact is magnified by linguistic diversity, uneven digital literacies, and the dominance of Western-trained language models (Khanna et al., 2023; Bali et al., 2024).

The stance taken here is deliberate: communication development is a human process, and AI should be integrated in ways that preserve, not erode, the cognitive, relational, and ethical work of meaning-making. Rather than adopting a binary 'good versus bad' narrative, the commentary argues for balanced appropriation, in which AI is a tool that can enrich learning but is never the site where learning happens.
This argument holds particular relevance for educational contexts in the Global South, including Egypt, where disparities in digital access, linguistic diversity, and infrastructural constraints shape how AI tools are appropriated in classrooms. Egyptian universities, like many across the Middle East and Africa, are witnessing accelerated AI adoption without parallel growth in critical AI literacy or institutional policy. These pressures make it crucial for educators to have conceptual tools that guide judgment rather than blanket encouragement or prohibition. This commentary therefore speaks directly to educators, curriculum designers, and policy makers who seek practical orientation rather than ideological debate. The checkpoints introduced below are intended to support concrete pedagogical decisions about AI-mediated teaching and learning.

Foundational literacy depends on decoding, vocabulary growth, syntactic awareness, and the gradual construction of semantic networks. Developmental literacy frameworks emphasize the importance of effortful cognitive engagement during reading and meaning-making. Although AI tools can scaffold early literacy through levelled texts and adaptive vocabulary modelling, recent evidence suggests that frequent reliance on AI paraphrasing or summarization can reduce syntactic processing, weaken inference-making, and diminish deeper semantic engagement (Park et al., 2023; Khanna et al., 2023). In multilingual learning environments, AI's normalization of dominant English varieties may also dilute learners' use of local linguistic repertoires and translanguaging practices, which are essential for cognitive flexibility. Effective pedagogy therefore requires positioning AI as a supplement that enhances exposure and differentiation, not as a substitute for the cognitive work of literacy development.

Dialogic learning involves reciprocal, unpredictable exchanges that require shared grounding and emotional attunement.
AI chatbots can reduce performance anxiety and support rehearsal of conversational patterns. For instance, Lau and Tsai (2025) show that chatbot-mediated rehearsal reduces cognitive load and increases readiness for participation. Complementary findings show that conversational agents can stimulate dialogic preparation but cannot replicate relational dynamics or the genuine co-construction of meaning (Wang et al., 2024; Seth and Anwardeen, 2024). Students often treat AI-mediated interaction as transactional rather than relational. Educators should therefore frame AI-based dialogue as preparatory and reflective: an opportunity to analyse tone, turn-taking, communicative pragmatics, and cultural nuance before engaging in human-to-human discussion.

Academic writing requires recursive drafting, argumentation, metacognitive monitoring, and engagement with disciplinary discourse. While AI systems offer support for structure, coherence, and precision, empirical evidence indicates that premature reliance on AI-generated text compresses the cognitive labour required for revision and idea development (Kim et al., 2024). Students may conflate surface fluency with conceptual depth, weakening their ability to construct arguments independently. This checkpoint emphasises using AI outputs as objects of critique rather than as substitutes for original writing. A first-year writing programme, for example, used AI-generated drafts to teach rhetorical analysis and strengthen feedback literacy (Park et al., 2023; Cotos et al., 2023). This approach aligns with composition theory and supports metacognition and intellectual ownership.

Professional communication requires intercultural awareness, rhetorical adaptability, and collaborative digital literacy. AI translation, tone-modelling, and summarization tools are particularly attractive in multilingual and Global South contexts, where linguistic diversity is high.
Yet studies show that AI systems often homogenize language and flatten culturally embedded meaning, producing tonal mismatches or misrepresentations of local pragmatics (Khanna et al., 2023; Bali et al., 2024). To support communicative adaptability, educators can design tasks in which students critique AI translations, identify cultural distortions, and recontextualize messages for diverse audiences. These practices foster intercultural competence rather than dependence on automated correctness.

The uneven integration of AI tools risks widening disparities between students with strong digital literacies and those with limited access, particularly in Global South institutions (Khanna et al., 2023). Large language models trained predominantly on Western datasets may reinforce epistemic hierarchies and marginalize non-Western linguistic forms (Bali et al., 2024; Williamson and Kizilcec, 2023). At the same time, prohibiting AI entirely may deny students opportunities to develop critical AI literacy, a capability increasingly essential in academic and professional environments. A balanced approach to AI governance is therefore needed, one that protects core communication processes while enabling guided, reflective engagement.

AI tools embody algorithmic biases grounded in their training data. These biases can manifest in differential error rates, tonal misalignments, cultural misinterpretations, and skewed evaluative feedback (Williamson and Kizilcec, 2023; Seth and Anwardeen, 2024). In multilingual classrooms, such biases may disadvantage learners whose linguistic identities fall outside high-resource language norms. Educational use of external AI platforms also raises concerns about data sovereignty, consent, long-term storage, and secondary use of student-generated text. Systematic reviews highlight substantial privacy vulnerabilities and insufficient institutional protections for educational data (Zhao and Weng, 2024).
Teacher agency remains central to mediating AI's instructional role. Over-automation risks shifting cognitive authority to AI tools, eroding professional judgment and pedagogical autonomy. Recent research shows that AI adoption can subtly reshape teacher identity and classroom power dynamics (Colman and Koole, 2025). Educators need targeted professional development to critically evaluate AI, diagnose its limitations, and design learning experiences that foreground reasoning, creativity, and relational communication.

To preserve developmental integrity while leveraging AI's affordances, institutions should:

1. Integrate dual-literacy frameworks combining communication development with AI literacy across curricula.
2. Implement reflective writing tasks in which students critique and revise AI-generated outputs.
3. Strengthen teacher preparation on ethical integration, algorithmic bias, culturally responsive AI mediation, and process-oriented assessment.
4. Establish institutional policies addressing AI-use norms and data privacy protections.

AI's expanding presence in higher education demands more than enthusiasm or caution; it requires developmental judgment. This commentary advances a clear and actionable position: AI should be integrated only in ways that strengthen, rather than replace, the human processes of meaning-making at the heart of communication development. The four communication checkpoints proposed (foundational literacy, dialogic discussion, academic writing, and professional/intercultural communication) offer educators a practical framework for evaluating when AI enriches learning and when it risks undermining it.

For educators working in Egypt, the Arab region, and the wider Global South, this developmental perspective is especially critical. These contexts face distinctive challenges: multilingual classrooms, infrastructural disparities, uneven AI literacy, and persistent algorithmic bias against non-Western linguistic and cultural norms.
By embedding these realities into the analysis, this commentary positions the checkpoints as context-sensitive tools that help educators navigate AI adoption in environments where both opportunities and vulnerabilities are amplified.

The aim has been to influence how educators think and act, not by advocating technological optimism or resistance, but by equipping teachers, curriculum designers, and policy makers with a framework that preserves the reflective, dialogic, and ethical dimensions of learning. Teaching in the age of AI requires renewed attention to the human work of communication: interpretation, negotiation, and relational meaning-making. When AI is integrated transparently, intentionally, and developmentally, it can support these aims without displacing them.

Moving forward, institutions should adopt policies and pedagogies that elevate teacher agency, protect student data, reduce algorithmic bias, and cultivate dual literacy (communication literacy and AI literacy) as essential graduate outcomes. If this balance is achieved, AI will not redefine education; rather, educators will define how AI is meaningfully and ethically woven into educational practice.
Keywords: checkpoints, assessments, artificial intelligence, curriculum, outcome-based instruments
Received: 09 Oct 2025; Accepted: 21 Nov 2025.
Copyright: © 2025 Ahmed. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Samar A. Ahmed, samar@med.asu.edu.eg
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.