- Department of Geography, University of British Columbia, Vancouver, BC, Canada
This perspective piece addresses the rapid integration of generative artificial intelligence (AI) into higher education and the imperative to move beyond a purely technical understanding toward fostering critical AI literacy among students. Despite the benefits of AI in enhancing learning experiences and preparing students for a tech-driven workforce, concerns persist regarding misinformation, diminished critical thinking, ethical dilemmas, and a lack of regulatory frameworks. We propose a circular pedagogical framework comprising contextual preparation, guided engagement, and collective critical reflection, drawing on Vygotsky’s sociocultural theory, Freire’s critical consciousness, and Mackey and Jacobson’s metaliteracies framework. The framework aims to address three critical competency gaps: AI tool assessment, critical AI evaluation skills, and AI information literacy. The paper highlights the importance of discipline-specific AI integration and scaffolded learning, supported by student reflection and metacognition, as demonstrated in the geography seminar courses discussed here. Recognizing the need for instructor AI literacy, the paper concludes by emphasizing the necessity of institutional support through targeted training and interdisciplinary collaboration to ensure that AI enhances learning effectively.
1 Introduction
In this perspective article, we address the rapid adoption of generative artificial intelligence (AI) in higher education, which is radically transforming the way students approach academic work. Indeed, the AI-in-education market was estimated at $5.88 billion in 2024 and is projected to grow at an annual rate of 31.2% through 2030 (Grand View Research, 2024). This significant growth is primarily driven by a growing number of universities and colleges seeking to leverage these technologies to enhance their e-learning platforms while also supporting a wide range of other applications, such as intelligent tutoring systems (Wang et al., 2023), chatbots (Ma et al., 2024), automated assessment tools (Lee and Moore, 2024), and AI-supported learning analytics (Alotaibi, 2024). This increasing integration of AI in educational settings has been mirrored by a substantial uptake of tools like ChatGPT (OpenAI, 2024), Gemini (Google, 2024), and Llama (Meta, 2024) by post-secondary students. Studies indicate growing student adoption of these technologies: for instance, Arowosegbe et al. (2024) surveyed UK students (n = 136) and found that 31% reported regularly using generative AI to assist with academic tasks.
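As a rough sanity check on these figures (our own back-of-the-envelope extrapolation, not a Grand View Research forecast), compounding the 2024 estimate at the reported annual growth rate for the six years to 2030 implies a market of roughly $30 billion:

```python
# Illustrative compound-growth projection of the AI-in-education market.
# The 2024 base and growth rate are from Grand View Research (2024);
# the 2030 figure is our own illustrative extrapolation.
base_2024 = 5.88       # estimated 2024 market size, USD billions
cagr = 0.312           # projected compound annual growth rate (31.2%)
years = 2030 - 2024    # six compounding periods

projected_2030 = base_2024 * (1 + cagr) ** years
print(f"Implied 2030 market size: ${projected_2030:.1f}B")  # ≈ $30.0B
```

This simple compounding assumes the growth rate holds constant over the period, which published forecasts rarely guarantee; it is offered only to convey the scale of the projected expansion.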
The integration of generative AI into university classrooms can result in a wide range of benefits for students, including enhancing personal learning experiences while providing instantaneous feedback (McGuire et al., 2024) and assisting with complex problem-solving (Rane, 2023), thereby improving student engagement and understanding. AI tools can also help to promote equitable classroom experiences, providing support to students with learning disabilities (Patibandla et al., 2024) and English language learners (Wei, 2023). Moreover, experience using generative AI can prepare students for a workforce that is becoming increasingly reliant on such technologies (Babashahi et al., 2024) by fostering essential digital literacy skills.
Despite these benefits, educators and students have concerns that the transformative potential of generative AI tools could be diminished if learners are not properly equipped to engage with them critically, ethically, and safely (Chan and Hu, 2023; Irfan et al., 2023; Jose and Jayaron Jose, 2024). A key concern is the risk of misinformation, as generative AI can create content that appears credible but is factually inaccurate (McIntosh et al., 2024; Ahmad et al., 2023). An overreliance on AI-generated content may also risk diminishing critical thinking skills (Song and Song, 2023). Furthermore, ethical dilemmas exist regarding privacy (Huang, 2023), copyright (Lucchi, 2024), and how algorithmic biases can reinforce existing inequalities (Zajko, 2022). These challenges are further compounded by a lack of comprehensive regulatory frameworks within the academy (Wang et al., 2024).
While similar challenges, such as addressing misinformation, navigating ethical dilemmas, and adapting regulatory frameworks, have emerged across educational contexts worldwide, perspectives from educational systems outside of North America and Western Europe highlight particularly varied approaches to AI literacy that reflect diverse cultural, institutional, and pedagogical traditions. For instance, research conceptualizing AI literacy from regions like East Asia (such as the work by Ng et al. (2021) from Hong Kong) often proposes a composite structure encompassing knowing/understanding AI, using/applying AI, evaluating/creating AI, and addressing AI ethics. Studies focusing on student conceptualizations, such as Černý (2024) (from the Czech Republic), reveal an emphasis on social-ethical discussions and critical engagement with the world, sometimes prioritizing these dimensions over purely labor-market prerequisites for AI literacy, and suggesting a shift towards social reflection and ecological/network dynamic interactionist paradigms. In some contexts, like that explored by Yetisensoy and Rapoport (2023) from Turkey, AI literacy is discussed specifically as an emerging citizenship competence that can be integrated into disciplines like social studies education, emphasizing ethical use and practical, hands-on application using age-appropriate tools. Furthermore, scoping reviews focusing on the Global South, including Africa (Van Wyk, 2024), highlight significant challenges such as existing digital divides, resource constraints, and issues of equity and exclusion. These perspectives critically examine the dominance of Western philosophical and ethical frameworks in AI discourse and call for the inclusion of diverse ethical viewpoints, such as African Ubuntu ethics, while also stressing the need for capacity building for educators and information professionals to effectively teach AI literacy. 
These varied perspectives underscore that understanding and fostering AI literacy requires acknowledging diverse definitions, pedagogical strategies, and socio-cultural contexts globally.
While acknowledging this diverse global landscape of AI literacy, our perspective piece focuses on fostering critical engagement with generative AI through a particular pedagogical framework as one approach to addressing these challenges within higher education.
2 Do we currently have “AI literacy” in higher education?
We view AI literacy as a subset of digital literacy, extending beyond the basic competencies demanded by digitally connected societies. While Gilster’s (1997) initial definition of digital literacy focused on understanding and using information via computers, we align with Audrin and Audrin (2022) in arguing that the concept has evolved to include a wider range of competencies, such as technical skills, critical thinking, and socio-emotional sensibilities. AI literacy therefore necessitates moving beyond a purely technical understanding of these tools to reflect on the ethical, social, and informational challenges they present.
Shiri (2024) offers an innovative approach to conceptualizing AI literacy through a faceted taxonomy that emphasizes the need for a multidisciplinary framework. Their taxonomy includes 13 high-level facets categorizing AI literacy into distinct but interconnected domains, such as conceptualization, applied knowledge (like machine learning), ethical and social considerations (including transparency and bias), and contextual literacy frameworks (integrating digital, data, and algorithmic literacy). Shiri (2024) also underscores the foundational role of data in AI systems and highlights the ethical challenges of data privacy, misinformation, and algorithmic discrimination, arguing that AI literacy demands a multifaceted approach integrating technical skills with critical thinking, ethical reasoning, and socio-political awareness. This aligns with earlier calls for a more comprehensive approach to AI literacy in higher education.
Concerns have been raised that the integration of generative AI in higher education often prioritizes technological proficiency, efficiency, and adaptability at the expense of fostering critical discourse about AI’s broader societal implications (Bond et al., 2024). An increasing dependence on AI tools by students risks reducing educational processes to algorithm-driven outputs, potentially overshadowing critical socio-political considerations surrounding these technologies (Collin et al., 2024; Chan and Hu, 2023). Studies indicate that students frequently engage with AI for superficial tasks like generating ideas or rephrasing text, reflecting a limited technical or critical understanding of AI’s potential and limitations (Shibani et al., 2024).
Barriers to cultivating comprehensive AI literacy are not limited to students; educators are often unprepared to critically engage with AI tools or guide their students in developing such skills (Jose and Jayaron Jose, 2024; Irfan et al., 2023). Faculty members often lack access to necessary training and professional development, perpetuating a cycle where students are taught to use AI for efficiency rather than criticality. A significant shift is needed in how higher education conceptualizes and implements AI literacy, moving towards a more nuanced and critical engagement that empowers students to explore the ethical, social, and political dimensions of AI alongside its technical capabilities.
3 Critical AI literacy through integrated pedagogical perspectives
The rapid integration of generative AI into higher education demands a theoretical framework that bridges immediate practical challenges with established educational theories. Our framework synthesizes complementary theoretical perspectives to create an approach for AI literacy development that acknowledges both technical competency and critical engagement.
The foundation of this framework lies in Vygotsky’s (1978) sociocultural theory, which conceptualizes learning as a socially mediated process. When applied to AI literacy, this theoretical lens helps us understand how students develop sophisticated engagement with AI tools through social interactions and guided practice. The AI tools themselves become what Vygotsky termed psychological tools: mediators of human thought and learning (Wertsch, 1985). This mediation occurs not in isolation but within the social context of academic environments, where students and educators collectively negotiate meaning and understanding. This sociocultural foundation extends naturally into Freire’s (1970) critical consciousness framework, particularly when examining how students move from basic tool use to critical engagement. The progression from surface-level interaction to deeper critical awareness mirrors Vygotsky’s Zone of Proximal Development (ZPD) while incorporating Freire’s emphasis on power dynamics and social implications. Students develop not only technical competence but also the ability to recognize and question how AI systems might reflect and reinforce existing power structures (Selwyn, 2019).
Mackey and Jacobson's (2019) metaliteracies framework builds upon this sociocultural-critical foundation by emphasizing metacognitive reflection in digital environments. Their work helps us understand how students develop the capacity to monitor and regulate their use of AI tools while maintaining critical awareness of their role in knowledge creation. This metacognitive dimension becomes crucial as students navigate the complex relationship between human and machine contributions to learning, requiring constant reflection on both technical capabilities and ethical implications (Zawacki-Richter et al., 2019). Contemporary critical digital pedagogy, as articulated by Stommel and Morris (2018), provides a bridge between theoretical foundations and current educational technology practices. Their emphasis on examining power relationships in technological spaces aligns with Freire’s critical consciousness and Vygotsky’s ideas on mediated learning. This theoretical integration manifests in practice through a developmental progression that begins with situated learning in authentic contexts, where students engage with AI tools under guided conditions (Brown et al., 1989). Through careful scaffolding, students develop increasingly sophisticated critical framing abilities, learning to question and evaluate AI outputs while considering broader societal implications (Knox et al., 2020).
Our theoretical framework illuminates three critical competency gaps identified through reflective teaching insights, each reflecting different aspects of the integrated theories discussed above:
3.1 AI tool assessment competency
This gap directly connects to Vygotsky’s concept of tool mediation and the Zone of Proximal Development (Vygotsky, 1978; Wertsch, 1985). Students struggle to select appropriate AI tools because they lack the scaffolded learning experience needed to develop sophisticated tool assessment skills. The challenge is not merely technical but reflects Freire’s concerns about critical consciousness (Freire, 1970): students must understand not just how to use tools, but also their broader implications for academic integrity and knowledge production (Zawacki-Richter et al., 2019). In our teaching practice, we have observed that students often select AI tools based on convenience rather than appropriateness, demonstrating a gap between their current level of development and the potential level they could achieve through guided instruction (Ahmed et al., 2024). This connects to Mackey and Jacobson's (2019) emphasis on metacognitive awareness: students need frameworks to reflect on tool-selection decisions.
3.2 Critical AI evaluation skills
This competency gap exemplifies the intersection of Freire’s critical consciousness with Stommel and Morris's (2018) critical digital pedagogy. Students’ difficulties in questioning AI outputs reflect a broader challenge in developing critical consciousness about technological tools (Selwyn, 2019). The verification strategies students need to develop align with the metaliteracies emphasis on reflective evaluation and understanding of knowledge creation processes (Mackey and Jacobson, 2019). Our reflective teaching practice indicates that students often accept AI-generated content without questioning its limitations or biases, highlighting the need for stronger development of critical evaluation frameworks (Knox et al., 2020). This gap demonstrates why the scaffolded development of critical consciousness, as suggested by both Vygotsky and Freire, is essential.
3.3 AI information literacy
This gap sits at the convergence of all theoretical perspectives. The challenge of evaluating AI-generated content requires what Vygotsky recognized as higher-order thinking skills, developed through social interaction and guided practice (Wertsch, 1985). Freire’s emphasis on understanding power structures is crucial when examining AI biases and their impact on research quality (Freire, 1970; Selwyn, 2019). The metaliteracies framework is particularly relevant here, as students must develop sophisticated understanding of how knowledge is created and verified in AI-enhanced environments (Mackey and Jacobson, 2019). Equally relevant is that the global context reinforces the necessity of a Freirean perspective for AI literacy, showing that critical consciousness regarding power, equity, and diverse values is crucial worldwide (Černý, 2024). Our teaching reflections show students struggling to integrate AI-generated content with traditional academic sources, indicating the need for stronger metacognitive skills and critical awareness.
4 Critical metaliteracy: a circular framework
Our pedagogical framework follows a circular rather than a linear model, reinforcing the iterative nature of learning. Each phase – contextual preparation, guided engagement, and collective critical reflection – builds upon the previous one while simultaneously shaping future interactions (Figure 1).
Contextual Preparation – Establish the learning objective within the course and discipline. Clearly define the specific task students will engage with and articulate why generative AI is relevant to this task. Introduce the selected AI tool(s) with explicit instructions on their functionality, ensuring students understand their purpose and limitations before use.
Guided Engagement – Assign students a discipline-specific task or concept to explore using a curated list of AI tools. Students apply these tools to investigate, analyze, or generate insights, working independently or in groups. This phase emphasizes hands-on exploration, critical interaction with AI-generated content, and engagement with course materials.
Collective Critical Reflection – Regather as a class to analyze and discuss the outputs. Encourage students to critically evaluate their experiences, considering key issues such as ethics, bias, reliability, and the broader societal implications of AI in the discipline. Facilitate structured discussions that deepen their understanding of the role AI plays in shaping knowledge, decision-making, and research practices.
Unlike traditional stepwise approaches, this framework emphasizes recursivity: insights gained through engagement and reflection continuously refine both disciplinary understanding and AI literacy. As students critically analyze AI-generated outputs within the context of their course/discipline, their evolving awareness leads them to reframe questions, select more appropriate AI tools, and refine their analytical approaches during subsequent cycles. The specific classroom context and process for the implementation of this circular framework in two distinct geography seminars are detailed in Table 1 below, providing a concrete illustration of how these phases operate in practice.
It is important to note that our development of this framework itself represents a form of transformative praxis as conceptualized by Luitel and Dahal (2020), who describe it as a process that “leads toward an envisioning of existing practices in the direction of promoting socially just and equitable systems” (p. 2). Through critical reflection on our teaching experiences in these two geography courses, we engaged in the recursive “reflections and action upon the world in order to transform it” (Luitel and Dahal, 2020, p. 2) that characterizes transformative praxis. Our methodology aligns with what Waghid (2001) describes as a reflexive approach where educators examine “their teaching and learning processes to respond to a future that cannot be imagined” (p. 77). By analyzing our classroom experiences, identifying patterns in student engagement with AI tools, and continuously refining our pedagogical approaches, we embodied the principles of transformative praxis in the very creation of the framework we propose.
5 Discussion
The practical implementation of the circular framework in the course examples underscores the efficacy of an iterative approach to AI literacy development. This cyclical engagement, moving through contextual preparation, guided interaction, and critical reflection, allows for a progressively sophisticated understanding of both the technical and critical dimensions of AI tools. This suggests that a one-off approach to AI education is insufficient (Mackey and Jacobson, 2019).
The success of these examples further emphasizes the necessity of discipline-specific AI integration. Rather than teaching AI literacy as a generic skill, embedding it within the context of disciplinary knowledge structures makes learning more meaningful and directly applicable to students’ fields of study (Ng et al., 2021; Yetisensoy and Rapoport, 2023). This approach counters the tendency to overemphasize technical skills at the expense of deeper disciplinary understanding (Bond et al., 2024).
The structured pedagogical interventions in both courses directly aimed at addressing the three critical competency gaps: AI Tool Assessment, Critical AI Evaluation Skills, and AI Information Literacy (Zawacki-Richter et al., 2019). By designing activities that require students to question AI outputs, compare AI-generated information with traditional sources, and reflect on the appropriateness of AI tools (Ahmad et al., 2023; McIntosh et al., 2024), these courses provide concrete examples of how to cultivate these essential competencies. This suggests that explicit instruction and activities targeting these gaps are crucial.
The operationalization of scaffolded learning within the Zone of Proximal Development (Vygotsky, 1978; Wertsch, 1985), as seen in the progression of tasks in both courses, demonstrates its vital role in supporting students’ development of complex AI literacy skills. This gradual increase in complexity, moving from guided tool use to independent critical evaluation (Brown et al., 1989), provides the necessary support for students to move beyond superficial engagement with AI.
The central role of student reflection and metacognition, aligning with Mackey and Jacobson’s metaliteracies framework (Mackey and Jacobson, 2019), highlights the importance of students actively monitoring and regulating their use of AI tools. Formalizing reflection through documentation and discussions encourages a dialogic rather than transactional engagement with AI (Stommel and Morris, 2018), fostering critical awareness of their role in knowledge creation (Knox et al., 2020).
Finally, the recognition of the crucial role of instructor AI literacy emphasizes that fostering these skills in students necessitates a corresponding development in faculty expertise (Jose and Jayaron Jose, 2024; Irfan et al., 2023). We acknowledge that the instructors in the examples provided have significant AI expertise, which may not be the norm. Addressing this requires institutional investment in targeted professional development, both university-wide and discipline-specific, to equip educators with the necessary understanding of generative AI and its pedagogical applications (Wang et al., 2024). Without this support, expecting instructors to independently develop proficiency in this rapidly evolving area is unrealistic.
Our approach differs from other AI literacy frameworks in the literature through its emphasis on recursive learning cycles rather than linear skill acquisition (Shiri, 2024), its integration of critical theory with practical implementation, and its explicit focus on discipline-specific applications that avoid treating AI literacy as a generic skillset.
Collectively, these six points underscore that the effective integration of AI in higher education demands a multifaceted approach that is iterative, discipline-specific, grounded in established learning theories, focused on developing critical competencies (Freire, 1970), and supported by ongoing development for both students and educators.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.
Ethics statement
Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
Author contributions
SM: Conceptualization, Funding acquisition, Writing – original draft, Writing – review & editing, Methodology. MJ: Writing – review & editing, Methodology, Writing – original draft, Conceptualization, Visualization.
Funding
The author(s) declare that no financial support was received for the research and/or publication of this article.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The authors declare that Gen AI was used in the creation of this manuscript. An AI language model (NotebookLM Plus) was utilized as an assistant for refining the manuscript. This involved receiving suggestions on the level of detail included, offering an assessment of the manuscript's alignment with the target journal's scope, and generating proposals for a concise running title.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Ahmad, Z., Kaiser, W., and Rahim, S. (2023). Hallucinations in chatGPT: an unreliable tool for learning. Rupkatha J. Interdisciplinary Stu. Hum. 15:17. doi: 10.21659/rupkatha.v15n4.17
Ahmed, Z., Shanto, S. S., and Jony, A. I. (2024). Potentiality of generative AI tools in higher education: evaluating chatGPT's viability as a teaching assistant for introductory programming courses. STEM Education 4, 165–182. doi: 10.3934/steme.2024011
Alotaibi, N. S. (2024). The impact of AI and LMS integration on the future of higher education: opportunities, challenges, and strategies for transformation. Sustainability 16:10357. doi: 10.3390/su162310357
Arowosegbe, A., Alqahtani, J. S., and Oyelade, T. (2024). Perception of generative AI use in UK higher education. Front. Educ. 9:1463208. doi: 10.3389/feduc.2024.1463208
Audrin, C., and Audrin, B. (2022). Key digital literacy skills for academic success in higher education: a systematic literature review. Educ. Inf. Technol. 27, 7395–7419. doi: 10.1007/s10639-021-10832-5
Babashahi, L., Barbosa, C. E., Lima, Y., Lyra, A., Salazar, H., Argôlo, M., et al. (2024). AI in the workplace: a systematic review of skill transformation in the industry. Admin. Sci. 14:127. doi: 10.3390/admsci14060127
Bond, M., Khosravi, H., De Laat, M., Bergdahl, N., Negrea, V., Oxley, E., et al. (2024). A meta systematic review of artificial intelligence in higher education: a call for increased ethics, collaboration, and rigour. Int. J. Educ. Technol. High. Educ. 21:436. doi: 10.1186/s41239-023-00436-z
Brown, J. S., Collins, A., and Duguid, P. (1989). Situated cognition and the culture of learning. Educ. Res. 18, 32–42. doi: 10.3102/0013189X018001032
Černý, M. (2024). “AI literacy in higher education: theory and design” in New media pedagogy: Research trends, methodological challenges, and successful implementations. NMP 2023. Communications in Computer and Information Science. ed. Ł. Tomczyk (Cham: Springer).
Chan, C. K. Y., and Hu, W. (2023). Students' voices on generative AI: perceptions, benefits, and challenges in higher education. Int. J. Educ. Technol. High. Educ. 20:4118. doi: 10.1186/s41239-023-00411-8
Collin, S., Lepage, A., and Nebel, L. (2024). Ethical and critical issues of artificial intelligence in education: a systematic review of the literature. Can. J. Learn. Technol. 49, 1–29. doi: 10.21432/cjlt28448
Google. (2024). Gemini—Chat to supercharge your ideas. Available online at: https://gemini.google.com (accessed March 6, 2025).
Grand View Research (2024). Artificial intelligence in education market size report, 2024–2030. Available online at: https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-education-market (accessed January 3, 2025).
Huang, L. (2023). Ethics of artificial intelligence in education: student privacy and data protection. Sci. Insights Educ. Front. 16, 2577–2587. doi: 10.15354/sief.23.re202
Irfan, M., Aldulaylan, F., and Alqahtani, Y. (2023). Ethics and privacy in Irish higher education: a comprehensive study of artificial intelligence (AI) tools implementation at University of Limerick. Glob. Soc. Sci. Rev. 8, 201–210. doi: 10.31703/gssr.2023(VIII-II).19
Jose, J., and Jayaron Jose, B. (2024). Educators' academic insights on artificial intelligence: challenges and opportunities. Electron. J. E-Learn. 59–77. doi: 10.34190/ejel.21.5.3272
Knox, J., Wang, Y., and Gallagher, M. (2020). Artificial intelligence and inclusive education. London: Springer.
Lee, S. S., and Moore, R. L. (2024). Harnessing generative AI (GenAI) for automated feedback in higher education: a systematic review. Online Learn. 28:4593. doi: 10.24059/olj.v28i3.4593
Lucchi, N. (2024). ChatGPT: a case study on copyright challenges for generative artificial intelligence systems. Eur. J. Risk Regul. 15, 602–624. doi: 10.1017/err.2023.59
Luitel, B. C., and Dahal, N. (2020). Conceptualising transformative praxis. J. Transf. Praxis 1, 1–8. doi: 10.3126/jrtp.v1i1.31756
Ma, W., Ma, W., Hu, Y., and Bi, X. (2024). The who, why, and how of AI-based chatbots for learning and teaching in higher education: a systematic review. Educ. Inf. Technol. 30, 7781–7805. doi: 10.1007/s10639-024-13128-6
Mackey, T. P., and Jacobson, T. E. (2019). Metaliterate learning for the post-truth world. Chicago, IL: ALA Neal-Schuman.
McGuire, A., Qureshi, W., and Saad, M. (2024). A constructivist model for leveraging GenAI tools for individualized, peer-simulated feedback on student writing. Int. J. Technol. Educ. 7, 326–352. doi: 10.46328/ijte.639
McIntosh, T. R., Liu, T., Susnjak, T., Watters, P., Ng, A., and Halgamuge, M. N. (2024). A culturally sensitive test to evaluate nuanced GPT hallucination. IEEE Trans. Artif. Intell. 5, 2739–2751. doi: 10.1109/TAI.2023.3332837
Meta. (2024). Introducing Meta Llama 3: The most capable openly available LLM to date. Available online at: https://ai.meta.com/blog/meta-llama-3/ (accessed March 6, 2025).
Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., and Qiao, M. S. (2021). Conceptualizing AI literacy: an exploratory review. Comput. Educ. Artif. Int. 2:100041. doi: 10.1016/j.caeai.2021.100041
OpenAI. (2024). Introducing ChatGPT. Available online at: https://openai.com/index/chatgpt/ (accessed March 6, 2025).
Patibandla, R. S. M. L., Rao, B. T., Rao, D. M., and Ramakrishna Murthy, M. (2024). “Reshaping the future of learning disabilities in higher education with AI” in Applied assistive technologies and informatics for students with disabilities. Applied intelligence and informatics. eds. R. Kaluri, M. Mahmud, T. R. Gadekallu, D. S. Rajput, and K. Lakshmanna (Singapore: Springer).
Rane, N. (2023). Enhancing mathematical capabilities through ChatGPT and similar generative artificial intelligence: roles and challenges in solving mathematical problems. SSRN Electron. J. 1–9. doi: 10.2139/ssrn.4603237
Selwyn, N. (2019). Should robots replace teachers? AI and the future of education. Cambridge: John Wiley & Sons.
Shibani, A., Knight, S., Kitto, K., Karunanayake, A., and Buckingham Shum, S. (2024). “Untangling critical interaction with AI in students' written assessment,” in Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1–6.
Shiri, A. (2024). Artificial intelligence literacy: a proposed faceted taxonomy. Digit. Libr. Perspect. 40, 681–699. doi: 10.1108/DLP-04-2024-0067
Song, C., and Song, Y. (2023). Enhancing academic writing skills and motivation: assessing the efficacy of ChatGPT in AI-assisted language learning for EFL students. Front. Psychol. 14:1260843. doi: 10.3389/fpsyg.2023.1260843
Stommel, J., and Morris, S. M. (2018). An urgency of teachers: The work of critical digital pedagogy. Washington, DC: Hybrid Pedagogy Inc.
Van Wyk, B. (2024). Exploring the philosophy and practice of AI literacy in higher education in the global south: a scoping review. Cybrarians Journal 73, 1–21. doi: 10.70000/cj.2024.73.601
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge: Harvard University Press.
Waghid, Y. (2001). Transforming university teaching and learning through a reflexive praxis. S. Afr. J. High. Educ. 15, 77–83. doi: 10.4314/sajhe.v15i1.25383
Wang, H., Dang, A., Wu, Z., and Mac, S. (2024). Generative AI in higher education: seeing ChatGPT through universities' policies, resources, and guidelines. Comput. Educ. Artif. Int. 7:100326. doi: 10.1016/j.caeai.2024.100326
Wang, H., Tlili, A., Huang, R., Cai, Z., Li, M., Cheng, Z., et al. (2023). Examining the applications of intelligent tutoring systems in real educational contexts: a systematic literature review from the social experiment perspective. Educ. Inf. Technol. 28, 9113–9148. doi: 10.1007/s10639-022-11555-x
Wei, L. (2023). Artificial intelligence in language instruction: impact on english learning achievement, l2 motivation, and self-regulated learning. Front. Psychol. 14:1261955. doi: 10.3389/fpsyg.2023.1261955
Wertsch, J. V. (1985). Vygotsky and the social formation of mind. Cambridge: Harvard University Press.
Yetisensoy, O., and Rapoport, A. (2023). Artificial intelligence literacy teaching in social studies education. J. Pedagog. Res. 7, 100–110. doi: 10.33902/JPR.202320866
Zawacki-Richter, O., Marín, V. I., Bond, M., and Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education. Int. J. Educ. Technol. High. Educ. 16, 1–41. doi: 10.1186/s41239-019-0171-0
Keywords: AI literacy, pedagogical framework, critical engagement, generative AI, higher education, Vygotsky’s sociocultural theory, critical AI evaluation skills, AI information literacy
Citation: McPhee SW and Jerowsky M (2025) Beyond technical skills: a pedagogical perspective on fostering critical engagement with generative AI in university classrooms. Front. Educ. 10:1593278. doi: 10.3389/feduc.2025.1593278
Edited by:
Yu-Chun Kuo, Rowan University, United States

Reviewed by:

Reham Salhab, Palestine Technical University Kadoorie, Palestine

Copyright © 2025 McPhee and Jerowsky. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Siobhán Wittig McPhee, siobhan.mcphee@ubc.ca