
OPINION article

Front. Educ.

Sec. Digital Education

Volume 10 - 2025 | doi: 10.3389/feduc.2025.1716353

This article is part of the Research Topic: From Consumers to Co-Designers: Participatory Pedagogy for Ethical AI Integration in Professional Practice.

Learning with, Rather than Through, AI: Co-Designing Science Education for Critical AI Literacy

Provisionally accepted
  • Ahmet Küçükuncular, Department of Computer Information Systems, Near East University, Nicosia 99138, Cyprus

The final, formatted version of the article will be published soon.

The New Classroom Reality: AI at Students' Fingertips

Classrooms have entered an era where generative AI tools like ChatGPT are ubiquitous knowledge brokers. In a recent cross-national survey spanning seven countries, researchers found that the use of AI for science-related information searches is widespread and growing (Greussing et al., 2025; Kessler et al., 2025). Young people are especially quick to adopt such tools. Anecdotal evidence and early studies confirm that many students are already using ChatGPT for schoolwork, sometimes extensively (Chow, 2025; Gecker, 2025). This means that beyond textbooks and teachers, students now have a third player in the learning process: a conversational AI that can explain concepts, solve problems, and even write essays on demand. Science communication - by which I mean the exchange and discussion of scientific ideas and information between educators and learners in a classroom setting - is no longer a one-way flow from teacher to student; it has become a more complex network, with AI as a new node connecting learners to information.

From the student's perspective, this seems empowering. AI provides instant answers and personalised explanations at any time. For example, a biology student stuck on a homework question about plant cell organelles can ask ChatGPT for help and receive a tailored explanation within seconds. Indeed, AI's powerful knowledge base and rapid feedback have huge potential for supporting student learning. Studies have begun to document benefits: a recent meta-analysis of 51 research studies concluded that using ChatGPT can significantly improve students' learning performance on average, and even bolster higher-order thinking skills when used with proper guidance (Chow, 2025; Fang et al., 2025 [preprint]). Such findings suggest that, used well, ChatGPT might act as a kind of virtual teaching assistant, providing scaffolding and practice for learners. It is no wonder that many educators see promise in integrating AI into education (Küçükuncular & Ertugan, 2025).

Yet this new reality is also disorienting for teachers. Historically, a science teacher was the primary source of expert knowledge in the classroom. Now, that monopoly on information is gone. The challenge is clear: how do teachers maintain their educational impact when 'Google on steroids' is sitting in every student's pocket? Some teachers fear that if answers are always available on demand, students may stop paying attention in class or doing the hard thinking themselves. In one striking MIT Media Lab study, college-aged participants who used ChatGPT to write essays showed markedly lower brain engagement than those who wrote without AI assistance (Fang et al., 2025 [preprint]). The AI-assisted essays tended to be uniform and characterless, and by the third assignment many students had resorted to simple copy-and-paste usage of ChatGPT (ibid.). Brainwave measurements indicated these students were engaging far less in critical thought, suggesting that overreliance on AI can make learning disturbingly passive (Bennet & Royce, 2025). Although that study focused on older students and is still preliminary, its implications resonate in high school settings as well. If students outsource too much thinking to AI, they risk bypassing the very cognitive processes by which deep learning and scientific reasoning develop.

The rise of AI helpers poses several challenges to science communication and teaching.
Firstly, educators worry that easy answers from AI could short-circuit students' critical thinking and curiosity. Roughly half of teachers surveyed in the U.S. suspect that student use of AI might decrease teens' ability to think independently and persist through difficult problems (Gecker, 2025). The concern is that if a chemistry student can ask ChatGPT to balance a chemical equation or explain a concept, they might not bother grappling with it themselves. Over time, this 'use it or lose it' effect could weaken their problem-solving muscles. As one report noted, heavy reliance on AI tools can have unintended cognitive consequences for developing brains, potentially weakening memory and reasoning skills if left unchecked (Zhai, 2022). Science educators thus face the task of ensuring that AI becomes a springboard for thinking, not a replacement for it.

Secondly, despite their fluency, AI models are not infallible. ChatGPT can produce answers that sound authoritative but are partially incorrect or even completely fabricated (Chelli et al., 2024). For instance, it might confidently mix real scientific facts with a few wrong details, or generate plausible-sounding references to studies that do not exist. Such hallucinations are a known weakness of large language models (ibid.). Science educators, who prioritise evidence-based explanations, may find this aspect of ChatGPT particularly problematic. A student might receive an answer that is superficially convincing yet subtly wrong, and without guidance, they may not catch the error. The difficulty of science communication now includes an extra layer of fact-checking AI-provided information. Teachers must often play the role of AI misinformation detectors, helping students learn to verify answers against reliable sources (textbooks, scientific articles) instead of taking the AI's word as final.

Thirdly, there is the potential for bias in AI-generated content. These models are trained on vast datasets from the internet, and they can inadvertently reflect the biases present in those data. OpenAI itself has acknowledged that ChatGPT may produce content that perpetuates harmful biases and stereotypes and is generally skewed toward Western perspectives (Cooper, 2023). This raises important questions for inclusive science communication. If students from historically marginalised communities or non-Western cultures turn to AI for science knowledge, will they see their perspectives reflected? Or will the information subtly prioritise Western examples and viewpoints? Would these voices be amplified or silenced by the algorithm? (ibid.). High school teachers, especially those committed to equity, must be vigilant. They need to help students recognise bias in AI outputs and seek out diverse sources of scientific information. In practice, this might mean discussing why an AI's answer about the history of a discovery mentioned only Western male scientists, or why its explanation of an environmental issue omitted indigenous knowledge. Science communication in the AI era must actively counteract bias to remain inclusive.

Fourthly, the ease with which generative AI can produce essays, homework answers, or lab reports creates a cheating dilemma. Teachers now have to distinguish a student's original work from AI-generated work (Bennet & Royce, 2025). Some clues can help, since AI-written text is often surprisingly polished, generic in tone, and devoid of personal voice or errors, but detection is not always straightforward.
This puts stress on traditional modes of assessment. If a student can get a perfectly formatted lab summary from ChatGPT, teachers must rethink assignments to emphasise process and understanding over flawless output. It is also important to acknowledge that the growing prevalence of digital and online assessments may further increase students' access to, and dependence on, AI tools; however, an in-depth analysis of this issue lies beyond the scope of this piece. Nevertheless, it remains a challenge to design assessments that reward genuine engagement and cannot be completed by a bot. Many schools initially responded by banning AI tools outright, but such bans are often unenforceable and may deprive students of learning how to use AI ethically. The trend now is toward incorporating AI deliberately rather than ignoring it, which requires careful thought from educators (Küçükuncular & Ertugan, 2025).

These challenges highlight that teaching science has, in some ways, become harder. The presence of a seemingly all-knowing bot in the room means teachers must constantly reinforce why learning science matters beyond just getting answers. It also calls for involving students more directly in shaping classroom norms for ethical AI use, ensuring they are not only consumers but also co-designers of responsible learning practices. Likewise, it forces educators to emphasise process, skills, and context - areas where human guidance is still essential - while managing the new pitfalls AI introduces.

Even with these difficulties, many educators are cautiously optimistic. The key is to redefine the teacher's role in light of what AI can do. Rather than seeing ChatGPT as a competitor, savvy teachers treat it as a tool, one that needs a skilled human operator. A recent report found that among U.S. teachers who do use AI, about 80% said it saves them time on tasks like preparing worksheets, quizzes, and lesson plans (Gecker, 2025). Freed from some routine chores, teachers can devote more energy to interacting with students or designing creative activities. In the same report, about 60% of these teachers felt AI tools actually improved the quality of their work when adapting materials or providing feedback to students (ibid.). In other words, AI can augment teacher effectiveness if used properly. For example, a science teacher might use ChatGPT to generate a draft of a quiz, then refine the questions to better suit her class. Or an educator might quickly get a few alternative explanations for a tricky concept and then decide which one best fits their students' level. These uses of AI treat it as an assistant, not a replacement.

More importantly, teachers are now tasked with a new pedagogical mission: teaching AI literacy. Just as past generations of educators taught students how to use search engines or discern credible sources online, today's teachers must show students how to interact intelligently with AI. "If I'm on the soapbox of 'AI is bad and kids are going to get dumb,' well yeah, that'll happen if we don't teach them how to use the tool", remarked Mary McCarthy, a high school teacher, emphasising that it is "the responsibility of the adult in the room to help [students] navigate this future" (as quoted in Gecker, 2025). Rather than forbid AI, she models proper use in her classroom, demonstrating, for instance, how to ask the right questions of ChatGPT and how to cross-check its answers. This perspective is increasingly echoed by educators around the world.
Such modelling not only shapes responsible AI use but also invites students to co-design classroom norms for ethical AI use, reinforcing their agency in learning with, rather than through, AI.

So, what does critical AI education look like in practice for a science class? It involves pulling back the curtain on tools like ChatGPT and guiding students in their use. A teacher might start by openly discussing how large language models work at an appropriate level of detail, explaining that these AIs predict answers based on patterns in data, and therefore do not know facts in the way humans do (Wang & Fan, 2025). From there, teachers can integrate AI into assignments in a structured way. For example, a high school biology teacher could have students use ChatGPT to get a quick explanation of a concept before a lesson, then spend class time dissecting that AI-generated explanation: Was it accurate? What details were missing? Did it rely on any questionable sources? This kind of exercise turns AI from a shortcut into a subject of critical analysis (Greussing et al., 2025). In one science education study, an educator described modelling this approach, having students prompt ChatGPT for a brief overview of a topic to overcome knowledge gaps, but then critically evaluating the response together (Cooper, 2023). Such practices actually reinforce scientific thinking: students must evaluate evidence, identify errors, and consider alternative explanations, just as a scientist would. Using AI becomes a means to make students' thinking visible and hone their analytical skills, rather than a crutch to avoid thinking.

Indeed, many experts assert that using AI is not necessarily about replacing teachers or making the learning of science easier for students. When employed intentionally in the classroom, AI tools can prompt students to ask better questions, consider multiple perspectives, and reflect on their reasoning. For instance, a chemistry teacher might use ChatGPT to generate an incorrect explanation of a concept and ask students to find the flaws in it, a twist on the traditional misconception-based teaching strategy (Henke, 2025). Alternatively, teachers might engage AI as a debate partner: one class activity described by educators is to have students debate an AI on a controversial science topic, forcing the student to fact-check the AI's arguments and respond with evidence (Bennet & Royce, 2025). Such an exercise can make learning interactive and sharpen critical thinking, as students must counter an AI's claims and, in the process, deepen their own understanding. In these ways, the teacher orchestrates learning experiences where AI is a means to an end, the end being engaged, thoughtful students.

Crucially, teachers remain the mentors and curators of the learning process. They provide what AI cannot: context, ethical guidance, emotional support, and personalisation grounded in genuine understanding of their students (Küçükuncular & Ertugan, 2025). Students consistently report valuing this human empathy, the sense that their teachers can understand, encourage, and relate to their experiences - a dimension that no algorithm (at least for now) can replicate (Ampofo et al., 2025). A chatbot might give a correct formula or definition, but it takes a teacher to notice that Johnny is still confused about why it matters, or that Aisha, who speaks English as a second language, might be taking the AI's text too literally.
Educators bring professional judgment to decide when to trust an AI-generated resource and when to override it. "We want to make sure that AI isn't replacing the judgment of a teacher", as Maya Israel, a professor of education technology, aptly put it (as quoted in Gecker, 2025). This means, for example, that if a teacher uses an AI to help grade or give feedback, they must review the output, especially for nuanced student work, and make the final call themselves. In a world flooded with automated information, the teacher's role as a wise guide and facilitator is arguably more important than ever: not merely a 'guide on the side', but a reflective mentor who helps students cultivate discernment, curiosity, and critical reasoning in dialogue with technology.

And despite the fears, teachers are not obsolete - not even close. "AI does not replace the expertise of the science teacher (yet)," writes Cooper (2023, p. 450), reminding colleagues that a teacher's experience and understanding of their students remain key to sound pedagogy. In fact, teachers who embrace AI often find themselves re-energised in their practice. By handing off some tedious tasks to AI and focusing on higher-order skills, they can transform their classrooms into vibrant workshops of inquiry. Some early-career teachers report that AI tools have improved their work-life balance and reduced burnout by cutting down on planning time (Gecker, 2025). The human energy saved can be reinvested in creative teaching, like designing experiments, guiding discussions, or mentoring student research projects, which AI cannot do for us. In this sense, AI serves as a catalyst for the kind of participatory, inquiry-based teaching that most educators already aspire to: less repetitive lecturing, more coaching and curiosity-driven exploration.

If there is a guiding principle for navigating science communication in the age of AI, it ought to be critical AI literacy for both students and teachers. Science educators should incorporate into their curriculum the skills needed to use AI ethically, effectively, and critically. This includes learning how to phrase good questions (prompts) for tools like ChatGPT, how to interpret the answers with a sceptical eye, and how to double-check information against trusted scientific sources (Küçükuncular & Ertugan, 2025). It also means grappling with the broader implications of AI in society. In a science class, this could translate to discussing real-world issues such as the environmental impact of the large data centres that run AI or the human labour involved in moderating AI content (Wu et al., 2021). Such conversations help connect scientific knowledge to social responsibility and global citizenship, ensuring that ethical reflection accompanies technical understanding. These topics connect science and technology with ethics and economics, enriching students' understanding of the context in which modern science operates. By treating AI itself as a subject of inquiry, teachers model the critical thinking we expect in science: questioning how a tool works and what its limitations are.

Fostering AI literacy also requires concrete steps. One approach is to implement guided-use policies: rather than banning AI, outline when and how it may be used for assignments. For instance, a teacher might allow students to use ChatGPT to brainstorm ideas for projects, but require them to document any AI assistance they received and reflect on its usefulness.
This parallels emerging norms in academia, where some journals now ask authors to disclose any AI tools used in writing or research (cf. Sage, 2025; Springer, 2025). Such transparency encourages students to be mindful of their reliance on AI and to take responsibility for their learning. Teachers should also emphasise that AI is a starting point, not the finish line. If a student uses an AI-generated explanation, they should be encouraged to build on it: annotate it with their own insights, critique it, or supplement it with examples from class. The act of editing or improving AI output can be a powerful learning exercise, as it requires understanding the material well enough to spot what is missing or off-target. One educator noted that it is important for teachers and students to be "critical about the ChatGPT output, deleting parts that are not helpful and building on elements that are" (Cooper, 2023, p. 448), treating the AI's contribution as raw material to be refined. This practice not only develops analytical skill, but also empowers students as co-authors of their learning process, reinforcing participatory pedagogy in digital contexts.

Moreover, embracing AI literacy means preparing students for a future where AI is commonplace. Just as we teach lab safety or the proper way to handle a Bunsen burner, we now must teach AI safety in an intellectual sense. This includes addressing questions like: When is it appropriate to use AI for help, and when is it better to struggle through a problem without it? How do we cite or credit AI assistance to maintain academic integrity? What biases might an AI have, and how can we detect them? These questions encourage a habit of ethical discernment that complements scientific reasoning; they imbue students with a healthy scepticism and a sense of agency. Students learn that they are in control of the tool, not vice versa. In fact, by confronting AI's shortcomings and ethical pitfalls head-on, educators can turn AI into a lesson in scientific values: respect for evidence, transparency in method, and awareness of uncertainty.

Finally, an often overlooked aspect of critical AI education is its potential to promote metacognition. When students reflect on how they used an AI tool, such as what questions they asked, how they interpreted the answer, or where the AI was helpful or not, they are essentially thinking about their own thinking. This kind of reflection is known to deepen learning (An et al., 2024). Some teachers integrate AI-based journaling or reflection prompts, where an AI might suggest questions like 'Did the result surprise you?' or 'What would you do differently next time?', and students then respond with their own thoughts. The AI here acts as a catalyst for reflection, but the insights come from the student. Encouraging this reflective dialogue with technology helps cultivate metacognitive awareness, teaching students not only what they know, but how they come to know it. In sum, the goal is to teach students to approach AI as informed, critical consumers, much as we teach them to critically evaluate a source or an experiment's design in science. If they can master that, AI transforms from a threat into an ally in their lifelong learning journey.

The age of AI is changing the landscape of science education, but it does not spell the end of teachers; rather, it calls for their evolution. Teachers find themselves at the forefront of this change, balancing between caution and innovation.
On one hand, they must guard against the pitfalls of AI: the potential for misinformation, diminished critical thinking, inequitable access or biases, and broader societal or environmental risks associated with large-scale AI deployment. On the other hand, they have a chance to harness AI to make science communication more effective and inclusive than ever before. A student who can get a quick answer from ChatGPT is a student who can also use that answer as a springboard to ask deeper questions, but only if we show them how. Educators have long been described as guides rather than mere transmitters of information; what distinguishes the AI era is the intensified need for that guidance. When answers are readily available, wisdom becomes the new scarcity, and teachers' role in cultivating discernment takes on renewed importance.

In this new chapter, the most successful science educators will likely be those who accept AI's presence and turn it to their advantage. They will demonstrate that curiosity is not satisfied by a single AI response, and that understanding requires dialogue, experimentation, and reflection - processes that AI can support but not replace. They will also advocate for policies and curricula that integrate AI literacy, ensuring every student gains the skills to navigate an AI-rich world. In doing so, they move beyond mere adaptation toward participatory pedagogy, inviting students to share responsibility for how AI is ethically and creatively integrated into learning. Communication of science, at its core, has always been about more than facts; it is about inspiring wonder, scepticism, and the pursuit of truth. Those ideals remain unchanged. If anything, they become even more important when a talking machine can deliver facts on cue.

Ultimately, the difficulty of science communication in the age of AI is also its promise. It forces us to focus on what truly matters in education. Succinctly put: AI is not replacing teachers; it is empowering thinkers. A teacher's human touch, the ability to connect, to read the room, to challenge and nurture, is irreplaceable, but it can now be complemented by AI's capabilities. By embracing that partnership, educators transform AI from a mere efficiency tool into an ally in critical and creative inquiry, preparing students not just to recall scientific knowledge but to engage with it thoughtfully and creatively. In doing so, we turn a potentially disruptive technology into a tool for deeper learning, and ensure that the art of teaching science not only survives, but thrives, in the AI era.

Keywords: generative artificial intelligence (AI), critical AI literacy, science education, participatory pedagogy, teacher agency, student co-design, ethical AI integration

Received: 30 Sep 2025; Accepted: 17 Oct 2025.

Copyright: © 2025 KÜÇÜKUNCULAR. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Ahmet KÜÇÜKUNCULAR, ahmet@kucukuncular.com

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.