OPINION article

Front. Educ.

Sec. Digital Education

Volume 10 - 2025 | doi: 10.3389/feduc.2025.1647687

Epistemic Authority and Generative AI in Learning Spaces: Rethinking Knowledge in the Algorithmic Age

Provisionally accepted
Binny Jose1*, Anu Cleetus2, Bindhu Joseph2, Lumy Joseph3, Benymol Jose3, Amruth K John3
  • 1Department of Health and Wellness, Marian College Kuttikkanam Autonomous, Kuttikkanam, India
  • 2St Joseph College of Teacher Education for Women, Kochi, India
  • 3Department of Computer Applications, Marian College Kuttikkanam Autonomous, Kuttikkanam, India


Generative AI's emergence in learning environments has triggered a profound transformation in how we produce, consume, and verify knowledge. With ChatGPT and other large language models, classrooms are now populated with technologies that assist students in producing essays, solving math problems, summarizing texts, and constructing arguments. These systems bring new efficiencies, but they also raise deeper pedagogical questions. What happens when students begin treating machine-generated outputs as epistemic authorities? How might this reshape the traditional roles of teachers and learners in determining what counts as knowledge?

Historically, the classroom has been a space not only for the transfer of information but for co-constructing knowledge through discussion, inquiry, and critique (Freire, 1970; Mejía-Arauz & Wells, 2005). The teacher's authority has rested not just on content expertise but on guiding learners through processes of meaning-making grounded in human judgment, experience, and ethical inference (Van Oers & Dobber, 2015). However, the rise of generative AI, capable of producing fluent, instantaneous, and confident responses, is subtly reconfiguring where authority in learning resides, often without institutional or pedagogical mediation.

This article investigates the epistemological consequences of this shift. We examine how generative AI challenges epistemic agency, disrupts traditional justifications for knowledge, and reshapes the teacher-student dynamic. We explore the idea of AI as a surrogate knower, analyze the hidden curriculum embedded in AI systems, and propose strategies for reclaiming human-centered authority and agency in the classroom. While much of the current discourse focuses on academic dishonesty or the automation of labor, we argue for a more foundational conversation: one that interrogates the restructuring of knowledge itself. Drawing on empirical studies and philosophical models, we contend that generative AI is not merely a new tool but a new actor in the epistemic space, one whose authority must be understood, challenged, and situated within a pedagogy that resists passive consumption and fosters critical discernment.

Generative AI systems such as ChatGPT introduce a new kind of epistemic presence: one that appears fluent, confident, and perpetually available (Anjum et al., 2023). To many students, such tools are not merely aids but "superior knowers" (Rastogi & Lawati, 2024). This signals a basic epistemological shift, from the collaborative construction of knowledge to a framework that privileges immediate, polished output (Schei et al., 2024). The implications are profound.

Recent studies suggest that students increasingly see ChatGPT as considerably more than support software, using it as a central, trustworthy source of learning (Rastogi & Lawati, 2024). Such trust tends to bypass critical reflection on the model's biases and limits. This is automation bias: the tendency to over-trust AI output even when users question or recognize errors (Benz et al., 2022; Nguyen, 2023). These tendencies compromise foundational epistemic practices such as evidence assessment, source triangulation, and epistemic modesty. Above all, this transformation is not merely practical but philosophical. According to Coeckelbergh (2025), AI systems affect not only beliefs but belief revision itself: the mechanisms by which people adopt, reject, or modify knowledge claims.
In the classroom, this implies that students may update their beliefs in response to algorithmic authority, without recourse to further justification or reflection. This failure of normative epistemic checks further blurs the distinction between tool and epistemic agent. Here, AI is no longer a neutral assistant. It becomes a substitute knower, reshaping what students regard as justified knowledge and whom they consider experts. The educator's challenge is not to reject these systems, but to reassert epistemic agency in classrooms now cohabited by fluent but non-sentient interlocutors.

The increasing dominance of AI-generated information in classrooms threatens not just what is taught, but who can teach. As generative AI becomes a quiet dialogue partner in learning environments, learners begin adjusting their perceptions of whose information carries more weight: their educator's or the algorithm's.

Emerging research suggests that learners increasingly turn to ChatGPT to support or contradict teacher feedback, an indication that AI is being used more and more as an epistemological counterpoint to human teaching (Gordon & Foucault, 1980). For example, learners report being more willing to seek feedback from ChatGPT than from peers or teachers, even when they are not convinced it is reliable (Marquart & Bruhn, 2025). Studies show that many learners prefer AI feedback to traditional teacher feedback because it is immediate, clear, and fluent. Feedback delivered through AI has been associated with reduced writing anxiety and improved fluency, suggesting that speed and polish contribute to its assumed authority (Wang, 2024). This dynamic can gradually undermine pedagogical trust.

The shift is compounded when instructors themselves begin using AI tools to assess student work or generate feedback, whether to save time or to meet institutional demands for efficiency. When students recognize that the same generative system is both evaluating their performance and helping them complete assignments, the boundary between "student" and "teacher" knowledge production becomes blurred. If both are deferring to the same tool, epistemic authority is further displaced from the human educator to the algorithm. This not only disrupts traditional authority but creates confusion about who ultimately evaluates understanding.

Additionally, the teacher's authority, long grounded in the ability to facilitate understanding and exercise ethical judgment, is now more often subjected to algorithmic benchmarking. Students may readily assume that the AI is "objective" and the teacher "biased," particularly in situations requiring interpretation or critique. These comparisons are rarely made consciously, yet they shape the affective and intellectual dynamics of the classroom.

Such epistemic displacement is especially precarious in institutional contexts where teachers, particularly women and racialized faculty, have historically been denied full authority in the eyes of students. Research in higher education repeatedly demonstrates that women and faculty of color are more likely to be challenged or discredited by students, especially in contexts involving authority, grading, or political content (Chávez & Mitchell, 2020). In such settings, the rise of generative AI as an "objective" voice risks reinforcing, rather than counteracting, these credibility gaps.
The result is a dual undermining: first by existing structural biases, and second by the technological displacement of authority. In some contexts, this change has produced a kind of role reversal, in which teachers are asked to justify their reasoning against machine output, frequently without institutional guidance or support. This not only burdens teachers with defending their professional relevance but can also prompt a retreat from rich pedagogical practices toward those that conform more closely to algorithmically scorable content. The outcome is a quiet remaking of authority: from educator expertise to machine-made credibility. Absent a clear pedagogical counter-narrative, this shift threatens to hollow out the teacher's epistemological role.

All learning tools carry embedded pedagogies: messages about what counts as knowledge, how to learn, and which voices are authoritative. Generative AI models, though often portrayed as value-neutral or purely functional, are no exception. Their output carries assumptions, perpetuates dominant discourses, and conditions user expectations in subtle but enduring ways.

Recent research indicates that AI-generated content can reinforce cultural and linguistic biases, which students may unknowingly reproduce in their academic work. For instance, AI applications have been shown to replicate gender bias in classroom content, influencing students' writing and framing decisions (Mattiazzi, 2025). Similarly, language generation patterns affect how students engage with topics and formulate arguments (Singh, 2024). These tools do not merely offer information; they teach rhetorical norms, foreground specific topics, and promote particular moral frameworks, implicitly shaping what students come to see as academically appropriate.

Consider a student tasked with writing a position paper on climate justice. Turning to ChatGPT for help, the student receives a highly structured, Western liberal humanist framing of the issue, centered on policy reform and individual behavior. Without recognizing this as a partial epistemic stance, the student may internalize this framework as the correct or default way to approach such topics. In this way, AI systems implicitly frame not only what is "true," but also what kinds of reasoning and perspectives are permissible.

Scholars have cautioned that AI systems tend to replicate the Western epistemologies embedded in their training data, thereby marginalizing plural or Indigenous ways of knowing. As Lewis et al. (2024) explain, most existing AI models reflect rationalist frameworks that systematically exclude non-Western knowledge systems. Similarly, Ofosu-Asare (2024) identifies the persistence of cognitive imperialism in AI, whereby Eurocentric reasoning is privileged unless explicitly counterbalanced through design. The AI thus functions as an epistemic filter, valorizing certain forms of knowledge while silencing others.

The hidden curriculum is not merely a question of content, but also of cognitive stance. Generative AI tends to favor fluency, assertiveness, and certainty over ambiguity or discomfort. Its quick, confident responses often discourage deeper questioning, modeling a style of knowledge delivery that rewards speed over critical reflection (Li, 2024; Orlanda-Ventayen, 2024).
Gradually, this encourages learners to shift from inquiry-based learning toward the reproduction of polished, preformed answers. Unless explicitly addressed, this epistemic conditioning risks undermining teachers' efforts to cultivate ambiguity tolerance, intellectual humility, and reflective skepticism, qualities essential to pluralistic, democratic learning environments.

In light of algorithmic epistemology's encroachment on traditional learning structures, it is more necessary than ever to cultivate intentional, reflective, and persistent learners. Revitalizing epistemic agency means not just empowering students to use AI tools effectively, but equipping them to interrogate, contextualize, and critique those tools. It means reaffirming human agency in the production of knowledge: learners as active, situated interpreters rather than passive recipients of algorithmic information.

Automation bias leads users to over-rely on AI systems, especially when those systems present information with fluency and confidence. Research shows that assertive, polished output encourages uncritical acceptance, even in cases of clear error or contradiction (Horowitz & Kahn, 2023; Kutza et al., 2024). This misplaced trust discourages learners from asking critical questions or engaging in epistemic self-reflection.

What is needed in response is the deliberate cultivation of epistemic vigilance in educational practice. Students can be directed to compare AI responses against peer or instructor responses, identifying gaps, assumptions, and rhetorical differences. Structured "trust audits" can prompt students to examine when and why they are most inclined to trust the AI. Journals or reflective essays can invite learners to track how their thinking changed after encountering machine-generated material. These interventions move well beyond digital literacy; they aim to recover a more dialogic, evaluative relationship to knowledge.

These practices illustrate what it means to be an "epistemic mentor": an instructor who does not merely impart information, but educates for discernment. An epistemic mentor shows students how to proceed amid uncertainty, assess credibility, and understand that knowledge is contested and provisional. This includes helping learners recognize their own positionality and the sociotechnical conditions that shape the tools at their disposal. Rather than shielding students from the impact of AI, educators can support critical engagement with it, questioning not only what is presented as knowledge, but why it is framed that way, and by whom.

These capabilities ground epistemic agency, defined here as the learner's capacity to question, justify, and claim knowledge responsibly. In practice, this means verifying AI-generated content, cross-checking sources, and relying on human judgment, particularly in matters of interpretation, ethics, or context. Educators, too, need institutional support not merely to deliver material well, but to be recognized as epistemic agents in their own right: sources of rich, sophisticated thought in digitally saturated learning environments. With AI integrated into daily pedagogical practice, the teacher confronts a pressing epistemological question: what is teaching, or knowledge, when machines issue instantaneous, confident, and fluent answers at will?
This article has examined how generative AI reassigns classroom authority, reconfigures students' notions of expertise, and reifies an unobtrusive curriculum that values fluency over depth and economy over inquiry. The question at issue, however, is not merely technological; it is fundamentally philosophical and relational. AI systems are transforming how students come to know, what they hold true, and whose voices they treat as authoritative. Left unchecked, these developments risk entrenching a pattern of passive epistemic consumption, eroding both the student's agency and the instructor's role as mentor to critical, reflective thought.

In response, this paper has made three core contributions. First, we analyzed how AI tools function as surrogate knowers, subtly collapsing justification norms and privileging algorithmic output over dialogic reasoning. Second, we highlighted how both student and institutional behavior can erode the teacher's epistemic authority, particularly in contexts already shaped by structural inequities. Third, we outlined strategies for reclaiming epistemic agency through reflective pedagogy, classroom practices that foster critical AI literacy, and a renewed model of the teacher as epistemic mentor.

For educators, researchers, and institutions engaging with digital technologies in education, the challenge is not to reject AI but to reframe its place in the learning process. This includes equipping students to interrogate AI-generated claims, fostering awareness of cognitive bias, and designing learning environments that support epistemic plurality.

The future of education will not be defined solely by what AI can generate, but by what human learners, guided by reflective educators, choose to question, interpret, and reimagine. Reclaiming epistemic agency is not only a pedagogical imperative but a democratic one. In the algorithmic age, learning how to think critically is inseparable from learning how to resist automation as the default mode of knowledge.

Keywords: epistemic agency, generative AI, educational authority, automation bias, hidden curriculum

Received: 16 Jun 2025; Accepted: 04 Aug 2025.

Copyright: © 2025 Jose, Cleetus, Joseph, Joseph, Jose and John. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Binny Jose, Department of Health and Wellness, Marian College Kuttikkanam Autonomous, Kuttikkanam, India

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.