OPINION article
Front. Educ.
Sec. Digital Education
Volume 10 - 2025 | doi: 10.3389/feduc.2025.1632990
From Tools to Co-Learners: Entangled Humanism and the Co-Evolution of Intelligence in AI Education
Introduction: Reframing Intelligence in Education
Provisionally accepted
1 Department of Health and Wellness, Marian College Kuttikkanam Autonomous, Kuttikkanam, India
2 Peet Memorial Training College, Mavelikara, India
3 Vimala College, Thrissur, India
In an ecology of learning, AI evolves from an instrumental tool to an engaged co-learner. Such systems deliver immediate feedback, adjust to learners' preferences, and synthesize rich information, enhancing human cognitive abilities (Luo, 2024). As epistemic partners, AI systems work together with learners to build meaning, especially within exploratory, information-dense contexts.

Empirical evidence supports this potential. Studies found that learning systems adapted to specific learners outperformed traditional approaches, particularly for support-seeking learners (Fu et al., 2025). A randomized controlled trial of an AI-powered tutoring system for secondary students in Spain demonstrated significant gains in standardized math scores (+0.26 SD) and improved end-of-year grades, particularly among disadvantaged populations (Gortázar et al., 2024). A recent study found that generative AI tools, when integrated into collaborative STEM learning, significantly enhanced reflective thinking, problem-solving, and conceptual clarity; learners using a GPT-based summarization tool (GASA) outperformed peers in traditional group settings (Lin et al., 2024).

Such findings are supported by cognitive science as well. Adaptive real-time feedback can alleviate unnecessary cognitive load and optimize working memory (Blayney et al., 2015). AI's capacity to personalize learning is also consistent with dual-process cognitive theories, which contrast analytical and intuitive learning habits (Zhang et al., 2025). But caution is needed. Over-reliance on AI tools can subtly diminish emotional engagement, mentoring, and pedagogical improvisation, aspects of teaching that do not lend themselves to machine imitation.

Learning today increasingly takes place in hybrid environments that combine physical and digital modalities. Immersive technologies such as virtual and augmented reality support experiential and embodied learning (Crogman et al., 2025).
Gamification enhances intrinsic motivation and mastery of concepts (Ruble et al., 2021). In one study of a gamified system in higher education, researchers observed statistically significant gains in engagement, test scores, and course satisfaction, particularly when leaderboards and point-based systems were implemented (Ruiz-Alba et al., 2016). A systematic review of gamification in MOOCs found that elements such as badges, leaderboards, and challenges significantly increased student motivation and engagement and led to notable improvements in completion rates across multiple platforms (Zakaria, 2024). Moreover, online communities enable peer learning and intercultural collaboration.

These changes require a redefinition of the teacher's role. As AI systems increasingly take over standard pedagogic functions, educators are becoming learning designers, dialogue facilitators, and ethical guardians. Such repositioning not only demands professional upskilling but also requires institutional changes that acknowledge and enable these evolving roles.

Entangled humanism also demands that the creation and validation of knowledge be decentralized. Institutions no longer solely create and legitimate knowledge. Platforms such as Coursera, edX, and Udemy disseminate expert content to any individual, while systems such as Stack Overflow and blockchain credentialing verify expertise outside traditional institutions. Students become more empowered to design learning pathways from open educational resources (OERs), peer-created content, and decentralized credentials. This accords with Weller's (2020) advocacy of open, collaborative learning environments that grant learners maximum agency and share epistemic power (Lane & Goode, 2021). A successful example is Singapore's learning platform 'Abacus,' which offers real-time diagnosis and interactive mathematics learning and has contributed to the success of 'Singapore Math' (Ahmad et al., 2015).
A study using the SAVI (Somatic, Auditory, Visual, and Intellectual) cooperative learning model showed significant gains in both metacognitive awareness and academic achievement among high school students, particularly when interactive group learning strategies were applied (Asri et al., 2016).

Although entangled humanism holds immense potential, it also presents underlying concerns. Algorithmic bias remains a persistent problem, as AI systems can reinforce and magnify social prejudices when trained on biased data or built on poorly designed algorithms (Baker & Hawn, 2022). Bender and Friedman (2018) show that poorly audited AI systems can produce biased learning outcomes. Countermeasures include adversarial debiasing, transparent algorithms, and explainable AI.

The digital divide poses another problem: uneven access to technology can exacerbate educational imbalances, particularly in disadvantaged contexts (Li, 2023). Moreover, AI adoption in educational decision-making necessitates close regulation. Cases such as AI grading of student assessments in the UK during COVID-19 illustrate how discretion can be taken away from humans in high-stakes learning contexts (Ovchinnikov, 2020). Such examples underline the need for human oversight, transparency, and ethical responsibility.

Initiatives by the European Union (AI Act), UNESCO (Recommendation on the Ethics of AI, 2021), and the OECD (AI Principles) are beginning to create robust frameworks for integrating AI responsibly. Entangled humanism encourages educators to familiarize themselves with these standards and to design AI-enabled systems that are equitable, inclusive, and transparent. To support this, we propose four principles for AI-driven policy design: (1) transparent algorithms, (2) student data ownership, (3) equitable infrastructure development, and (4) AI literacy training as professional development for teachers.
These principles provide an anchor for AI adoption in education that is both ethical and practicable. We also acknowledge increasingly salient cross-disciplinary critiques of AI's sociotechnical dimensions beyond its formal and technological properties. Algorithmic systems intertwine with social inequality and power imbalances, and integrating such critical perspectives into education policy and pedagogy is essential to prevent perpetuating existing epistemic or social injustices in the name of entangled humanism.

The future of learning is not one of substituting artificial for human intelligence, but of co-evolving them. Entangled humanism recontextualizes intelligence as co-constructed, distributed, and relational within socio-technical assemblages of humans, objects, and media. It overturns traditional hierarchies and encourages educators to conceive of AI as an intellectual co-conspirator that augments, expands, and even reconfigures what it means to learn as a human being.

To achieve this, educational systems need to operationalize multimodality, ethical resilience, and decentralization. Learning environments must facilitate genuine human interaction with AI, establish strong ethical norms, and overcome old digital divides. We urge educators, policymakers, technologists, and researchers to confront this change not as a distant abstraction but as an imminent instructional mandate. In a world where AI is increasingly shaping the ways we think, learn, and relate, the question is no longer whether AI has a place in the classroom, but how to reimagine learning as it does.

In the coming years, AI integration could result in radically reconfigured learning organizations. Consider learning spaces where affective AI agents help learners manage anxiety and build resilience, or where decentralized AI tutors negotiate learning across time and geography.
An educational symbiosis is possible: intelligent learning coaches that grow alongside learners, responding to cognitive development, shifts in affect, and the formation of moral character. Entangled humanism not only readies us for these futures but encourages us to design them with philosophical understanding and social responsibility.
Keywords: Artificial intelligence in education, entangled humanism, hybrid learning environments, Post-Humanist Pedagogy, Educational technology ethics
Received: 22 May 2025; Accepted: 28 Aug 2025.
Copyright: © 2025 Jose, Verghis, Varghese, S and Cherian. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Binny Jose, Department of Health and Wellness, Marian College Kuttikkanam Autonomous, Kuttikkanam, India
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.