
EDITORIAL article

Front. Educ.

Sec. Digital Learning Innovations

This article is part of the Research Topic "Artificial Intelligence in Educational and Business Ecosystems: Convergent Perspectives on Agency, Ethics, and Transformation."

Editorial: Artificial Intelligence in Educational and Business Ecosystems: Convergent Perspectives on Agency, Ethics, and Transformation

Provisionally accepted
  • 1Touro University Graduate School of Education, New York City, United States
  • 2Touro University, New York, United States
  • 3Ramapo College of New Jersey, Mahwah, United States

The final, formatted version of the article will be published soon.

The rapid technological advancements in artificial intelligence (AI) have sparked conversations across almost every sphere about the dramatic changes in personal and professional learning. "Technology is more than a tool - it is also an element within the teaching context which, when examined, has the power to transform" (Dacey et al., 2016, p. 169). AI offers vast possibilities to create novel forms of cognitive extension, enhance funds of knowledge, and be adaptive and interactive, yet it can also limit, misrepresent, mislead, distort, and negatively influence how people think and learn, depending on its use. Understanding AI's transformative power requires transdisciplinary investigations into how AI systems, human actors, and learning environments coalesce in nonlinear ways, forming dynamic educational assemblages (Braidotti, 2013; 2016). As AI systems become increasingly sophisticated and we experience them as acquiring anthropomorphic characteristics, we must reflect, reconceptualize traditional frameworks of teaching and learning, and develop new pedagogical approaches that incorporate ethical principles to maximize good and minimize harm (Dacey, 2022; Dacey et al., 2025). Recent advances in AI are fundamentally altering the educational landscape and have garnered significant attention for the dilemmas they raise for learners, teachers, and leaders in P-12 classrooms, higher education environments, and business organizations.

For this special issue, we solicited articles that address multiple transdisciplinary perspectives. The global submissions reveal the current state of accelerating and compounding changes resulting from the integration of AI into educational systems. We wanted dialogue about more than a technological shift: a fundamental change in knowledge production, institutional authority, and pedagogical purpose.
Questions emerged about how cognition is distributed across human-AI networks, how agency emerges from these interactions, and how these configurations transform educational spaces. Research highlighted in this issue can assist educators and leaders in grappling with the nature of knowledge construction, the role of embodied experience, our pedagogical approaches, and the ethical implications of AI-mediated learning. As Paulus and Lester (2023) acknowledge, AI usage is not a neutral endeavor. We must carefully and consciously reflect on how AI can reshape and limit learning, with implications for both education and business, two fields that require thoughtful forecasting and policy revisions. Four broad categories of manuscripts were identified: 1) AI pedagogical integration and teaching practices, 2) institutional policy and governance, 3) academic integrity and ethics, and 4) faculty perspectives and professional development.

AI Pedagogical Integration and Teaching Practices: The integration of AI into pedagogical practice reveals tension between technical implementation and meaningful educational transformation. Khoza and Van Der Walt's systematic review, A systematic review on AI-enhanced pedagogies in higher education in the Global South, shows that AI applications in institutions in the Global South tend to prioritize administrative efficiency over pedagogical innovation, due to infrastructure limitations and inadequate faculty preparation. Bastian et al. demonstrate an alternative in Using AI chatbots to facilitate mathematics pre-service teachers' noticing skills, an empirical validation of chatbot-supported professional development in which AI scaffolds pre-service mathematics teachers' noticing skills through structured feedback mechanisms.
Murniarti and Siahaan's structural equation modeling in The synergy between artificial intelligence and experiential learning in enhancing students' creativity through motivation positions motivation as a critical variable linking AI-enhanced experiential learning to student creativity, suggesting that technological affordances alone are insufficient without attention to affective dimensions. Leon et al. synthesize these concerns through their transdisciplinary framework in Artificial intelligence in STEM education: a transdisciplinary framework for engagement and innovation, identifying student agency, assessment paradigm shifts, and ethical tensions as core challenges requiring resolution. Their analysis underscores a fundamental dilemma: STEM education risks replicating efficiency-driven models that subordinate inclusive access and epistemic reflexivity to instrumental rationality. Across these contributions, the imperative emerges not for wholesale adoption of AI, but for pedagogically coherent implementations that honor contextual specificities and epistemic diversity. Rimal and Sharma's Ensemble Machine Learning Prediction Accuracy: Local vs. Global Precision and Recall for Multiclass Grade Performance of Engineering Students is distinct in its focus on machine learning algorithms for grade prediction and data-informed teaching practices. Machine learning applications in grade prediction raise questions about educational measurement, intervention timing, and the purposes of academic assessment within engineering education. 
The implications extend beyond technical refinement to questions about whether data-informed decision-making enhances educational equity or reinforces existing inequalities through algorithms that appear neutral.

Institutional Policy and Governance: Educational institutions confront generative AI through reactive policy development that prioritizes compliance over strategic vision, resulting in fragmented governance frameworks that are inadequate to the technology's transformative scope. Aristombayeva et al. document this institutional ambivalence in Guiding the Uncharted: The Emerging (and Missing) Policies on Generative AI in Higher Education, noting how universities conform to external regulatory pressures while failing to address research-specific applications or the distinctive challenges confronting art-focused institutions. Their analysis suggests that current policies overlook the cognitive, affective, and metacognitive demands of AI-assisted learning, revealing a deeper failure to reconceptualize educational frameworks that account for algorithmic mediation. Zhu et al. situate these governance deficits within the context of geopolitical tensions in Cross-border higher education cooperation under the dual context of artificial intelligence and geopolitics: opportunities, challenges, and pathways, demonstrating how techno-nationalism, data sovereignty conflicts, and divergent algorithmic values fragment cross-border educational cooperation. Their proposal for inclusive technological ecosystems and mutual recognition mechanisms addresses the fundamental problem of AI governance within competing national and cultural frameworks. Berkovich's empirical findings in The rise of AI-assisted instructional leadership: empirical survey of generative AI integration in school leadership and management work further complicate this picture, revealing that school leaders are early adopters of AI-assisted instructional leadership.
These patterns suggest AI integration proceeds rapidly despite inadequate governance structures, potentially entrenching inequities and centralizing power within technology providers.

Academic Integrity and Ethics: Large language model (LLM) services offer advantages for students developing their writing skills, particularly for second language learners. The benefits of AI assistance, however, come with drawbacks. As the Academic Integrity Officer in my School of Education community, I understand firsthand the complexity involved in navigating questions about authorship, originality, and the learning process. AI technology complicates the writing process in many ways: it can diminish voice, damage or build a developing writer's confidence, and add a layer to teaching practices, depending on how AI is integrated. Garcia Ramos proposes a disclosure-based framework in Development and introduction of a document disclosing AI-use: exploring self-reported student rationales for artificial intelligence use in coursework: a brief research report, which transforms AI use from a hidden practice into a site for metacognitive development and ethical deliberation. Her qualitative analysis reveals that students deploy AI primarily for verification, immediate academic support, procrastination management, and overcoming material obstacles, suggesting instrumental rather than collaborative engagement patterns. The disclosure mechanism itself functions pedagogically, fostering transparency and self-regulation while exposing the gap between institutional policies and actual student practices. Almufarreh et al. document the scale of this challenge through mixed-methods research in Ethical implications of ChatGPT and other large language models in academia, which confirms widespread LLM adoption and deep stakeholder concerns about plagiarism, bias, and the erosion of authenticity.
Their proposed interventions encompass transparent usage policies, integrated LLM literacy training, institutional review frameworks, and ongoing stakeholder dialogue; however, these recommendations rest on assumptions about institutional capacity and willingness to engage with complex ethical terrain. Both studies highlight how AI integration forces confrontation with longstanding questions about knowledge production, intellectual property, and academic integrity in today's context (Eke, 2023), as well as the purposes of academic assessment that extend well beyond technological considerations. In what ways will educators rethink and adjust their practices to address the underlying issues related to trust (e.g., the ability to trust oneself, one another, and the technology), a necessary element in the relational work of teaching and learning?

Faculty Perspectives and Professional Development: Faculty engagement with generative AI reveals a troubling disconnect between the recognition of potential benefits and actual implementation, suggesting barriers that extend beyond technical knowledge to encompass pedagogical uncertainty and institutional responsibility for professional development. Almisad and Aleidan document moderate awareness levels among Kuwaiti faculty in Faculty perspectives on generative artificial intelligence: insights into awareness, benefits, concerns, and uses, coupled with strong perceptions of GenAI's capacity to reduce administrative burdens, support research activities, and enhance online learning environments. Yet utilization rates fall substantially short of both awareness and perceived benefits, indicating that knowledge alone is insufficient for technology adoption. Faculty concerns center on threats to academic integrity, risks of plagiarism, and dangers of over-reliance, reflecting legitimate anxieties about AI's potential to undermine core educational values.
The identification of gender-based differences in awareness, perception, and utilization patterns, absent rank-based variations, suggests that sociocultural dimensions shape technology adoption, a phenomenon that institutional policies typically overlook. Professional development initiatives must therefore address not merely technical competencies but the deeper pedagogical and ethical questions that determine whether faculty perceive AI as compatible with their educational commitments and professional identities, since this institutional context also affects students' technological engagement, as Prohorovs et al. explore in Understanding Higher Education Students' Reluctance to Adopt GenAI in Learning in Latvia and Ukraine. Further, what are the best approaches to having faculty explore using AI to assist and enhance their own learning? What policies can help guide responsible AI use for faculty and students?

Collectively, the contributions bring into focus that technical implementation questions inevitably cascade into epistemological, ethical, and political territories that no single discipline can adequately address. The contributing authors represent institutions in 17 countries: South Africa, Kazakhstan, the United Kingdom, the United States, China, Germany, Norway, Ireland, Saudi Arabia, Pakistan, Cyprus, Malaysia, Israel, Indonesia, Kuwait, Nepal, and India. Their diverse locations illustrate how questions of agency, governance, ethics, and human learning are unfolding in distinct cultural and institutional settings yet remain connected through shared concerns about the role of AI in shaping social and organizational life. As Cowin (2025) observes, "We are not just retooling old systems with new technology. We are reimagining what it means to learn, to teach, and to be human in a digital age." The researchers in this issue explored AI as a phenomenon that extends beyond the boundaries of a discrete technological intervention.
AI operates not only as an institutional force that reorganizes authority, reshapes professional roles, redistributes cognitive labor, and reframes core educational purposes, but also as a catalyst for a wider reordering of the assumptions that structure contemporary systems of learning, work, and organizational life. Choices related to governance, assessment, and pedagogy intersect with shifts in workforce expectations and business practices, revealing how AI reshapes the relationships among knowledge, expertise, institutional power, and economic activity. In this sense, AI functions as an ecosystemic agent whose influence becomes fully intelligible only through relational and interconnected analysis, and whose effects contribute to the emerging configuration of global education, labor, and business landscapes. "The integration of AI in education is not a static process but a dynamic one that will continue to require improvisation, adaptation, and flexibility" (Dacey et al., 2025, p. 69). While the articles primarily focus on formal educational environments, the issues that emerge resonate with transformations occurring in contemporary business ecosystems. The tensions surrounding privacy rights, automation, expertise, data governance, and algorithmic accountability mirror challenges faced by organizations that integrate AI into managerial, operational, and decision-making processes. By foregrounding these parallels, the special issue situates educational institutions within a broader socio-technical landscape in which education and business confront comparable questions of agency, ethics, and value creation. This alignment clarifies why insights from both domains must be considered jointly when evaluating the societal implications of AI. The contributions curated in this Special Issue resist the techno-solutionist narrative that frames AI as a neutral efficiency tool requiring only proper deployment. 
Instead, they capture tensions between automation and augmentation, between administrative rationalization and pedagogical transformation, and between predictive precision and educational equity. These tensions operate simultaneously across institutional, instructional, and individual levels, calling for analytical frameworks that draw on computer science, learning sciences, ethics, policy studies, and critical pedagogy.

The authors invite readers to continue building collaborative and integrative approaches that match the scale and complexity of AI's global impact. We can benefit from reconsidering our teaching methods and comparing the ethical challenges of AI in business education, as well as how AI can be effectively incorporated to help prepare workforce-ready graduates. The effects of using artificial intelligence in education are just beginning to be measured, and its use will likely have ripple effects for years to come (Dacey et al., 2023). As we look towards the horizon, we see the need for more flexible, decentralized, and inclusive governance and educational structures, much like a forest of diverse species adapting to changing conditions within a vibrant ecosystem (Dacey et al., 2025, p. 70). Throughout the production process of this special issue, we remained mindful that "companies like OpenAI stand at the forefront, pushing the boundaries of what generative pre-trained transformers or 'GPTs' can achieve. However, beneath this veneer of progress lies a complex web of epistemological and pedagogical quandaries that question the very foundation of AI's trajectory in education" (Cowin, 2024). We invite continued dialogue on these fundamental questions as the field evolves.

Keywords: artificial intelligence, ethical practice, institutional policy, posthuman pedagogy, professional learning, transformative impact

Received: 04 Jan 2026; Accepted: 05 Feb 2026.

Copyright: © 2026 Dacey, Cowin and de los Reyes. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Charity M. Dacey

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.