- Faculty of Education and International Studies (LUI), Oslo Metropolitan University, Oslo, Norway
The public introduction of ChatGPT in 2022 marked a turning point in mainstream awareness of artificial intelligence (AI), catalyzing widespread debates on what it was, what it could be used for and how it would alter our world. In the domain of higher education, AI was framed as a transformative tool for knowledge production and pedagogical innovation, contingent on users’ critical technological awareness and competence. This study explores how AI was introduced, interpreted and acted upon within a teacher education context in the aftermath of the ChatGPT release. It takes the authors’ own university faculty and an interdisciplinary digitalization network as its case and employs a methodology that combines a reflexive and autoethnographic approach with document and survey data analysis. Using discourse, domestication and social mechanisms as analytical concepts, we identify central events, arguments and controversies in the larger context and discuss local ‘taming’ processes and dynamics. Our findings highlight both opportunities and challenges in integrating AI into teacher education. While the introduction of AI did set in motion transformations of established practices, how these transformations were understood and responded to varied. As AI became increasingly ingrained and automated in teacher educators’ work technology, the lack of institutional strategies to guide AI use in teacher educators’ practices individualized responsibilities and diversified orientations to, and applications of, AI in pedagogical processes. In this situation, bottom-up discussion networks among colleagues from different teacher education departments and subject fields scaffolded reflexive and transformative discussions and engagements with AI, providing a context where taken-for-granted knowledge and perspectives about these emergent technologies could be opened up, vocalized, experimented with and challenged.
Introduction
Although artificial intelligence (AI) has been discussed since the 1950s, it first gained mainstream attention after the introduction of ChatGPT, a variant of OpenAI’s Generative Pre-trained Transformer language model, in 2022. The post-2022 debates, dominated by market, expert and policy actors, predominantly portrayed AI as a technology with huge potential for societal change, efficient knowledge development and innovative methods, given the required regulation, critical technology awareness and competence among users (e.g., Digitaliserings- og forvaltningsdepartementet, 2024; United Nations, 2024).
In this article we discuss how AI came into use, was understood, was acted on and played into dynamics of transformation – that is, changes in attitudes to, practices with and procedures related to AI – in a Norwegian teacher education setting. Taking the post-2022 AI discussions, regulations and events at our own university and teacher education faculty as our case, we employ a user-centric, autoethnographic (Ellis et al., 2011), (self)reflexive and case-based (Flyvbjerg, 2006) approach. Our empirical investigation builds on three sources of data gathered through an interdisciplinary teacher-educator digitalization network initiated by the authors: (i) a document review of policy papers and media debates relevant to AI implementation in Norwegian universities and teacher education; (ii) online surveys among faculty members about AI use at a faculty of teacher education in Norway; and (iii) autoethnographic accounts that link our subjective and collective experiences with AI in teacher education to broader debates and events in the sector.
In the discussion and analysis we employ ‘discourse’ (Foucault, 1969/1972) as a conceptual tool to describe performed and governing statements by different actors in the education sector, and the concept of ‘domestication’ (Silverstone et al., 1992; Silverstone, 2006) to address the processes whereby we and our colleagues came into contact with, engaged with and sought to ‘tame’ and make the AI technology acceptable in teacher education in the first years following 2022. The connections that drove these taming processes are explained by reference to the concept of ‘social mechanisms’ (Elster, 2007).
In the sections below, we position the study within current research and outline our methodological and analytical approach. We thereafter describe the implementation of, and discourses about, AI in the Norwegian university system and how AI was domesticated in teacher education at our own university. Finally, we discuss the dynamics and social mechanisms that affected teacher educators’ engagements and actual ‘taming’ practices with AI in our context. We argue that insight into context-specific ‘small discourses’ is central to understanding the dynamics and transformations that the AI discourses and domestication processes set in motion in teacher education, and we provide examples of ‘social mechanisms’ (Elster, 2007) that can explain the connections that drove the taming processes and the subsequent transformations in procedures and curricula. In conclusion, we call on educators to engage in critical explorations of the role of AI in education and reflect, in a postscript, on how our teacher-educator digitalization network had, in hindsight, functioned as an innovative arrangement for building AI awareness, competence and skills among the participating members in the first three post-ChatGPT years.
Background and state of the art
Teacher education in Norway represents an interesting context and case for explorations into AI implementation and approaches to developing technology awareness, competence and skills across disciplines. Norway, because the uptake of technology in everyday consumption and education has been rapid and extensive (Slettemeås and Storm-Mathisen, 2021) and the building of digital awareness, competence and skills has been part of education and work-life policies for decades (Engen, 2019). Teacher education, for three reasons: first, its dual responsibility for fostering awareness, competence and skills in student teachers, so that they can do the same for pupils in schools, with impact on society; second, the complexity of teacher education, related to aligning practical training and theoretical approaches, interdisciplinarity, diverse understandings of what the challenges in work-life are and which frameworks are fruitful for addressing them, and bridging students’ on-campus learning with work-life realities (Cochran-Smith et al., 2020); and third, the fact that the roles and identities of teacher educators are diverse and not clearly defined (MacPhail and O’Sullivan, 2019), with some identifying primarily as practice professionals or subject specialists who happen to teach student teachers, whilst others see themselves as teacher educators (White, 2019). As the largest Norwegian educator of teachers from kindergarten through upper secondary school and in vocations, the authors’ own university faculty represents an exemplary case (Flyvbjerg, 2006) of these characteristics, diversities and challenges.
Although international research on AI use and competence in higher education has been growing rapidly, it is still predominantly technology-centered and underdeveloped in the context of teacher education (Sperling et al., 2024). Prevailing accounts, ranging from optimistic to negative to more dystopian perspectives, share the expectation that AI will transform higher education, but remain vague on how AI influences teaching and learning (e.g., Bearman et al., 2022). Research on AI in education has been based on ideas of replacing teachers, of improving teachers’ capacities (augmentation) (Chan and Tsi, 2024), or of AI as a new form of interaction that leads to new dilemmas challenging teacher professionality (Stenliden and Sperling, 2024).
Most available research has focused on AI as a tool in teaching; fewer studies have looked at how AI might support teachers’ professional competence, particularly with respect to the technological and ethical challenges associated with the use of AI (Tan et al., 2025). Studies have argued that AI can be transformative for more personalized learning processes, supervision and teaching methods, with potential to improve individual tailoring, motivation and engagement among students (Meylani, 2024). An early study that examined the transformative role and potential of large language models (LLMs) as learning tools proposed concrete prompts as a strategy to ensure that AI serves to support, not replace, ‘the human in the loop’ (Mollick and Mollick, 2023). In contrast, a recent study indicates that AI may obstruct students’ learning processes: students across educational levels who used LLMs reported less ownership of their own texts and had difficulties recalling or citing their own writing, compared to “brain-only” students, who demonstrated a stronger sense of ownership, greater academic satisfaction, and deeper cognitive engagement (Kosmyna et al., 2025).
Studies investigating factors that influence educators’ adoption of AI suggest that educators are generally optimistic about the opportunities AI offers for customization and personalization of learning material and experiences, as well as its potential to enhance efficiency and save time (Brandhofer and Tengler, 2024; Al-Mughairi and Bhaskar, 2024). Teachers’ lack of motivation has been identified as the largest barrier to AI adoption (Aljemely, 2024); other barriers include concerns about the reliability and accuracy of AI systems, the reduction of human interaction and collaboration, and ethical and legal issues related to data use (Al-Mughairi and Bhaskar, 2024). It has therefore been suggested that teaching programs should be made more engaging, individually tailored and practical, with emphasis on the benefits of using AI in teaching (Aljemely, 2024). Several studies argue that teacher education needs to embrace AI in a reflective way, with regard to the possibilities for improving teaching and learning as well as the potential pitfalls (Alexandrowicz, 2024), and that strengthened AI literacy among teacher educators is crucial in the way forward (Meylani, 2024).
However, although teachers’ knowledge of generative AI may improve after a course on the topic, this does not necessarily make teachers more willing to use AI in their future teaching (Bae et al., 2024). The rapid uptake of AI technology challenges the rather slow, traditional course- or lecture-based arrangements for fostering digital literacy in universities. Research on earlier educational technology shows that competence is closely linked to context, concrete situations and practical use, and that the process of making new technology suitable for practice within teaching and classroom contexts requires translations in the form of cultural and social adjustments by faculty over time (Engen, 2019).
A recent interview-based study on the uptake of AI in Norwegian higher education institutions concluded that there was an imbalance between the expectations placed on AI and the institutions’ initial responses (Korseberg and Elken, 2025). Shortly after the introduction of ChatGPT in 2022, the technology was primarily perceived as a challenge related to student cheating and assessment practices. It was also seen as a “moving target,” which complicated efforts to establish stable institutional strategies. The institutions’ initial response was characterized by an organizational paralysis of “wait and see,” combined with insecurity and a lack of knowledge about the technology, and a recognized need for research-based knowledge.
While much of the current discourse focuses on practical challenges and institutional adaptation, Selwyn (2024, p. 3) has argued that ‘The recent hyperbole around artificial intelligence (AI) has impacted on our ability to properly consider the lasting educational implications of this technology’ and that current discussions should pay ‘more attention to issues of power, resistance and the possibility of re-imagining education AI along more equitable and educationally beneficial lines’.
As a response to calls in current research, which highlight the underdeveloped state of AI in teacher education, the diversity of perspectives and the complex challenges with regard to adoption and ethical integration, this paper offers a multimethodological, in-depth exploration of how AI is being domesticated within a concrete teacher education context.
Methods, materials, conceptual and analytical approach
AI as a technical phenomenon has been described as ‘a sophisticated form of statistical processing involving math, data, and computer programming’ (Selwyn, 2024). However, AI also involves a social process that puts emphasis on certain values, functions, users and practices over others. The rapid increase of (more or less) automated integration of AI in media platforms, search engines, email and cloud storage systems is as such part of an ongoing digitalization and ‘deep mediatization’ process in which complex digitally mediating infrastructures become woven into societal functions and processes and take part in constructing social reality (Couldry and Hepp, 2017). The more technologies become integrated into the fabric of everyday life, the more taken for granted they become and the more challenging it is to distinguish them from the rest (Lupton, 2015; Slettemeås and Storm-Mathisen, 2021). For instance, many of us, teacher educators included, may have hands-on experience with technologies such as AI in our everyday and work life, without awareness or understanding of their workings and impacts on our physical surroundings and social life. This has methodological implications for research on the use of such technologies. It accentuates the limited value of using discursive data alone, and that insights require a methodology and analytical framework that can grasp the interplay of people, technology and infrastructures in the social environments and wider contexts where the technology is applied (e.g., Storm-Mathisen, 2016; Slettemeås and Storm-Mathisen, 2021; Helle-Valle and Storm-Mathisen, 2020).
Methods and materials
To generate insights into discourses, taming practices and transformative dynamics related to AI use in teacher education, we have therefore chosen a (self)reflexive and case-based (Flyvbjerg, 2006) approach, taking our own experiences at our faculty of teacher education in Norway as the case. Being the largest teacher education faculty in Norway, the case is not representative of everything that might go on, but it is well suited to illustrate core dynamics that play out in the field. To investigate these we employ three complementary methods and data sources, further described below:
• Document review of policy papers and debates relevant to AI implementation in teacher education
• Online surveys about AI use among teacher educators at the faculty of education, and
• Autoethnographic accounts of encountering and using AI as teacher educators.
Document review of policy papers and media debates relevant to AI implementation in teacher education
We conducted broad online queries to build an overview of (i) the chronology of AI implementation in the Norwegian context, (ii) relevant national and institutional policy documents and reports, and (iii) the themes brought into the public debate in higher education and how attitudes and positions in these debates developed between 2022 and 2025, nationally and in relation to teacher education. For the latter we focused on articles and opinion pieces published in the Norwegian university newspaper Khrono, a platform for debates on AI in higher education and teacher education among academic, student and university administration audiences following the ChatGPT launch in 2022.
Online surveys about AI use at the faculty of education
To map attitudes to and experiences from using AI, we developed an online survey. A Norwegian online form service, Nettskjema.no, was used to secure the anonymity and privacy of respondents. The survey included a combination of multiple-choice, Likert-scale and open free-text questions covering frequency of AI use, areas of use, concrete uses, experienced advantages/disadvantages, and attitudes and competence related to AI. Invitations to respond to the survey were distributed on two occasions to all faculty members at the four departments of the Faculty of Teacher Education at our university. In December 2023 the invitation was, on the advice of the faculty administration, distributed as a post with a link to the survey in the faculty’s general Microsoft Teams room. This, unfortunately, resulted in a very low response rate (N = 23 of 410 invited). When the survey was repeated in March 2025, invitations were sent individually to all faculty members by e-mail, and the response rate improved somewhat (N = 90 of 426). The respondents to the 2025 survey covered 21 percent of the faculty members overall, representing staff at all four departments. Fourteen percent of the respondents were aged 39 or younger, 60 percent were aged 40 to 59 and 26 percent were older. The survey results are therefore likely to portray attitudes, uses and experiences of AI among both experienced and less experienced teachers/researchers.
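For transparency, the response rates reported above follow directly from the invitation and response counts. The short Python sketch below simply reproduces this arithmetic (the counts are those reported above; the round labels are ours):

    # Response rates for the two survey rounds, using the counts reported above.
    rounds = {
        "December 2023 (Teams post)": (23, 410),
        "March 2025 (individual e-mail)": (90, 426),
    }
    for label, (responses, invited) in rounds.items():
        print(f"{label}: {responses}/{invited} = {responses / invited:.1%}")
    # Prints:
    # December 2023 (Teams post): 23/410 = 5.6%
    # March 2025 (individual e-mail): 90/426 = 21.1%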
Autoethnographic accounts—individually and collectively
We also employed an autoethnographic approach, seeking “to describe and systematically analyze (graphy) personal experience (auto) in order to understand cultural experience (ethno)” (Ellis et al., 2011, p. 273). The autoethnographic activities included: (i) individual recollections of, and experimentation with, different AI technologies (for different educational and research purposes, to learn more about the technology); (ii) discussions with colleagues at our departments, at network meetings and at seminars, to share experiences with AI use and spur collaborative, critical self-reflexive interrogations (Walsh, 2003); and (iii) individually and collectively writing this up as a narrative to document and connect these experiences. The process of inquiry took as its starting point our subjective, context-specific experiences of the developments, uses, discussions and events around AI from fall 2022 to fall 2025. In these self-reflexive engagements, we used our insider status as employees to question how our personal experiences linked to wider developments in the field. This included both our individual experiences as educators working at each of the four departments at the faculty of education and our collective experiences as members of an interdepartmental digitalization network at the faculty. The digitalization network, initiated by the authors in 2019, had since met regularly to discuss topics relating to digitalization in education and research and had arranged two faculty seminars (one on digitalization research at the faculty in 2021 and one on AI in education and research in 2024).
Conceptual framework
To analyze the above-mentioned sources of data we employ ‘discourse’ (Foucault, 1969/1972), ‘domestication’ (Silverstone et al., 1992; Silverstone, 2006) and ‘social mechanisms’ (Elster, 2007) as conceptual tools, for reasons and in ways detailed below.
Discourse
To get a better understanding of the dynamics of transformation that AI produced in teacher education, we approach the post-2022 AI discussions, regulations and events that took place in and around teacher education as ‘discourses’ (Foucault, 1969/1972). ‘Discourse’ as we use it here refers to how language is used and takes on meaning in specific contexts, hence giving attention to patterns in people’s ‘sayings’ and ‘writings’ as performed statements (Storm-Mathisen and Helle-Valle, 2008). The innumerable statements and discussions about AI today vary not only in position and opinion, but also in the reach and impact they have. Staff at the university relate to the discourses produced by government regulations and rules provided at the institutional level, as well as to exchanges between colleagues. It is to make some sense of these myriads of talk that we distinguish – as a simplification – between what we will call ‘large’ and ‘small’ discourses (Helle-Valle, 2009). Empirically, there is obviously not a dichotomous landscape of discourses. However, while some discourses are very large (e.g., the UN resolution on AI, the UNESCO AI competence framework for teachers, the European AI strategy and AI Act, the Norwegian national digitalization strategy), others can be said to be fairly large (e.g., our university’s policy for AI, or articles in the university newspaper), and among the small ones, some are more large-like (e.g., formal faculty meetings compared with informal network meetings among colleagues). It is for the sake of order that we suggest the distinction ‘large and small’ (Helle-Valle, 2009). The large discourses are – to borrow a term from Dreyfus and Rabinow (1983, p. 48) – serious speech acts, i.e., speech acts that are divorced from specific contexts and backed by institutions. Small discourses, on the other hand, are so-called everyday speech acts, anchored in specific socio-cultural contexts, and we will understand these as ‘taming’ attempts. The distinction gives analytical room for grasping the dynamism of the evolving field as it is filled with various, partly contradictory discourses fueled by different categories of actors who are differently positioned.
Domestication
We further approach the dynamic field of different discourses, and the practices they accompany, as ‘domestication’. Domestication is a term – developed into a theory – introduced in the early 1990s by the British media and social science scholars Roger Silverstone, Eric Hirsch and David Morley (cf. Silverstone et al., 1992). The theory was a response to what they considered too simplistic, technologically oriented analyses of technology adoption. Individuals and groups do not docilely adapt to new technologies but critically assess the social and practical implications of engaging with them. Thus, the theory focuses on the complicated maze of cultural, social, economic and technological factors that together constitute concrete processes (sometimes successful, sometimes less so) of ‘taming’ (domesticating) technology (see also Storm-Mathisen, 2014; Engen, 2019; Silverstone, 2006; Helle-Valle and Storm-Mathisen, 2023). Domestication theory conceptualizes the taming process as appropriation (how the technology becomes accessible or owned), objectification (its placement and display in an environment), incorporation (its temporal integration into everyday practices) and conversion (how technologies are communicated back to the “outside” world). We will use these concepts from the domestication framework to describe how AI technology entered the teacher education university setting from the market/world of production, and to address the ‘taming’ processes as an outcome of power-laden interactions between people, socio-technical infrastructures and institutions. This framing is relevant because AI technology very abruptly became part of everyday work life in universities: university staff could not reject it but were forced to try to tame AI, with insufficient information, in ways that could be ethically and practically acceptable within the university context. By approaching the discourses as taming processes, we can better grasp how teachers’ engagements with AI involve navigating an environment with a maze of diverging interests and more or less defined rules and regulations, and how power, in its various forms, is fundamental to outcomes and to how these are assessed (e.g., as good or bad, innovative or not).
Social mechanisms
To understand how the discourses and domestication processes of AI are transforming education, we draw on the concept of ‘social mechanisms’ – the ‘frequently occurring and easily recognizable causal patterns that are triggered under generally unknown conditions or with indeterminate consequences’ (Elster, 2007, p. 36). The concept suggests that we need to look concretely at teacher-educator-AI relationships and consider the dynamics whereby different elements work together in a specific setting and whether those ways of working together generate new outcomes (Helle-Valle, 2019; Storm-Mathisen, 2019). Using a concrete example from the autoethnography, we will illustrate how the emerging consequences of acting on and with AI technology in teacher education are not fixed dynamics, nor can they be predicted, but can be understood as social mechanisms that – triggered by the constantly evolving AI-student-teacher ‘assemblages’ (Latour, 2005; Storm-Mathisen, 2014) – transform the ways AI is ‘tamed’ within the setting.
Data and analytical procedure
Data from the various methodological approaches were compiled in different formats and used for different and complementary analytical purposes, following the procedures described below:
Data from the document review was produced in the form of text summaries (with links to relevant documentation) that described the history of implementation, policy development in the education sector and emerging themes and actors in the Khrono AI debates. This data was used for insights into the ‘large discourses’. In the first step we drafted descriptions of the history of AI implementation and of the main patterns in the ‘large’ AI-related discourses (policy and debates in the Norwegian university setting and at the teacher education faculty). In the second step, the four authors critically discussed and revised these draft descriptions at a meeting. Search engines, Copilot, Sikt KI-chat and ChatGPT were used in this part of the process to check and question events, themes, summaries and timelines.
Data from the surveys was produced in the form of automatically generated reports (in nettskjema.no) summing up response frequencies and quotes from answers to the survey questions. Due to the low response rate, the survey data was not suited for detailed statistical analysis. The survey data was instead treated qualitatively, as examples of ‘sayings’, hence as discursive expressions of concerns and experiences (Storm-Mathisen, 2019) among our teacher-educator colleagues. Frequencies and quotes from the survey data reports were used to identify thematic patterns and variations and to provide examples of the ‘small discourses’ of AI among our colleagues at the faculty. The survey data was also examined for insights into our colleagues’ experiences with ‘taming’ the AI technology over time, and as aspects of appropriation, objectification, incorporation and conversion.
Data from the autoethnography was produced in the form of text notes and narratives depicting our individual and collective recollections of sayings, doings, experimentations, experiences with and controversies over the AI implementation at the departments, the faculty and in the digitalization network. This data complemented the ‘thinner’ document and survey data by being of a ‘thicker’ description quality (Geertz, 1973), as the narratives came from situations we had ourselves been part of and had insider and contextual knowledge about. The autoethnographic narratives were used for two purposes. Firstly, to critically assess and complement the identification of patterns within the ‘small discourses’, the understanding of how these changed over time, of how they were linked to specific concerns and contexts for teacher educators, and of how they could be understood in light of the various conceptualizations of domestication processes. Secondly, to investigate and discuss how the large and small discourses connected to social mechanisms and dynamics that could explain what affected the varied uses of and concerns with AI and the types of changes (transformations, innovations) this triggered. The individual autoethnographic accounts helped us link our subjective experiences from teaching and research practices to prevailing and taken-for-granted understandings of AI activities and engagements at the faculty of teacher education. The collective autoethnographic accounts helped us critically discuss the context specificity of meanings and as such facilitated a wider, critical and non-AI-centric discussion (cf. non-digital-centric digital ethnography, Pink et al., 2016). We used internet search engines, Copilot, Sikt KI-chat and ChatGPT in parts of these processes to summarize document content, to search for and check information (i and ii), to receive input on how to shorten and improve text we had written, and to check reference style.
Results—implementation, discourses and domestications of AI
In the sections below we first sum up our insights into the history of AI integration in Norway and at our university. We then present an overview of the large AI discourses – the AI policies and media debates in Norway and at our faculty. These two parts are based on our online inquiry and review of policy documents and university media articles. In the last part, based on the survey data and our autoethnographic accounts, we describe patterns and variations in the small AI discourses and in how AI was domesticated in the context of our teacher education faculty.
Implementation—AI development and integration in Norwegian teacher education
What happens within educational institutions will always be influenced by what teachers and students experience in their everyday life outside institutions.
AI – technological systems able to create new content in the form of text, music, images, video, code, etc., often referred to as generative AI – was not a completely new phenomenon in the Norwegian context when OpenAI’s ChatGPT (Generative Pre-trained Transformer) was launched in November 2022. Since the early years of the new millennium, Norwegians had gradually (although to a large extent unknowingly) been familiarized with aspects of AI integration in social media platforms (e.g., newsfeed algorithms, friend suggestions, facial recognition, filters, augmented reality and virtual assistants by 2010), digital bank services (e.g., chatbots since 2018), ‘smart’ elements in digital healthcare services (since 2021) and functionalities in ‘smart’ products and appliances (like Google Assistant, Siri and Alexa). AI that used large language models (LLMs) to create new content in the form of text started appearing in 2020 and developed into what we now know as ChatGPT, Claude, Gemini and the like. What was new about the ChatGPT launched in 2022 was that, in contrast to earlier AI systems with similar text-generating and conversation-like functions, it had a more user-friendly interface and was made available in a ‘free’ version, and was as such accessible to anyone who wanted to try it out. The uptake was, as we know, rapid, and further developments followed quickly.
AI entered the work systems of all university employees when Microsoft’s Copilot was integrated into Word, Outlook, PowerPoint and Teams in Office 365 by 2023. As GPT tools process large amounts of information about their users, Sikt – a service provider to the knowledge sector in Norway – launched Sikt KI-chat in 2023 as a solution that safeguards privacy and data security in a GPT technology available for higher education and research. AI-facilitated services – based on ChatGPT or alternatives like DeepSeek, Claude and Gemini – were by 2024 integrated into a wide array of our everyday digital tools for work, education, communication and consumption, increasingly as automatic applications and functions (e.g., in search engines, news media and social media) through log-in or purchase. At present, our university offers Copilot (Microsoft.com) and Sikt KI-chat (sikt.no) to students and staff with user accounts at the university, as well as, for research, Keenious (oslomet.no) and Autotekst (uio.no) and a wide range of tools with integrated AI components for use in research (such as NVivo and ATLAS.ti) and teaching (e.g., in Canvas since 2025).
Large discourses—AI policies and debates in Norway and locally at our own university
In 2020, the Norwegian Government launched a national AI strategy that encouraged higher education institutions to integrate AI in study programs at all levels and supported digital infrastructure for AI research and teaching (Kommunal- og moderniseringsdepartementet, 2020). In 2023 an updated national strategy for digital transformation in higher education encouraged the use of AI and digital tools to improve the quality of learning, the accessibility of education and administrative efficiency (Kunnskapsdepartementet, 2023a). A national strategy to strengthen digital competence and infrastructure in kindergartens and schools was published the same year, somewhat delayed due to the introduction of ChatGPT. The strategy discussed AI as part of the increasing presence of new technologies that influence teaching, learning and administration, and highlighted privacy, ethical considerations and digital competence as core to ensuring critical and efficient assessment and use of AI tools (Kunnskapsdepartementet, 2023b). In 2024, the same year as a common regulatory and legal framework for AI was established within the European Union (EU Artificial Intelligence Act, 2024), Norway launched a national digitalization strategy stating that Norway was to become the world’s most digitalized country by 2030 and be at the forefront with respect to ethical and safe use of AI (Digitaliserings- og forvaltningsdepartementet, 2024). The Government also increased funding for AI research. When the government appointed a committee for AI in higher education in spring 2025, the Minister for Research and Higher Education said:
“The question is not how much or little AI to implement in education but how we adapt the educations to a new technological reality. We need rules for how we might and should use AI in higher education” (Regjeringen, 2025).
The overarching message in these governmental strategies thus seems to be that AI was to be embraced and made use of for innovation, taking ethical considerations into account, and that building critical AI awareness, skills and competence among users is crucial and a core task for educational institutions.
Public debates about AI peaked in the aftermath of the ChatGPT launch in November 2022. Our discussion centers on articles and opinion pieces published in the Norwegian university newspaper Khrono from 2022 to 2025, as these focused explicitly on the impact of AI on higher education and on teacher education. The coverage spanned diverse themes, including teaching practices, assessment and ethics, with input from students, academics, university leaders and politicians. The first reactions emphasized uncertainty and the limitations of generative AI, prompting calls for regulation. By spring 2023, one university had introduced a ban on AI-generated text in exams, while others argued that AI tools could also be used to police AI use. Articles a year later portrayed more diverse debates, with both liberal and critical perspectives emerging, alongside efforts to adapt pedagogical practices and develop regulatory frameworks. Common concerns included academic integrity, intellectual property, sustainability and privacy. Research cited in the articles suggested negative effects on student learning and a rise in cheating cases, leading to calls for redesigned assignments and oral exams to make undocumented AI use difficult. A survey conducted among higher education students revealed that while nearly 90% used AI, over half believed it could negatively affect their qualifications. Teacher educators were portrayed as navigating a dual role: managing AI use among university students while preparing them, as future teachers, to navigate AI in schools. While concerns about cheating persisted, there was growing recognition that AI must be integrated into teaching to foster critical engagement and responsible use.
At our own university and at department level, staff received few recommendations from the leadership as to how AI should be handled in education and research in the first years after the launch. The guidelines that were communicated were very general and in support of the Norwegian strategy that AI is ‘a technology we should use for innovations towards improved quality and efficiency’, but that we should be aware of cheating and misconduct. Some of our university leaders wrote in a commentary in the university newspaper in January 2023 that chatbot cheating should be dealt with like any other case where there is suspicion that students have not written their exam themselves. They argued that sanctions are necessary in certain cases but acknowledged that the routines for cheating cases are long and difficult due to demands for documentation and the safeguarding of students’ rights. They concluded that the challenges chatbots present are similar to challenges education has always had in finding ways to evaluate students that are fair and give them a reason to learn. AI was also a theme at faculty and department meetings at the start of the fall term 2023. Presentations focused on how to deal with text-generating tools, the importance of digital literacies for teachers and students, and ongoing work to construct an AI competence module for staff, including information about trials of various AI tools in teaching (for planning and adapted training) and in research (for systematic reviews, transcription, translation and analysis). The take-home message was that AI could be used for many things, but presenters warned about the biases and risks involved. A webpage with information about why and how AI can be useful, what policies the university had regarding AI, and what AI tools employees could access was launched in November 2023 (OsloMet, 2023). A policy for AI at the university was adopted in April 2024, based on a draft from a working group established in late 2023, following a proposal from the dean at our faculty. The recommendations emphasized transparency, source criticism, human responsibility, privacy and data protection, sustainability and legal compliance, but were formulated in very general ways. A guide for the use of AI in student theses, released in August 2025, gave more concrete advice and stated, among other things, that AI is not considered a source to be included in reference lists, that text generated by AI must be marked clearly, and that use of AI that is not disclosed can be judged as cheating.
What this brief description of the large discourse landscape on AI reflects is that the national frameworks were not ‘big’ enough to effectively regulate big tech. Gradually and invisibly, the digital infrastructure took control over the nuts and bolts of policies. For fear of losing out, ‘responsible’ actors argued for the importance of taking an active part in the AI revolution, but once this was set in motion it became clear that the reach and power of the big technology corporations supersede those of national actors.
Small discourses—domesticating AI at the teacher education faculty
In the sections below we move on to depict the reception of, and various engagements with, AI by ourselves and our colleagues at our own faculty in the first years after 2022. We approach these ‘small’ discourses as ‘domestication’, that is, as attempts by ourselves and our colleagues to ‘tame’ AI technology and make it understandable, relatable and acceptable as it became part of our everyday academic life. To address different aspects of the processes whereby AI passed from the production and market sphere to our everyday and academic sphere of consumption, we organize the presentation of results according to the concepts of appropriation, objectification, incorporation and conversion. In the presentation we seek to provide examples of main patterns as well as variations.
Appropriation
As the previous outline of the history of AI implementation in Norwegian higher education institutions has already suggested, processes of AI appropriation by users changed dramatically from the ChatGPT launch in 2022 to fall 2025. That there was also a rapid increase in AI appropriation by faculty members at our university is suggested by results from our online surveys, which show a considerable increase in the share of faculty members who answered that they used AI for work daily or weekly, from 17% in 2023 to 48% in 2025.
The authors’ early engagements with ChatGPT tools as teachers were, in the first period after the launch, driven by individual choice and curiosity. One of the authors of this paper tried out the freely available version of ChatGPT at the time, another purchased access to ChatGPT-4 when it arrived, both out of curiosity to learn about and experiment with what these tools could do. We searched for them in the online marketplaces and used our work email, mobile or PC to log in. Our uses and practical engagements varied: one of us experimented with AI daily, one did not try it at all, and the others tried it out occasionally. Teacher educators who answered the 2023 survey listed ChatGPT/OpenAI.com, Google Translate, wordify.no and Menti as examples of AI applications they had used, and listed lesson planning, text revision and content creation as examples of purposes that motivated them to do so.
At the time of writing, 2–3 years later, we find that processes of appropriation have changed dramatically and become diversified. With the advent of university-specific AI solutions, some of us unsubscribed from and stopped using ChatGPT/ChatGPT-4 and started using Copilot and Sikt KI-chat, motivated by the assurance that these were more secure solutions and recommended by our university. Others kept on using the open ChatGPT in addition to trying out many other related AI tools as they became available. Moreover, we realized that we also appropriated AI gradually, automatically and rather unknowingly, as AI tools became more invisibly integrated in the digital work tools our university had purchased and that we could not opt out of, e.g., in the Office 365 work platform (Copilot, Decisions) and Canvas. This development was not announced by our employer, and our awareness often came with surprises (e.g., suggestions for relevant files to prepare for meetings, or for how to prepare a PowerPoint) and with some concern, as these suggested a one-way transparency from users to the back end of the technology. The field became more blurred with respect to which AI applications we had to log in to use and which were activated as we logged in to our user account on the university webpage. Some colleagues were, and still seem, quite unaware that these AI components are there, what they do and what we can do with them.
Our appropriation of AI tools had thus become more diverse, even though regulations had become more uniform, partly because the available GPT and AI opportunities had diversified. Initially, appropriating AI was experienced as an active, exploratory and experimental activity supported by a growing variety of free GPT and AI opportunities. Over time, this shifted toward a more deliberate and conscious engagement in a regulated period characterized by active use, requiring login and informed choices regarding the purpose and extent of use. Eventually, this evolved into a situation where conscious use intersected with more incidental use, as AI had become more invisibly embedded in our everyday work technologies (with little information and little sense of choice or opt-out opportunities).
Objectification
As already suggested, the objectification of AI – that is, its placement, display and embeddedness in the teacher education environment – gradually changed over time and appears at present, at our university, woven into a rather complicated patchwork. From being mostly absent from university digital platforms and toolkits in 2022 (available only in commercial platforms), AI had by 2023 become available to staff and displayed on university webpages as the more secure Sikt KI-chat and Copilot applications, offered on demand. By 2025 AI applications had become more seamlessly and by-default integrated into our daily university Microsoft work tools. All these placements currently exist side by side. The Sikt KI-chat and Copilot solutions have the advantage that the information generated by the AI is stored and connected to the logged-in ID, so that users are provided with an archive of their queries and need not remember them from one session to the next.
As faculty members, we experienced some confusion in the beginning with regard to which tools were part of the university system and which were not. In one instance, many university employees in Norway, the authors included, received an email with an invitation to a free trial of, and course in, a specific AI research tool. The email looked like it was sent from a Norwegian university sector actor, and some of us thought perhaps the research council. However, it turned out to be a sales pitch from a marketing company that by some means had gained access to email addresses in university systems. In the online survey among faculty members, most teacher educators answered that they had learned about AI through their own research (72%), colleagues (57%) and university tutorials (57%), yet many expressed ambivalence about their own AI competence (42%) and about whether it was ethically safe to feed research data into AI applications, suggesting a certain degree of uncertainty. The concrete placement or display of AI tools has, to our knowledge, not been raised as a topic for debate among our colleagues. Discussions at the first AI faculty seminar were rather about how AI could be embedded within the curricula, in content, teaching and evaluation methods. Early discussions among ourselves and our colleagues were technology-oriented: what type of technology was this, what were the different tools, what could they be used for in teaching and in research, and why should they be used? Colleagues raised concerns about bias, ethical dilemmas, sustainability, privacy, copyright and how AI would affect educational outcomes.
The present experience is that AI is everywhere, and as teacher educators we do not have control over where AI is placed – in the university, in the teaching and research processes or in the students’ learning processes. The policy and guidelines are still vague with respect to which tools can and cannot be used, and the technological terrain is rapidly changing. The question of embeddedness is therefore likely one that needs to be continuously addressed.
Incorporation
In terms of incorporation – the temporal integration of AI into everyday practices in teacher education – the picture seems very diverse. Our online surveys suggest that the use of AI among teachers at our faculty increased from sporadic, conscious try-outs for specific purposes in 2023 to more extended use in 2025, both in everyday teaching (6 out of 10) and research (7 out of 10) and in some administration (2 out of 10). Whereas respondents in 2023 wrote, for instance, “I have not found a good way to use AI yet” or “I do not think the guidelines for what is acceptable use are clear enough,” the group of staff who had not used AI in teaching or research (approximately 6 out of 10 in 2023) had shrunk notably by 2025 (1 out of 10). Some degree of AI incorporation in teacher educators’ practices had as such become normalized. One reason for this increase may be a change from individualized to institutionalized processes of incorporation. Since 2025, all university employees have had some experience of automatic AI integration into teaching or research practices when using work tools in the Office 365 system (email, calendar, spell-checking, etc.). At the same time, there are still some who may not be aware of these AI components and do not use AI on a conscious, regular basis. The staff who know about the opportunities but are more reluctant to try them give ethical reasons: they are skeptical on principle, or they are uncertain about which uses constitute cheating. The open-ended answers from AI-using respondents suggest that uses have moved from being motivated by curiosity to being directed at concrete tasks. In teaching, the respondents answered that they used AI primarily to support planning, the development of ideas and translations, to make teaching designs more varied, and to some degree directly with students, for instance as a theme for reflection. One teacher wrote:
“It is very positive that [AI] in practice forces teachers to make assignments that cannot be answered with AI (…) we need to stop making assignments that only ask for regurgitation from the curriculum. This is a good thing and gives new perspectives on what learning is at a university.”
In research, the answers suggest that AI is used primarily as an aid in literature reviews, for language correction in writing processes and for simple analytical tasks. There are also different rules for AI use in research. The guidelines of journals, for instance, vary, and many researchers prefer not to use AI in order to be on the safe side. The faculty members report many benefits: saving time and new opportunities for creativity and variation are the most mentioned. For instance, one respondent wrote in an open-ended answer: “It is time saving. I use it to update myself in subject fields (…), and for PowerPoint dispositions.” Other open-ended answers reveal widespread concern that AI can undermine important learning processes and values: “AI makes assessments of exam assignments more difficult for teachers (…) the institution is lagging behind (…) we are left rather powerless.” Others highlighted the risk of students cheating, reduced critical thinking, problems with hallucinations and ethical dilemmas related to the use of sources, copyright and privacy. Whereas much concern has been with privacy and intellectual property rights, very few raised concerns about the fact that university owners and leaders might have access to this information.
Our overall impression is that the incorporation of AI in teaching and research is still at a trial stage, and that the use of AI has only superficially changed core aspects of teaching and research practices. The most evident incorporation has been in the assessment of students’ work, e.g., in relation to detecting plagiarism. As we have experienced it, the most notable transformation in teacher education after the launch of ChatGPT has been related to this incorporation and to teachers’ experiences and practices with these tools in student assessments, and exams in particular. Many teachers have argued for, and acted on, changing assessment from written home exams to oral exams or written exams on the university campus (more about this in the next section).
Conversion
Conversion is understood here as the ways in which teachers translate what they experience or do with AI from one context to another, that is, how they communicate their personal understandings and uses of AI technologies to an “outside” world, be it colleagues, students or research audiences. The public debates, also in the university newspaper, point to a dynamic where the proponents of AI technology are given more space in the debates than those who are skeptical. This was particularly evident in the early phase, where, for instance, the ethical and climate challenges were hardly given voice. Our impression is that many teacher educators at our faculty relate with some degree of resistance to the public policy on AI use but are reluctant to communicate this skepticism openly. One of the survey respondents, for instance, wrote: “I am very concerned about the ability of humans to choose the shortest way to the target. Even if AI might provide better teaching and learning I am afraid many will use it mostly to generate fast results.” Another respondent wrote about AI “as a kind of new colleague.” In terms of dialogue among colleagues, those of us who have experimented with AI more than the average teacher can be looked at with skepticism, which tends to result in less transparency and sharing of experiences with AI. Seen in relation to the survey results, it appears that whereas the large AI discourses are quite dominated by a positive, techno-optimistic and implementation-oriented narrative, the smaller discourses have elements of the same but also of the opposite, and are more varied and messier.
In summary, the results suggest that the domestication of AI in teacher education has moved from individualized to more institutionalized appropriations. Despite the increased availability of secure AI tools and the automated integration of AI technology in teacher educators’ work platforms, processes of AI domestication still seem highly varied. If we look at domestication as the process whereby AI technology is given a practical, symbolic and institutional place, we might conclude that the process is still in its youth. Whereas some are in the process of domesticating the technology into their teaching and research practices, others have not started, and some feel that it is the AI technology that is in the process of domesticating teachers. There seems nevertheless to be substantial concern and serious effort by teacher educators to ‘tame’ the AI technology and make it their own. It requires quite a lot of competence to prompt, interpret and use AI outputs (and to relate to their automatically generated content) in fruitful and ethical ways, and there are large differences among users in experience and in practices of incorporation and conversion.
Moreover, the increasingly seamless integration of AI in the various digital tools academic staff use presents some interesting puzzles. To some extent AI use is invisible because AI is being introduced without its users’ knowledge. This raises the question of whether technology can be domesticated by its users when they do not know they are using it. Our answer is a qualified yes. But first it is important to acknowledge that the term domestication is an analytical tool, and as with any tool its usefulness depends not on the extent to which it mirrors the reality we study but on the extent to which it serves as a useful tool for understanding a complex empirical reality (Bourdieu, 1977, p. 203, n49). AI’s invisibility complicates the analytical framework, but the framework is still useful: what the domestication framework helps us see is that AI is appropriated, objectified and incorporated irrespective of the users’ knowledge of it. However, to the extent that users are unaware of their use of AI, there is little conversion. The important point here is that the various aspects of domestication are relevant but function differently depending on the users’ awareness of what they actually are engaged with. In addition, the issue of AI ‘slipping in the back door’ also serves to problematize the question of what domesticates and what is domesticated. AI – deemed intelligent by many technological experts – can in many ways be said to be the actor, or, in Latour’s terms, the actant. Hence, in that sense, AI is the one molding the humans into docile objects. Or, again to evoke Latour: we are better off treating humans and the social as integral and symmetrical to technology, instead of creating an artificial separation (Latour, 1993, 2005).
Discussion—AI and dynamics of transformation in teacher education
What characterized the AI-related dynamics of transformation in and around our university and teacher education setting in the post-ChatGPT years? In the sections below we discuss this question by returning to the analytical concepts of ‘small and large discourses’ and ‘social mechanisms’. Using an exemplary case, we illustrate how the emerging consequences of acting on and with AI technology in teacher education transformed the way AI was ‘tamed’ and redefined, in processes of ‘conversion’ within our teacher-educator setting.
The reason we initially introduced the distinction between large and small discourses is that, in order to understand the dynamics of a practical and discursive process, we need to acknowledge the often significant tensions and conflicts that emerge in practical AI-related life. Typically, staff are forced to face concrete challenges, and thus more easily see the downsides, when they have to deal with all the unintended consequences of this new technology. These concrete, practical experiences thus frequently place them at odds with the guidelines laid out in large discourses. And it is precisely these tensions, we contend, that explain what is actually taking place at universities. The differences in opinion and the acts of ‘resistance’ play back on the large discourses and hence adjust the content of the latter. Therefore, by studying the interplay – the confluences, tensions and contradictions – of large and small discourses, we can grasp the processual quality and dynamics of what is taking place in a more nuanced way than a conventional discourse analysis would allow. We will illustrate how by way of an example (versions of which can be found at all Norwegian universities).
In 2023, our faculty stated that if examiners suspected students of overusing ChatGPT in home exams, they were required to report it to the administration. The administration would then follow specific procedures to determine whether the case qualified as academic misconduct. In the fall semester of that year, a staff member in charge of a home exam reported two cases of possible AI-related cheating. After a series of emails between the academic staff involved and the university administration, a meeting was held where the suspected papers were read. The conclusion was that the cases needed to be followed up. The two students were separately summoned for interviews with representatives from the academic and administrative staff. One student refused to attend and eventually informed the committee that he had left the university. The other denied any wrongdoing, amid tears, and gave quasi-plausible explanations for what the committee saw as cheating. In the end, however, the lack of hard evidence resulted in the student receiving only a warning.
This event naturally sparked talk among the academic staff at the department. The collegial attitude was that the rules and procedures given by the administration were dysfunctional. Although the academic staff were convinced that the students had cheated, the work hours required to go through the papers, interview the candidates and write reports to the administration, combined with the lack of any conclusive outcome, made the cases appear, in the eyes of the academic staff, a tedious waste of time in an already hectic work schedule. The administration responded to this skepticism by pointing out that they had tools that could detect such cheating with certainty, and that it therefore would not take that much time. What they obviously did not know at the time was that specific programs – like QuillBot – were designed to rephrase text generated by ChatGPT so that it could not be detected by other software.
Still, feedback from this and similar cases had an effect on the administration’s procedures. Early the next year, the administration conveyed that they no longer wanted academic staff to report suspicions of AI-related cheating; it was left to the staff to deal with the problem as best they could. This new policy also seemed to be prompted by HR’s involvement in the issue: their position was that the university had to be extremely careful, as it risked being sued by students. From the point of view of a large portion of the academic staff, the administration had simply pushed the problem onto them, leaving them with no measures to meet it. In fact, up until the writing of this article there were few clear rules and procedures for how such cases should be handled.
The discrepancies between the large and small discourses in the case described above illustrate how the realities of AI use gave the academic staff an understanding – and hence a discursive position – that was not foreseen by the administration, and that had a significant feedback effect on the university’s AI-related procedures. Without including the small discourses in the analysis, it would be impossible to understand the changes that took place in the practical handling of student encounters. The original rules and procedures were generated by signals ‘from above’: from state institutions and from the general global attitude about the importance of making use of AI. It was the encounters staff had ‘on the ground’, and the small discourses these experiences generated, that pushed the university leadership and administration to change (i.e., ‘domesticate’) its policy.
The analytical dynamism that this way of understanding the discursive landscape provides us with – and that the case illustrates well – also points to the concept of social mechanisms. In contrast to descriptive discourse and domestication analyses, the term social mechanism signals an interest in understanding the causal connections that drive this taming of the new technology forward, and what kinds of consequences they have. Steering between Scylla and Charybdis – that is, between intentionally overlooking causality on the one side and searching for covering laws (Hempel, 1966) that would explain the processes we observe on the other – we have identified and analyzed a social mechanism that was in play (the experience-near challenges of dealing with assumed cheating vs. the general rules laid down by the big institutions) and which helps us better understand what was actually happening. Obviously, the realities surrounding AI as practice are immensely complex, and we therefore cannot expect any law-based explanation. However, by applying social mechanisms – as what can be termed tendential causalities (if A, then it is likely that B happens) – in our analysis, we can gain a deeper understanding of what is happening (Elster, 2007; Hedström and Swedberg, 1996; Helle-Valle, 2019). An obvious, dominant mechanism in the case just described is how encounters with students expose staff to troublesome aspects of AI in education: students use it in ways that negatively affect their learning, and the grounds for assessment in exams are challenged.
Teacher educators face such dilemmas all the time, not only in relation to students and colleagues, but also because of their double function: they must be ‘good models’ for students, as they are educating future teachers who will meet ‘AI-domesticated’ youth, children and parents in schools and kindergartens. As one survey respondent wrote in 2025:
“It is absolutely necessary [to integrate AI in teacher education] to keep up with today’s development. Many of our students meet young people who use this very actively. Then we must (…) contribute to critical/ethical and useful use.”
Conclusion
In sum, our findings suggest that processes of AI integration in teacher education are complex and shaped by both individual and institutional practices and domestications. Hence, we need to understand AI not merely as a technological tool, but as a socially and culturally negotiated practice (Tan et al., 2025; Meylani, 2024). Our findings regarding skepticism and the necessity of professional judgment are in line with studies showing that increased knowledge about generative AI among teacher students does not necessarily lead to a greater willingness to use the technology in future teaching (i.e., Bae et al., 2024). In light of studies showing that national policies and local institutional cultures affect how AI is integrated (Brandhofer and Tengler, 2024; Aljemely, 2024), the concepts of “domestication” and “social mechanisms” help capture how implementation and transformation occur through discursive, practical and structural tensions in time and space, not as an evident, universal or linear process. Our use of the conceptual pair “large and small discourses” is an analytical approach that aids in exploring the tension between overarching policy expectations and practice-based experiences – an issue also discussed in recent educational research (Selwyn, 2024; Kosmyna et al., 2025).
In a broader sense, looking at AI in education through the concept of domestication exposes the simplistic techno-centric and techno-positive thinking that has characterized the digitalization of education for several decades. It thereby provides an opportunity for critical reflection on what the examples outlined in this paper ‘are cases of’ (cf. Flyvbjerg, 2006). For example, there is a growing debate in education research about corporatization and the encroachment of the big tech companies (Moeller, 2020; Facer and Selwyn, 2021; Macgilchrist, 2021). In that sense, AI simply becomes the latest in a series of digital technologies introduced to the field of education. The crucial question for educators is then how to respond to this technology without ending up in naive optimism or neglecting its existence. We have inherited many of the unregulated and uncontrollable developments of digital technologies in education, the effects of which we are now experiencing (Engen and McGarr, 2025). For example, there have for many years been debates among educators about how digital technologies and services might compromise students’ and educators’ privacy and copyright. AI has amplified these concerns and shaken up education. The question of exams and student assignments is one issue, but the challenges extend far beyond that if the use of AI has negative effects on students’ learning and understanding (Kosmyna et al., 2025).
Educators have for too long managed to ignore Big Tech’s presence in education, but they can no longer ignore the new forms of soft governance it brings – and AI is not exempt in this regard. Educators need to claim ownership of where, why, and how AI should become part of universities’ routines and structures. Helping students to critically explore these issues therefore becomes crucial. There is also a compelling need for research that discards techno-solutionist discourses and critically explores the often taken-for-granted understanding of digital technology in education. As educators, we find ourselves in a unique position, since AI has yet to secure a clear and formalized role within universities. This presents us with a valuable opportunity to help shape the path forward through pedagogical innovations.
Postscript: interdepartmental and interdisciplinary network—an innovative approach to AI awareness and competence?
When the authors of this paper initiated a network for digitalization research in teacher education at our faculty in the fall of 2022, the motivation was to share, discuss and develop research on the evolving digitalization, in the belief that an interdisciplinary and interdepartmental context would be fruitful for this. Our focus in the first year was on identifying current knowledge and researchers at the faculty, and assembling the latter in seminars to discuss research opportunities. We surveyed publications and ongoing projects related to digitalization at our four departments and shared the findings in presentations and discussions at a faculty seminar in the spring of 2023. It was after a faculty meeting on AI in the fall of 2023 that the network turned its attention to AI. Part of the reason was that the faculty meeting exposed highly varied knowledge about, and attitudes to, this new technology: some thought it rather insignificant and not worth much attention, while others warned against the fundamental challenges AI could pose to both education and research. Concerns about ethical issues related to privacy, data harvesting and plagiarism also varied. So the idea struck us: let us conduct a survey about the use of AI among our colleagues at the faculty. In the process of developing the survey and preparing for two faculty seminars, we discussed and applied AI in many new ways. In hindsight, we think of the network itself as an arrangement that fostered an AI ‘taming’ process and an awareness among those of us who participated, in ways that would not have happened without it.
In retrospect, we think the network achieved this because it introduced a new setting, improved our individual and collective reflection processes (hence also a ‘conversion’ that made us more aware of the invisible aspects of AI appropriation and objectification in our working tools), was scaled up to include colleagues with different viewpoints, and resulted in research activities. The new aspect was a setting of people who shared an interest in digitalization but had been socialized in very different departments and came to the discussion with very different experiences, knowledges and taken-for-granted ideas. This fostered an environment for open discussion, different from what was offered in our sections or research groups, and as such inspired and improved our reflection processes and understandings. Furthermore, as we not only discussed among ourselves in the network but also arranged faculty meetings and brought our discussions back to the research groups we headed, the dialogues expanded into other contexts. This in turn meant that the discussions invited and included people with very different viewpoints and experiences, opening the dialogue even more. Lastly, the network conversations and activities motivated us to engage in research activities and to write this paper, thus producing data on the topic that can invite discussions about AI awareness, competence and skills in even broader academic circles. AI is a technology that is hard to understand, due to its complexity, rapid development and invisible or seamless integration. Trying to put into words, discuss and reflect on what it can be, what we do with it, what this means, and what could have been different had it not been there, is to take part in the discourses and domestication processes, and thus in the transformations of AI in teacher education.
Data availability statement
The datasets presented in this article are not readily available; however, survey data can be made accessible on request. Requests to access the datasets should be directed to ardist@oslomet.no.
Ethics statement
Ethical approval was not required for the study involving humans in accordance with the local legislation and institutional requirements. Written informed consent to participate in this study was not required from the participants or the participants’ legal guardians/next of kin in accordance with the national legislation and the institutional requirements.
Author contributions
AS-M: Writing – original draft, Writing – review & editing. TG: Writing – original draft, Writing – review & editing. JH-V: Writing – original draft, Writing – review & editing. BE: Writing – original draft, Writing – review & editing. SK: Writing – original draft, Writing – review & editing.
Funding
The author(s) declared that financial support was not received for this work and/or its publication.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that Generative AI was used in the creation of this manuscript. We used ChatGPT and CoPilot in parts of the data analysis, as aids to summarize the content of documents, search for information and check for correct reference style. All information and text in this article has been written by ourselves. We used CoPilot to shorten and improve one paragraph of text that we had originally written ourselves; this is stated in a footnote.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Alexandrowicz, V. (2024). Artificial intelligence integration in teacher education: navigating benefits, challenges, and transformative pedagogy. J. Educ. Learn. 13:346. doi: 10.5539/jel.v13n6p346
Aljemely, Y. (2024). Challenges and best practices in training teachers to utilize artificial intelligence: a systematic review. Front. Educ. 9:1470853. doi: 10.3389/feduc.2024.1470853
Al-Mughairi, H., and Bhaskar, P. (2024). Exploring the factors affecting the adoption of AI techniques in higher education: insights from teachers' perspectives on ChatGPT. J. Res. Innov. Teach. Learn. 18, 232–247. doi: 10.1108/jrit-09-2023-0129
Bae, H., Jaesung, H., Park, J., Woong Choi, G., and Moon, J. (2024). Pre-service teachers’ dual perspectives on generative AI: benefits, challenges, and integration into their teaching and learning. Online Learn. 28, 131–156. doi: 10.24059/olj.v28i3.4543
Bearman, M., Ryan, J., and Ajjawi, R. (2022). Discourses of artificial intelligence in higher education: a critical literature review. High. Educ. 86, 369–385. doi: 10.1007/s10734-022-00937-2
Brandhofer, G., and Tengler, K. (2024). Acceptance of artificial intelligence in education: opportunities, concerns and need for action. Advances Mobile Learn. Educ. Res. 4:110. doi: 10.25082/AMLER.2024.02.005
Chan, C. K. Y., and Tsi, L. H. Y. (2024). Will generative AI replace teachers in higher education? A study of teacher and student perceptions. Stud. Educ. Eval. 83:101395. doi: 10.1016/j.stueduc.2024.101395
Cochran-Smith, M., Grudnoff, L., Orland-Barak, L., and Smith, K. (2020). Educating Teacher Educators: International Perspectives. New Educ. 16, 5–24. doi: 10.1080/1547688X.2019.1670309
Digitaliserings- og forvaltningsdepartementet (2024). Fremtidens digitale Norge: Nasjonal digitaliseringsstrategi 2024–2030 [The digital Norway of the future: national digitalization strategy 2024–2030]. Oslo: Digitaliserings- og forvaltningsdepartementet.
Dreyfus, H. L., and Rabinow, P. (1983). Michel Foucault: Beyond Structuralism and Hermeneutics. Second Edn. Chicago: University of Chicago Press.
Ellis, C., Adams, T. E., and Bochner, A. P. (2011). Autoethnography: An Overview. Historical Social Research / Historische Sozialforschung. 36, 273–290. Available online at: http://www.jstor.org/stable/23032294
Engen, B. K. (2019). Understanding social and cultural aspects of teachers’ digital competencies [Comprendiendo los aspectos culturales y sociales de las competencias digitales docentes]. Comunicar 61. doi: 10.3916/C61-2019-01
Engen, B. K., and McGarr, O. (2025). The postdigital divide. Postdigit. Sci. Educ., 9–18. doi: 10.1007/s42438-025-00577-6
EU Artificial Intelligence Act (2024). Regulation (EU) 2024/1689. Available online at: https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng (Accessed August 2, 2025).
Facer, K., and Selwyn, N. (2021). Digital technology and the futures of education – towards ‘non-stupid’ optimism. Futures of Education initiative. Available online at: https://unesdoc.unesco.org/notice?id=p::usmarcdef_0000377071 (Accessed August 2, 2025).
Flyvbjerg, B. (2006). Five misunderstandings about case-study research. Qual. Inq. 12, 219–245. doi: 10.1177/1077800405284363
Hedström, P., and Swedberg, R. (1996). Social mechanisms. Acta Sociol. 39, 281–308. doi: 10.1177/000169939603900302
Helle-Valle, J. (2009). “‘Si aldri nei til å gå ut med venner fordi du spiller på WoW.’ Om nettspill, disiplinering og kommunikative kontekster [‘Never say no to going out with friends because you play WoW.’ On online games, disciplining and communicative contexts]” in Forbrukerens ansvar [The consumer’s responsibility]. eds. K. Asdal and E. Jacobsen (Oslo: Cappelen), 171–198.
Helle-Valle, J. H. (2019). Advocating causal analyses of media and social change by way of social mechanisms. J. Afr. Media Stud. 11, 143–161. doi: 10.1386/jams.11.2.143_1
Helle-Valle, J., and Storm-Mathisen, A. (2020). “Introduction: A Social Science Perspective on media practices in Africa – Social mechanisms, dynamics and processes” in Media Practices and Changing African Socialities – non-media-centric perspectives. eds. J. Helle-Valle and A. Storm-Mathisen (Oxford: Berghahn), 1–32.
Helle-Valle, J., and Storm-Mathisen, A. (2023). “Domestication theory: reflections from the Kalahari” in The Routledge Handbook of Media and Technology Domestication. ed. M. Hartmann (New York: Routledge), 162–177.
Kommunal- og moderniseringsdepartementet (2020). Nasjonal strategi for kunstig intelligens [National strategy for artificial intelligence]. Regjeringen.no. Available online at: https://www.regjeringen.no/contentassets/1febbbb2c4fd4b7d92c67ddd353b6ae8/no/pdfs/ki-strategi.pdf (Accessed August 2, 2025).
Korseberg, L., and Elken, M. (2025). Waiting for the revolution: how higher education institutions initially responded to ChatGPT. High. Educ. 89, 953–968. doi: 10.1007/s10734-024-01256-4
Kosmyna, N., Hauptmann, L., Yuan, Y. T., Situ, J., Liao, X. H., Beresnitzky, A. V., et al. (2025). Your brain on ChatGPT: accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv:2506.08872, 1–206. doi: 10.48550/arXiv.2506.08872
Kunnskapsdepartementet (2023a). Strategi for digital omstilling i universitets- og høyskolesektoren [Strategy for digital transformation in the higher education sector]. Regjeringen.no. Available online at: https://www.regjeringen.no/en/dokumenter/strategy-for-digital-transformation-in-the-higher-education-sector/id2870981/ (Accessed August 2, 2025).
Kunnskapsdepartementet (2023b). Strategi for digital kompetanse og infrastruktur i barnehage og skole 2023–2030 [Strategy for digital competence and infrastructure in kindergartens and schools 2023–2030]. Regjeringen.no. Available online at: https://www.regjeringen.no/no/dokumenter/strategi-for-digital-kompetanse-og-infrastruktur-i-barnehage-og-skole/id2972254/ (Accessed August 2, 2025).
Macgilchrist, F. (2021). What is ‘critical’ in critical studies of edtech? Three responses. Learn. Media Technol. 46, 243–249. doi: 10.1080/17439884.2021.1958843
MacPhail, A., and O’Sullivan, M. (2019). Challenges for Irish teacher educators in being active users and producers of research. Eur. J. Teach. Educ. 42, 492–506. doi: 10.1080/02619768.2019.1641486
Meylani, R. (2024). Artificial intelligence in the education of teachers: a qualitative synthesis of the cutting-edge research literature. J. Comput. Educ. Res. 2024, 600–637.
Moeller, K. (2020). Accounting for the corporate: an analytic framework for understanding corporations in education. Educ. Res. 49, 232–240. doi: 10.3102/0013189X20909831
Mollick, E. R., and Mollick, L. (2023). Assigning AI: seven approaches for students, with prompts. The Wharton School Research Paper.
OsloMet (2023). Artificial intelligence (AI) at OsloMet. Available online at: https://ansatt.oslomet.no/en/ki-chat?retur=https%3A%2F%2Fansatt.oslomet.no%2Fkunstig-intelligens (Accessed August 2, 2025).
Pink, S., Horst, H., Postill, J., and Hjorth, L. (2016). Digital Ethnography: Principles and Practice. London: Sage.
Regjeringen (2025). Her er regjeringens utvalg om kunstig intelligens i høyere utdanning [Here is the government’s committee on artificial intelligence in higher education]. Regjeringen.no. Available online at: https://www.regjeringen.no/no/aktuelt/her-er-regjeringens-utvalg-om-kunstig-intelligens-i-hoyere-utdanning/id3093095/ (Accessed August 2, 2025).
Selwyn, N. (2024). On the limits of artificial intelligence in education. Nordisk Tidsskrift Pedagogikk Kritikk 10, 3–14. doi: 10.23865/ntpk.v10.6062
Silverstone, R. (2006). “Domesticating domestication. Reflections on the life of a concept” in Domestication of media and technology. eds. T. Berker, M. Hartmann, Y. Punie, and K. Ward (Berkshire, UK: Open University Press).
Silverstone, R., Hirsch, E., and Morley, D. (1992). “Information and communication technologies and the moral economy of the household” in Consuming Technologies. Media and Information in Domestic Spaces. eds. R. Silverstone and E. Hirsch (London: Routledge), 15–31.
Slettemeås, D., and Storm-Mathisen, A. (2021). “Digitalisert forbruk [Digitalized consumption]” in Forbrukersosiologi: Bærekraft, digitalisering, identitet og makt [Consumer sociology: sustainability, digitalization, identity and power]. eds. E. Jacobsen, T. Ø. Jensen, M. W. Knutsen, and G. Schelderup (Oslo: Fagbokforlaget), 445–472.
Sperling, K., Stenberg, C.-J., McGrath, C., Åkerfeldt, A., Heintz, F., and Stenliden, L. (2024). In search of artificial intelligence (AI) literacy in teacher education: a scoping review. Comput. Educ. Open 6:100169.
Stenliden, L., and Sperling, K. (2024). Breaking the magic of automation and augmentation in Swedish classrooms. Nord. tidsskr. pedag. krit. 10, 15–32. doi: 10.23865/ntpk.v10.6174
Storm-Mathisen, A. (2014). Rfid in toll/ticketing – a user centric approach. Info 16, 60–73. doi: 10.1108/info-07-2014-0029
Storm-Mathisen, A. (2016). Grasping children’s media practices – theoretical and methodological challenges. J. Child. Media 10:81. doi: 10.1080/17482798.2015.1121888
Storm-Mathisen, A. (2019). New media use among young Batswana – on concerns, consequences and the educational factor. J. Afr. Media Stud. 11, 163–182. doi: 10.1386/jams.11.2.163
Storm-Mathisen, A., and Helle-Valle, J. (2008). “Media, identity and methodology: reflections on practice and discourse” in Mediated crossroads: identity, youth culture and ethnicity - Theoretical and methodological challenges. eds. I. Rydin and U. Sjöberg (Nordicom), 53–75.
Tan, X., Cheng, G., and Ling, M. H. (2025). Artificial intelligence in teaching and teacher professional development: a systematic review. Comput. Educ. Artif. Intell. 8:100355.
United Nations (2024). Global Digital Compact: Artificial Intelligence and Digital Governance. Available online at: https://www.un.org/en/global-issues/artificial-intelligence (Accessed August 2, 2025).
Walsh, R. (2003). The methods of reflexivity. Humanist. Psychol. 31, 51–66. doi: 10.1080/08873267.2003.9986934
Keywords: artificial intelligence, discourses, domestication, dynamics, teacher education, transformation
Citation: Storm-Mathisen A, Giæver TH, Helle-Valle J, Engen BK and Karstensen S (2026) Engagement with AI in teacher education—discourses, processes of domestication and dynamics of transformation. Front. Educ. 10:1694082. doi: 10.3389/feduc.2025.1694082
Edited by: Mohammed Saqr, University of Eastern Finland, Finland
Reviewed by: Marija Marković Blagojević, Singidunum University, Serbia; Saira Saira, Ural Institute of Commerce and Law, Russia
Copyright © 2026 Storm-Mathisen, Giæver, Helle-Valle, Engen and Karstensen. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Ardis Storm-Mathisen, ardist@oslomet.no; Jo Helle-Valle