REVIEW article

Front. Educ., 06 June 2025

Sec. Higher Education

Volume 10 - 2025 | https://doi.org/10.3389/feduc.2025.1594199

Capability-based training framework for generative AI in higher education

Pablo Burneo-Arteaga1,2*, Yakamury Lira2, Homero Murzi3, Ana Balula2, Antonio Pedro Costa2
  • 1Universidad San Francisco de Quito, USFQ, Colegio de Ciencias e Ingeniería, Departamento de Ingeniería Industrial and Instituto de Innovación en Productividad y Logística (CATENA-USFQ), Quito, Ecuador
  • 2Research Centre on Didactics and Technology in the Education of Trainers (CIDTFF), Department of Education and Psychology, University of Aveiro, Aveiro, Portugal
  • 3Marquette University, Milwaukee, United States

Integrating generative artificial intelligence (GenAI) in higher education (HE) requires educators to develop new competencies. However, while GenAI holds transformative potential for education, research on the competencies needed for its responsible and effective use remains limited. This study employs a mixed framework analysis method (FAM), combining quantitative and qualitative analysis to identify key competencies essential for HE teachers. The research began with a bibliometric analysis of 1,737 documents from Scopus and proceeded with an in-depth analysis of 14 peer-reviewed articles. Using a chain-of-thought (CoT) prompting approach, the analysis integrates human-GenAI collaboration to identify patterns in existing competency frameworks and empirical publications, aiming to classify and define competencies. The findings reveal that while AI literacy and ethical awareness are frequently mentioned, there is no unified competency framework addressing the pedagogical and technical dimensions of GenAI integration. The FAM process resulted in the identification of three key domains of competencies and a set of 16 competencies. The results highlight the need for a structured yet flexible competency model tailored to educators. Future research should focus on empirical validation and the development of professional development programs to bridge the identified gaps.

Introduction

The rapid evolution of generative artificial intelligence (GenAI) brings opportunities and challenges for education professionals. To effectively integrate GenAI, both existing and new competencies are needed (Kurtz et al., 2024; Alasadi and Baiz, 2023). GenAI tools have the potential to transform education by supporting teachers in creating classroom activities and developing efficient assessment mechanisms for students (Qadir, 2023; Yelemarthi et al., 2024; Chiu, 2024). Nevertheless, these advancements raise ethical concerns, such as biases and the need for human oversight to ensure the accuracy and originality of GenAI-generated content (Lim et al., 2023; Saputra et al., 2023).

To address these ethical and operational problems, Knoth et al. (2024b) and Mikeladze et al. (2024) argue that it is important to provide educators with training spaces designed to integrate GenAI into their practices. However, the development of technical knowledge alone is insufficient; educators must develop complementary competencies to ensure an effective integration process (McGrath et al., 2024; Moreira et al., 2023). This requires a structured yet adaptable plan to prepare educators for the rapid evolution of GenAI, enabling them to effectively integrate these advancements into their teaching practices (Eager and Brunton, 2023; Emenike and Emenike, 2023). In this context, “competencies” are defined as “a combination of interrelated attitudes, values, knowledge (including tacit knowledge), and fundamental skills, such as analytical, decision-making, problem solving, critical thinking, and communication skills that together make effective action possible” (Rychen, 2003, 114). This study examines the competencies identified in the literature as critical for HE teachers to integrate GenAI into their teaching and explores the paths proposed by the literature for developing these competencies within HE. Mapping competencies can provide a foundational baseline to guide the development of competency-based training programs—not only by identifying key competencies but also by outlining guidelines to integrate them effectively (Bearman and Ajjawi, 2023; Law, 2024).

Specific competencies for exploring generative AI

Several authors agree that understanding specific key competencies in emerging fields, such as GenAI, is a complex task and can derive from prior research—e.g., technical and digital literacy frameworks (Celik, 2023; Knoth et al., 2024a). Tiana (2004, 40) defines “key competencies” as “the necessary prerequisites available to an individual or a group of individuals for successfully meeting complex demands.” Accordingly, Smolikevych (2019) refers to “teacher professional competencies” as a complex set of different components such as the field specific, the pedagogical and the multicultural competencies.

As AI and GenAI capabilities expand, some competencies evolve and others emerge (Wang et al., 2023). This shift aligns with the ongoing discussion on the need for education to continuously adapt, as highlighted by Scott (2015), who emphasizes the need for innovation and modernization of teaching, as well as strategies that enable learning anytime and anywhere.

Despite advances in the identification of the competencies needed to successfully integrate GenAI in education, the development of GenAI competency frameworks is still in its early stages. According to Annapureddy and Fornaroli (2024), “frameworks on AI literacy tend to be quite generic, failing to address the specificities of generative AI tools.” The study presented by Mikeladze et al. (2024) provides a critical exploration of diverse frameworks, showing efforts to adapt competency frameworks for educators, such as the Information Communication Technology Competence Framework for Teachers (ICT CFT) and DigCompEdu.

There is some consensus on the competencies that such a framework should integrate, such as: AI literacy (Fenske and Otts, 2024; Blanco et al., 2024), critical evaluation of generated content (Lin, 2023), adaptation of pedagogical approaches (Monzon and Hays, 2024; Michalon and Camacho-Zuñiga, 2023), and the ethical dimension of the use of GenAI tools (AlAli et al., 2024; Al-Samarraie et al., 2024; Shimizu et al., 2023).

In addition, a common characteristic is the emphasis on human-GenAI collaboration as essential for maximizing the benefits of these technologies (Maphoto et al., 2024; Molenaar, 2022; Holstein et al., 2020).

The need for literacy development and training

GenAI is no longer just a working tool; it is becoming integrated into daily activities, highlighting the need for GenAI literacy across various age groups and educational levels. According to de la Torre and Baldeon-Calisto (2024), the incorporation of GenAI in education is hindered by the lack of adequate teacher training. Researchers emphasize this as a key competency development gap, requiring systematic definition and integration into educational training programs (Bayaga, 2024; Kaplan-Rakowski et al., 2023). Laupichler et al. (2022) and Faruqe et al. (2022) add that broader AI literacy is essential for interacting with emerging technologies, identifying opportunities for innovation, and understanding the ethical and operational limits of AI systems. According to Ng et al. (2023), some teachers might not be ready to be immersed in AI-driven education, as they might lack the expertise, technical knowledge, and ethical understanding this developing era demands.

Users need to understand how these “black boxes” function prior to interacting with them (Bearman and Ajjawi, 2023). Gaining this insight will support effective prompting, critical evaluation of outputs, and informed decision-making (Eager and Brunton, 2023). Mastery of these tools can differentiate educators in creating innovative solutions and generating new knowledge (Bayaga, 2024). The European Competence Framework for Researchers emphasizes continuous training and the development of cognitive and technical competencies to effectively apply AI tools across research stages. In HE, training is essential for teachers, who often lack the time or experience needed to engage with tools like GenAI (Bearman and Ajjawi, 2023; Eager and Brunton, 2023; Emenike and Emenike, 2023; Xia et al., 2024).

Objectives and scope

This study aims to identify and map the key competencies required by HE teachers to effectively integrate GenAI into their teaching practices. By focusing on established competency frameworks and existing literature, this work seeks to provide a foundational understanding of these competencies and their role in GenAI-driven HE teaching environments.

Guiding question

• Which competencies does the literature highlight as critical for HE teachers integrating GenAI into their pedagogical practice?

Methodology

Given GenAI’s nascent state, an initial evaluation of the extant literature is imperative. Several methodological approaches were considered, looking for a flexible but structured procedure that could facilitate creating a roadmap for the field while including ongoing research. Initially, a broad literature exploration was conducted to identify existing research and key terms and trends within the field. Throughout this process, a wide variety of concepts was identified, from theme-specific to more general terms (e.g., prompting, competencies, technological literacy). However, this process also revealed that, in the context of GenAI, competencies were not explicitly mentioned, grouped, or discussed. Given this gap, this study employs a hybrid approach, combining qualitative and quantitative data analysis techniques, using the framework analysis method (FAM). FAM is a data analysis method used for literature reviews within the scope of qualitative content analysis that puts forward a structured approach closely related to quantitative paradigms (Gale et al., 2013). FAM is well suited to synthesizing findings from multiple studies, offering the researcher a flexible yet structured process for analyzing qualitative data (Goldsmith, 2021; Hackett and Strickland, 2019).

FAM’s five-step process was followed to classify competencies and define them in the context of GenAI. Figure 1 outlines the integration of the five stages of FAM as applied in this study. The diagram represents how quantitative and qualitative methods were combined with FAM to build a GenAI competency framework. Step 1, familiarization, employs a bibliometric analysis to identify key themes and trends in the literature. Incorporating human-GenAI collaboration in step 2 allowed for the development of a conceptual structure of coding categories for further indexing. During the indexing process, in step 3, data was coded manually using webQDA, a Computer-Assisted Qualitative Data Analysis Software (CAQDAS), following the GenAI-generated inductive/deductive coding guide from step 2. Step 4 focuses on charting: organizing indexed data into manageable forms to identify patterns and refine the competency categorization, including emerging themes from step 3. Finally, in the mapping and interpretation step (5), a structured framework is proposed. This process is critical for understanding how the framework was established and conceptually synthesized through iterative analysis.


Figure 1. The five-step framework analysis method (FAM) process.

Step 1: preliminary familiarization

As represented in the top layer of Figure 1, the familiarization phase involved exploratory bibliometric mapping of a data set gathered from Scopus, selected for its comprehensive coverage of peer-reviewed (PR) literature in the field of education (Moresi et al., 2024). The search strategy was developed by defining three key components, or axes, that stem from the research question (Figure 2). Additionally, implementation was included as a fourth axis, referring to the practical application of GenAI in academic settings, such as classroom use, curriculum integration, or assessment support.


Figure 2. Venn diagram of competencies, HE, and GenAI.

Search strategy and refinement

As illustrated in Figure 2, the search strategy was structured around three conceptual axes, each represented as a circle in the Venn diagram. These three axes, derived from the guiding question, are: (1) competencies in the context of HE, (2) competencies related to GenAI, and (3) the application of GenAI. Each pairwise intersection of the three axes was analyzed independently and supported by human-GenAI collaboration, defined as the interface between humans and GenAI systems working together to achieve a common goal (Annapureddy and Fornaroli, 2024). The results of these searches were cross-referenced to identify common terms that could represent them. This process resulted in the construction of the search query, presented in Table 1, yielding a total of 1,737 results in English or Spanish.


Table 1. Search terms employed for data finding in Scopus.

The retrieved records were analyzed using the R Bibliometrix package to explore trends, keyword co-occurrences, and thematic clusters. While this landscape mapping provided valuable insights into current research foci and thematic evolution, it also highlighted a lack of explicit grouping, clustering, and co-mention of GenAI and specific teacher competencies (see Table 2).
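As a rough illustration of what such a keyword co-occurrence analysis computes, the counting step can be sketched in Python (the study itself used the R Bibliometrix package; the records below are invented toy data, not the actual Scopus export):

```python
from collections import Counter
from itertools import combinations

def keyword_cooccurrence(records):
    """Count how often pairs of author keywords appear together in the
    same record, a simplified analogue of a Bibliometrix co-occurrence
    network. `records` is a list of keyword lists, one per document."""
    pairs = Counter()
    for keywords in records:
        # Normalize and deduplicate keywords within a single record.
        unique = sorted({k.strip().lower() for k in keywords})
        for a, b in combinations(unique, 2):
            pairs[(a, b)] += 1
    return pairs

# Toy records standing in for the 1,737 Scopus results.
records = [
    ["Generative AI", "higher education", "competencies"],
    ["generative ai", "Higher Education", "AI literacy"],
    ["AI literacy", "competencies", "higher education"],
]
pairs = keyword_cooccurrence(records)
print(pairs[("generative ai", "higher education")])  # 2
```

Frequently co-occurring pairs become the edges of the thematic map; in the toy data above, “generative ai” and “higher education” co-occur twice, while no pair links GenAI to a specific teacher competency, mirroring the gap observed in the real data set.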


Table 2. Document data set for competency analysis.

Refining the focus on competency frameworks

Recognizing the gap in direct references to competencies, the research approach shifted to identifying competency frameworks relevant to GenAI in HE. Emerging literature-mapping tools such as Litmaps and Connected Papers were used to identify studies addressing competency frameworks for HE teachers. This bibliographic search was enhanced by identifying articles that analyzed existing frameworks and disclosed synthesized analyses (Shaw et al., 2021). The study by Mikeladze et al. (2024) (“A comprehensive exploration of artificial intelligence competence frameworks for educators: a critical review”) offered crucial information about the frameworks to consider. Priority was given to peer-reviewed (PR) literature and policy frameworks that addressed the competencies required of teachers in technological or artificial intelligence settings. This iterative refinement ultimately yielded a subset of 24 documents (Table 3), containing PR articles and policy frameworks (e.g., UNESCO ICT Competency Framework for Teachers, DigCompEdu).


Table 3. Framework analysis based on research question axis.

Step 2: framework identification

Once researchers became familiar with the selected documents, the transition to step 2 took place (see Figure 1), marking the shift from exploratory mapping to defining initial coding categories. According to Srivastava and Thomson (2009), the purpose of this stage is to develop a structure (guiding framework) that allows researchers to move from a concrete description of themes towards more abstract concepts. For Somerville et al. (2023, 2), it “can (…) come from the results of a literature review or from the initial notes taken in step one.” Similarly, Srivastava and Thomson (2009) mention that researchers should allow the data to dictate the themes.

An initial analysis of all 24 documents was carried out to understand their connection to the research objective. Some frameworks provided broad theoretical perspectives, while others were too specific to apply directly to GenAI in HE (Annapureddy and Fornaroli, 2024). Thus, an evaluation matrix (Table 3) was constructed to assess the alignment of the 24 papers with the research objectives. Each document was classified according to its focus (e.g., AI, HE faculty) and type (e.g., policy guidance, peer-reviewed article). The evaluation criteria focused on identifying studies that discussed competencies for HE teachers integrating GenAI. Conversely, papers were excluded if they did not focus on HE or presented non-educational approaches.

This evaluation process led to the following classification of frameworks:

Excluded frameworks—documents whose primary focus is on topics other than education, e.g.:

• FRA_1—focuses on general AI competencies for developers and end users, not specifically on education.

• FRA_7—targets industry professionals and employers, rather than educators or students.

Core frameworks—documents that directly supported competency identification for HE and GenAI, e.g.:

• FRA_5—addresses GenAI in HE by proposing a competency classification.

• FRA_10—defines competencies for GenAI, covering technical, pedagogical, and ethical areas.

• FRA_16—outlines AI-related competencies tailored for educators, with a focus on skill development and ethical considerations.

Supplementary frameworks—policy guiding works that complement the core frameworks by addressing gaps or specific areas, e.g.:

• FRA_15—explores educational transformation, competencies, and pedagogical strategies in the 21st century.

• FRA_18—focuses on GenAI-driven pedagogical strategies and the implications of GenAI-assisted learning environments.

After finalizing the framework selection, competencies were extracted using chain-of-thought (CoT) prompting in a custom GPT built inside ChatGPT. The construction of the CoT focused on developing a systematic analysis to identify commonalities across the selected documents. This process made it possible to classify and define competencies in the context of GenAI by identifying shared categories. Supplementary frameworks were used to complement the classification and refine the results into thematic categories. Table 4 presents the inductive classification of competencies that emerged from the CoT analysis. This thematic structure served as a coding guide for the indexing process in step 3.


Table 4. Human-GenAI collaboration competencies: inductive classification from CoT.
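The article does not reproduce the exact prompt used; a hypothetical sketch of how such a CoT extraction prompt could be assembled, with illustrative (not original) step wording and excerpt texts, might look like:

```python
def build_cot_prompt(framework_excerpts):
    """Assemble a chain-of-thought prompt that asks the model to reason
    step by step before classifying competencies. The step wording is
    illustrative only, not the exact prompt used in the study."""
    steps = [
        "1. List every competency each excerpt mentions, with its source ID.",
        "2. Group competencies that describe the same underlying ability.",
        "3. Name each group and draft a one-sentence definition in the GenAI context.",
        "4. Report groups shared by two or more frameworks as candidate categories.",
    ]
    header = ("You are assisting a framework analysis of GenAI competencies "
              "for higher-education teachers. Reason through each step "
              "explicitly before giving your final classification.\n")
    excerpts = "\n".join(f"[{sid}] {text}" for sid, text in framework_excerpts)
    return f"{header}\n" + "\n".join(steps) + f"\n\nExcerpts:\n{excerpts}"

# Invented excerpts standing in for the selected framework documents.
prompt = build_cot_prompt([
    ("FRA_5", "Teachers must critically evaluate GenAI outputs."),
    ("FRA_10", "Prompting skills are needed to steer generative models."),
])
print(prompt)
```

Making the reasoning steps explicit in the prompt is what distinguishes CoT prompting from a direct "classify these" instruction: the model is asked to surface its intermediate groupings, which can then be audited against the source excerpts.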

Step 3: indexing

The indexing process operationalized the competency classification structure created during the framework identification in step 2 (Srivastava and Thomson, 2009; Goldsmith, 2021). This structure, inductively created, then served as a deductive guide, helping researchers follow a pre-established set of competencies. A decision was made to limit the coding to academic research papers, using policy guiding frameworks solely as reference points, to ensure the findings were based on systematically developed narratives rather than being policy oriented. This reduced the document set to 14 documents (see Table 5).


Table 5. Distribution of AI-related competencies across framework sources.

In alignment with the mid-section of Figure 1, coding was carried out following a hybrid approach, combining traditional and AI-assisted strategies, i.e., coding with CAQDAS software and Large Language Models (LLMs) (Costa and Bryda, 2025). The use of CAQDAS software helps enhance transparency and rigor in conducting evidence synthesis (Houghton et al., 2017). Thus, after completing the manual coding of all documents based on the defined coding guide, a prompt was created to code the same texts with the LLM Grok 3.
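As a simplified illustration of what this indexing step does (the actual coding was performed in webQDA and via an LLM prompt; the codebook entries and cue terms below are hypothetical, and a keyword lookup stands in for the model call):

```python
# Minimal sketch of hybrid indexing: each text segment is matched
# against the step-2 coding guide. In the study this matching was done
# manually in webQDA and by prompting an LLM (Grok 3); here a simple
# cue-term lookup stands in for both. Codes and cues are invented.
CODING_GUIDE = {
    "AI literacy": ["ai literacy", "understand ai"],
    "Prompt engineering": ["prompt", "instruction design"],
    "GenAI ethics": ["ethic", "bias", "privacy"],
}

def code_segment(segment):
    """Return the codes whose cue terms appear in the segment."""
    text = segment.lower()
    return [code for code, cues in CODING_GUIDE.items()
            if any(cue in text for cue in cues)]

print(code_segment("Effective prompts reduce bias in GenAI outputs."))
# ['Prompt engineering', 'GenAI ethics']
```

However implemented, the output of this step is the same: every text segment is attached to zero or more codes from the guide, producing the matrix of references per document summarized later in Table 6.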

At this stage, data is not yet interpreted; it is solely coded (organized) by themes or categories (Somerville et al., 2023). This process prepares the data for the next stage of charting by organizing it according to the thematic framework, making it easier to compare and analyze (Parkinson et al., 2016). During this coding process, themes can be refined, merged, or split based on re-reading the documents (Somerville et al., 2023).

Considerations on GenAI-assisted coding

While the integration of LLMs such as ChatGPT and Grok 3 made exploratory coding more efficient, the process raised some methodological issues. To mitigate potential biases, the GenAI-assisted coding results were reviewed by a human coder. By analyzing each coded unit and comparing it with the pre-defined coding structure developed in step 2, the researcher decided whether the coding was appropriate. Coded outputs were retained only when the researcher agreed with the GenAI suggestion.

Even though traditional inter-coder reliability metrics were not used because of the hybrid nature of this process, iterative validation and cross-checking were employed to address consistency. CoT prompting was used to increase transparency and reproducibility in the extraction and categorization of competencies by structuring a prompt that guided the LLM toward structured reasoning rather than opaque outputs.
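One simple way such cross-checking between human and GenAI coding can be quantified, sketched here with invented data, is plain percent agreement over coded units. This is a rough consistency check, not a formal inter-coder reliability statistic such as Cohen's kappa:

```python
def percent_agreement(human_codes, ai_codes):
    """Share of coded units where the human coder and the GenAI
    assistant assigned the same set of codes. Order within a unit is
    ignored; chance agreement is not corrected for."""
    assert len(human_codes) == len(ai_codes)
    matches = sum(set(h) == set(a) for h, a in zip(human_codes, ai_codes))
    return matches / len(human_codes)

# Invented codings for three text units.
human = [["AI literacy"], ["Prompt engineering", "GenAI ethics"], ["GenAI literacy"]]
ai    = [["AI literacy"], ["GenAI ethics", "Prompt engineering"], ["AI literacy"]]
rate = percent_agreement(human, ai)
print(rate)  # 2 of 3 units agree
```

Units where the two codings diverge (the third unit above) are exactly the ones the researcher reviewed against the step-2 coding structure before deciding whether to retain the GenAI suggestion.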

Step 4: charting

Charting is a steppingstone activity to transform raw coded data into a coherent, unified narrative that can guide framework construction (Parkinson et al., 2016). At this point, the data had not yet been analyzed and was arranged without structure, using a combination of deductive and inductive codes; therefore, a systematic sorting process was necessary. This phase involved rearranging and summarizing the coded data (step 3) “under emerging superordinate headings as well as beginning to make subjective sense of data” (Kiernan and Hill, 2018). Furber (2010) notes that the intention of this summarizing process is to organize summaries or text extracts into an appropriate thematic structure.

The charting process serves not only as a method for organizing the coded data but also as a way to interpret, refine, and construct concepts that bridge to the development and structure of step 5—mapping and interpretation (Gale et al., 2013). Analyzing coded text requires an iterative review of coded passages and full texts to connect codes to thematic analysis (Kiernan and Hill, 2018).

During the indexing process, coding followed the deductive categories established in step 2. However, additional categories emerged during the reading and analysis of the documents. Table 6 presents a summary of the competencies found during the coding process. The matrix presents the number of references encountered in each document for each coded competency. The last two columns summarize the number of papers mentioning each competency and the number of references that were coded.


Table 6. Validated competency definitions for GenAI integration in HE.

Technology related competencies

During the coding process, AI literacy emerged as a central theme (12 papers, 48 references) and is considered a key enabler, either as a major factor in developing complementary competencies or as a starting point for them (Jemetz and Motschnig, 2024). Initially, AI literacy was considered a broad competency, implying an understanding of AI’s functions, limitations, and potential applications. However, a deeper analysis of the coding extracts shows that some frameworks explicitly differentiate AI from GenAI [FRA_6, FRA_10], while others use AI literacy without distinguishing between the two [FRA_4, FRA_5, FRA_11, FRA_15, FRA_16]. This distinction was not explicitly considered in the initial framework (step 2), but as definitions diverged, it became necessary to differentiate these two competencies.

AI literacy is consistently mentioned as the foundational knowledge and skill set that requires a basic understanding of AI and embraces knowledge of “what AI systems do and do not do, as well as understanding the benefits, limitations, and challenges of AI systems” [FRA_3, Ref 1] and the understanding of “AI technologies and demystify concepts such as machine learning, neural networks and GANs” [FRA_6, Ref 1].

GenAI literacy, while sharing foundational aspects with AI literacy, encompasses unique model-specific understanding and pedagogical implications. These distinctions—particularly prompting and content evaluation—were identified as independent competencies. As with AI literacy, GenAI literacy requires the user to have foundational knowledge of AI systems, along with a more specialized focus on GenAI technologies. It involves:

• Understanding of GenAI technologies—“Common GAI types are foundation models, generative adversarial networks (GANs)” [FRA_6] and the “probabilistic mechanisms underpinning the synthesis tools in generative models” [FRA_10, Ref 5].

• Application of GenAI tools—“skills related to the use of generative AI, with content assessment, prompt engineering” [FRA_10, Ref 8].

• Evaluation of GenAI outputs—“It is important to evaluate the effectiveness of educative AI in achieving the desired outcomes” [FRA_4, Ref 2].

GenAI data literacy is a competency related to GenAI literacy that refers to the ability to analyze and handle data used or created by AI or GenAI educational systems. It ensures educators are equipped to handle data responsibly, to interpret analytics ethically, and to maintain confidence in AI-driven insights, e.g.:

• “if the data used to train the model is of low quality, the responses of the model may not be accurate or reliable” [FRA_4, Ref 1].

• “competencies that enable the ability to comprehend, interpret, and extract insights from data, enhancing decision-making abilities across various domains” [FRA_6, Ref 1].

Prompt engineering appears under different descriptions in several passages (7 papers, 18 references), but tends to be described as a “skill to effectively prompt generative AI models” [FRA_10, Ref 4], enabling users to “generate and optimize a complete instructional design” [FRA_16, Ref 4]. This user input requires “basic subject related knowledge when formulating prompts” [FRA_6, Ref 1]. Effective interaction with GenAI models is constrained by the quality of the instruction, and it requires “the utilization of descriptive language, understanding the trade-off between creativity and specificity, the possibility of segmenting longer prompts into smaller units” [FRA_10, Ref 8].
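As a hypothetical illustration of the segmentation strategy quoted above (the lesson topic and all prompt wording are invented, not drawn from the reviewed frameworks):

```python
# One long, monolithic request to a GenAI model...
long_request = ("Create a 90-minute lesson on linear regression for "
                "first-year engineering students, including two worked "
                "examples, a group activity, and a formative quiz.")

# ...broken into smaller, descriptive prompt units, each sent as a
# separate turn so the teacher can evaluate and revise intermediate
# outputs before continuing.
segments = [
    "Context: first-year engineering course, 90-minute session on linear regression.",
    "Task 1: outline the session in timed blocks.",
    "Task 2: draft two worked examples of increasing difficulty.",
    "Task 3: design a 15-minute group activity.",
    "Task 4: write a five-question formative quiz with an answer key.",
]

for step in segments:
    print(step)
```

Segmenting the request this way keeps each instruction descriptive and specific, and lets the teacher apply critical content evaluation at every intermediate output rather than only at the end.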

Critical content evaluation refers to the set of skills a teacher develops to “question the quality of GAI processes and outcomes” [FRA_6, Ref 1], recognizing that answers may be biased or incorrect with respect to subject-specific knowledge. It requires teachers to think critically about outcomes and compare them with other resources to guarantee subject-specific veracity. Critical evaluation also implies knowing how to detect AI-generated content, learning to use AI detection software, and developing pedagogical activities that help students develop a critical sense for distinguishing AI-generated from human-generated content, e.g.:

• “(…) knowing how to detect AI-generated content” [FRA_10, Ref 2].

• “(…) concerns both being able to tell apart human-made from AI-made content, and knowing how to use AI detection software” [FRA_10, Ref 5].

Critical evaluation supports pedagogical GenAI integration particularly when “the automatic grading and feedback by GenAI” must be supplemented with “human insight, especially for evaluating higher-order thinking skills” [FRA_5, Ref 1].

Human-GenAI collaboration refers to a teacher’s ability to interact and co-create with GenAI systems to enhance teaching and learning. This competency involves task delegation so that GenAI can “support human teachers and learners in their quest to teach and learn” [FRA_13, Ref 1]. Educators must understand when to rely on AI for automation and efficiency while ensuring critical oversight, ethical considerations, and pedagogical alignment in GenAI-supported teaching environments.

Pedagogical related competencies

The integration of GenAI in pedagogical activities introduces new and evolving insights into teachers’ competencies. Rather than being a mere technological complement to teaching, GenAI has the potential to transform pedagogical approaches (Sattelmaier and Pawlowski, 2023), enabling more adaptive, interactive, and context-specific teaching strategies (Ning et al., 2024). Effective integration of GenAI requires teachers to coordinate multiple resources (Lu et al., 2024), both technical and pedagogical, to create high-quality teaching. Likewise, Mills et al. (2023) state that one of the key aspects of GenAI-assisted pedagogy is its ability to support two-way interactions between technology and teaching, leading to new forms of knowledge sharing and fostering collaborative and interdisciplinary teaching practices.

This integration, and thus pedagogical transformation, is supported by three competencies:

• GenAI-enhanced instruction.

• GenAI-assisted assessment.

• Process-oriented thinking.

GenAI-enhanced instruction refers to the ability to integrate GenAI into teaching. It encompasses the application of GenAI tools “during the preparation stage, and the implementation and evaluation” [FRA_16, Ref 1]. The central theme revolves around preparation and real-time teaching integration, where GenAI is helpful to “develop a first draft, (…), and then you can revise from the basis” [FRA_16, Ref 6].

It requires teachers to understand that GenAI is a tool to support the teaching process, requiring critical evaluation of outputs based on context, subject knowledge and ethical considerations.

Enhancing teaching with GenAI will require teachers to develop the ability to “determine the most appropriate action to optimize learning” [FRA_12, Ref 5]. Thus, GenAI-enhanced teaching should also focus on developing activities that promote collaborative learning, grounded in human-human interaction supported by AI, requiring continuous adaptation of “teaching strategies to address the changing learning situations and learning goals” [FRA_9, Ref 1] and empowering students by designing and restructuring lessons that support learning objectives and genuinely foster learning. In teachers’ interaction with GenAI, human judgment must be kept as a central element to enhance adaptive learning [FRA_9, Ref 3; FRA_2, Ref 6].

GenAI-assisted assessment refers to the ability of teachers to integrate GenAI tools into assessment processes. The focus is on enhancing and creating new grading, feedback, and evaluation processes. GenAI can assist teachers with traditional assessment activities [FRA_11, Ref 1] but also requires them to understand how to redesign assessment methods to incorporate GenAI into the process [FRA_5, Ref 1]. GenAI can assist in providing personalized feedback and automated grading; nonetheless, teachers remain responsible for overseeing assessment outputs, focusing on developing students’ higher-order thinking skills [FRA_5, Ref 2]. This assisted assessment requires defining methods “and tools that are valid, reliable, fair, and transparent” [FRA_6, Ref 2].

Process-oriented thinking refers to the teacher’s ability to structure, sequence, and refine AI-supported instructional design. Process-oriented thinking involves an integrated approach to planning, designing and evaluating each phase, intentionally aligning each task with pedagogical objectives and pre-defined learning outcomes (Lu et al., 2024).

This competency involves being able to tackle:

GenAI-supported lessons—“Teachers need to manage and design their interventions and develop pedagogical approaches wisely” [FRA_9, Ref 1].

Structured GenAI interactions for teaching—“Using ChatGPT to generate and optimize a complete instructional design, and autonomously select appropriate content sections for simulated classroom exercises” [FRA_16, Ref 3].

Ethical & context awareness

GenAI ethics as a teaching competency refers to the teacher’s ability to understand, critically evaluate, and apply AI ethics in educational contexts. Based on the coded data (10 documents, 27 references), this competency is structured around three axes: awareness, reflection, and responsible application.

Ethics awareness concerns understanding how models are trained and how data quality may affect the reliability of outputs [FRA_4, Ref 2]. It includes:

• “privacy and security concerns, especially concerning personal information” [FRA_10, Ref 2].

Ethics reflection is about engaging in a thoughtful process regarding:

• “(…) the impact, opportunities, and challenges of AI. In addition, they need to know how to address potential problems in the use of AI to ensure responsible use” [FRA_13, Ref 1].

Responsible application considers:

• shifts in approach—“What if we took a ‘disclosure of learning process’ approach rather than prevent and punish approach” [FRA_11, Ref 1].

GenAI policy awareness & compliance is the ability of teachers to understand, interpret and apply institutional policies related to the integration of GenAI into their work. Navigating institutional policies requires a basic understanding of:

Data and privacy protection—“the protection of personal data in accordance with data protection regulations and the respect of copyright” [FRA_6, Ref 1].

Legal implications—“ensuring individuals operate within the bounds of intellectual property and other legal frameworks” [FRA_10, Ref 1].

Institutional policies—“universities need to create robust policies that can adapt to the fast-evolving nature of GenAI” [FRA_5, Ref 1] and “universities have developed guidelines for the use of LLMs in assignments” [FRA_10, Ref 2].

Professional development refers to the teacher’s ability to continually seek new knowledge about GenAI and about pedagogy in a GenAI context. This pursuit of “continuous professional development and lifelong learning” [FRA_5, Ref 2] aims to keep pace with AI’s rapid evolution. Effective GenAI professional development should include:

Technical training—“teachers would need to equip themselves with AI-related technological skills” [FRA_9, Ref 6].

Pedagogical integration—“require teachers to connect the digital tools to content knowledge and pedagogy” [FRA_9, Ref 2].

Lifelong learning & adaptability—“as professional development, helping humans improve over time” [FRA_2, Ref 1] and self-empowerment through reflection and professional growth [FRA_5, Ref 1].

Foundational competencies

Critical thinking in GenAI-enhanced education refers to the teachers’ ability to: (i) “critically analyze both generative AI models and their outputs from a human perspective” [FRA_10, Ref 3], (ii) critically assess student progress, and (iii) “identify deficiencies in their teaching and make improvements” [FRA_16, Ref 2]. This competency involves not merely accepting GenAI outputs at face value but rather engaging in questioning, interpreting, and verifying data-driven recommendations, especially because “critical evaluation of the use cases applying AI to support learning and teaching is needed to enhance education” [FRA_12, Ref 1]. Ultimately, teachers must “guide students in critically engaging with GenAI tools, fostering the design of a critical framework” [FRA_5_a, Ref 2].

Applying critical thinking in GenAI-enhanced education requires an understanding of:

• Evaluating GenAI accuracy & limitations—“ChatGPT has a low accuracy rate in its responses. For example, if you ask whether differentiability implies continuity, it might answer that it does not necessarily imply continuity. It makes obvious mistakes like this as well” [FRA_16, Ref 4] and “ChatGPT’s responses are not very satisfactory. I have to ask detailed questions, and it does not seem as intelligent. In addition, the answers it provides are not necessarily correct” [FRA_16, Ref 5].

• Ethical considerations—“The ethical implications of using educative AI must be carefully considered, including potential biases” [FRA_4, Ref 1].

Problem solving refers to the teacher’s ability to develop, refine, and apply solutions to challenges in educational contexts. This competency emphasizes higher-order thinking, exploration, and designing effective learning environments that foster problem-solving skills in students.

Developing problem-solving abilities in teaching requires an understanding of:

• Exploration & experimentation—“Providing novel and effective solutions to complex and ill-defined problems that require exploration, experimentation, and discovery” [FRA_6, Ref 1].

• Designing learning environments for problem-solving—“Enhance their pedagogical and technological competencies to design appropriate learning environments for students to solve authentic problems” [FRA_9, Ref 1].

Ethical reasoning concerns the teachers’ ability to recognize, evaluate, and address privacy and ethical concerns related to AI in education. This competency ensures that teachers “explore concerns about academic integrity and excitement about pedagogical possibilities” [FRA_11, Ref 4]. It also requires teachers to “ensure students’ psychological and social well-being” [FRA_9, Ref 4], and “promote and ensure accessibility for all learners” [FRA_9, Ref 2]. It involves raising awareness of risks, guiding students in responsible AI use, and fostering ethical discussions:

Teaching AI ethics to students—“Teaching about the systems so students would understand the risks and ethical concerns” [FRA_11, Ref 2] and “Can respond to complexity and uncertainty constructively by building on values and ethics” [FRA_11, Ref 7].

Responsible GenAI use—“Several potential risks and conflicts such as privacy concerns, changes in power structures, and excessive control have been identified” [FRA_9, Ref 1].

Communication & collaboration refers to the teacher’s ability to participate in collaborative discussions, share knowledge openly, and help peers within and beyond institutional boundaries.

Effective GenAI-driven communication and collaboration require an understanding of:

• Student-centered collaboration—“Collaborating with students allows emergent, student-centered, and student-guided approaches” [FRA_11, Ref 1].

• Cross-institutional & global knowledge exchange—“Collaborative approaches across institutions, systems, age categories (high school versus college), and nations” [FRA_11, Ref 2].

• Open knowledge-sharing—“We see much sharing of documents: articles on AI in higher ed., sample policy statements, lesson plans, news coverage, and records of ChatGPT sessions” [FRA_11, Ref 5].

• Communication strategies—“Teachers should consider different AI-driven tools and systems to help them develop and improve organizational communication strategies” [FRA_9, Ref 1].

• Institutional & professional networks—“Teachers need a collaborative and supportive network within their institutions to navigate uncertainties and challenges” [FRA_5, Ref].

Domain-specific knowledge focuses on the teachers’ ability to understand, apply, and adapt GenAI-related competencies to subject-specific content. This competency ensures that teachers effectively integrate subject-matter expertise, pedagogical methods, and GenAI-related knowledge to enhance teaching. It encourages teachers to “recognize and reflect on the implications of the increasing use of AI in one’s discipline” [FRA_13, Ref 1].

This competency requires an understanding of:

Subject-matter knowledge—“Content knowledge is the ‘knowledge about actual subject matter that is to be learned or taught’ (…). Teachers must know about the content they are going to teach” [FRA_8, Ref 1].

Step 5: mapping and interpretation

The final stage (Figure 1) presents the findings in a dual representation framework, operationalized in Figures 3, 4. This step builds on the findings of the previous steps to structure a representation of competencies. The proposed framework is a two-part structure. Figure 3 presents a conceptual model of how competencies are organized into thematic domains, with a strong emphasis on the overlapping nature of the identified competencies. Competencies are organized into three intersecting domains: pedagogical, GenAI, and responsible AI. Complementarily, Figure 4 illustrates a progression model that outlines how teachers advance in competency development, moving from foundational awareness to advanced application in teaching and learning, including assessment.

Figure 3. Conceptual model for GenAI integration in HE.

Figure 4. Progression model for educator capability development (competencies and literacies) for GenAI integration in HE.

The intersection of these models provides a structured yet flexible pathway for teacher training. The conceptual model ensures that all relevant identified competencies are included, whereas the progression model offers guidance on how these competencies evolve in practice. This dual approach enhances applicability, supporting both curriculum development and targeted professional training.
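
As a minimal sketch of the three-domain structure described above, the intersecting domains can be represented as sets whose overlaps are computed directly; the competency-to-domain assignments below are illustrative assumptions drawn from the section headings, not the study's actual mapping.

```python
from collections import Counter

# Illustrative only: the three intersecting domains modeled as sets.
# Membership assignments are hypothetical examples for demonstration.
domains = {
    "pedagogical": {"GenAI-assisted assessment", "Process-oriented thinking",
                    "Domain-specific knowledge"},
    "GenAI": {"GenAI-assisted assessment", "Process-oriented thinking",
              "Critical thinking"},
    "responsible AI": {"Ethical reasoning", "GenAI policy awareness",
                       "Critical thinking"},
}

def overlapping(domains):
    """Return the competencies that fall in more than one domain."""
    counts = Counter(c for members in domains.values() for c in members)
    return {c for c, n in counts.items() if n > 1}

print(sorted(overlapping(domains)))
```

Representing domains as sets keeps the Venn-style overlap explicit and makes it straightforward to check which competencies would require cross-domain training.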

To enhance the clarity of the proposed framework, Table 6 presents the refined definitions of the identified competencies. These definitions were derived from the iterative analysis carried out during the charting and mapping phases, using a human-GenAI collaboration, and were validated by the researchers. They aim to enhance the understanding and future application of each competency within the context of GenAI in HE.
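
The chain-of-thought charting step could, for instance, be driven by a structured prompt assembled per coded excerpt. The template, step wording, and domain labels below are hypothetical illustrations; the study's actual prompts are not reproduced here.

```python
# Hypothetical sketch of the chain-of-thought (CoT) prompting style mentioned
# in the methodology. All wording and labels are assumptions for illustration.

def build_cot_prompt(excerpt, domains):
    """Assemble a CoT classification prompt for one coded excerpt."""
    steps = "\n".join(
        f"Step {i}. {s}"
        for i, s in enumerate(
            [
                "Summarize the competency implied by the excerpt.",
                "Reason about which domain(s) it belongs to, and why.",
                "State the final domain label(s).",
            ],
            start=1,
        )
    )
    return (
        "You are assisting a framework analysis of GenAI competencies "
        "for higher-education teachers.\n"
        f"Candidate domains: {', '.join(domains)}.\n"
        f'Excerpt: "{excerpt}"\n'
        "Think through the following steps before answering:\n"
        f"{steps}"
    )

prompt = build_cot_prompt(
    "Teachers must critically analyze generative AI outputs.",
    ["pedagogical", "GenAI", "responsible AI"],
)
print(prompt)
```

Making the reasoning steps explicit in the prompt is what distinguishes CoT prompting from plain classification, and it leaves a traceable rationale that the human researchers can validate.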

Discussion

The proposed competency framework for GenAI integration in HE was designed to equip teachers with, and enhance, capabilities regarding GenAI, including the pedagogical adaptations and ethical implications of using GenAI tools. The competencies are structured in a progression configuration to guide teachers in transitioning from learner to manager of content creation within the classroom, thereby promoting student development. This progression unfolds across five key steps—essential competencies, AI & GenAI literacy, GenAI-enhanced interaction, effective GenAI use, and AI-powered teaching & assessment—supported by professional development at every stage. This vertical integration is vital, as Cha et al. (2024, p. 257) note: “The status quo underscores the critical need for professional training programs that empower university teachers to effectively navigate the evolving educational frontier shaped by GenAI.”

The decision to structure GenAI competencies in three overlapping conceptual axes—pedagogical, artificial intelligence, and responsible GenAI—aims to represent the diverse domains from which these competencies emerge. These axes are not meant to reflect full integration yet, but rather serve as foundational lenses to identify and organize the knowledge and skills needed for future pedagogical application. The proposed organization of competencies seeks to distinguish itself from existing models, such as DigCompEdu and UNESCO’s AI Competency Framework for Teachers, by defining a progressive structure that can be readily applied by HE teachers. According to Sattelmaier and Pawlowski (2023) and Ng et al. (2023), teachers require a minimum level of technical literacy to engage effectively with GenAI in academia. Similarly, Cha et al. (2024) stress that teachers need to enhance pedagogical and assessment knowledge to gain advantage when integrating digital technology into their teaching practice. However, the FAM process revealed that, while AI literacy and AI-related competencies are frequently discussed, the terms AI and GenAI are continually interchanged when describing competencies related to novel technologies such as ChatGPT (Su and Yang, 2023; Annapureddy and Fornaroli, 2024). This calls for a clearer distinction between these two complementary and fundamental bodies of technological knowledge.

Competencies for GenAI integration in HE

Identifying and defining competencies for GenAI integration in HE presented significant challenges due to the lack of structured frameworks focused on GenAI. A limited number of studies directly propose GenAI-related competencies, and these still lack a formal, coherent structure for envisioning a progression path for competency development. Existing policy-guiding frameworks such as DigCompEdu, UNESCO’s AI Competency Framework, and TPACK focus on general AI literacy, leaving a gap in understanding the specific competencies teachers need for teaching with GenAI. This lack of clarity and of definitions of competencies regarding GenAI integration required an inference-based approach to establish definitions.

The thematic and cluster analysis carried out in step 1 did not reveal significant information about GenAI competencies. However, the thematic evolution indicated an increasing reference to competencies in 2024 compared to previous years, albeit still unstructured. Additionally, the emerging empirical focus on student perspectives by 2025 suggests a shift towards practice-oriented research. The lack of direct references to competencies in studies focused on HE teaching required a more focused analytical approach. FAM allowed for extracting relevant insights, leading to the proposed structure of competency classification and progression presented in Figures 3, 4 in step 5. Furthermore, the proposed pyramidal five-step progression and Venn diagram of competencies also evidenced the overlapping, interdisciplinary nature of the proposed competencies.

Relevance for HE teacher development

The framework’s progressive design (Figure 4) offers a structured path for HE teachers to enhance and develop competencies for successful GenAI integration in their teaching practices. The progression from the initial stages of developing and enhancing fundamental competencies to attaining proficiency in GenAI-based teaching and assessment, set within an educational paradigm of continuous learning, guides HE teachers in preparing for these new professional demands. Competencies like GenAI-enhanced instruction, for instance, empower teachers to create innovative classroom content and novel interactions, while GenAI-assisted assessment seeks to provide educators with tailored and efficient evaluation processes.

Additionally, the framework underlines the importance of having clear policies and ethical guidelines to help teachers make decisions to enhance the teaching and learning process. As Annapureddy and Fornaroli (2024) argue, teachers must navigate new ethical and legal considerations, reinforcing the need for explicit policy frameworks.

Propositions for further research and applications

The development of this competency framework for GenAI integration in education opens several opportunities for future research. The authors believe that three major research opportunities could be addressed, namely:

Empirical validation—As a theoretical construct, the proposed framework requires future empirical research to validate the five-step progression model, focusing on its application in practical classroom settings.

Targeted training programs—Further research is also required to develop targeted training programs tailored to HE teachers, focusing on the progression proposal and the interrelation of the three axes: pedagogy, GenAI, and ethics.

Policy support systems—Future research should also enhance understanding of the impact and importance of well-structured policies and support systems in improving teacher training opportunities and supporting the integration of GenAI into educational practices.

Limitations

When interpreting the results of this study, it is important to recognize its limitations within the scope defined by the authors. The analysis and literature review focused exclusively on peer-reviewed studies and policy frameworks from recognized institutions. The authors acknowledge that a vast amount of gray literature and unofficial professional practice documents is available; because these documents were excluded, this work potentially omits relevant non-indexed information. This underscores the need for the identified competencies and their definitions to be periodically reviewed to remain relevant, especially in light of the rapid evolution of GenAI tools and educational applications.

Moreover, there is a potential for linguistic and geographic bias due to the inclusion criteria being restricted to documents published in Spanish and English. Lastly, although AI-assisted techniques were used during the coding process, the lack of conventional inter-coder reliability metrics suggests that this work could benefit from additional empirical validation.

Conclusion

The main goal of this study was to identify the essential competencies that HE teachers need to incorporate GenAI effectively into their teaching practice. Results from both quantitative and qualitative approaches showed that existing literature and guiding frameworks (such as DigCompEdu and UNESCO’s AI Competency Framework) address AI literacy broadly but do not offer definitions or a progression guide to support GenAI-based teaching practices. The proposed five-step competency model offers HE teachers a development path that evolves from basic AI literacy to GenAI-powered education. The three-axis framework (pedagogy, AI, and responsible AI) for developing GenAI competency proposes a balanced approach that integrates technical expertise and teaching strategies within an ethically grounded, domain-specific structure. This ensures that HE teachers develop the required technical literacy while fostering robust pedagogical applications, supported by a growing ethical, legal, and social mindset regarding GenAI integration in education.

As a future step, the authors plan to undertake an empirical validation with HE teachers to evaluate how these competencies align with their needs and expectations. By gathering insights directly from our target audience—higher education teachers—we aim to test, refine and validate the framework based on the perspectives of real professionals. This validation process could also ground the design of a targeted teacher training program, ensuring that it addresses the specific needs and skill gaps identified by HE teachers themselves. Beyond empirical validation, there is also a need to explore how GenAI and AI institutional policies shape the integration of GenAI into education. By designing the GenAI framework, this study may lay foundations for a more informed, adaptable, and effective approach to integrating GenAI into HE teaching, ultimately empowering teachers to harness the full potential of GenAI in education.

Author contributions

PB-A: Writing – original draft, Methodology, Data curation, Investigation, Visualization, Validation, Formal analysis, Writing – review & editing, Conceptualization. YL: Data curation, Conceptualization, Writing – review & editing, Investigation. AC: Supervision, Conceptualization, Writing – review & editing, Methodology, Investigation, Validation, Formal analysis. AB: Writing – review & editing, Supervision, Conceptualization. HM: Writing – review & editing, Validation.

Funding

The author(s) declare that financial support was received for the research and/or publication of this article. This work was financially supported by National Funds through FCT – Fundação para a Ciência e a Tecnologia, I.P., under project UIDB/00194/2020 (https://doi.org/10.54499/UIDB/00194/2020), concerning the CIDTFF Research Unit. The work of the last author is financially supported by national funds through FCT – Foundation for Science and Technology, I.P., under project UIDB/05460/2020.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The authors declare that GenAI was used in the creation of this manuscript, specifically several tools for writing and for analyzing the data presented. GenAI tools were used for grammar validation and text editing (QuillBot, DeepL), figure creation (Napkin AI), initial concept and literature exploration (Litmaps, Connected Papers, Elicit, and STORM), and chain-of-thought (CoT) prompting enabling exploratory to in-depth analysis (ChatGPT). In addition, webQDA (a CAQDAS) and R Bibliometrix (bibliometric analysis) were used to execute the quantitative and qualitative data analysis.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

AlAli, R., Wardat, Y., Al-Saud, K., and Alhayek, K. A. (2024). Generative AI in education: best practices for successful implementation. Int. J. Relig. 5, 1016–1025. doi: 10.61707/pkwb8402

Alasadi, E. A., and Baiz, C. R. (2023). Generative AI in education and research: opportunities, concerns, and solutions. J. Chem. Educ. 100, 2965–2971. doi: 10.1021/acs.jchemed.3c00323

Al-Samarraie, H., Sarsam, S. M., Alzahrani, A. I., Chatterjee, A., and Swinnerton, B. J. (2024). Gender perceptions of generative AI in higher education. J. Appl. Res. High. Educ. Advance online publication. doi: 10.1108/JARHE-02-2024-0109

Annapureddy, R., and Fornaroli, A. (2024). Generative AI literacy: twelve defining competencies. Digit. Gov.: Res. Pract. 6, 1–21. doi: 10.1145/3685680

Bayaga, A. (2024). Leveraging AI-enhanced and emerging technologies for pedagogical innovations in higher education. Educ. Inf. Technol. 30, 1045–1072. doi: 10.1007/s10639-024-13122-y

Bearman, M., and Ajjawi, R. (2023). Learning to work with the black box: pedagogy for a world with artificial intelligence. Br. J. Educ. Technol. 54, 1160–1173. doi: 10.1111/bjet.13337

Blanco, B. M., Ramos, E. Á., Biel, L. A., and Collantes, M. P. (2024). Vademecum of artificial intelligence tools applied to the teaching of languages. J. Sci. Educ. Technol. 14:77. doi: 10.3926/jotse.2522

Brauner, S., Matthias, M., and Markus, B. (2023). The development of a competence framework for artificial intelligence professionals using probabilistic topic modelling. J. Enterp. Inf. Manag. doi: 10.1108/JEIM-09-2022-0341

Carolus, A., Yannik, A., André, M., and Carolin, W. (2023). Digital interaction literacy model – conceptualizing competencies for literate interactions with voice-based AI systems. Comput. Educ.: Artif. Intell. 4. doi: 10.1016/j.caeai.2022.100114

Celik, I. (2023). Exploring the determinants of artificial intelligence (AI) literacy: digital divide, computational thinking, cognitive absorption. Telematics Inform. 83:102026. doi: 10.1016/j.tele.2023.102026

Cha, Y., Dai, Y., Lin, Z., Liu, A., and Lim, C. P. (2024). Empowering university educators to support generative AI-enabled learning: proposing a competency framework. Proc. CIRP 128, 256–261. doi: 10.1016/j.procir.2024.06.021

Chiu, T. K. F. (2024). Future research recommendations for transforming higher education with generative AI. Comput. Educ.: Artif. Intell. 6:100197. doi: 10.1016/j.caeai.2023.100197

Costa, A. P., and Bryda, G. (2025). “Enhancing education research: the potential and challenges of incorporating AI into qualitative data analysis” in Methodologies and intelligent systems for technology enhanced learning, workshops – 14th international conference.

Couros, G. (2018). Digital teaching professional framework. Available online at: http://etfoundation.co.uk/edtech

de la Torre, A., and Baldeon-Calisto, M. (2024). Generative artificial intelligence in Latin American higher education: a systematic literature review. 2024 12th International Symposium on Digital Forensics and Security (ISDFS). 1–7

Eager, B., and Brunton, R. (2023). Prompting higher education towards AI-augmented teaching and learning practice. J. Univ. Teach. Learn. Pract. 20. doi: 10.53761/1.20.5.02

European Commission: Directorate-General for Education, Youth, Sport and Culture. (2022). Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators. Available online at: https://data.europa.eu/doi/10.2766/153756

Emenike, M. E., and Emenike, B. U. (2023). Was this title generated by ChatGPT? Considerations for artificial intelligence text-generation software programs for chemists and chemistry educators. J. Chem. Educ. 100, 1413–1418. doi: 10.1021/acs.jchemed.3c00063

Faruqe, F., Watkins, R., and Medsker, L. (2022). Competency model approach to AI literacy: research-based path from initial framework to model. Adv. Artif. Intell. Mach. Learn. Res. 2:40. Available online at: https://www.oajaiml.com/

Fenske, R. F., and Otts, J. A. A. (2024). Incorporating generative AI to promote inquiry-based learning: comparing elicit AI research assistant to PubMed and CINAHL complete. Med. Ref. Serv. Q. 43, 292–305. doi: 10.1080/02763869.2024.2403272

Furber, C. (2010). Framework analysis: a method for analysing qualitative data. Afr. J. Midwifery Women’s Health 4, 97–100. doi: 10.12968/ajmw.2010.4.2.47612

Gale, N. K., Heath, G., Cameron, E., Rashid, S., and Redwood, S. (2013). Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Med. Res. Methodol. 13:117. doi: 10.1186/1471-2288-13-117

Goldsmith, L. J. (2021). Using framework analysis in applied qualitative research. Qual. Rep. 26, 2061–2076. doi: 10.46743/2160-3715/2021.5011

Hackett, A., and Strickland, K. (2019). Using the framework approach to analyse qualitative data: a worked example. Nurse Res. 26, 8–13. doi: 10.7748/nr.2018.e1580

Holstein, K., Aleven, V., and Rummel, N. (2020). “A conceptual framework for human–AI hybrid Adaptivity in education” in Artificial intelligence in education (Cham: Springer).

Houghton, C., Murphy, K., Meehan, B., Thomas, J., Brooker, D., and Casey, D. (2017). From screening to synthesis: using Nvivo to enhance transparency in qualitative evidence synthesis. J. Clin. Nurs. 26, 873–881. doi: 10.1111/jocn.13443

Jemetz, M., and Motschnig, R. (2024). Teachers’ development of competence in managing generative AI technology: findings from a qualitative interview series. 2024 IEEE Frontiers in Education Conference (FIE). IEEE. 1–9

Kaplan-Rakowski, R., Kaplan-Rakowski, R., Grotewold, K., Hartwick, P., and Papin, K. (2023). Generative AI and teachers’ perspectives on its implementation in education. J. Interact. Learn. Res. 34, 313–338.

Kiernan, M. D., and Hill, M. (2018). Framework analysis: a whole paradigm approach. Qual. Res. J. 18, 248–261. doi: 10.1108/QRJ-D-17-00008

Knoth, N., Decker, M., Laupichler, M. C., Pinski, M., Buchholtz, N., Bata, K., et al. (2024a). Developing a holistic AI literacy assessment matrix—bridging generic, domain-specific, and ethical competencies. Comput. Educ. Open 6:100177. doi: 10.1016/j.caeo.2024.100177

Knoth, N., Tolzin, A., Janson, A., and Leimeister, J. M. (2024b). AI literacy and its implications for prompt engineering strategies. Comput. Educ.: Artif. Intell. 6:100225. doi: 10.1016/j.caeai.2024.100225

Kurtz, G., Amzalag, M., Shaked, N., Zaguri, Y., Kohen-Vacs, D., Gal, E., et al. (2024). Strategies for integrating generative AI into higher education: navigating challenges and leveraging opportunities. Educ. Sci. 14:503. doi: 10.3390/educsci14050503

Laupichler, M. C., Aster, A., Schirch, J., and Raupach, T. (2022). Artificial intelligence literacy in higher and adult education: a scoping literature review. Comput. Educ.: Artif. Intell. 3:100101. doi: 10.1016/j.caeai.2022.100101

Law, L. (2024). Application of generative artificial intelligence (GenAI) in language teaching and learning: a scoping literature review. Comput. Educ. Open 6:100174. doi: 10.1016/j.caeo.2024.100174

Lim, W. M., Gunasekara, A., Pallant, J. L., Pallant, J. I., and Pechenkina, E. (2023). Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. Int. J. Manag. Educ. 21:100790. doi: 10.1016/j.ijme.2023.100790

Lin, Z. (2023). Why and how to embrace AI such as ChatGPT in your academic life. R. Soc. Open Sci. 23:230658. doi: 10.1098/rsos.230658

Lorenz, U., and Romeike, R. (2023). “What is AI-PACK?—outline of AI competencies for teaching with DPACK” in Beyond bits and bytes: nurturing informatics intelligence in education (Cham: Springer), 13–25.

Lu, J., Zheng, R., Gong, Z., and Huifen, X. (2024). Supporting teachers’ professional development with generative AI: the effects on higher order thinking and self-efficacy. IEEE Trans. Learn. Technol. 17, 1267–1277. doi: 10.1109/TLT.2024.3369690

Maphoto, K. B., Sevnarayan, K., Mohale, N. E., Suliman, Z., Ntsopi, T. J., and Mokoena, D. (2024). Advancing students’ academic excellence in distance education: exploring the potential of generative AI integration to improve academic writing skills. Open Praxis 16, 142–159. doi: 10.55982/openpraxis.16.2.649

McGrath, C., Farazouli, A., and Cerratto-Pargman, T. (2024). Generative AI Chatbots in higher education: a review of an emerging research area. High. Educ. 88, 899–919. doi: 10.1007/s10734-024-01288-w

Miao, F., and Wayne, H. (2023). Guidance for generative AI in education and research. Paris: UNESCO. doi: 10.54675/ewzm9535

Miao, F., and Mutlu, C. (2024). AI competency framework for teachers. Paris: UNESCO. doi: 10.54675/zjte2084

Michalon, B., and Camacho-Zuñiga, C. (2023). ChatGPT, a brand-new tool to strengthen timeless competencies. Front. Educ. 8:1251163. doi: 10.3389/feduc.2023.1251163

Mikeladze, T., Meijer, P. C., and Verhoeff, R. P. (2024). A comprehensive exploration of artificial intelligence competence frameworks for educators: a critical review. Eur. J. Educ. 59. doi: 10.1111/ejed.12663

Mills, A., Bali, M., and Eaton, L. (2023). How do we respond to generative AI in education? Open educational practices give us a framework for an ongoing process. J. Appl. Learn. Teach. 6, 16–30. doi: 10.37074/jalt.2023.6.1.34

Molenaar, I. (2022). Towards hybrid human-AI learning technologies. Eur. J. Educ. 57, 632–645. doi: 10.1111/ejed.12527

Monzon, N., and Hays, F. A. (2024). Leveraging generative AI to improve motivation and retrieval in higher education learners. JMIR Med. Educ. 11:e59210. doi: 10.2196/59210

Moreira, M. A., Arcas, B. R., Sánchez, T. G., García, R. B., Melero, M. J. R., Cunha, N. B., et al. (2023). Teachers’ pedagogical competences in higher education: a systematic literature review. J. Univ. Teach. Learn. Pract. 20, 90–123. doi: 10.53761/1.20.01.07

Moresi, E. A., Dutra, I. P., Costa, A. P., Burneo, P. S., Machado, L. B., and Freitas, F. M. (2024). Bibliometric and comparative analysis of generative artificial intelligence in education research. 19th Iberian Conference on Information Systems and Technologies (CISTI). IEEE.

Ng, D. T. K., Leung, J. K. L., Su, J., Ng, R. C. W., and Chu, S. K. W. (2023). Teachers’ AI digital competencies and twenty-first century skills in the post-pandemic world. Educ. Technol. Res. Dev. 71, 137–161. doi: 10.1007/s11423-023-10203-6

Ning, Y., Zhang, C., Xu, B., Zhou, Y., and Wijaya, T. T. (2024). Teachers’ AI-TPACK: exploring the relationship between knowledge elements. Sustainability 16:978. doi: 10.3390/su16030978

Parkinson, S., Eatough, V., Holmes, J., Stapley, E., and Midgley, N. (2016). Framework analysis: a worked example of a study exploring young people’s experiences of depression. Qual. Res. Psychol. 13, 109–129. doi: 10.1080/14780887.2015.1119228

Qadir, J. (2023). Engineering education in the era of ChatGPT: promise and pitfalls of generative AI for education. 2023 IEEE Global Engineering Education Conference (EDUCON). IEEE.

Redecker, C., and Yves, P. (2017). European Framework for the Digital Competence of Educators: DigCompEdu. Luxembourg: Publications Office of the European Union. doi: 10.2760/159770

Rychen, D. S. (2003). “A frame of reference for defining and selecting key competencies in an international context” in Definition and Selection of Key Competencies: Contributions to the Second DeSeCo Symposium. eds. D. S. Rychen, L. H. Salganik, and M. E. McLaughlin (Swiss Federal Statistical Office), 109–119.

Saputra, I., Astuti, M., Sayuti, M., and Kusumastuti, D. (2023). Integration of artificial intelligence in education: opportunities, challenges, threats and obstacles. A literature review. Indones. J. Comput. Sci. 12, 1590–1600. doi: 10.33022/ijcs.v12i4.3266

Sattelmaier, L., and Pawlowski, J. M. (2023). Towards a generative artificial intelligence competence framework for schools. Proceedings of the International Conference on Enterprise and Industrial Systems (ICOEINS 2023), 291–307.

Schmidt, D. A., Baran, E., Thompson, A. D., Mishra, P., Koehler, M. J., and Shin, S. (2009). Technological pedagogical content knowledge (TPACK): the development and validation of an assessment instrument for preservice teachers. J. Res. Technol. Educ. 42, 123–149. doi: 10.1080/15391523.2009.10782544

Scott, C. L. (2015). El futuro del aprendizaje (I): ¿Por qué deben cambiar el contenido y los métodos de aprendizaje en el siglo XXI? [The futures of learning (I): Why must learning content and methods change in the 21st century?]. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000234807_spa

Shaw, L., Nunns, M., Briscoe, S., Anderson, R., and Coon, J. T. (2021). A ‘rapid best-fit’ model for framework synthesis: using research objectives to structure analysis within a rapid review of qualitative evidence. Res. Synth. Methods 12, 368–383. doi: 10.1002/jrsm.1462

Shimizu, I., Kasai, H., Shikino, K., Araki, N., Takahashi, Z., Onodera, M., et al. (2023). Developing medical education curriculum reform strategies to address the impact of generative AI: qualitative study. JMIR Med. Educ. 9:e53466. doi: 10.2196/53466

Smolikevych, N. (2019). The teacher’s main competencies in modern higher education. Eur. Hum. Stud. State Soc. 3, 30–42. doi: 10.38014/ehs-ss.2019.3-I.03

Somerville, J., Jonuscheit, S., and Strang, N. (2023). Framework analysis for vision scientists: a clear step-by-step guide. Scand. J. Optom. Vis. Sci. 16, 1–7. doi: 10.15626/sjovs.v16i1.3547

Srivastava, A., and Thomson, S. B. (2009). Framework analysis: a qualitative methodology for applied policy research. J. Manag. Gov. 4, 72–79.

Su, J., and Yang, W. (2023). Unlocking the power of ChatGPT: a framework for applying generative AI in education. ECNU Rev. Educ. 6, 355–366. doi: 10.1177/20965311231168423

Tiana, A. (2004). “Developing key competencies in education systems: some lessons from international studies and national experiences” in Developing key competencies in education: some lessons from international and national experience (Paris: UNESCO-International Bureau of Education, Studies in Comparative Education), 35–80.

United Nations Educational, Scientific and Cultural Organization (UNESCO). (2018). UNESCO ICT Competency Framework for Teachers. ed. N. Butcher. Paris: UNESCO.

UNICEF. (2022). Educators’ Digital Competency Framework. UNICEF Regional Office for Europe and Central Asia (ECARO).

Wang, B., Rau, P. L. P., and Yuan, T. (2023). Measuring user competence in using artificial intelligence: validity and reliability of artificial intelligence literacy scale. Behav. Inf. Technol. 42, 1324–1337. doi: 10.1080/0144929X.2022.2072768

Xia, Q., Weng, X., Ouyang, F., Lin, T. J., and Chiu, T. K. F. (2024). A scoping review on how generative artificial intelligence transforms assessment in higher education. Int. J. Educ. Technol. High. Educ. doi: 10.1186/s41239-024-00468-z

Yelemarthi, K., Dandu, R., Rao, M., Yanambaka, V. P., and Mahajan, S. (2024). Exploring the potential of generative AI in shaping engineering education: opportunities and challenges. J. Eng. Educ. Transform. 37, 439–445. doi: 10.16920/jeet/2024/v37is2/24072

Keywords: generative AI, competency frameworks, higher education, AI literacy, teacher training, chain-of-thought (CoT) prompting

Citation: Burneo-Arteaga P, Lira Y, Murzi H, Balula A and Costa AP (2025) Capability-based training framework for generative AI in higher education. Front. Educ. 10:1594199. doi: 10.3389/feduc.2025.1594199

Received: 15 March 2025; Accepted: 13 May 2025;
Published: 06 June 2025.

Edited by:

Celia Camilli, Complutense University of Madrid, Spain

Reviewed by:

Nadia Parsazadeh, Tamkang University, Taiwan
Noble Lo, Lancaster University, United Kingdom

Copyright © 2025 Burneo-Arteaga, Lira, Murzi, Balula and Costa. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Pablo Burneo-Arteaga, psburneo@usfq.edu.ec

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.