ORIGINAL RESEARCH article

Front. Educ., 12 May 2025

Sec. Higher Education

Volume 10 - 2025 | https://doi.org/10.3389/feduc.2025.1530721

Advancing higher education with GenAI: factors influencing educator AI literacy

  • 1Psychology and Counseling Department, Faculty of Humanities and Educational Sciences, An-Najah National University, Nablus, Palestine
  • 2Educational Sciences, Faculty of Humanities and Educational Sciences, An-Najah National University, Nablus, Palestine
  • 3Faculty of Law and Political Sciences, An-Najah National University, Nablus, Palestine
  • 4International Islamic University Malaysia (IIUM), Kuala Lumpur, Malaysia

Artificial Intelligence (AI) literacy has emerged as a critical skill across various disciplines and industries, including education. This study aimed to identify the factors that influence educators' AI literacy and to examine the relationships among these factors. A sequential mixed methods approach was used to investigate the factors influencing faculty members' AI literacy, with qualitative data collected from 33 faculty members through focus group discussions and semi-structured interviews. Quantitative data were then gathered using the finalized survey instrument, completed by 538 faculty members from diverse disciplines and higher education institutions in Palestine. Data analysis was conducted using Smart PLS. The findings revealed several key factors that impact educators' AI literacy, including AI competencies, perceived usefulness of AI, ease of use, professional development, and community support. Additionally, prior experience with technology played a significant role in developing AI literacy. While the study's mixed methods design provided depth, one limitation was that the qualitative phase involved a relatively small sample. Future research should further explore the broader implications of AI in education and its integration across various academic fields.

Introduction

The rise of generative artificial intelligence (GenAI) tools has sparked ongoing debates about their role in education and research—highlighting both the opportunities and challenges of integrating these tools into higher education (Khlaif et al., 2023). GenAI can create human-like content such as text, audio, pictures, 3D objects, code, simulations, and videos (Lim et al., 2023). Moreover, numerous human tasks can be handled by GenAI tools to boost educators' productivity, generating outcomes that range from simple to highly complex depending on the prompts used (Moorhouse, 2024).

Like other new technological initiatives, several factors influence both the adoption of and continued intention to use these technologies, such as individual readiness (Polly et al., 2023), technostress (Khlaif et al., 2023), trust (Choi and Leon, 2023), ease of use (Boubker, 2024), and usefulness. However, GenAI tools have started a new era that has shifted the skills and knowledge required in various fields, including science (Kamalov et al., 2023), language (Chiu, 2023), and medical education (Awadallah Alkouk and Khlaif, 2024; Garcia et al., 2024; Salama et al., 2025). Interestingly, learners across educational levels have adopted GenAI tools more easily and quickly than educators (Sleator and Hennessey, 2023). As a result, it is essential for educators to develop suitable understanding and skills to teach effectively in the GenAI-driven era.

The necessity of integrating technology in teaching, specifically in higher education, has been a longstanding demand, driven by its clear benefits. Studies highlight how technology can boost student motivation (Lin, 2015), support bilingual development (Mizumoto and Eguchi, 2023), improve academic performance and skills (Cheung, 2023), enhance collaborative work among students, and equip them with 21st-century skills. The COVID-19 pandemic accelerated this trend, pushing educators toward greater comfort and proficiency with digital technologies and tools, such as interactive worksheets, digital noticeboards, learning management systems, interactive content design, and presentation software (Moorhouse et al., 2023).

However, integrating technology into higher education, particularly in teaching, remains inconsistent (Park and Son, 2022). This can leave novice teachers unprepared for the digital demands of the modern classroom (Starkey, 2020). A teacher's initial readiness and early teaching experiences are crucial in shaping their confidence, self-efficacy, and job satisfaction (Khlaif et al., 2023). Despite being digital natives, beginning teachers may lack the professional digital skills required for teaching (König et al., 2024), especially when they have limited real-world experience with the skills needed for personal and professional use.

The pandemic highlighted the global gap in teachers' readiness to use technology effectively for teaching, particularly for online instruction (Moorhouse and Kohnke, 2021). However, strong school induction programs, such as team teaching, structured orientations, and support from colleagues, have been shown to help early career teachers adapt to online teaching requirements (Moorhouse and Kohnke, 2021; König et al., 2024; Paetsch et al., 2023).

The emergence of GenAI presents a new challenge, distinct from the pandemic-driven shift to online teaching. While the pandemic forced a rapid shift to technology use, due to restricted in-person interactions, the emergence of advanced AI tools calls for a reassessment of the skills required to teach languages effectively in the new digital era (Mishra et al., 2023). This highlights the continuous need for adaptation and evolution in teaching competencies to keep pace with technological advancements.

Despite growing interest, there is a lack of studies investigating the factors that affect educators' readiness to use GenAI in advanced teaching settings. Accordingly, this study aims to propose a model that explains the key factors shaping faculty members' readiness to adopt GenAI in higher education. The findings may contribute to the existing literature and guide decision-makers in higher education institutions on the factors that play an important role in faculty members' readiness to use GenAI, and inform institutional policies for its use. To achieve this, the study addresses the following research questions:

• How do educators in higher education develop their Generative AI literacy?

• What factors influence GenAI literacy among faculty members in higher education institutions?

• What is the relationship among these factors?

Literature review

AI literacy in higher education

AI literacy is broadly defined as a set of skills that helps individuals critically engage with AI technologies, including understanding their functions, applications, limitations, and ethical considerations (Annapureddy et al., 2025). Annapureddy et al. (2025) identify 12 core competencies for AI literacy, spanning from basic knowledge to practical skills such as prompt engineering and ethical awareness. These competencies provide a structured framework for integrating AI literacy into educational curricula.

Additionally, Kalantzis and Cope (2024) emphasize the significance of AI literacy in redefining traditional literacy concepts, arguing that AI-mediated writing and communication require a new educational paradigm. They propose a shift toward “cyber-social literacy learning,” where AI serves as a collaborative tool rather than a passive content generator (Kalantzis and Cope, 2024).

Challenges and opportunities in AI literacy adoption

Despite the growing recognition of AI literacy, challenges persist in its adoption. Ethical and privacy concerns remain critical, as AI literacy must address issues such as bias, misinformation, and data privacy to ensure responsible AI use (Bailey, 2024). Issues of equity and access are also important, as unequal availability of AI tools can widen the digital divide—making inclusive policies a necessity (Pelletier et al., 2023). Another major challenge is educator preparedness, as many teachers lack sufficient training in AI literacy, highlighting the need for targeted professional development (Khlaif et al., 2025).

Conversely, AI literacy presents several opportunities for enhancing education. One major benefit is personalized learning, as AI tools can adapt to individual learning needs, offering customized feedback and support (Khlaif et al., 2024). The enhancement of critical thinking skills is another key benefit, as AI literacy fosters analytical skills, enabling students to critically evaluate AI-generated content (Tzirides et al., 2024). Moreover, collaborative AI integration allows AI to serve as a co-learner, enhancing students' abilities without replacing traditional learning (Kalantzis and Cope, 2024).

AI benefits in advanced teaching

Artificial intelligence (AI) is transforming many industries and changing the way people live and work, becoming a powerful driver of innovation. Applications of AI in education are expanding and drawing increasing attention (Zawacki-Richter et al., 2019). It has created multiple opportunities to significantly enhance governance, boosting both effectiveness and efficiency (Nasrallah, 2023). AI also shows strong potential for enhancing student outcomes, supporting teaching and learning, and transforming the higher education landscape (Stefan and Sharon, 2017).

As technology improves, the use of AI in higher education is becoming increasingly important and relevant (Galdames, 2024). Thanks to a range of technologies known as artificial intelligence (AI), machines can now perform tasks that once required human intelligence (Cascella et al., 2023). Natural language processing, data analytics, machine learning algorithms, and automation are just a few of the numerous AI applications that hold great promise for the education sector (Farrokhnia et al., 2024). These advancements have the power to radically change how knowledge is found, disseminated, and applied.

AI plays a vital role in higher education by helping to solve pressing issues and creating new opportunities. It holds strong potential to transform traditional educational systems into dynamic, flexible, and student-centered environments (Hamamra et al., 2024; Omar et al., 2024). This aligns with the increasing demand for successful, individualized, and easily accessible learning experiences. Due to its potential to reshape traditional teaching, boost student engagement, and support individualized learning, many institutions are exploring how to integrate AI into their educational practices (Flanagan et al., 2023).

Artificial intelligence has many benefits for higher education. It can be used to create customized learning experiences built on the requirements, preferences, and learning styles of each student (Zawacki-Richter et al., 2019). Adaptive learning platforms and intelligent tutoring systems can evaluate student data, identify knowledge gaps, and personalize instruction to match the unique requirements of each learner through tailored feedback and resources. Kerr (2016) defined adaptive learning as a method of delivering knowledge resources in which the kind of materials provided later is influenced by the learner's engagement with earlier information; online learning is the setting in which this technique is applied. This educational approach uses artificial intelligence and computer algorithms to deliver personalized resources and learning activities (Kaplan, 2021).

Second, AI-powered analytics help institutions make data-driven decisions by extracting insights from large datasets. By analyzing student performance, engagement patterns, and demographic data, educational institutions can identify at-risk students, increase retention rates, and improve instructional strategies. According to Pedro et al. (2019), this data-driven approach enables evidence-based decisions and customized interventions to improve student achievement and outcomes.

AI can also streamline administrative processes, relieving educators and administrators of labor-intensive manual tasks. ChatGPT and intelligent chatbots can provide instant guidance and support, automate tedious tasks, and free up time for more personalized interactions. AI-driven solutions can improve the way financial aid is managed, resources are allocated, and admissions processes are handled, all of which will boost operational effectiveness (Zawacki-Richter et al., 2019).

AI also opens up new possibilities for content creation and delivery. Virtual reality, augmented reality, and intelligent content development can all offer immersive and captivating learning experiences that boost comprehension and student engagement (Fitria, 2024). Furthermore, AI tools and applications can be trained to grade student essays, allowing educators to devote more time to other areas of instruction. Automated essay grading and language translation further optimize the assessment and feedback processes, allowing for timely and accurate evaluation (Hussain et al., 2018).

In conclusion, the advantages and importance of AI in higher education cannot be disputed. By using AI tools, educational institutions can create personalized, data-driven, and productive learning environments. As these technologies evolve, it is essential to explore their potential, address ethical challenges, and ensure that human connection remains central to the learning process. Embracing AI gives us the chance to build a flexible, inclusive, and forward-thinking education system that meets the diverse needs of today's students.

Technology adoption models in education

Several theoretical models have been used to understand how new technologies, including AI, are adopted in educational contexts. The Technology Acceptance Model (TAM), developed by Davis (1989), suggests that technology adoption is influenced by perceived usefulness and ease of use. AI literacy aligns with this model by equipping users with skills to assess the usability and benefits of AI tools in learning environments. Research by Sousa and Cardoso (2025) indicates that students who receive explicit guidance on AI ethics and functionalities are more likely to adopt AI tools for academic purposes.

The Diffusion of Innovations Theory (DOI), proposed by Rogers (2003), explains how technological innovations spread through social systems. AI literacy programs act as key mechanisms for promoting the diffusion of AI tools in education. Institutional initiatives, such as the University of Florida's AI Across the Curriculum Initiative, show how structured AI literacy models can facilitate widespread adoption (Pelletier et al., 2023; Tzirides et al., 2024).

The Technological Pedagogical Content Knowledge (TPACK) framework emphasizes the intersection of technology, pedagogy, and subject matter expertise. AI literacy is becoming an increasingly important component of TPACK, requiring educators to integrate AI-driven tools in ways that enhance learning outcomes (Ding et al., 2024; Mahjoubi et al., 2025). The Substitution-Augmentation-Modification-Redefinition (SAMR) model describes how technology adoption progresses from basic substitution to transformative educational practices. AI literacy supports higher-order transformations by enabling students and educators to leverage AI for personalized learning, creative problem-solving, and critical analysis (U.S. Department of Education, 2023).

The legal issue of using AI in higher education

Educators and students must be aware of the ethical and legal regulations that may affect the use of GenAI in teaching. This is especially important when GenAI tools in teaching involve live video or live streaming, since it means that teaching is broadcast in real time over the internet. As a result, anything an educator or student says or does will be livestreamed, automatically recorded in a relatively permanent form, and cannot be deleted or amended easily (Anderson and Simpson, 2007).

Accordingly, the use of abusive, discriminatory, invasive, or offensive language by an educator can be recorded by students and shared through digital media or sent to the relevant authority (Salama et al., 2025). This would not only harm the educator's reputation but may also lead to legal complications such as criminal charges. The same applies to students who say or do anything considered a criminal offense or a breach of ethical regulations. For instance, a student may be asked by their educator to give an online presentation during a live-streamed lesson. This puts the student in some control of the session and could result in them using offensive language, images, clips, or websites that involve illegal content under criminal law. Therefore, institutions must ensure that educators and students are fully aware of the legal and ethical regulations involved in online teaching, especially when it includes live-streaming lessons.

AI in Palestine

AI is transforming how we teach and learn in the field of education, opening the door to a more inclusive and advanced educational system. AI offers a promising path to reducing inequalities and ensuring equitable chances for all students in Palestine, where access to high-quality education can be limited by several circumstances. Many Arab countries, such as Egypt, Palestine, Libya, Oman, Lebanon, Saudi Arabia, and the United Arab Emirates, have started studying and implementing artificial intelligence in their systems and procedures. However, Sourani (2019) emphasized that, due to the challenges these countries face, artificial intelligence still lacks the capability to fully replace teachers in Arabic-speaking regions. According to a review of the literature, studies from unstable countries like Palestine put more emphasis on introducing artificial intelligence techniques, their function in smart teaching systems, and teachers' perceptions of AI than on the actual applications of the technology (Khlaif et al., 2024). These nations lack the resources and research funding required to fully develop artificial intelligence and integrate it with their systems. In contrast, wealthier, more technologically advanced, and politically stable Arab nations—such as Saudi Arabia, the United Arab Emirates, and Egypt—have explored and implemented AI technologies more extensively (Alzahrani, 2022).

Research design

An exploratory sequential mixed methods approach was used to investigate the factors influencing faculty members' AI literacy. This approach consisted of three stages. The results from the first (qualitative) stage served as the foundation for developing the instrument for the next (quantitative) stage (Creswell and Clark, 2017). The third stage involved developing and analyzing the research model using statistical procedures.

Recruiting the participants

All participants in the qualitative phase were selected using purposive sampling based on predefined criteria to ensure the inclusion of faculty members with direct experience in using generative AI in teaching and research. To be eligible, participants needed to have actively engaged with AI-powered tools such as ChatGPT, DeepSeek, Claude, and other AI tools in their academic work. Only faculty members holding positions at recognized universities were considered, as their roles in curriculum design, assessment, and pedagogical practices made their insights particularly valuable. To capture a broad range of perspectives, participants were recruited from various disciplines, including STEM, humanities, social sciences, business, and medical education. This disciplinary diversity allowed for a more comprehensive understanding of AI adoption across different fields.

Additionally, participants were required to have at least 3 years of teaching experience in higher education, ensuring they had a solid pedagogical foundation and could critically assess the implications of AI in their teaching practices. Familiarity with digital technologies and varying levels of AI tool use were also essential criteria, though participants' knowledge ranged from beginner to expert. This ensured that discussions remained focused on AI integration rather than general digital literacy. To maintain a diversity of perspectives and avoid redundancy, none of the focus group participants had previously taken part in the semi-structured interviews. Finally, all participants expressed a willingness to engage in the study and share their experiences openly, ensuring rich, reflective discussions that contributed to the study's qualitative depth. As a result, a total of 33 faculty members participated in the qualitative phase. Table 1 provides a detailed breakdown of participant demographics, including academic background, years of teaching experience, and AI proficiency levels.


Table 1. Demographic information of the participants in the qualitative stage (N = 33).

Sampling strategy and justification

The study employed purposive and snowball sampling to ensure the recruitment of faculty members with relevant expertise in generative AI. Given that AI literacy in higher education is an emerging area of research, a random sampling approach would not have guaranteed the inclusion of participants with the necessary experience and insights. Purposive sampling allowed for the intentional selection of faculty members who had actively integrated generative AI into their teaching and research. This ensured that the data collected would be rich, relevant, and directly applicable to the study's objectives. By focusing on educators who were already engaging with AI tools, the study could explore meaningful perspectives on AI literacy instead of collecting general opinions from faculty with limited exposure to AI technologies.

Additionally, snowball sampling was used in this study to effectively recruit faculty members with relevant experience in using generative AI for teaching and research. Given the absence of a centralized database of faculty actively engaging with AI tools, traditional probability sampling methods were not feasible. Snowball sampling allowed us to identify knowledgeable participants through academic networks, ensuring that those with direct experience contributed to the study. Since the adoption of generative AI in higher education varies significantly across institutions and disciplines, identifying qualified participants through conventional recruitment methods was challenging. Snowball sampling enabled the research team to leverage professional networks, academic associations, and institutional referrals to identify faculty members with substantial AI experience. This method was especially helpful in reaching participants who might not have been easily identified through formal recruitment channels but were recognized by their peers as knowledgeable in the field. While this method carries the risk of selection bias, we mitigated this limitation by initiating recruitment from multiple institutions and disciplines, ensuring a diverse range of perspectives. Additionally, purposive sampling criteria were applied to enhance the representativeness of the sample.

Together, purposive sampling and snowball sampling provided a strategic and effective recruitment approach that ensured the inclusion of diverse faculty perspectives while maintaining the study's focus on AI literacy. These methods facilitated the recruitment of educators from multiple disciplines, ensuring that the findings reflected a broad range of experiences and teaching contexts. By combining both approaches, the study achieved a well-rounded qualitative dataset that formed the foundation for developing the subsequent research instrument in the next phase.

The aim of the semi-structured interviews and focus group sessions was to discover the types of generative AI tools used in participants' practices and the factors that could influence their AI literacy. The findings from these sessions helped refine the items for the quantitative instrument. The next stage of the study involved a quantitative approach, in which data were collected using an online (Google Forms) survey distributed via email to participants. The third stage focused on developing and testing the proposed model and analyzing the relationships between the variables affecting faculty members' AI literacy by conducting confirmatory factor analysis (CFA) and structural equation modeling (SEM) with SPSS 27 and Smart PLS 4.1.

First stage: qualitative stage

In this stage, two qualitative research instruments were used to collect in-depth information from participants based on their experiences with using generative AI in teaching and research. These tools included focus groups and semi-structured interviews.

Semi-structured interviews

The interview protocol (Appendix A) consisted of three parts: an introduction to the study, interview questions based on the findings of previous studies, and a snowball method for recruiting further participants from universities. The primary criterion for inviting and selecting participants was their use of various generative AI tools for teaching and research. Involvement in the study was entirely voluntary. Interviews continued until thematic saturation was reached and no new themes emerged. Each interview lasted 25–35 min and was conducted using a video conferencing tool. All interview sessions were recorded after obtaining written consent from the participants. The interview questions focused on faculty experiences with using generative AI tools in teaching and research, as well as their development of AI literacy.

Focus group discussions

Three focus group discussions, each lasting 1 h, were conducted with 18 participants to explore AI literacy and the factors influencing it, drawing on their experiences. The sessions were organized during training workshops for faculty members on using generative AI tools in teaching and research. One discussion was conducted face-to-face, while the other two were held online using a video conferencing tool.

Each focus group consisted of six participants, allowing for meaningful interaction while ensuring that all individuals had the opportunity to contribute. The sessions followed a semi-structured format, guided by an interview protocol that provided consistency while allowing flexibility for participants to elaborate on emerging themes. Discussion prompts were generated from interview texts, such as “AI literacy is the competency of using AI in teaching—what do you think?” (see Appendix B for sample prompts).

To ensure structured and balanced discussions, three researchers, including the principal investigator, moderated the face-to-face session, while two others facilitated the online sessions. The facilitators played a crucial role in guiding the discussions, ensuring that all participants had an equal opportunity to contribute, and maintaining a neutral stance to avoid influencing responses. They encouraged deeper reflections by asking probing questions, clarified ambiguous points, and managed the flow of conversation to prevent any one participant from dominating. Additionally, they created an open and respectful environment where faculty members felt comfortable sharing their insights and experiences.

All focus group discussions were audio-recorded after obtaining verbal consent from participants. The recordings were later transcribed for qualitative analysis, allowing for a thorough examination of faculty perspectives on AI literacy and its impact on teaching and research.

Qualitative data analysis procedures

All interview and focus group data were audio-recorded with participants' consent and subsequently transcribed verbatim to ensure accuracy. Four researchers who conducted the interviews transcribed all audio files, exchanged text files, and individually compared the recordings with their corresponding transcripts. Moreover, the researchers who moderated the focus group sessions analyzed the data guided by the findings from the semi-structured interviews.

Following transcription, the research team conducted a thorough review to validate the data by cross-checking transcripts with the original recordings. To enhance credibility, participants were given the opportunity to review their transcripts and provide clarifications or corrections if needed. Additionally, researcher triangulation was employed, where multiple researchers independently reviewed and coded the transcripts to identify key themes and patterns, reducing potential bias. This rigorous process ensured the reliability and validity of the qualitative data, strengthening the study's findings.

In this study, thematic analysis was used to analyze the interview and focus group data related to AI literacy among faculty members. Following the six-phase approach outlined by Braun and Clarke (2006), we systematically analyzed the data manually to identify key themes and patterns. The process began with familiarization with the data, where we read and re-read the transcripts to gain a deep understanding of the content and make initial notes on potential patterns related to AI literacy. In the second phase, we generated initial codes by identifying meaningful segments of data relevant to the research questions, focusing on aspects of AI literacy, challenges, and experiences with generative AI tools in teaching and research. These codes were applied consistently across the dataset, ensuring comprehensive analysis of both interview and focus group data. Next, we searched for themes by grouping related codes into broader categories that represented different dimensions of AI literacy. This phase involved identifying commonalities and relationships among the data to form themes that could address the factors influencing AI literacy. The identified themes were then reviewed to ensure coherence and consistency, refining them to accurately reflect the data. In the final phase, each theme was clearly defined and named to represent the key aspects of AI literacy, with sub-themes used to capture more specific nuances. This thematic analysis enabled a deeper understanding of the factors influencing AI literacy and the challenges faculty members face when integrating AI tools into their academic practices.

Following the organization of themes and subthemes from the first phase of the study, the four researchers quantified the findings by calculating the frequency of each subtheme. This quantification of the qualitative themes allowed the researchers to identify the most prevalent factors influencing the development of faculty members' AI literacy. Table 2 displays the frequency of the most significant factors impacting faculty members' AI literacy.


Table 2. Frequency of the factors as reported by the participants in the qualitative phase.

Based on Table 2, it was observed that community support is the most influential factor in faculty AI literacy, followed by professional development and continuous learning about generative AI, including how it works and how to use it in teaching and research. Other significant factors included technology experience and AI competencies, such as skills and knowledge about generative AI. Attitudes and confidence in teaching with generative AI were found to be less influential on AI literacy.
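As a simple illustration of the quantification step described above, tallying coded subthemes can be expressed in a few lines of Python. The subtheme labels below are invented stand-ins for the study's actual codes, which were counted manually by the researchers.

```python
# Hypothetical sketch of the quantification step: tally how often each coded
# subtheme appears across transcripts. Labels are invented for illustration.
from collections import Counter

coded_segments = [
    "community support", "professional development", "community support",
    "technology experience", "AI competencies", "community support",
    "attitudes", "confidence",
]

frequencies = Counter(coded_segments)
for subtheme, count in frequencies.most_common():
    print(f"{subtheme}: {count}")
```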

Trustworthiness of qualitative phase

To ensure the rigor and trustworthiness of this study, the research process was guided by the principles of credibility, confirmability, dependability, and transferability. Credibility was reinforced through methodological triangulation, as data were collected using both semi-structured interviews and focus group discussions. This approach provided a comprehensive understanding of faculty members' AI literacy by capturing diverse perspectives across different contexts.

Confirmability was ensured by maintaining a detailed audit trail of the research process, including documentation of data collection and analysis procedures. This practice helped minimize researcher bias and ensured that the findings accurately reflected participants' experiences. Transcribed interviews were also shared with participants for member checking, allowing them to verify or clarify their responses as needed. The interview protocol was carefully developed based on the study's research objectives, a pilot interview, and feedback from experts in computer science and educational technology. Dependability was strengthened through a code-recode strategy, where the researchers independently coded the data multiple times and compared their results to maintain consistency. Additionally, since the interviews and focus groups were conducted in Arabic, a rigorous backward translation process was used to ensure linguistic and conceptual accuracy. The interrater reliability was calculated at 89%, indicating strong agreement among coders. For transferability, purposive and snowball sampling were used to recruit faculty members with relevant experience in using generative AI tools in teaching and research. This ensured that the findings would be applicable to similar higher education contexts, allowing for meaningful insights into AI literacy among faculty members.

Second stage: quantitative stage

Development of the survey

The development of the survey instrument was informed by both the findings from the qualitative phase of this study and a thorough review of existing literature on AI literacy and faculty engagement with technology in higher education. This approach ensured that the survey captured the key factors influencing faculty members' AI literacy while reflecting the perspectives and experiences shared by participants during the qualitative stage.

The survey was designed to measure multiple dimensions relevant to AI literacy and faculty adoption of AI in teaching and research. These dimensions included community support, professional development opportunities, prior teaching experience with technology, AI competencies, perceived usefulness of AI, overall AI literacy, attitudes toward AI integration, and confidence in teaching with AI-powered tools.

The initial version of the survey consisted of 29 items, structured using a five-point Likert scale to gauge the level of agreement or frequency of engagement with various AI-related practices. Of these 29 items, six were adapted from validated instruments used in previous studies to ensure reliability and consistency with prior research. The remaining items were newly developed based on qualitative insights, ensuring that the survey fully reflected the emerging themes identified in faculty members' discussions about AI literacy.

Procedures for building the survey

After finalizing the quantification of the qualitative findings, the research team convened to determine the components of the survey based on Table 2, deciding to include the themes with the highest frequencies. The second step involved creating a list of items based on participants' responses. The third procedure was a cognitive interview with three faculty members to check the wording of the items. Finally, a pilot study was conducted with 30 faculty members who met the criteria described in the first stage of the research and represented diverse academic backgrounds, to assess the reliability and validity of the instrument. To determine the loading of each item on the constructs and the total number of factors, exploratory factor analysis was carried out using SPSS.

To ensure the survey instrument's robustness and construct validity, a systematic approach was taken in refining the items during the exploratory factor analysis (EFA) process. Beyond removing items with factor loadings below 0.50 (Cheung et al., 2024), additional criteria were applied to determine item retention or elimination. Items were examined for cross-loadings, and those that loaded significantly onto multiple factors (i.e., with loadings above 0.40 on more than one factor) were removed to maintain the distinctiveness of each construct. Furthermore, inter-item correlations were assessed to identify redundancy, ensuring that each retained item contributed unique information to the construct it measured. Items with low communalities (below 0.30) were also considered for removal, as they indicated weak shared variance with the underlying factor structure. Additionally, the research team reviewed item clarity and relevance based on faculty feedback from the cognitive interviews and pilot study, eliminating ambiguous or conceptually overlapping items. This rigorous item reduction process resulted in a final survey instrument containing 21 items across six constructs (Appendix C), ensuring a valid and reliable measure of AI literacy and faculty engagement with AI in teaching and research.
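To make these retention rules concrete, here is a minimal Python sketch of the screening logic using the open-source factor_analyzer package. It assumes a DataFrame named responses with one column per survey item and a six-factor solution; this is an illustrative re-implementation of the stated criteria, not the authors' SPSS procedure.

```python
# Illustrative sketch of the EFA item-screening rules described above:
# keep an item only if its primary loading is >= 0.50, it does not load
# above 0.40 on more than one factor, and its communality is >= 0.30.
# The study used SPSS; `responses` and the factor count are assumptions.
import pandas as pd
from factor_analyzer import FactorAnalyzer

def screen_items(responses: pd.DataFrame, n_factors: int = 6) -> list[str]:
    fa = FactorAnalyzer(n_factors=n_factors, rotation="promax")
    fa.fit(responses)

    loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
    communalities = pd.Series(fa.get_communalities(), index=responses.columns)

    primary = loadings.abs().max(axis=1)           # strongest loading per item
    n_cross = (loadings.abs() > 0.40).sum(axis=1)  # factors loaded above 0.40

    keep = (primary >= 0.50) & (n_cross <= 1) & (communalities >= 0.30)
    return keep[keep].index.tolist()

# retained = screen_items(responses)  # then re-run the EFA on retained items
```

In practice the screening is iterative: after dropping items, the EFA is re-estimated and the criteria re-checked until a stable solution remains.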

Third stage: constructing and examining the model

The researchers built and tested the relationships between the identified factors and AI literacy using Smart PLS. They used Google Forms to design and distribute the final version of the survey for data collection. Participants were invited via email, with an invitation explaining the study's purpose and informing them that participation was voluntary and without compensation. A total of 538 faculty members from various universities in Palestine participated in this stage. Table 3 presents the demographic information for the final stage of the study.


Table 3. Demographic data about the participants in the quantitative stage (N = 538).

Table 3 shows that around half of the participants were female (51.1%) and 48.9% were male. Participants from medical and engineering sciences constituted the largest proportion of the sample (58%).

Data collection

To maximize participation, the finalized survey was distributed via email to 850 faculty members across various universities, disciplines, and levels of AI usage. Participation was voluntary, and two follow-up reminder emails were sent to encourage responses. A total of 538 faculty members completed the survey, yielding a response rate of ~63.3%. This relatively high response rate helped reduce the risk of non-response bias, ensuring that the sample reflected diverse backgrounds and varying levels of AI integration in teaching and research. The study employed a voluntary response sampling approach, allowing faculty members with relevant experience and interest in AI literacy to contribute their perspectives. While this method prioritized the inclusion of actively engaged participants, it may limit the generalizability of the findings to the broader academic population. However, the combination of purposive sampling and a strong response rate strengthens the reliability of the data and provides valuable insights into faculty members' AI literacy and the factors influencing its development.

Data analysis

Various techniques and instruments were utilized by the researchers for data analysis, depending on the stage of the study. To determine the factors that affected faculty AI literacy from the participants' perspective based on their lived experience, inductive thematic analysis was used for the qualitative data. Exploratory factor analysis (EFA) was utilized in the survey development process to determine the factor loading of each item on its construct using SPSS. Lastly, Smart PLS was used to investigate the relationships between the variables in the model under test.

Ethical considerations

This study adhered to strict ethical guidelines to ensure the protection and autonomy of all participants. Ethical approval was obtained from the Institutional Review Board (IRB) at An-Najah National University under approval number Edu. Nov. 2024/22. Prior to participation, all individuals were provided with a detailed explanation of the study's objectives, procedures, and potential implications. Participants were explicitly informed that their involvement was entirely voluntary and that they could withdraw at any stage without facing any negative consequences.

To obtain informed consent, written consent forms were collected from participants in the interview sessions, ensuring that they fully understood the study's purpose and their rights as participants. In the focus group sessions, verbal consent was obtained before the discussions began, and participants were reminded that their responses would remain confidential and anonymous. All participants were assured that their data would be used solely for academic research, with no personally identifiable information being recorded or disclosed.

At the start of each session, participants were reminded of their right to decline answering any question or to withdraw from the study at any time. These ethical measures were implemented to safeguard participants' rights, ensure transparency in the research process, and uphold the highest standards of research integrity.

Results

To determine the validity and reliability of the constructs, the measurement model was evaluated (Table 4). First, all of the model's item factor loadings have values >0.7, which is a desirable result. No multicollinearity was detected, since all VIF values are <10 (Hair et al., 2019). Cronbach's alpha and composite reliability were used to evaluate reliability; both statistics showed values higher than the suggested threshold of 0.700, indicating good reliability. Given that the AVE was >0.500, convergent validity was deemed appropriate. Discriminant validity was evaluated by comparing the latent variable correlations with the square root of the AVE on the diagonal. The results show that the √AVE of each construct is higher than its correlations with the other constructs, and the heterotrait–monotrait (HTMT) ratios of correlations fall below the conservative threshold of 0.85. This establishes discriminant validity (refer to Table 5).
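As a brief illustration of the reliability and convergent validity criteria above, composite reliability and AVE can be computed directly from standardized loadings. The loading values in this sketch are invented for illustration and are not the values reported in Table 4.

```python
# Minimal sketch: composite reliability (CR) and average variance extracted
# (AVE) for one construct, computed from standardized item loadings.
# The loading values are invented examples, not those in Table 4.
import numpy as np

def cr_and_ave(loadings: np.ndarray) -> tuple[float, float]:
    error_var = 1.0 - loadings**2                     # item error variances
    cr = loadings.sum()**2 / (loadings.sum()**2 + error_var.sum())
    ave = float(np.mean(loadings**2))
    return float(cr), ave

cr, ave = cr_and_ave(np.array([0.78, 0.82, 0.75, 0.80]))
print(f"CR = {cr:.3f} (threshold > 0.70), AVE = {ave:.3f} (threshold > 0.50)")
# Fornell-Larcker check: sqrt(AVE) of each construct must exceed its
# correlations with all other constructs; HTMT ratios should stay < 0.85.
```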


Table 4. Reliability and convergent validity.


Table 5. Discriminant validity.

The √AVE values are on the diagonal. The correlations between the constructs are shown below the diagonal elements, and the heterotrait–monotrait ratios of correlations are located above the diagonal elements. The R², Q², f², standardized root mean square residual (SRMR), Normed Fit Index (NFI), and significance of paths are used to evaluate the structural model. The strength of each structural path and the R² value of the dependent variable determine how good the model is; the R² value should be ≥0.1.

The results in Table 6 demonstrate that every R² value is >0.1; thus, predictive ability is established. Q² also establishes the endogenous constructs' predictive relevance: the model has predictive relevance when Q² is >0. The effect size (f²) of each predictor variable on its dependent variable was at least medium according to Cohen (1988), since all values are >0.15, except for the effect of PD on AIL, which was small. The findings indicate that the constructs' predictions are significant (see Table 6). In addition, the standardized root mean square residual (SRMR) and root mean square error of approximation (RMSEA) were used to evaluate model fit. The SRMR and RMSEA values were 0.08 and 0.06, respectively, satisfying the requirement of <0.10 (Hair et al., 2019).
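For reference, the effect-size and predictive-relevance statistics used here follow their standard PLS-SEM definitions (Hair et al., 2019); a compact rendering under that assumption:

```latex
% f^2: change in explained variance when a predictor is dropped;
% Q^2: blindfolding-based predictive relevance (SSE = squared prediction
% errors, SSO = squared observations, summed over omission runs D).
f^{2} = \frac{R^{2}_{\text{included}} - R^{2}_{\text{excluded}}}{1 - R^{2}_{\text{included}}},
\qquad
Q^{2} = 1 - \frac{\sum_{D} \mathrm{SSE}_{D}}{\sum_{D} \mathrm{SSO}_{D}}
```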


Table 6. Direct effect.

In Smart PLS, the NFI, CFI, and TLI are measures used to assess the overall fit of a structural equation model. The NFI indicates the proportion of improvement in model fit relative to the null model, with values ranging from 0 to 1; NFI, CFI, and TLI values above 0.90 are considered acceptable. Since the NFI value in our model equals 0.91, acceptable fit is achieved.
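The NFI referred to above is conventionally defined relative to the null (baseline) model; in standard notation:

```latex
% Normed Fit Index: proportional improvement in chi-square over the null model
\mathrm{NFI} = \frac{\chi^{2}_{\text{null}} - \chi^{2}_{\text{model}}}{\chi^{2}_{\text{null}}}
             = 1 - \frac{\chi^{2}_{\text{model}}}{\chi^{2}_{\text{null}}}
```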

To determine the significance of the relationships, the hypotheses were tested in order to further evaluate the goodness of fit (see Table 7). H1a evaluates whether AID has a significant impact on AIL. The results revealed that AID has a significant impact on AIL (β = 0.11, t = 4.418, p = 0.00) and on PU (β = 0.10, t = 2.815, p = 0.00). Hence, H1 was supported.


Table 7. Mediation analysis.

The results revealed that OS has a significant influence on AID (β = 0.37, t = 12.933, p = 0.00), AIL (β = 0.47, t = 17.285, p = 0.00), and PU (β = −0.34, t = 11.371, p = 0.00), supporting H2a, H2b, and H2c.

The results revealed that PD has a significant impact on AID (β = −0.12, t = 4.098, p = 0.00) and PU (β = 0.31, t = 10.953, p = 0.00), supporting H3a and H3c, but PD has no significant impact on AIL (β = 0.00, t = 0.031, p = 0.49).

The results revealed that PU has a significant impact on AIL (β = −0.46, t = 19.152, p = 0.00), and TE on AIL (β = 0.06, t = 2.781, p = 0.00), supporting H4 and H5.

Mediation analysis

The mediating role of AID and PU in the relationship between OS and AIL was evaluated through mediation analysis. The results (see Table 7) revealed significant (p < 0.01) competitive partial mediation by AID (β = 0.041, t = 3.978, p = 0.00), by PU (β = 0.162, t = 10.44, p = 0.000), and by AID and PU in sequence (β = −0.019, t = 2.909, p = 0.002). The results also revealed full significant mediation by AID and PU in the relationship between PD and AIL: AID (β = −0.012, t = 2.751, p = 0.00), PU (β = −0.132, t = 9.135, p = 0.000), and AID and PU in sequence (β = 0.005, t = 2.348, p = 0.00). In addition, PU played a significant competitive partial mediating role in the relationship between AID and AIL (β = −0.049, t = 3.133, p = 0.00); AID played a significant competitive partial mediating role in the relationship between OS and PU (β = 0.042, t = 2.968, p = 0.00); and AID played a significant competitive partial mediating role in the relationship between PD and PU (β = −0.012, t = 3.383, p = 0.00).
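For clarity, in the generic two-path mediation model underlying these tests (X → M with path a, M → Y with path b, and direct path c' from X to Y), the quantities reported above are:

```latex
% Indirect and total effects in a simple mediation model
\text{indirect effect} = a \times b, \qquad \text{total effect} = c' + a \times b
```

Competitive partial mediation is the case in which the indirect effect a × b and the direct effect c' are both significant but carry opposite signs, whereas full mediation holds when only the indirect effect is significant; this is the classification commonly applied in PLS-SEM.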

The study also evaluated the moderating effect of TE on the relationship between OS and AIL. The R² value for AIL was 0.596, which indicates that the model accounts for 59.6% of the explained variance in AIL when the moderating effect (OS × TE) is excluded. The explained variance in AIL increased by 1.1 percentage points when the interaction with TE was included, resulting in an R² of 60.7%. After a further analysis of the moderating effect, as indicated in Table 6, it was found that TE had a positive and significant moderating impact on the relationship between OS and AIL (β = 0.095, t = 4.635, p = 0.00).
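The incremental variance figures above correspond to the standard effect-size formula for a moderating (interaction) term, shown here for reference; the R² labels denote the model with and without the OS × TE interaction:

```latex
% Effect size of the OS x TE interaction, from R^2 with and without it
f^{2}_{\text{interaction}} =
  \frac{R^{2}_{\text{with interaction}} - R^{2}_{\text{without interaction}}}
       {1 - R^{2}_{\text{with interaction}}}
```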

Further, a simple slope plot is presented for a better understanding of the moderating effect, as shown in Figure 1. The line is much steeper for high TE than for mid and low TE, and steeper for mid TE than for low TE. Accordingly, the relationship between OS and AIL is stronger at high TE than at mid and low TE, and stronger at mid TE than at low TE.


Figure 1. Moderator effect.

The f² effect size of the interaction was 0.021. Kenny proposed that the small, medium, and large effect sizes for moderation are 0.005, 0.01, and 0.025, respectively, indicating a nearly large and significant effect size. Figure 2 summarizes the relationships among the constructs and highlights the mediating factors influencing these relationships.


Figure 2. Inner and outer model.

Discussion

This study aimed to address three key research questions: (1) How do educators in higher education develop their Generative AI literacy? (2) What factors influence their AI literacy? and (3) What is the relationship among these factors?

Our findings reveal a clear pattern of interrelationships among these variables. Community support (OS) and professional development (PD) influence AI literacy (AIL) both directly and indirectly through their effects on AI competencies (AID) and perceived usefulness (PU). Educators acquire their Generative AI literacy primarily through specialized professional development programs that integrate technical training with pedagogical innovation. Strong community support systems that offer platforms for sharing best practices and ongoing learning best support such programs. These findings align with earlier research that supports integrated AI curricula and institution-wide initiatives (Crompton and Burke, 2022; Hrastinski et al., 2019; Liang et al., 2021; Annapureddy et al., 2025; Khlaif et al., 2024). Annapureddy et al.'s competency-based framework builds on this insight by enumerating specific competencies—such as basic AI literacy, prompt engineering, and ethical awareness—that educators need to acquire to use Generative AI properly.

Our structural model shows that community support has a significant direct impact on AI literacy (β = 0.47, p < 0.01) and on AI competencies (β = 0.37, p < 0.01) and on perceived usefulness as well. However, professional development has no direct impact on AI literacy (β = 0.00, p = 0.49) but has effects through mediation by positively impacting perceived usefulness (β = 0.31, p < 0.01) and to a lesser extent through its impact on AI competencies (β = −0.12, p < 0.01).

These mediation effects highlight that while both PD and OS are important, OS exerts a more direct influence on educators' willingness to adopt AI. The significant indirect effects re-emphasize that technical competency gains and positive perceptions of AI are key mediators in the adoption process—a finding that is supported by existing theories such as the Technology Acceptance Model (Davis, 1989) and the Diffusion of Innovations Theory (Rogers, 2003).

In addition, teacher efficacy (TE) also moderates the OS–AIL relationship (β = 0.095, p < 0.01), indicating that educators with higher self-efficacy are better positioned to leverage community support to enhance their AI literacy. This underscores the importance of addressing individual differences alongside institutional support mechanisms.

Moreover, ethicality and institutional readiness also emerged as key factors in shaping AI literacy. Ethical standards and inclusive institutional policies contribute to responsible AI integration that aligns with educational goals (Ouyang et al., 2022). This multidimensional strategy—encompassing technological, pedagogical, and cultural aspects—highlights the need for both comprehensive training and institutionally supportive environments.

Overall, our findings show that both institutional (PD and OS) and individual (TE, AID, PU) factors play crucial roles in developing Generative AI literacy. A comprehensive strategy—one that combines focused training with strong community engagement—is essential to enhancing AI literacy in higher education.

Implications

The implications of this study are multifaceted and call for a comprehensive approach to advancing AI integration in higher education. One major implication is the need for institutions to invest in strong community support systems that foster collaborative learning and the sharing of best practices among educators. Such environments can significantly enhance collective growth and adaptation to emerging technologies like AI. In addition, teacher development programs must be restructured to strengthen technical competencies and raise awareness about the practical uses of AI in educational settings. These improvements should align with the competencies outlined by Annapureddy et al. (2025), ensuring that educators are equipped to make meaningful use of AI tools in their teaching.

Furthermore, teacher efficacy can be greatly improved through initiatives such as peer support and mentoring. These forms of professional development are essential for building a supportive community that empowers educators to enhance their AI literacy and confidently integrate AI into their pedagogical practices. The findings also highlight the necessity for a comprehensive policy framework that addresses the technological, pedagogical, and cultural dimensions of AI integration. Such a framework is vital to establish a sustainable ecosystem that supports the responsible and effective use of AI in higher education.

To build on these insights, future research should adopt longitudinal designs and incorporate cross-cultural comparisons. This would allow for a more nuanced understanding of how AI integration strategies can be refined and tailored to fit diverse educational contexts, ensuring their long-term success and relevance.

Conclusion

Briefly, this study demonstrates that educators develop their Generative AI literacy through two pathways: formal professional development and active community support. Whereas community support directly influences AI literacy and indirectly enhances AI competencies and the perceived usefulness of AI, professional development does not directly impact AI literacy. Its strong indirect effects on perceived usefulness, however, confirm its central role in shaping educators' attitudes toward AI. Teacher efficacy amplifies the impact of community support, suggesting that individual readiness is central to maximizing the impact of institutionally based support. These findings not only address the research questions but also add to the existing literature on technology adoption by illuminating the mediating and moderating mechanisms that motivate educators to incorporate Generative AI into their pedagogy.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

The studies involving humans were approved by IRB, An Najah National University. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.

Author contributions

AMA: Data curation, Formal analysis, Investigation, Writing – original draft. ZK: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Software, Supervision, Validation, Writing – original draft, Writing – review & editing. MS: Conceptualization, Methodology, Writing – original draft. BA: Conceptualization, Data curation, Formal analysis, Investigation, Writing – original draft, Writing – review & editing. AA: Conceptualization, Investigation, Methodology, Writing – original draft. MH: Conceptualization, Formal analysis, Writing – original draft. KB: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Writing – original draft. TB: Conceptualization, Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that no financial support was received for the research and/or publication of this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that Gen AI was used in the creation of this manuscript. The researchers would like to sincerely thank OpenAI's ChatGPT, a highly developed AI language model, for its invaluable assistance in editing and proofreading our research paper. The robust natural language processing capabilities of the model significantly improved the work's overall quality, coherence, and clarity. The development team at ChatGPT and their invaluable assistance were a major factor in the successful completion of our study.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2025.1530721/full#supplementary-material

References

Alzahrani, A. (2022). A systematic review of artificial intelligence in education in the Arab world. Amazon. Investig. 11, 293–305. doi: 10.34069/AI/2022.54.06.28

Annapureddy, R., Fornaroli, A., and Gatica-Perez, D. (2025). Generative AI literacy: twelve defining competencies. Digit. Gov. Res. Pract. 6, 1–21. doi: 10.1145/3685680

Awadallah Alkouk, W., and Khlaif, Z. N. (2024). AI-resistant assessments in higher education: practical insights from faculty training workshops. Front. Educ. 9:1499495. doi: 10.3389/feduc.2024.1499495

Bailey, C. (2024). “Artificial intelligence policy: what computing educators and students should know,” in Proceedings of the 2024 on ACM Virtual Global Computing Education Conference Vol. 1 (New York, NY: ACM), 1–2. doi: 10.1145/3649165.3699863

Boubker, O. (2024). From chatting to self-educating: can AI tools boost student learning outcomes? Expert Syst. Appl. 238:121820. doi: 10.1016/j.eswa.2023.121820

Braun, V., and Clarke, V. (2006). Using thematic analysis in psychology. Qual. Res. Psychol. 3, 77–101. doi: 10.1191/1478088706qp063oa

Cascella, M., Tracey, M. C., Petrucci, E., and Bignami, E. G. (2023). Exploring artificial intelligence in anesthesia: a primer on ethics, and clinical applications. Surgeries 4, 264–274. doi: 10.3390/surgeries4020027

Cheung, A. (2023). Language teaching during a pandemic: a case study of zoom use by a secondary ESL teacher in Hong Kong. RELC J. 54, 55–70. doi: 10.1177/0033688220981784

Cheung, G. W., Cooper-Thomas, H. D., Lau, R. S., and Wang, L. C. (2024). Reporting reliability, convergent and discriminant validity with structural equation modeling: a review and best-practice recommendations. Asia Pac. J. Manag. 41, 745–783. doi: 10.1007/s10490-023-09871-y

Chiu, T. K. F. (2023). The impact of Generative AI (GenAI) on practices, policies and research direction in education: a case of ChatGPT and Midjourney. Interact. Learn. Environ. 32, 6187–6203. doi: 10.1080/10494820.2023.2253861

Choi, H. S., and Leon, S. (2023). When trust cues help helpfulness: investigating the effect of trust cues on online review helpfulness using big data survey based on the Amazon platform. Electron. Commer. Res. 1–28. doi: 10.1007/s10660-023-09726-0

Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2nd Edn.). Hillsdale, NJ: Lawrence Erlbaum Associates.

Creswell, J. W., and Clark, V. L. P. (2017). Designing and Conducting Mixed Methods Research. London: Sage Publications.

Crompton, H., and Burke, D. (2022). Artificial intelligence in K-12 education. SN Soc. Sci. 2:113. doi: 10.1007/s43545-022-00425-5

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quart. 13, 319–340. doi: 10.2307/249008

Ding, A. C. E., Shi, L., Yang, H., and Choi, I. (2024). Enhancing teacher AI literacy and integration through different types of cases in teacher professional development. Comput. Educ. Open 6:100178. doi: 10.1016/j.caeo.2024.100178

Farrokhnia, M., Banihashem, S. K., Noroozi, O., and Wals, A. (2024). A SWOT analysis of ChatGPT: implications for educational practice and research. Innov. Educ. Teach. Int. 61, 460–474. doi: 10.1080/14703297.2023.2195846

Fitria, N. J. L. (2024). The influence of academic services and lecturer performance on student engagement moderated by independent learning independent campus program policy. Educ. Policy Manage. Rev. 1, 25–38.

Flanagan, O. L., Cummings, K. M., and Cummings, K. (2023). Standardized patients in medical education: a review of the literature. Cureus 15:e42027. doi: 10.7759/cureus.42027

Galdames, I. S. (2024). Integration of artificial intelligence in higher education: relevance for inclusion and learning. SciComm. Rep. 4, 1–12. doi: 10.32457/scr.v4i1.2487

Garcia, M. B., Arif, Y. M., Khlaif, Z. N., Zhu, M., de Almeida, R. P. P., de Almeida, R. S., et al. (2024). “Effective integration of artificial intelligence in medical education: practical tips and actionable insights,” in Transformative Approaches to Patient Literacy and Healthcare Innovation (IGI Global), 1–19.

Hair, J. F., Risher, J. J., Sarstedt, M., and Ringle, C. M. (2019). When to use and how to report the results of PLS-SEM. Eur. Bus. Rev. 31, 2–24. doi: 10.1108/EBR-11-2018-0203

Hamamra, B., Mayaleh, A., and Khlaif, Z. N. (2024). Between tech and text: the use of generative AI in Palestinian universities-a ChatGPT case study. Cogent Educ. 11:2380622. doi: 10.1080/2331186X.2024.2380622

Hrastinski, S., Olofsson, A. D., Arkenback, C., Ekström, S., Ericsson, E., Fransson, G., et al. (2019). Critical imaginaries and reflections on artificial intelligence and robots in postdigital K-12 education. Postdigital Sci. Educ. 1, 427–445. doi: 10.1007/s42438-019-00046-x

Hussain, N., Rigoni, U., and Orij, R. P. (2018). Corporate governance and sustainability performance: analysis of triple bottom line performance. J. Bus. Ethics 149, 411–432. doi: 10.1007/s10551-016-3099-5

Kalantzis, M., and Cope, B. (2024). Literacy in the time of artificial intelligence. Read. Res. Q. 60, 1–34. doi: 10.35542/osf.io/es5kb

Kamalov, F., Santandreu Calonge, D., and Gurrib, I. (2023). New era of artificial intelligence in education: towards a sustainable multifaceted revolution. Sustainability 15:12451. doi: 10.3390/su151612451

Kaplan, A. (2021). Artificial Intelligence (AI): when humans and machines might have to coexist. AI Everyone 21. doi: 10.16997/book55.b

Kerr, P. (2016). Adaptive learning. ELT J. 70, 88–93. doi: 10.1093/elt/ccv055

Khlaif, Z. N., Alkouk, W. A., Salama, N., and Abu Eideh, B. (2025). Redesigning assessments for AI-enhanced learning: a framework for educators in the generative AI era. Educ. Sci. 15:174. doi: 10.3390/educsci15020174

Khlaif, Z. N., Ayyoub, A., Hamamra, B., Bensalem, E., Mitwally, M. A., Ayyoub, A., et al. (2024). University teachers' views on the adoption and integration of generative AI tools for student assessment in higher education. Educ. Sci. 14:1090. doi: 10.3390/educsci14101090

Khlaif, Z. N., Sanmugam, M., Hattab, M. K., Bensalem, E., Ayyoub, A., Sharma, R. C., et al. (2023). Mobile technology features and technostress in mandatory online teaching during the COVID-19 crisis. Heliyon 9:e19069. doi: 10.1016/j.heliyon.2023.e19069

König, J., Heine, S., Jäger-Biela, D., and Rothland, M. (2024). ICT integration in teachers' lesson plans: a scoping review of empirical studies. Eur. J. Teach. Educ. 47, 821–849. doi: 10.1080/02619768.2022.2138323

Liang, J. C., Hwang, G. J., Chen, M. R. A., and Darmawansah, D. (2021). Roles and research foci of artificial intelligence in language education: an integrated bibliographic analysis and systematic review approach. Interact. Learn. Env. 31, 4270–4296. doi: 10.1080/10494820.2021.1958348

Lim, W. M., Gunasekara, A., Pallant, J. L., Pallant, J. I., and Pechenkina, E. (2023). Generative AI and the future of education: ragnarök or reformation? A paradoxical perspective from management educators. Int. J. Manag. Educ. 21:100790. doi: 10.1016/j.ijme.2023.100790

Lin, H. (2015). A meta-synthesis of empirical research on the effectiveness of computer-mediated communication (CMC) in SLA. Lang. Learn. Technol. 19, 85–117.

Mahjoubi, M. N., Brini, R., and Al-Qutaish, A. A. (2025). Transforming education with AI: an exploratory study of faculty insights on ChatGPT's opportunities and risks. An-Najah Univ. J. Res. 39. doi: 10.35552/0247.39.4.2370

Mishra, P., Warr, M., and Islam, R. (2023). TPACK in the age of ChatGPT and generative AI. J. Digit. Learn. Teach. Educ. 39, 235–251. doi: 10.1080/21532974.2023.2247480

Mizumoto, A., and Eguchi, M. (2023). Exploring the potential of using an AI language model for automated essay scoring. Res. Methods Appl. Linguist. 2:100050. doi: 10.1016/j.rmal.2023.100050

Moorhouse, B. L. (2024). Beginning and first-year language teachers' readiness for the generative AI age. Comput. Educ. Artif. Intell. 6:100201. doi: 10.1016/j.caeai.2024.100201

Moorhouse, B. L., and Kohnke, L. (2021). Responses of the English-language-teaching community to the COVID-19 pandemic. RELC J. 52, 359–378. doi: 10.1177/00336882211053052

Moorhouse, B. L., Yeo, M. A., and Wan, Y. (2023). Generative AI tools and assessment: guidelines of the world's top-ranking universities. Comput. Educ. Open 5:100151. doi: 10.1016/j.caeo.2023.100151

Nasrallah, J. B. (2023). Stop and go signals at the stigma-pollen interface of the Brassicaceae. Plant Physiol. 193, 927–948. doi: 10.1093/plphys/kiad301

Omar, A., Shaqour, A. Z., and Khlaif, Z. N. (2024). Attitudes of faculty members in Palestinian universities toward employing artificial intelligence applications in higher education: opportunities and challenges. Front. Educ. 9:1414606. doi: 10.3389/feduc.2024.1414606

Ouyang, F., Zheng, L., and Jiao, P. (2022). Artificial intelligence in online higher education: a systematic review of empirical research from 2011–2020. Educ. Inf. Technol. 27, 7893–7925. doi: 10.1007/s10639-022-10925-9

Paetsch, J., Franz, S., and Wolter, I. (2023). Changes in early career teachers' technology use for teaching: the roles of teacher self-efficacy, ICT literacy, and experience during COVID-19 school closure. Teach. Teach. Educ. 135:104318. doi: 10.1016/j.tate.2023.104318

Park, M., and Son, J. B. (2022). Pre-service EFL teachers' readiness in computer-assisted language learning and teaching. Asia Pacific J. Educ. 42, 320–334. doi: 10.1080/02188791.2020.1815649

Pedro, F., Subosa, M., Rivas, A., and Valverde, P. (2019). "Artificial intelligence in education: challenges and opportunities for sustainable development," in Working Papers on Education Policy 7 (Paris: UNESCO).

Pelletier, K., McCormack, M., Reeves, J., Robert, J., and Colgin, M. (2023). EDUCAUSE Horizon Report: Teaching and Learning Edition. Boulder, CO: EDUCAUSE.

Polly, D., Martin, F., and Byker, E. (2023). Examining pre-service and in-service teachers' perceptions of their readiness to use digital technologies for teaching and learning. Comput. Sch. 40, 22–55. doi: 10.1080/07380569.2022.2121107

Rogers, E. M. (2003). Diffusion of Innovations (5th Edn.). New York, NY: Free Press.

Salama, N., Bsharat, R., Alwawi, A., and Khlaif, Z. N. (2025). Knowledge, attitudes, and practices toward AI technology (ChatGPT) among nursing students at Palestinian universities. BMC Nurs. 24:269. doi: 10.1186/s12912-025-02913-4

Sleator, L., and Hennessey, M. (2023). Almost half of Cambridge students admit they have used ChatGPT. The Times, 21 April. Available online at: https://www.thetimes.co.uk/article/cambridgeuniversity-students-chatgpt-ai-degree-2023-rnsv7mw7z

Sourani, M. (2019). Artificial intelligence: a prospective or real option for education? AlJinan J. AlJinan Univ. 11, 121–139.

Sousa, A. E., and Cardoso, P. (2025). Use of generative AI by higher education students. Electronics 14:1258. doi: 10.3390/electronics14071258

Starkey, L. (2020). A review of research exploring teacher preparation for the digital age. Camb. J. Educ. 50, 37–56. doi: 10.1080/0305764X.2019.1625867

Popenici, S. A. D., and Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Res. Pract. Technol. Enhanc. Learn. 12:22. doi: 10.1186/s41039-017-0062-8

Tzirides, A. O. O., Zapata, G., Kastania, N. P., Saini, A. K., Castro, V., Ismael, S. A., et al. (2024). Combining human and artificial intelligence for enhanced AI literacy in higher education. Comput. Educ. Open 6:100184. doi: 10.1016/j.caeo.2024.100184

U.S. Department of Education, Office of Educational Technology (2023). Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations. Washington, DC. Available online at: https://www.ed.gov/sites/ed/files/documents/ai-report/ai-report.pdf (accessed October 25, 2024).

Zawacki-Richter, O., Marín, V. I., Bond, M., and Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education—Where are the educators? Int. J. Educ. Technol. High. Educ. 16, 1–27. doi: 10.1186/s41239-019-0171-0

Keywords: generative AI, artificial intelligence, AI literacy, AI ethics, AI competency

Citation: Ayyoub AM, Khlaif ZN, Shamali M, Abu Eideh B, Assali A, Hattab MK, Barham KA and Bsharat TRK (2025) Advancing higher education with GenAI: factors influencing educator AI literacy. Front. Educ. 10:1530721. doi: 10.3389/feduc.2025.1530721

Received: 19 November 2024; Accepted: 14 April 2025;
Published: 12 May 2025.

Edited by:

Kostas Karpouzis, Panteion University, Greece

Reviewed by:

Celina P. Leão, University of Minho, Portugal
Vanessa Camilleri, University of Malta, Malta

Copyright © 2025 Ayyoub, Khlaif, Shamali, Abu Eideh, Assali, Hattab, Barham and Bsharat. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Zuheir N. Khlaif, zkhlaif@najah.edu

ORCID: Abedalkarim M. Ayyoub orcid.org/0000-0001-9111-4465
Zuheir N. Khlaif orcid.org/0000-0002-7354-7512
Alia Assali orcid.org/0000-0003-0370-871X
Muayad K. Hattab orcid.org/0000-0002-1096-1839
Kefah A. Barham orcid.org/0000-0002-8492-4723
Tahani R. K. Bsharat orcid.org/0000-0002-4029-4061
