
ORIGINAL RESEARCH article

Front. Educ., 27 January 2026

Sec. Higher Education

Volume 10 - 2025 | https://doi.org/10.3389/feduc.2025.1737928

Generative AI use and self-learning in higher education: the role of learning difficulties

  • 1College of Interdisciplinary Studies, Zayed University, Dubai, United Arab Emirates
  • 2College of Interdisciplinary Studies, Zayed University, Abu Dhabi, United Arab Emirates

The rapid development of generative AI (GenAI) tools and their adoption in education have shown promising potential to personalize learning experiences. However, their effectiveness is influenced by factors such as familiarity, frequency of use, and their impact on self-learning. This study investigates undergraduate students' familiarity with GenAI tools, their frequency of use, and the perceived impact of GenAI on self-learning, with particular consideration of differences between students with and without learning difficulties. Prompt engineering is also included as a secondary aspect of students' GenAI experience. The research employed a quantitative survey design, utilizing validated scales to measure familiarity, usage experience, and perceived learning impact. Reliability and validity of the measurement model were established using Confirmatory Composite Analysis in SmartPLS. The study involved undergraduate students (N = 78) enrolled in General Education (GenEd) courses, aged 17–21 (M = 18.5, SD = 0.86). Descriptive results showed that students reported low to moderate familiarity with GenAI tools yet frequently engaged with them for academic purposes. Despite limited formal training, most participants rated the impact of GenAI on their self-learning positively, particularly in terms of communication skills, time efficiency, and confidence. A smaller portion of students indicated negative impacts, reflecting concerns about over-reliance and reduced critical engagement. Structural modeling further demonstrated significant positive relationships among GenAI familiarity, frequency of use, and perceived impact. These findings point to a pattern of utility-driven adoption, in which students benefit from GenAI despite having a limited foundational understanding. The study underscores the importance of institutions strengthening AI literacy, providing structured pedagogical guidance, and integrating GenAI responsibly to support meaningful and ethical learning practices.

Introduction

The rapid advancement of Artificial Intelligence (AI) is reshaping educational practices, offering new possibilities for enhancing learning in higher education (Chen et al., 2023). As universities adapt to the evolving needs of a diverse and globally connected student population, AI-powered tools present a promising solution to some of the persistent challenges in traditional teaching methods (Popenici and Kerr, 2017). The ability to craft structured prompts, also known as prompt engineering, enables users to guide AI responses, ensuring tailored content and personalized feedback (Chen et al., 2023). This approach is particularly valuable in General Education (GenEd) courses, where students from diverse disciplines and backgrounds engage with a broad range of subjects. Although prompt engineering is acknowledged as one way students may interact with GenAI tools, the primary focus of this study is on students' overall experience with GenAI, including their familiarity, frequency of use, and perceived impact on self-learning. In this context, prompt engineering is treated as an optional component of GenAI use rather than a central instructional intervention. Instructors can craft effective prompts to help students navigate complex topics, refine their analytical skills, and develop self-directed learning habits. Unlike previous studies that emphasize only the adoption or frequency of GenAI tool use, this research advances the discussion by examining how GenAI can support metacognition and inclusivity, especially for students with learning difficulties, making AI-generated content more accessible, relevant, and engaging (Afshar et al., 2024). The UAE context further adds originality, as GenEd courses draw together linguistically and culturally diverse students. Investigating how students with learning difficulties engage differently with GenAI provides insights rarely captured in international literature.

One of the key strengths of AI-powered tools in GenEd courses is their ability to scale and adapt to students' individual needs. For example, AI-based writing assistants can provide instant feedback on assignments, helping students improve their academic writing and critical thinking skills (Tu et al., 2024; Vatsal and Dubey, 2024). Additionally, AI-generated interactive summaries and tailored explanations can enhance content comprehension, especially for students who may struggle with traditional learning formats (ElSayary, 2023, 2024). Beyond personalized learning, GenAI fosters both cognitive and metacognitive development, encouraging deeper engagement and reflective learning experiences (Afshar et al., 2024). Students from diverse linguistic and cultural backgrounds, who may face academic challenges or language barriers, particularly benefit from AI-driven adjustments in content presentation (Mello et al., 2023; Wang et al., 2023). The exploration of learning difficulties in the context of GenAI and prompt engineering adds a critical layer of understanding to how different groups of students can benefit from these tools (Hamdan, 2024).

However, while GenAI presents exciting opportunities, it also raises challenges, particularly in ensuring that AI-generated content is culturally sensitive, language-inclusive, and used ethically and effectively. The UAE's higher education system is characterized by a diverse student population, with learners from various linguistic and cultural backgrounds, which presents both opportunities and challenges in ensuring equitable learning experiences. Furthermore, a common misuse of AI tools occurs when students rely on the system to provide direct answers rather than using it as an aid for deeper learning (Chen et al., 2023). Instead of utilizing AI to assist with problem-solving and enhancing their understanding, students often treat the AI-generated output as the final product, bypassing critical thinking and engagement with the material. This misuse is often linked to a limited understanding of how to interact effectively with GenAI. As discussed in the literature (Afshar et al., 2024; Chen et al., 2023; Hidayat et al., 2022), students can struggle to craft meaningful prompts because they do not fully understand how to interact with GenAI, resulting in suboptimal use of AI-driven tools. This issue is especially prevalent among students who experience learning difficulties, as they often require more personalized and adaptive learning experiences. Without a grasp of how AI functions and how to use it effectively as an assistive tool, students may experience limited engagement and a missed opportunity for deeper learning. Another critical consideration is the ethical implications of integrating GenAI into education (Wang et al., 2023). Institutions must establish guidelines to protect student data and ensure transparency in how AI systems are used (Wang et al., 2023). To fully realize the potential of GenAI in GenEd courses, higher education institutions in the UAE must develop a comprehensive strategy that includes faculty training, student engagement, consideration of students' learning difficulties, and the implementation of robust ethical safeguards.

Although a growing body of literature explores the general use of GenAI in higher education (Almassaad et al., 2024; Chang et al., 2023), a notable gap exists in understanding how these tools are adopted and perceived by students with learning difficulties compared to their peers. Much of the current research either treats the student population as a monolith or focuses on the potential of AI as an assistive technology without empirically investigating its actual use and impact across different student groups. This study contributes to the field by directly addressing this gap. It provides a comparative analysis of GenAI familiarity, frequency of use, and perceived impact on self-learning between students with and without learning difficulties in the diverse higher education context of the UAE. By doing so, this research offers crucial insights for educators, policymakers, and instructional designers on how to foster more equitable and effective AI-integrated learning environments that cater to the needs of all students. These gaps in skills and understanding represent significant barriers to realizing the full potential of GenAI in higher education. Therefore, this study aims to investigate undergraduate students' familiarity with GenAI tools, their frequency of use, and the perceived impact of GenAI on self-learning, with particular consideration of differences between students with and without learning difficulties. Prompt engineering is included as a secondary aspect of students' GenAI experience rather than as a core feature of the instructional design. Accordingly, the study was guided by the overarching question of how learning difficulty impacts undergraduate students' familiarity with GenAI tools, their frequency of use, and the impact of GenAI on self-learning, addressed through four research questions:

1. How familiar are undergraduate students with Generative AI tools, and how frequently do they use them for self-learning?

2. How do students perceive the impact of Generative AI on their self-learning?

3. Are there significant differences in GenAI familiarity, frequency of use, or perceived self-learning impact between students with and without learning difficulties?

4. Does learning difficulty moderate the relationship between GenAI familiarity (FE) or frequency of use (FUE) and perceived impact?

Context of the study

The United Arab Emirates (UAE), with its commitment to technological innovation and transformative education, serves as an intriguing case study for exploring the potential of AI (UAE Ministry of State for Artificial Intelligence, 2022) in General Education (GenEd) courses. These courses, which form the foundation of higher education and cover a wide range of disciplines, stand to benefit from the personalized support and instant feedback provided by AI-driven tools. In the GenEd courses, students engage in a flipped classroom approach where they prepare for class independently. After their self-preparation, students use AI tools to assess their understanding of statistical concepts, clarify misconceptions, and practice problem-solving.

The use of AI provides personalized feedback, helping students solidify their grasp of key ideas before coming to class for deeper discussions and collaborative activities. This approach aims to enhance self-directed learning and critical thinking while utilizing AI to support students' mastery of statistics. This research project aims to understand how learning difficulty affects students' familiarity with GenAI tools (including creating effective prompts), their frequency of use, and the impact of GenAI on self-learning. In face-to-face lab sessions, an innovative method was introduced in which students learn the art of "prompt engineering." This technique involves crafting prompts that effectively guide students in exploring and understanding new concepts, thereby preparing them for their upcoming online classes.

The key objective is to encourage students to engage in self-study, especially for complex problems and ideas. This, in turn, is expected to positively impact their participation and performance in online sessions, as well as in course assignments and projects.

The UAE's focus on becoming a leader in education technology aligns with the potential uses of prompt engineering in improving learning outcomes across these core courses. This focus is in direct alignment with the UAE National Agenda, particularly its goals related to Education and Innovation (United Arab Emirates Cabinet, 2023a). The UAE seeks to create a world-class education system by fostering advanced digital learning environments (United Arab Emirates Cabinet, 2023b) and incorporating AI into educational practices. Furthermore, the research is in line with Sustainable Development Goal 4, Quality Education, part of the 2030 Agenda for Sustainable Development. This goal aims to ensure inclusive and equitable quality education and promote lifelong learning opportunities for all (UAE National Committee on SDGs, 2017).

Literature

Conceptual framework

To address the complex interactions between students and generative AI, this study is grounded in a conceptual framework that integrates AI literacy, prompt engineering, and self-learning. This section defines these core concepts and clarifies their operationalization within the research. AI literacy is broadly understood as the set of competencies that enable individuals to effectively and critically understand, use, and evaluate AI technologies (Ng et al., 2021). Rather than a monolithic skill, it encompasses a range of abilities. Ng et al. (2021) proposed a comprehensive framework for AI literacy that includes four key dimensions: (1) knowing and understanding AI concepts and technologies; (2) using and applying AI tools in various contexts; (3) evaluating AI systems and creating with them; and (4) understanding and navigating the ethical issues surrounding AI. In this study, AI literacy is operationalized primarily through the first two dimensions, measured by students' self-reported familiarity with GenAI tools and their frequency of use for academic tasks. This approach allows for an assessment of students' foundational engagement with AI in their educational environment. Within the broader scope of AI use, prompt engineering has emerged as a critical skill for effectively interacting with generative AI models. It can be defined as the “steering mechanism by which users of GenAI craft their prompts to generate more desirable outcomes” (Lee and Palmer, 2025, p. 7). This process is not merely about asking a question but involves strategically designing and refining inputs to guide the GenAI toward producing outputs that are accurate, relevant, and applicable to the user's specific needs. Although prompt engineering is a teachable skill with significant potential to enhance learning, its integration into student practice is still in its early stages of development. 
Therefore, in this study, prompt engineering is operationalized as a secondary component of GenAI use. It is not treated as a systematically manipulated variable but is instead measured through specific survey items that gauge students' awareness and basic application of crafting prompts. The ultimate goal of integrating educational technologies like GenAI is to enhance students' capacity for self-learning. The concept is closely aligned with self-directed learning (SDL), which “entails individuals taking initiative and responsibility for their own learning” (Loeng, 2020, p. 2). In an SDL model, learners are empowered to set their own goals, identify resources, and evaluate their progress, which fosters greater autonomy and lifelong learning habits (Charokar and Dulloo, 2022). In the context of this research, self-learning is operationalized as students' perceived impact of GenAI on their learning processes. This is measured through survey questions that ask about improvements in communication skills, time efficiency, and academic performance resulting from the use of GenAI.

GenAI to support self-regulated learning

The integration of GenAI tools in higher education offers significant opportunities to support Self-Regulated Learning (SRL), a framework where learners autonomously manage their educational paths (Lee et al., 2024). SRL involves cycles of forethought (setting goals), performance (applying strategies), and self-reflection (evaluating outcomes), and GenAI can play a role in each phase. Effective interaction with GenAI fosters deeper student engagement by encouraging them to actively participate in the learning process, think critically, and apply their knowledge to various scenarios (Tu et al., 2024). The impact of AI tools on self-learning is a critical area of focus, with studies showing positive outcomes in areas such as communication skills enhancement, time efficiency, and academic performance. Students reported improvements in communication skills and greater efficiency in concept clarification when using GenAI for learning purposes (Tu et al., 2024). This aligns with the broader literature that highlights the effectiveness of AI-driven tools in facilitating interactive learning experiences where students receive real-time, personalized feedback (Ayouni et al., 2021). Furthermore, the ability to interact effectively with GenAI enhances self-directed learning by fostering autonomy, motivation, and competence (Gervacio, 2024), which can lead to increased academic success as students become more prepared and engaged in their learning process (Mollick and Mollick, 2023). Another study in Saudi Arabia found that students who frequently use GenAI tools report improved academic performance, with a weak to moderate positive correlation between AI tool usage and improved grades (Wiredu et al., 2024). A study highlighted that students who use GenAI tools for tasks such as defining concepts, generating ideas, and summarizing academic literature tend to develop better self-directed learning skills (Almassaad et al., 2024). 
However, while the results indicate positive academic impacts, some students noted negative effects on their academic performance, suggesting that the effectiveness of GenAI tools can vary depending on how students use them (Afshar et al., 2024). Another study in Ghana found that while 72% of students reported an improved understanding of course material through the use of GenAI, 75% cited academic integrity as a primary concern (Wiredu et al., 2024). Additionally, overreliance on GenAI tools can impede critical thinking and problem-solving skills, as noted in a study conducted in Saudi Arabia (Alnaim, 2024). These discrepancies highlight the importance of providing students with sufficient training and support in using these tools, as improper use can result in suboptimal outcomes, and underscore the need to scaffold GenAI use within a structured learning process so that it supports, rather than replaces, students' cognitive engagement.

Factors influencing GenAI adoption: familiarity and frequency of use

The effective use of GenAI for self-learning is contingent on students' familiarity with these tools and their frequency of use. Familiarity with GenAI tools and prompt engineering is crucial for their effective use in education. Research indicates that while many students are aware of GenAI tools, their comfort and confidence in using these tools vary. For instance, a study conducted in Saudi Arabia found that 78.7% of students frequently use GenAI tools, with ChatGPT being the most widely used (Almassaad et al., 2024). On the other hand, the results of a study by Mello et al. (2023) highlight that familiarity with these tools is low, with many students having little or no experience with GenAI. This lack of familiarity can be a significant barrier to the adoption of GenAI in higher education (Afshar et al., 2024; Lee et al., 2024). As indicated in the literature, effective interaction with GenAI is not only about designing effective prompts but also about understanding how AI systems process information and how users can manipulate prompts to get meaningful outputs (Vatsal and Dubey, 2024). Indeed, the quality of user interaction has been shown to directly impact the depth and accuracy of AI-generated responses (Rodriguez-Donaire, 2024), reinforcing the need for more than just surface-level familiarity. Learning difficulties play a role in shaping familiarity and students' perceptions of GenAI tools. For example, students who struggle with traditional learning methods may find GenAI tools more accessible and engaging, leading to higher familiarity and comfort. Conversely, students who prefer structured, human-led instruction may feel less comfortable with the autonomous nature of GenAI tools (Hamdan, 2024; Neji et al., 2023). This dichotomy highlights the complex relationship between learning difficulties and familiarity with GenAI tools.

The frequency of GenAI tool usage among undergraduate students is also influenced by various factors, including learning difficulties. The results of a study by Chang et al. (2023) show varying usage patterns among participants, with some using GenAI frequently while others use it less often, which can directly impact their learning outcomes. AI tools, when used frequently and effectively, have been shown to enhance students' understanding of complex concepts and support self-regulated learning (Afshar et al., 2024). Students who face challenges in traditional learning environments often turn to GenAI tools as a supplementary resource. For instance, a study in the Philippines found that students primarily use GenAI tools for homework, idea generation, and research, with 72% reporting an improved understanding of course material (Miranda et al., 2024; Wiredu et al., 2024). However, the same study noted that socioeconomic factors, such as financial constraints and limited access to paid AI tools, can hinder frequent use, particularly among students from lower-income backgrounds. Additionally, students' self-perceived experience with GenAI is a contributing factor to their frequency of use. Those who have higher self-perceived experience are more likely to use GenAI for educational purposes, while students with limited experience or challenges using the tool tend to underutilize its capabilities (Heston, 2023).

Learning difficulty and AI integration in education

A central aspect of this study is understanding how students with learning difficulties engage with GenAI, as they represent a group that could uniquely benefit from adaptive technologies. Learning difficulty has long been recognized as a barrier to academic success, influencing how students interact with educational tools, including AI technologies. Learning difficulties, such as dyslexia or ADHD, can affect students' ability to process information efficiently, making it challenging for them to fully benefit from traditional teaching methods (Hamdan, 2024). For example, a study focusing on students with learning difficulties found that while these students appreciate the assistive capabilities of AI tools, they are also cautious about overreliance and the potential for academic dishonesty (Korea and Alexopoulos, 2024). This caution may result in less frequent use compared to their peers without learning difficulties. Furthermore, the learning curve associated with using GenAI effectively can be a major concern, particularly for students with learning difficulties (Afshar et al., 2024). This highlights the need for targeted interventions to help students build proficiency with GenAI tools and integrate them more consistently into their learning practices (Chen et al., 2023).

However, the integration of AI-driven tools offers new opportunities to support these students by providing personalized, adaptive learning experiences (Wang et al., 2023). Learning difficulty, in this study, refers to the challenges some students face in processing information, organizing their thoughts, or using cognitive strategies effectively. These difficulties can significantly influence how students interact with AI tools in educational settings (Hamdan, 2024). AI can cater to individual learning needs by tailoring content to the student's pace and level, thus reducing the cognitive load and providing additional scaffolding (Huang et al., 2022). Despite the promising potential, learning difficulties may still influence the extent to which students can effectively use these tools, making it essential to understand how learning difficulties impact familiarity, frequency of use, and the perceived effect of AI tools like GenAI on self-learning.

Although the integration of GenAI offers substantial opportunities for improving self-learning, significant challenges remain. The effectiveness of these tools can be undermined if students fail to understand how to interact with them properly or misuse them as a shortcut for learning (Popenici and Kerr, 2017). To fully realize the potential of GenAI, higher education institutions must invest in training and support systems for both students and educators. Furthermore, ensuring that AI tools are inclusive and free from bias is essential to promote equitable learning outcomes across diverse student populations, especially in multicultural contexts like the UAE (Mello et al., 2023). Institutions must also address ethical concerns related to data privacy, algorithmic biases, and equitable access to technology to ensure that all students can benefit from AI-enhanced learning opportunities (Wang et al., 2023).

Methodology

The study focuses on students' GenAI familiarity, frequency of use, and perceived impact on self-learning. Prompt engineering was treated only as a secondary experiential component rather than as a structured intervention.

Participants

The intended initial sample was 151 undergraduate students enrolled in the General Education department at a federal university in the UAE. From this pool, a random sample of 78 participants aged 17–21 years (M = 18.5, SD = 0.86) was drawn from three different courses. A comprehensive demographic breakdown of the sample is presented in Table 1. Learning difficulties were defined as students' self-reported diagnosed or perceived challenges affecting reading, writing, attention, processing speed, or comprehension, aligned with institutional accommodation categories in UAE higher education. These demographic attributes suggest the findings are primarily applicable to young, local, and predominantly female undergraduate learners in the UAE. The high proportion of local participants (96.2%) and the gender imbalance (82.1% female) may reduce the generalizability of the results to more diverse or international student populations. Similarly, the relatively low percentage of students reporting learning difficulties (21.79%) might not reflect more inclusive or neurodiverse educational environments. Consequently, while the study provides valuable insights into the use of GenAI in this context, its external validity for broader, heterogeneous academic settings may be limited.


Table 1. Demographic breakdown of participants.

Instrument

A web survey was developed and administered to the students. It began with closed-ended, multiple-choice demographic questions covering age, gender, nationality, learning difficulties, and learning approach. The second part of the survey was organized according to the framework of the study and included Familiarity with using GenAI (6 items), Frequency of using GenAI (3 items), and Impact of using GenAI on self-learning (10 items), each measured on a 5-point Likert scale. The survey was sent to two experts for content validity to check the appropriateness of the items and their alignment with the purpose of the study. The experts recommended simplifying the language for students' understanding; the revisions were made and returned to the experts for confirmation. Cronbach's alpha was computed to assess internal consistency among the survey items. The reliability coefficients were as follows: familiarity (α = 0.832), frequency of use (α = 0.699), and impact on self-learning (α = 0.899), indicating acceptable to high reliability and suitability for the study.

Procedure

Participants received an informed consent form prior to the study, along with a comprehensive explanation of its purpose. Students were informed that their participation was voluntary and that they were free to withdraw if they did not wish to continue. Students enrolled in the GenEd courses were provided with instructions about the use of GenAI for self-learning purposes. At the end of the semester, the survey was sent to students to gather feedback on their experiences. Quantitative data were collected and analyzed using descriptive statistics (frequency, mean, and standard deviation) and inferential statistics (one-way ANOVA and hierarchical regression) with the Statistical Package for the Social Sciences (SPSS).

Self-reported surveys were chosen as the primary instrument because the study focused on capturing students' perceptions, familiarity, and self-reported use of GenAI, dimensions that cannot be adequately measured through performance data alone. This approach aligns with prior research on educational technology adoption, which suggests that perceptions and attitudes are central to understanding usage patterns.

Descriptive statistics were run first, followed by correlation analysis to examine the relationships between the study variables (familiarity, frequency of use, and impact on self-learning) and potential confounding variables (learning approach and learning difficulty). Unexpectedly, learning difficulty was significantly related to the study variables, whereas learning approach was not; accordingly, the researchers included learning difficulty in the main analysis. Effect sizes (Cohen's d for t-tests, η2 for ANOVA, and standardized β for regression) and 95% confidence intervals were reported to strengthen statistical interpretation. Assumptions relevant to parametric analyses (e.g., homogeneity of variances and absence of multicollinearity) were examined through Levene's test and tolerance values, respectively, and no substantial violations were detected.
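The inferential pipeline described above (Levene's test, one-way ANOVA, and effect sizes) can be sketched as follows. The group sizes and scores are hypothetical, chosen only to mirror the study's two-group comparison; this is an illustration of the method, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical perceived-impact scores (1-5 Likert) for two self-reported groups
with_ld = rng.normal(3.4, 0.6, 17).clip(1, 5)     # students reporting learning difficulties
without_ld = rng.normal(3.9, 0.6, 61).clip(1, 5)  # students without

# Levene's test for homogeneity of variances (parametric assumption check)
lev_stat, lev_p = stats.levene(with_ld, without_ld)

# One-way ANOVA comparing the two groups
f_stat, p_val = stats.f_oneway(with_ld, without_ld)

# Cohen's d using the pooled standard deviation
n1, n2 = len(with_ld), len(without_ld)
pooled_sd = np.sqrt(((n1 - 1) * with_ld.var(ddof=1)
                     + (n2 - 1) * without_ld.var(ddof=1)) / (n1 + n2 - 2))
d = (without_ld.mean() - with_ld.mean()) / pooled_sd

# Eta-squared from F (two groups, so df_between = 1)
eta_sq = f_stat / (f_stat + n1 + n2 - 2)
```

Confidence intervals for the group means and mean difference would complete the reporting described above.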

Results

Reliability and validity

To validate the measurement model, a Confirmatory Composite Analysis (CCA) was conducted using SmartPLS, following the recommended criteria for PLS-SEM (Hair et al., 2021). Three constructs were assessed: Familiarity with GenAI (FE), Frequency/Use Experience (FUE), and Perceived Impact (P). All constructs met or exceeded conventional reliability and validity thresholds. As shown in Table 2, internal consistency reliability was satisfactory across all scales. Cronbach's alpha ranged from 0.707 to 0.978, and composite reliability (ρc) exceeded the recommended cut-off of 0.70 for all constructs (ρc = 0.807–0.977). These results indicate strong internal homogeneity among items. Convergent validity was also supported. Average Variance Extracted (AVE) values ranged from 0.577 to 0.875, surpassing the 0.50 threshold and demonstrating that each construct explains more than half of the variance in its indicators. The FE construct exhibited particularly strong convergence (AVE = 0.875), while P and FUE both demonstrated acceptable levels (AVE = 0.577 and 0.589, respectively). Although the Perceived Impact construct included one negatively framed indicator (P9), which demonstrated a low negative loading, this directionality is theoretically consistent because the item captures the opposing end of the impact spectrum. Reverse-worded items frequently exhibit weaker loadings due to cognitive processing differences among respondents; however, retaining P9 ensures full representation of both positive and negative academic effects, thereby preserving the construct's content validity. Importantly, overall construct reliability remained strong (α = 0.901; ρc = 0.918), and AVE exceeded recommended thresholds, indicating that the inclusion of P9 does not compromise the psychometric adequacy of the construct. Discriminant validity was assessed using the inter-construct correlation matrix and HTMT values. 
Inter-construct correlations remained below 0.40, suggesting that the constructs are conceptually distinct. This supports the discriminant validity of the measurement model and justifies their use as separate latent variables in the structural model. Finally, model fit was evaluated using SRMR, d_ULS, and d_G indices. The SRMR values for the saturated (0.076) and estimated model (0.099) fall within acceptable limits for PLS-SEM (< 0.10), indicating adequate model fit for the measurement specification. These results collectively confirm that the measurement model demonstrates satisfactory psychometric quality and is suitable for subsequent structural analysis.
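The reliability and convergent-validity indices reported above (Cronbach's alpha, composite reliability ρc, and AVE) follow standard formulas. The following is an illustrative pure-Python sketch, not the SmartPLS implementation; it computes alpha from raw item scores and ρc/AVE from standardized loadings:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha; `items` is a list of equal-length item-score lists."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(pvariance(i) for i in items) / pvariance(totals))

def ave(loadings):
    """Average Variance Extracted: mean of squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """Composite reliability (rho_c), assuming uncorrelated error terms."""
    s = sum(loadings)
    return s ** 2 / (s ** 2 + sum(1 - l ** 2 for l in loadings))
```

The 0.70 cut-off for alpha and ρc, and the 0.50 cut-off for AVE, correspond to the thresholds cited in the text.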


Table 2. Measurement model assessment (PLS-SEM/CFA equivalent).

Exploratory Factor Analysis (EFA) was conducted to assess the underlying structure of the items measuring GenAI familiarity, experience, and perceived impact, and to establish the psychometric adequacy of the instrument. Principal Axis Factoring with Oblimin rotation was used. Sampling adequacy was confirmed (KMO = 0.856), and Bartlett's Test of Sphericity was significant, χ2(78) = 596.54, p < 0.001, indicating suitability for factor extraction. Three factors emerged with eigenvalues >1, together explaining 70.27% of the total variance: Factor 1 accounted for 48.24%, Factor 2 for 13.16%, and Factor 3 for 8.87%. The pattern matrix showed clear loadings for: (1) Perceived Learning Impact (0.694–0.896), (2) GenAI Familiarity/Experience (0.621–0.723), and (3) Negative Academic Impact (0.413). Communalities ranged from 0.478 to 0.768 for all items except the negatively framed item (0.187). The weaker loading of this item in EFA is consistent with its negative framing and supports the decision to retain it on theoretical grounds rather than purely statistical criteria. Residuals indicated acceptable fit, with only 25% exceeding |0.05|. These results support the construct validity of the measurement items used in the main analysis.
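As an illustration of the EFA diagnostics above, Bartlett's sphericity statistic (computed from the determinant of the correlation matrix) and the per-factor percentage of variance explained follow simple closed forms. The helper names below are illustrative sketches, not part of the authors' analysis:

```python
import math

def bartlett_df(p):
    """Degrees of freedom for Bartlett's test of sphericity: p(p - 1)/2."""
    return p * (p - 1) // 2

def bartlett_sphericity(corr_det, n, p):
    """Bartlett's chi-square statistic from the determinant of the
    correlation matrix, sample size n, and number of variables p."""
    return -((n - 1) - (2 * p + 5) / 6) * math.log(corr_det)

def variance_explained(eigenvalues, n_vars):
    """Percent of total variance per factor (total variance = n_vars)."""
    return [100 * ev / n_vars for ev in eigenvalues]
```

The reported χ2(78) is consistent with a 13-item instrument, since bartlett_df(13) = 78; an identity correlation matrix (determinant 1) would yield a statistic of 0, i.e., no shared variance to factor.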

Descriptive statistics

In order to address RQ1 (How familiar are undergraduate students with Generative AI tools, and how frequently do they use them for self-learning?), descriptive statistics and correlation analysis were conducted. Table 3 presents the frequency and percentage distributions for all familiarity, usage, and perceived impact indicators.


Table 3. The frequency (N) and percentage distributions (%) of GenAI familiarity, frequency of use, and perceived impact indicators.

Regarding the familiarity with GenAI, the distribution of responses was fairly balanced across low, medium, and high categories. For example, previous GenAI use was reported as 25.6% low, 34.6% medium, and 39.7% high, indicating that most students have at least some prior exposure. Interest in exploring GenAI was also high, with 74.4% of students (medium + high) expressing interest. In contrast, formal training familiarity remained modest, with only 35.9% reporting high familiarity, suggesting that structured training opportunities may still be limited.

In terms of overall frequency of GenAI use, 37.2% of participants reported rare use, 32.1% moderate use, and 30.8% frequent use. Responses regarding GenAI use for educational purposes varied: 14.1% rarely used it in their studies, 33.3% used it moderately, and 52.6% used it frequently. Experience with prompt engineering also varied: 28.2% of participants reported little experience with prompt engineering, 26.9% were moderately experienced, and 44.9% used it frequently.

In order to address RQ2 (How do students perceive the impact of Generative AI on their self-learning?), frequency and percentages were measured as shown in Table 3. Students generally perceived GenAI to have a positive impact on their learning. For example, 71.8% agreed or strongly agreed that GenAI improved their time efficiency in concept clarification. Communication skills enhancement was also viewed positively, with 47.5% agreeing or strongly agreeing, although 41.0% remained neutral. Confidence building showed similarly favorable trends, with 64.1% agreeing or strongly agreeing. Despite these positive perceptions, a portion of the sample expressed concerns: 51.3% responded with neutral or disagree categories regarding negative academic impacts, indicating mixed views about potential drawbacks.

One-way ANOVA

RQ3 asked whether GenAI familiarity, frequency of use, or perceived self-learning impact differ between students with and without learning difficulties. Accordingly, a one-way ANOVA was conducted to investigate the effect of learning difficulty on frequency of use. The analysis revealed a statistically significant difference between groups [F(2, 75) = 3.810, p = 0.027], indicating that frequency of GenAI use varies by learning difficulty status, with at least one group differing from the others. There were no significant differences by learning difficulty status in perceived impact on self-learning [F(2, 75) = 0.886, p = 0.416, η2 = 0.023] or in familiarity [F(2, 75) = 1.40, p = 0.253, η2 = 0.036], suggesting that learning difficulty affects neither students' perceived impact nor their familiarity with GenAI.
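The one-way ANOVA reported here rests on the standard between/within sum-of-squares decomposition. A minimal pure-Python sketch (illustrative only; the study used standard statistical software) is:

```python
from statistics import mean

def one_way_anova(groups):
    """One-way ANOVA: returns (F, df_between, df_within, eta_squared)."""
    all_scores = [x for g in groups for x in g]
    grand = mean(all_scores)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within, ss_between / (ss_between + ss_within)
```

With three learning-difficulty categories and N = 78, the degrees of freedom are (2, 75), matching the values reported above.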

Hierarchical regression analysis

A hierarchical multiple regression was conducted to address RQ4 (Does learning difficulty moderate the relationship between FE/FUE and perceived impact?). Specifically, it examined whether GenAI familiarity (FE) and frequency of use (FUE) predict perceived learning impact (P) beyond the effect of learning difficulties. In Block 1, learning difficulty status was entered as the sole predictor. In Block 2, FE and FUE were added to assess their incremental explanatory power. Results showed that learning difficulty status did not significantly predict perceived impact, F(1, 76) = 1.36, p = 0.248, explaining only 1.8% of the variance (R2 = 0.018). The coefficient for learning difficulties was non-significant (β = 0.132, p = 0.248), indicating that, on its own, LD does not meaningfully differentiate students in terms of the perceived impact of GenAI on learning. When FE and FUE were added in Block 2, the model's explanatory power increased to 9.3% (R2 = 0.093), with the change in R2 approaching statistical significance, ΔR2 = 0.075, F(2, 74) = 2.52, p = 0.064. Within this expanded model, frequency of use (FUE) emerged as a significant positive predictor (β = 0.272, p = 0.024): students who used GenAI more frequently reported higher perceived learning impact. Neither learning difficulty status (β = 0.054, p = 0.641) nor familiarity (FE) (β = 0.053, p = 0.642) contributed significantly after FUE was added, suggesting that usage intensity, rather than perceived familiarity or LD status, is most closely associated with perceived benefits. These findings indicate that while learning difficulties do not independently predict perceived impact, the frequency of GenAI engagement plays a meaningful role in shaping students' learning perceptions (see Table 4).
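The significance of the Block 2 improvement is assessed with an F-change test on ΔR2. A minimal sketch of the standard formula follows (the test values below are hypothetical, not the study's data, since the reported R2 figures are rounded):

```python
def f_change(r2_reduced, r2_full, n, k_reduced, k_full):
    """F statistic for the increment in R^2 from a reduced model with
    k_reduced predictors to a full model with k_full predictors (n cases)."""
    numerator = (r2_full - r2_reduced) / (k_full - k_reduced)
    denominator = (1 - r2_full) / (n - k_full - 1)
    return numerator / denominator
```

The statistic is compared against an F distribution with (k_full - k_reduced, n - k_full - 1) degrees of freedom, here (2, 74).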


Table 4. Hierarchical regression predicting perceived learning impact.

Moderation analysis

A hierarchical regression analysis was conducted to examine whether learning difficulties moderated the relationships between (a) familiarity with generative AI (FE), (b) frequency of use (FUE), and students' perceived academic impact (P). All continuous predictors were mean-centered prior to analysis, and interaction terms (FE × LD and FUE × LD) were entered in a second block. Model 1 (main effects) explained 9.3% of the variance in perceived impact (R2 = 0.093, Adj. R2 = 0.056, p = 0.064). Among the predictors, frequency of use (FUE) was the only significant positive predictor (β = 0.272, p = 0.024), indicating that students who used GenAI tools more frequently reported higher perceived academic benefits. Neither familiarity (FE) nor learning difficulties were significant predictors. Model 2 (moderation model) added the two interaction terms and accounted for 10.0% of the variance (R2 = 0.100, Adj. R2 = 0.038). However, this increase was not statistically significant (ΔR2 = 0.007, p = 0.170). Neither the FE × LD interaction (β = 0.115, p = 0.677) nor the FUE × LD interaction (β = −0.183, p = 0.559) reached significance. The findings indicate that learning difficulties do not moderate the relationship between students' familiarity or frequency of GenAI use and their perceived academic impact. The results suggest that frequency of use remains the most consistent predictor of perceived benefit, independent of learning difficulty status (see Table 5).
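Mean-centering predictors before forming product terms, as described above, reduces nonessential collinearity between main effects and interaction terms. A minimal sketch (with illustrative helper names) of how such terms are constructed is:

```python
from statistics import mean

def center(values):
    """Mean-center a continuous predictor."""
    m = mean(values)
    return [v - m for v in values]

def interaction_term(predictor, moderator):
    """Product term between a mean-centered predictor and a moderator code
    (e.g., LD status coded 0/1)."""
    return [x * z for x, z in zip(center(predictor), moderator)]
```

The centered FE and FUE scores and their products with LD would then be entered as the Block 2 predictors in the moderation model.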


Table 5. Hierarchical moderation regression predicting perceived impact (P).

Sensitivity analysis

To evaluate the robustness of findings involving learning difficulty status, a sensitivity analysis was conducted by excluding participants who selected “prefer not to answer” (n = 7). The analyses were re-run using the remaining sample (n = 71). The results revealed no substantive changes in the direction, magnitude, or significance of relationships across the correlation, ANOVA, and regression models. Specifically, the association between learning difficulties and frequency of GenAI use remained significant, while learning difficulties continued to show no significant relationship with familiarity or perceived learning impact. Regression models similarly demonstrated that frequency of use remained the strongest predictor of perceived learning impact, and no moderation effects emerged. These consistent patterns suggest that the inclusion or exclusion of the “prefer not to answer” group does not significantly impact the study's conclusions, thereby strengthening confidence in the validity of the findings.

Discussion

This study investigated undergraduate students' familiarity with, frequency of use of, and perceived impact of Generative AI (GenAI) on self-learning, with a particular focus on the differences between students with and without learning difficulties. The findings provide a complex view of how students are engaging with these emerging technologies, revealing patterns of adoption, significant relationships between key variables, and crucial differences across student groups.

Familiarity and frequency of GenAI use

The findings of RQ1 (How familiar are undergraduate students with Generative AI tools, and how frequently do they use them for self-learning?) reveal notable patterns in student engagement with GenAI. The descriptive results showed that familiarity with GenAI tools was generally low, with most items falling within the low-to-medium range, reflecting limited prior training and challenges in effectively using prompt engineering. Conversely, the frequency of GenAI use for academic purposes was noticeably higher, with a substantial proportion of students reporting moderate to frequent use. This contrast, low familiarity but moderate-to-high use, indicates that students rely heavily on GenAI tools despite not having strong foundational knowledge of how they operate. This discrepancy suggests a pattern of “utility-driven adoption,” where students engage with GenAI not because of a deep, formal understanding but because of its immediate practical benefits for completing academic tasks. This behavior can be understood through the lens of Self-Determination Theory (SDT), which posits that individuals are motivated by the needs for competence, autonomy, and relatedness (Xia et al., 2022). In this context, students are satisfying their need for competence not by mastering the technology, but by efficiently completing assignments and achieving academic goals. The tool becomes a means to an end, which explains the motivation behind this utility-driven approach even in the absence of formal training. This mode of adoption, driven by perceived usefulness rather than structured pedagogical integration, aligns with the broader literature that identifies a persistent AI literacy gap in higher education (Mello et al., 2023; Rathod, 2024).
Students are learning by doing, engaging in a form of informal, just-in-time learning that is highly effective for task completion but insufficient for developing a deep conceptual understanding of the technology itself. However, this informal, utility-driven approach carries inherent ethical risks. When students use third-party GenAI tools without a full understanding of their operation, they are exposed to significant data privacy concerns, as their inputs can be used for model training without transparent consent (García-López and Trujillo-Liñán, 2025). Furthermore, this pattern of use makes them vulnerable to the algorithmic biases embedded in these systems, which can perpetuate stereotypes and inequities, particularly for students from non-dominant backgrounds (García-López and Trujillo-Liñán, 2025).

Perceived impact of GenAI on self-learning

This section discusses the results for RQ2 (How do students perceive the impact of Generative AI on their self-learning?). Despite low familiarity, the majority of students perceived the impact of GenAI on their self-learning as positive. The data indicate that students primarily value GenAI for its ability to enhance time efficiency in concept clarification, boost confidence, and improve the overall quality of their academic work. This aligns with models of Self-Regulated Learning (SRL), which involve phases of forethought, performance, and self-reflection (Chiu, 2024). GenAI appears to be a powerful scaffold, particularly in the forethought phase (e.g., by helping to plan and organize ideas) and the performance phase (e.g., by providing instant feedback and clarifying concepts on demand). The tool effectively reduces the activation energy required to start and sustain academic tasks (Afshar et al., 2024). However, the perception of benefit is not without its drawbacks, as a notable portion of students also reported negative academic impacts, such as over-reliance or reduced critical engagement. This duality highlights a critical tension in GenAI use: the line between using the tool as a supportive scaffold and using it as a cognitive crutch that undermines deep learning (Tu et al., 2024). This risk directly affects student agency, as over-reliance on GenAI can lead to passive consumption of information rather than active knowledge construction (Popenici and Kerr, 2017). The danger, as noted by Darvishi et al. (2024), is that students may begin to prioritize the efficiency of getting an answer over the process of learning itself. This is particularly detrimental to the self-reflection phase of SRL, as students may fail to critically evaluate the AI's output or their own understanding, thereby hindering the development of metacognitive skills.

Differential engagement for students with learning difficulties

In responding to RQ3 (Are there significant differences in GenAI familiarity, frequency of use, or perceived self-learning impact between students with and without learning difficulties?), the most notable finding of this study emerged from the comparison between students with and without learning difficulties. The results revealed that students with learning difficulties reported a significantly higher frequency of GenAI use, suggesting that these students may be utilizing GenAI as a compensatory tool to navigate academic challenges. This can be interpreted through the Social Model of Disability, which holds that disability arises from the interaction between an individual's impairment and an unaccommodating environment. In this view, GenAI becomes a tool for students to bridge environmental barriers in the educational system, such as a lack of personalized support or inflexible assignment formats (Ahmed et al., 2025). GenAI provides personalized, on-demand support for executive functions, such as organizing ideas, clarifying complex texts, or overcoming writer's block, which can be particularly challenging for this student population (Hamdan, 2024). Notably, this higher frequency of use did not translate into significantly different levels of familiarity or perceived impact. Two factors may explain this result. First, the learning curve for using GenAI effectively may be steeper for students with learning difficulties, who often manage a higher intrinsic cognitive load; they may therefore require more intensive use to achieve the same level of benefit as their peers (Afshar et al., 2024). Second, their engagement may be more instrumental and task-oriented, focused on immediate problem-solving rather than a broader exploration of the tool's capabilities, thus limiting gains in overall familiarity (Mladenov, 2025).
This finding is critical, as it suggests that simply providing access to GenAI is insufficient to ensure equity; targeted support and training are also necessary.

The universal benefit of engagement

In discussing the results of RQ4 (Does learning difficulty moderate the relationship between FE/FUE and perceived impact?), the moderation analysis provided some of the most insightful findings of this study. The results indicate that learning difficulty does not statistically moderate the relationship between either familiarity or frequency of use and perceived academic impact. In practical terms, the benefits gained from frequently using GenAI are consistent for students both with and without learning difficulties. Neither interaction term was significant, demonstrating that the strength of this relationship does not change based on a student's learning difficulty status.

This non-significant finding should not be interpreted as a null result, but rather as a powerful indicator of GenAI's potential as a universally beneficial educational tool. This can be understood through the lens of Universal Design for Learning (UDL), which posits that tools and pedagogical approaches designed to support learners with specific needs often benefit all learners (Song, 2024). Although students with learning difficulties may have used GenAI more frequently as a compensatory tool, the mechanism through which that use translates into perceived benefits, such as improved efficiency and confidence, appears to be universal. This suggests that GenAI helps reduce extraneous cognitive load for all students, allowing them to better allocate their cognitive resources to the learning task itself (Tu et al., 2024).

Furthermore, this result reinforces the core tenets of technology acceptance models, which emphasize that perceived usefulness and ease of use are primary drivers of technology's positive impact (Chang et al., 2023). The finding that frequency of use is the most consistent predictor of positive outcomes, irrespective of learning difficulty, highlights that the fundamental factor is engagement. The implication is profound: the challenge is not that GenAI works differently for these student groups, but that students with learning difficulties may need to invest more effort (i.e., higher frequency of use) to activate this same beneficial mechanism (Afshar et al., 2024). This powerfully argues against a one-size-fits-all approach to GenAI integration and highlights the dual need for universal access and training alongside differentiated support for students who require it (Ahmed et al., 2025).

Proposed framework for GenAI interaction training

Based on the findings of this study and in response to the clear need for structured training, a two-branched framework for developing GenAI interaction skills, commonly referred to as prompt engineering, is proposed for both students and faculty (see Figure 1). This framework is designed to be adaptable across disciplines and institutional contexts. For students, foundational training should be integrated into first-year experience courses, general education curricula, or discipline-specific introductory courses. The primary goal is not to create expert prompt engineers, but to equip all students with the core competencies of AI literacy needed to use GenAI effectively and ethically (Ng et al., 2021). Such training should begin with a conceptual understanding of how Large Language Models (LLMs) work, including their strengths in areas like synthesis and brainstorming, and their weaknesses, such as the potential for hallucinations and bias (García-López and Trujillo-Liñán, 2025). Following this, a simple, memorable framework for crafting effective prompts should be introduced. The “Five S” model (Tassoti, 2024), for example, provides a useful starting point by teaching students to “Set the Scene” by providing context and assigning a role to the AI; to “Be Specific” by clearly defining the task and constraints; to “Simplify Language” by using clear, direct phrasing; to “Structure the Output” by specifying the desired format; and to “Share Feedback” by iterating and refining prompts based on the AI's output. The most crucial component of this training, however, is teaching students to critically evaluate GenAI outputs. This is essential to mitigate the risk of over-reliance and ensure that AI assistance does not supplant the development of independent learning skills (Darvishi et al., 2024). This critical evaluation includes fact-checking claims, identifying potential biases, and understanding that the AI is a tool to assist, not replace, their own thinking.

Figure 1 (diagram): a two-branched GenAI Literacy Framework with four columns: “Students Foundational Training” focuses on understanding LLMs and prompt crafting; “Students Critical Evaluation” on fact-checking and bias identification; “Instructors Pedagogical Integration” on aligning AI with learning objectives; and “Responsible AI Literate” on promoting effective and ethical AI use for campus culture.

Figure 1. The two-branched GenAI literacy framework is proposed for a proactive campus-wide AI culture.

Concurrently, faculty should receive training that focuses on pedagogical integration and advanced applications, enabling them to model best practices and design meaningful learning experiences. This can be delivered through faculty development workshops, departmental training sessions, and online modules. The curriculum for faculty should emphasize pedagogical integration, showing them how to align GenAI use with specific learning objectives and design assignments that encourage critical thinking rather than simple content generation. It should also provide discipline-specific applications, offering examples of how GenAI can be used for tasks relevant to their fields, such as generating case studies in business or debugging code in computer science. Furthermore, faculty should be introduced to advanced prompting techniques, such as zero-shot, few-shot, and chain-of-thought prompting, to elicit more complex and detailed responses from GenAI models. Finally, this training must equip faculty with the knowledge to lead discussions on ethical issues and to develop clear, consistent, and fair policies for GenAI use in their courses, addressing concerns of academic integrity, data privacy, and algorithmic bias (García-López and Trujillo-Liñán, 2025). By implementing this dual framework, higher education institutions can move from a reactive to a proactive stance on GenAI, fostering a campus-wide culture of responsible and effective AI literacy, which, as Lee and Palmer (2025) note, is essential for aligning the pragmatic skills of AI interaction with contextual educational goals.

Limitations and recommendations

While this study provides valuable insights into the integration of GenAI in general education courses, several limitations need to be acknowledged. First, the study was conducted at a single institution, which limits the generalizability of the findings to other educational contexts, especially in diverse cultural and institutional settings. The sample was also demographically homogenous, with a high percentage of local students (96.2%) and female participants (82.1%), which may not fully represent the broader student population in the UAE or other regions. To increase the external validity of the results, future research should aim to replicate this study across multiple universities, both within the UAE and internationally. Such replication would provide a more comprehensive understanding of how GenAI and prompt engineering function across different educational systems, regions, and student populations.

Second, the sampling procedure warrants clarification. The study initially targeted a population of 151 students enrolled in three specific General Education courses. From this population, a convenience sample of 78 students who voluntarily consented to participate was obtained. This non-random sampling method was chosen for its practicality in a real-world educational setting, but it introduces the possibility of selection bias, as students who chose to participate may have different characteristics or levels of interest in GenAI than those who did not.

Third, another limitation concerns the reliance on self-reported survey measures as the sole source of data. While self-reports are widely used in educational research because they capture students' subjective experiences, attitudes, and perceptions (which are critical for understanding engagement with new technologies), they are also prone to bias, such as social desirability and inaccurate recall. This methodological choice was made because the study aimed to examine perceptions of GenAI use, which are inherently subjective constructs. However, future research should complement self-reported data with objective indicators of learning performance, for example, academic grades, problem-solving tasks, or project-based evaluations, to provide a more robust account of the impact of GenAI tools. Combining subjective and objective measures would allow for triangulation and increase the validity of the findings.

Fourth, the absence of objective measures of learning (such as academic grades or performance-based assessments) means that self-reported improvements are not substantiated by independent data. While the study found a positive correlation between GenAI use and perceived academic impact, this does not confirm a causal link to actual performance. Future research should aim to incorporate objective measures to validate these self-reported perceptions and provide a more complete picture of GenAI's educational effectiveness.

Furthermore, while this study focused on the immediate impact of GenAI on self-learning, it does not address the long-term effects of using GenAI tools on students' academic outcomes and cognitive development. Future research should explore the longitudinal impact of GenAI usage, especially for students with learning difficulties, to determine whether the positive effects on self-regulated learning, confidence, and academic performance are sustained over time. This will help in understanding whether the integration of AI tools leads to lasting improvements or whether their benefits diminish once students are no longer exposed to them regularly.

Based on these findings and limitations, several key recommendations for future research and practice can be made. First, educational institutions should invest in structured, systematic training programs for both students and faculty to enhance the effective use of GenAI tools, particularly focusing on prompt engineering. This would help students, especially those with learning difficulties, to overcome the barriers to using AI tools effectively, such as a lack of familiarity and challenges in interacting with them meaningfully. Training programs should not only focus on the technical aspects of GenAI but also address pedagogical strategies for integrating AI into learning environments to maximize its benefits for diverse student populations.

Furthermore, research should continue to explore the integration of GenAI in the context of inclusive pedagogy, especially for students with learning difficulties. Investigating how GenAI can be customized to meet the unique needs of students with disabilities, such as dyslexia or ADHD, could provide valuable insights into creating more personalized and adaptive learning experiences. Moreover, additional studies should investigate the role of GenAI in promoting motivation and self-efficacy, particularly among students who may feel marginalized by traditional educational methods. Exploring these aspects can help refine the design and implementation of AI tools in educational settings to ensure that they are accessible and beneficial for all learners, regardless of their cognitive or academic background. A further limitation stems from the operationalization of “learning difficulties” as a single, self-reported variable. This approach treats a highly diverse group of students as a homogenous category, masking significant heterogeneity within the population of learners who experience academic challenges. The term “learning difficulties” can include a wide range of conditions, from formally diagnosed specific learning disabilities (e.g., dyslexia, ADHD) to more general, undiagnosed academic struggles. Consequently, the findings may not be generalizable to any specific subgroup within this population. Furthermore, this self-report method introduces a risk of misclassification. Students with undiagnosed disabilities may have been misclassified into the “No” group, while others may have self-identified as having a learning difficulty without a formal diagnosis, potentially diluting the true effect size. Future research should employ more granular measures, such as distinguishing between formally diagnosed disabilities and self-perceived challenges, to allow for a more nuanced analysis.

Lastly, future research should aim to assess the ethical implications of using AI tools in higher education, particularly in relation to student privacy, data security, and the potential for algorithmic biases. As AI tools become increasingly integrated into educational practices, it is crucial to ensure that their use aligns with ethical standards and fosters equitable access to learning opportunities for all students. This includes developing transparent policies for AI use, safeguarding student data, and mitigating any biases that may inadvertently affect marginalized groups. Addressing these ethical concerns will be vital in ensuring that the integration of GenAI tools in education fosters a fair, inclusive, and empowering learning environment. A critical next step for research in this area is to move beyond self-reported data and establish a clearer connection between GenAI use and actual academic outcomes. Although the present study provides valuable insights into students' perceptions, it does not empirically validate these perceptions against objective performance metrics. Future studies should therefore aim to incorporate quantitative data, such as course grades, assignment scores, or performance on standardized assessments, to analyze the correlation between the frequency and quality of GenAI interaction and students' academic achievement. Such research could employ a mixed-methods approach, combining log data from GenAI platforms with academic records and qualitative interviews to build a more robust and detailed understanding of how these tools truly impact learning. This would enable the field to shift from documenting perceived benefits to identifying causal links and developing evidence-based best practices for leveraging GenAI to achieve measurable improvements in student performance.

Conclusion

This study investigated the landscape of Generative AI (GenAI) adoption among university students in the UAE, examining how factors such as familiarity, frequency of use, and the presence of learning difficulties influence its perceived impact on self-learning. By integrating theoretical frameworks such as Self-Regulated Learning (SRL), the Social Model of Disability, and Universal Design for Learning (UDL), the research sought to understand whether students use these tools, and to what effect, particularly for learners who face additional academic challenges. One of the study's primary findings is the significant, positive relationship between students' familiarity with and frequency of GenAI use and its perceived positive impact on their self-learning. This suggests that sustained engagement is key to unlocking the educational benefits of these tools, a conclusion that supports existing literature on the role of AI in fostering self-regulated learning (Afshar et al., 2024; Chang et al., 2023). However, a key differential finding emerged: students with learning difficulties reported significantly lower frequency of GenAI use than their peers without learning difficulties, pointing to a critical equity gap in adoption. Interestingly, the moderation analysis revealed that the presence of a learning difficulty did not significantly alter the relationship between GenAI usage and its perceived impact. This suggests that although students with learning difficulties use these tools less often, when they do, they derive a similar level of perceived benefit as their peers. This nuanced finding underscores the importance of focusing on equitable access and training rather than assuming a differential impact. By explicitly analyzing this group within the UAE's culturally and linguistically diverse GenEd context, the study adds to international scholarship by foregrounding inclusivity and equity as essential dimensions of AI integration in higher education.
The study also confirmed a generally low level of familiarity with GenAI across the entire sample, highlighting a university-wide gap in AI literacy that likely hinders effective adoption (Mello et al., 2023). This finding reinforces the urgent need for structured training. As proposed in the Discussion, such training must move beyond basic technical skills to encompass critical evaluation, ethical use, and discipline-specific pedagogical strategies, empowering both students and faculty to use GenAI as a tool for deeper learning. Finally, it is important to acknowledge that the study relied solely on self-reported survey data. While this approach captured students' perceptions, a critical dimension for understanding how learners experience new technologies, it also limits the strength of the findings. These limitations were further compounded by the use of a single-institution convenience sample and the absence of objective outcome measures. Future research should incorporate objective measures such as grades, problem-solving performance, or project-based outputs to triangulate results and strengthen validity. Longitudinal and experimental designs are also needed to establish causality and track the long-term effects of GenAI integration on student learning trajectories. Overall, this study provides valuable insights into the complex dynamics of GenAI adoption in higher education. The findings highlight the need for targeted interventions to bridge the gap in students' familiarity and training with these technologies, with a particular focus on ensuring equitable access for students with learning difficulties. Future research should also examine the long-term effects of GenAI use on student outcomes, particularly for students with learning difficulties, to determine whether the positive impacts observed in this study are sustained over time.
Additionally, institutions should prioritize ethical considerations related to AI tool usage, ensuring that these tools are accessible, equitable, and inclusive for all students, regardless of their backgrounds.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author.

Ethics statement

The studies involving humans were approved by the Research Ethics Committee of Zayed University. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.

Author contributions

DS: Conceptualization, Formal analysis, Investigation, Methodology, Project administration, Resources, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. AE: Conceptualization, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing.

Funding

The author(s) declared that financial support was received for this work and/or its publication. This work was supported by RIF Grant no. 23071 from Zayed University.

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that generative AI was used in the creation of this manuscript. Generative AI tools, namely Grammarly and QuillBot, were used to enhance the language of the paper.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2025.1737928/full#supplementary-material

References

Afshar, M., Gao, Y., Wills, G., Wang, J., Churpek, M. M., Westenberger, C. J., et al. (2024). Prompt engineering with a large language model to assist providers in responding to patient inquiries: a real-time implementation in the electronic health record. JAMIA Open 7:ooae080. doi: 10.1093/jamiaopen/ooae080

Ahmed, S., Rahman, M. S., Kaiser, M. S., and Hosen, A. S. M. S. (2025). Advancing personalized and inclusive education for students with disability through artificial intelligence: perspectives, challenges, and opportunities. Digital 5:11. doi: 10.3390/digital5020011

Almassaad, A., Alajlan, H., and Alebaikan, R. (2024). Student perceptions of generative artificial intelligence: investigating utilization, benefits, and challenges in higher education. Systems 12:385. doi: 10.3390/systems12100385

Alnaim, N. (2024). Generative AI: A Case Study of ChatGPT's Impact on University Students' Learning Practices. Research Square. Riyadh: Princess Nourah Bint Abdulrahman University.

Ayouni, S., Hajjej, F., Maddeh, M., and Al-Otaibi, S. (2021). A new ML-based approach to enhance student engagement in online environment. PLoS ONE 16:e0258788. doi: 10.1371/journal.pone.0258788

Chang, D. H., Lin, M. P., Hajian, S., and Wang, Q. Q. (2023). Educational design principles of using AI chatbot that supports self-regulated learning in education: goal setting, feedback, and personalization. Sustainability 15:12921. doi: 10.3390/su151712921

Charokar, K., and Dulloo, P. (2022). Self-directed learning theory to practice: a footstep towards the path of being a life-long learner. J. Adv. Med. Educ. Profession. 10, 135–143. doi: 10.30476/JAMP.2022.94356.1585

Chen, B., Zhang, Z., Langrené, N., and Zhu, S. (2023). Unleashing the potential of prompt engineering in Large Language Models: a comprehensive review. arXiv [preprint]. arXiv:2310.14735. doi: 10.1016/j.patter.2025.101260

Chiu, T. K. F. (2024). A classification tool to foster self-regulated learning with generative artificial intelligence by applying self-determination theory: a case of ChatGPT. Educ. Technol. Res. Dev. 72, 1–24. doi: 10.1007/s11423-024-10366-w

Darvishi, A., Khosravi, H., Sadiq, S., Weber, B., and Gasevic, D. (2024). Impact of AI assistance on student agency. Comput. Educ. 210:104967. doi: 10.1016/j.compedu.2023.104967

ElSayary, A. (2023). An investigation of teachers' perceptions of using ChatGPT as a supporting tool for teaching and learning in the digital era. J. Comput. Assist. Learn. 1–15. doi: 10.1111/jcal.12926

ElSayary, A. (2024). Integrating generative AI in active learning environments: enhancing metacognition and technological skills. J. Syst. Cybern. Informatics. 22, 34–37. doi: 10.54808/jsci.22.03.34

García-López, E., and Trujillo-Liñán, R. (2025). Ethical and regulatory challenges in the use of generative artificial intelligence in education: a systematic review. Front. Educ. 10:1565938. doi: 10.3389/feduc.2025.1565938

Gervacio, A. P. (2024). Exploring how generative AI contributes to the motivated engagement and learning production of science-oriented students. Environ. Soc. Psychol. 9:3194. doi: 10.59429/esp.v9i11.3194

Hair, J. F. Jr., Hult, G. T. M., Ringle, C. M., and Sarstedt, M. (2021). Partial Least Squares Structural Equation Modeling (PLS-SEM). Berlin: Springer.

Hamdan, A. (2024). The Double-Edged Sword of AI-Integrated Education: An Investigation into Personalized and Inclusive Learning in Higher Education (Milton Park: Routledge), 381–391.

Heston, T. F. (2023). Prompt engineering for students of medicine and their teachers. arXiv [preprint]. arXiv:2308.11628. doi: 10.48550/arXiv.2308.11628

Hidayat, R., Mohamed, M. Z. B., Suhaizi, N. N. B., Sabri, N. B. M., Mahmud, M. K. H. B., Baharuddin, S. N. B., et al. (2022). Artificial intelligence in mathematics education: a systematic literature review. Int. Electron. J. Math. Educ. 17:em0694. doi: 10.29333/iejme/12132

Huang, X., Dong, L. C, N. C. V., and D, N. P. K. (2022). Self-Regulated learning and scientific research using artificial intelligence for higher education systems. Int. J. Technol. Human Interact. 18, 1–15. doi: 10.4018/IJTHI.306226

Korea, M. D., and Alexopoulos, P. (2024). Higher education students' views on the use of artificial intelligence for teaching students with specific learning disabilities. Eur. J. Open Educ. E-Learn. Stud. 9, 156–175. doi: 10.46827/ejoe.v9i1.5518

Lee, A. V. Y., Teo, C. L., and Tan, S. C. (2024). Prompt engineering for knowledge creation: using chain-of-thought to support students' improvable ideas. AI 5, 1446–1461. doi: 10.3390/ai5030069

Lee, D., and Palmer, E. (2025). Prompt engineering in higher education: a systematic review to help inform curricula. Int. J. Educ. Technol. High. Educ. 22:12. doi: 10.1186/s41239-025-00503-7

Loeng, S. (2020). Self-directed learning: a core concept in adult education. Educ. Res. Int. 2020:3816132. doi: 10.1155/2020/3816132

Mello, R. F., Freitas, E., Pereira, F. D., Cabral, L., Tedesco, P., Ramalho, G., et al. (2023). Education in the age of generative AI: context and recent developments. arXiv [preprint]. arXiv:2309.12332. doi: 10.48550/arXiv.2309.12332

Miranda, J. P., Bansil, J. A., Fernando, E., Gamboa, A., Hernandez, H., Cruz, M., et al. (2024). “Prevalence, devices used, reasons for use, trust, barriers, and challenges in utilizing generative AI among tertiary students,” in 2024 2nd International Conference on Technology Innovation and Its Applications (ICTIIA), 1–6.

Mladenov, T. (2025). AI and disabled people's independent living: a framework for analysis. AI Soc. doi: 10.1007/s00146-025-02642-x

Mollick, E. R., and Mollick, L. (2023). Assigning AI: seven approaches for students, with prompts. SSRN Electron. J. 1–48. doi: 10.2139/ssrn.4475995

Neji, W., Boughattas, N., and Ziadi, F. (2023). Exploring new AI-based technologies to enhance students' motivation. Issues Inform. Sci. Inform. Technol. 20, 095–110. doi: 10.28945/5143

Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., and Qiao, M. S. (2021). Conceptualizing AI literacy: an exploratory review. Comput. Educ. Artif. Intell. 2:100041. doi: 10.1016/j.caeai.2021.100041

Popenici, S. A. D., and Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Res. Pract. Technol. Enhanc. Learn. 12, 1–13. doi: 10.1186/s41039-017-0062-8

Rathod, J. D. (2024). Systematic study of prompt engineering. Int. J. Res. Appl. Sci. Eng. Technol. 12, 597–613. doi: 10.22214/ijraset.2024.63182

Rodriguez-Donaire, S. (2024). Influence of Prompts Structure on the Perception and Enhancement of Learning through LLMs in Online Educational Contexts. London: IntechOpen eBooks.

Song, Y. (2024). A framework for inclusive AI learning design for diverse learners. Comput. Educ. Artif. Intell. 6:100194. doi: 10.1016/j.caeai.2024.100212

Tassoti, S. (2024). Assessment of students use of generative artificial intelligence: prompting strategies and prompt engineering in chemistry education. J. Chem. Educ. 101, 2475–2482. doi: 10.1021/acs.jchemed.4c00212

Tu, J., Hadan, H., Wang, D. M., Sgandurra, S. A., Mogavi, R. H., Nacke, L. E., et al. (2024). Augmenting the author: exploring the potential of AI collaboration in Academic writing. arXiv [preprint]. arXiv:2404.16071. doi: 10.48550/arXiv.2404.16071

UAE Ministry of State for Artificial Intelligence (2022). AI Ethics Principles and Guidelines. Digital Economy and Remote Work Application Office. Available online at: https://ai.gov.ae/wp-content/uploads/2023/03/MOCAI-AI-Ethics-EN-1.pdf (Accessed March 30, 2025).

UAE National Committee on SDGs (2017). UAE and the 2030 Agenda for Sustainable Development: Excellence in Implementation. Available online at: https://sustainabledevelopment.un.org/content/documents/20161UAE_SDGs_~Report_Full_English.pdf (Accessed March 30, 2025).

United Arab Emirates Cabinet (2023a). The National Strategy for Innovation. Available online at: https://uaecabinet.ae/en/the-national-strategy-for-innovation (Accessed March 30, 2025).

United Arab Emirates Cabinet (2023b). UAE Centennial Plan. Available online at: https://uaecabinet.ae/en/uae-centennial-plan-2071 (Accessed March 30, 2025).

Vatsal, S., and Dubey, H. (2024). A survey of prompt engineering methods in large language models for different NLP tasks. arXiv [preprint]. arXiv:2407.12994. doi: 10.48550/arXiv.2407.12994

Wang, T., Lund, B. D., Marengo, A., Pagano, A., Mannuru, N. R., Teel, Z. A., et al. (2023). Exploring the potential impact of artificial intelligence (AI) on international students in higher education: generative AI, chatbots, analytics, and international student success. Appl. Sci. 13:6716. doi: 10.3390/app13116716

Wiredu, J. K., Abuba, N. S., and Zakaria, H. (2024). Impact of generative AI in academic integrity and learning outcomes: a case study in the Upper East region. Asian J. Res. Comput. Sci. 17, 70–88. doi: 10.9734/ajrcos/2024/v17i7491

Xia, Q., Chiu, T. K. F., Lee, M., Sanusi, I. T., Dai, Y., Li, J., et al. (2022). A self-determination theory (SDT) design approach for inclusive and diverse artificial intelligence (AI) education. Comput. Educ. 189:104584. doi: 10.1016/j.compedu.2022.104582

Keywords: Generative AI (GenAI), higher education, learning difficulties, prompt engineering, self-learning

Citation: Saleh D and ElSayary A (2026) Generative AI use and self-learning in higher education: the role of learning difficulties. Front. Educ. 10:1737928. doi: 10.3389/feduc.2025.1737928

Received: 02 November 2025; Revised: 11 December 2025;
Accepted: 15 December 2025; Published: 27 January 2026.

Edited by:

Abdullahi Yusuf, Sokoto State University, Nigeria

Reviewed by:

Samia Mouas, University of Batna 2, Algeria
Salah Alsharsfi, North Private College of Nursing, Saudi Arabia

Copyright © 2026 Saleh and ElSayary. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Areej ElSayary, areej.elsayary@zu.ac.ae