
CURRICULUM, INSTRUCTION, AND PEDAGOGY article

Front. Educ., 18 December 2025

Sec. Higher Education

Volume 10 - 2025 | https://doi.org/10.3389/feduc.2025.1700056

This article is part of the Research Topic "Reimagining Higher Education: Responding Proactively to 21st Century Global Shifts."

Artificial intelligence in higher education: student perspectives on practices, challenges, and policies in a transitional context


Bashkim Çerkini1, Kaltrina Bajraktari2, Blerina Çibukçiu3*, Fehmi Ramadani3, Fakije Zejnullahu1, Lulzim Hajdini4
  • 1Faculty of Engineering and Informatics, University of Applied Sciences in Ferizaj, Ferizaj, Kosovo
  • 2AAB College, Pristina, Kosovo
  • 3Faculty of Education, University of Pristina “Hasan Prishtina,” Pristina, Kosovo
  • 4Faculty of Civil Engineering, University of Pristina “Hasan Prishtina,” Prishtina, Kosovo

This study explores the integration of Artificial Intelligence (AI) in higher education, with a particular focus on Kosovo’s transitional educational system. It investigates students’ perceptions, the challenges they encounter, and institutional policies related to AI use in learning practices. The research employed a mixed-methods design, combining quantitative and qualitative data. The qualitative data were examined through thematic analysis, while the quantitative data were processed statistically using SPSS. A questionnaire was designed based on the SATAI (Student Attitudes Toward Artificial Intelligence) instrument and the Student Perspectives on AI in Higher Education: Student Survey to gain insight into students’ perspectives. A total of 554 students participated from public and private universities in Kosovo. The results indicated a significant positive correlation between AI use and positive student attitudes (r = 0.813, p < 0.001): students who used AI often tended to view it positively and use it more. Conversely, students who lacked AI training and experience reported significant challenges in its use. Furthermore, the findings revealed that positive perceptions of institutional policies are associated with increased AI use (B = 0.13, p < 0.001). However, gender and age did not have a significant influence on attitudes toward or use of AI. Overall, the results highlight the critical need for clear policies and comprehensive, effective training to maximize the benefits of AI in the academic context.

1 Introduction

1.1 Background of artificial intelligence

Artificial intelligence (AI) refers to the creation of systems and machines that can perform tasks that typically require human cognitive functions (Sweeney, 2003). While AI is a familiar term to many, a deep understanding of it remains limited (West, 2018). The term “artificial intelligence” was first introduced by John McCarthy during a seminal workshop at Dartmouth College in 1956 (Russel and Norvig, 2010). Since then, AI has grown rapidly and become deeply embedded in how we live, work, learn, and communicate (Chiu, 2021; Xia et al., 2022). In particular, generative AI models like ChatGPT have gained traction for their ability to generate human-like content (Sallam, 2023).

Recent public access to large language models (LLMs; e.g., OpenAI’s GPT-3 and GPT-4, Google’s PaLM 1 and 2) and chatbots (e.g., OpenAI’s ChatGPT, Google’s Bard, Microsoft’s Bing) has significantly increased public interest and engagement in artificial intelligence (AI). These Generative AI (GenAI) tools allow individuals to instantly generate human-like written content on any topic from a simple prompt (Barrett and Pack, 2023). AI-based technologies, from chatbots and virtual assistants to recommendation engines and smart devices, are playing a prominent role in our daily routines. In the education sector in particular, AI tools are becoming increasingly common, including intelligent tutoring systems, adaptive learning platforms, AI-supported writing tools, and virtual simulations. These tools have the potential to support learners through problem-solving, writing assistance, and personalized content delivery (Rahman and Watanobe, 2023). Research also shows that AI tools may increase motivation and engagement among students, as well as improve their understanding of complex concepts (Adiguzel et al., 2023). This is further supported by a review of 43 scientific articles on chatbots, which found that interaction with chatbots was positively correlated with student motivation (Huang et al., 2025). Recent studies show that AI has the potential to improve the quality of higher education, especially in personalizing teaching, accessing resources, and increasing administrative efficiency (Krouska et al., 2022; Delgado et al., 2023). However, its acceptance depends on users’ perceptions of reliability, ethics, and personal control over the use of the technology (Ali S. B. et al., 2024).

1.2 AI in education

The origin of AI in education can be traced to early teaching machines developed by Sidney Pressey in the 1920s and advanced in the 1950s by B.F. Skinner, the father of behaviorism (Holmes et al., 2019). Since then, AI in education (AIEd) has developed into a multidisciplinary field at the intersection of education, computer science, statistics, and cognitive psychology. In line with this development, the Horizon Report (Becker et al., 2018) identifies AI and adaptive learning as transformative forces in educational technology. AI has long contributed to student learning by offering personalized, real-time feedback and adapting to individual learning styles (Atlas, 2023; Chan and Hu, 2023; Luckin, 2017). Tools such as ChatGPT, Grammarly, and spell checkers enhance writing proficiency by providing feedback on the writing in addition to identifying grammatical and lexical errors (Godwin-Jones, 2022; Delcker et al., 2024; Atlas, 2023). However, to realize these benefits, the use of ChatGPT in education requires a responsible and well-planned approach. It is recommended that AI tools be aligned with learning objectives and pedagogy, while preserving data privacy and addressing potential biases. Key steps include investing in teacher training, promoting interdisciplinary collaboration, and fostering ethical awareness. Developing open tools for inclusion, continuously evaluating the impact of AI, and encouraging student autonomy will help maximize the benefits of the technology in the learning process (Adel et al., 2024).

1.3 Benefits and challenges of AI in higher education

The literature notes that the challenges include not only technological aspects, but also a lack of clarity in educational policies and limited institutional readiness for the integration of AI (Baidoo-Anu et al., 2024a,b; Slimi et al., 2025). This is particularly evident in developing-country contexts, where the lack of digital training and ethical regulations makes it difficult to use AI effectively and sustainably (Alshamy et al., 2025). While many applications, such as content generation, summarization, or visual assistance, are legitimate, unauthorized use for assignments, assessments, or publications may be considered academic misconduct (Tauginienë et al., 2018). Kumar (2023) shows that AI text output, although mostly original and relevant to the topics, contained inappropriate references and lacked the personal perspectives that AI is generally incapable of producing. Alshamy et al. (2025) explore the perceptions of students and academics at Sultan Qaboos University of AI-generating tools and, based on the results, emphasize the need to increase digital competence and address ethical concerns. In this context, students often struggle to understand the ethical boundaries of AI usage when they are not given clear institutional guidance (Baidoo-Anu et al., 2024a,b). This ties directly to the importance of having students use AI tools ethically and responsibly, supported by student training and policies, so that AI has a sustainable, inclusive impact while academic integrity is maintained (Slimi et al., 2025). Thus, Chan (2023) proposes an educational policy framework for AI in higher education comprising pedagogical, governance, and operational dimensions, addressing issues such as privacy, ethics, and staff training. Further, Chan and Hu (2023) proposed comprehensive educational policies that include ethical education, transparency, and privacy protection. They emphasize the importance of preparing students for a future dominated by AI technologies, noting that the administration of higher education institutions must adapt its policies to the new technology (Chan, 2023).

These concerns are further supported by the investigation of Bin-Nashwan et al. (2023), which demonstrates that while motivation to use ChatGPT is positively influenced by factors such as time saving and academic self-confidence, concerns about academic integrity and peer influence have a negative impact on usage. Chat Generative Pre-Trained Transformer (ChatGPT), as the latest breakthrough in AI, became the leading digital innovation of 2023, surpassing other digital technologies (Kohnke et al., 2023). Extending this line of inquiry, Ali D. et al. (2024) conducted a review of 112 scholarly articles to identify the potential benefits and challenges of ChatGPT use in educational settings. The results demonstrate that while ChatGPT is appreciated for its ability to enhance performance evaluation, its natural language processing capabilities, and its text generation, there are significant concerns regarding the quality and bias of its responses and the risks of plagiarism or inauthentic content. These advantages and disadvantages indicate a nuanced view of ChatGPT in higher education, where its potential is recognized but an awareness of its limitations and challenges also exists.

1.4 Student perceptions and experiences

Recent studies categorize student perceptions of ChatGPT around academic assistance, personalized learning, obstacles to critical thinking, and ethical challenges (Ruano-Borbalan, 2025). The framework by Ali S. B. et al. (2024) has been used in recent studies to understand the acceptance of artificial intelligence tools in education. It helps identify factors that influence the perception and use of AI, including the trustworthiness of the technology, personal control over content, impact on academic identity, and ethical concerns. In international studies, this framework has been successfully applied to universities that have begun to integrate AI tools experimentally or in part. However, existing studies often overlook the real experiences of students using AI in education in transitioning countries like Kosovo, where digital transition and reform are still evolving and an institutional, comprehensive AI policy infrastructure remains either underdeveloped or completely absent. Therefore, this study explores how Kosovo university students perceive AI in the learning process, the benefits and limitations they experience, and the role of institutional policies in facilitating or hindering its use.

1.5 Research gap and study objectives

While many international studies have addressed the use of AI in higher education, only a few have analyzed the experiences of students in educational systems in transition, such as Kosovo's. These contexts are often characterized by a lack of policy infrastructure, training, and ethical guidance, creating a significant gap in the literature. This study therefore contributes to the existing scientific literature by providing empirical evidence from a transitional educational context (Kosovo), where institutional policy frameworks for the use of AI in higher education are still underdeveloped. It addresses the research gap concerning students’ perceptions of the practical, ethical, and policy dimensions of integrating AI into academic settings. Furthermore, the findings provide a foundation upon which educators and policymakers can build strategies to implement AI responsibly in resource-limited education systems.

To analyze the factors that influence students’ perception and use of AI, this study relies on the theoretical framework from Ali S. B. et al. (2024).

2 Methodology

2.1 Target population and sample

The target population for this study consisted of students enrolled in public and private universities in Kosovo during the 2024/2025 academic year. The sample was randomly selected, including 554 students from different profiles.

2.2 Research design

This research employed an embedded mixed-methods design, with a primary focus on quantitative data obtained through closed-ended Likert-scale questions, while qualitative data were collected through open-ended questions, offering deeper insight into the statistical findings. This combination enabled an examination of students’ perceptions of the benefits, challenges, and support provided by university institutional policies for AI use in higher education within the transitional educational context of Kosovo.

2.3 Data collection instrument

The study initially draws on quantitative data to identify general patterns of attitudes and AI usage.

These findings are then enriched through thematic analysis of qualitative data collected from open-ended questions. This design is particularly well-suited to understanding how complex phenomena like AI are experienced in developing educational contexts.

This questionnaire contains six sections: section (1): Demographics, section (2): Attitudes toward Artificial Intelligence, section (3): Use of Artificial Intelligence in Studies, section (4): Challenges and Discouragement in the Use of AI, section (5): Institutional Policies and Guidelines, and section (6): Personal Experience (optional). The questionnaire consisted of 33 closed-ended questions measured on a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree) and 3 open-ended questions that allowed students to share personal experiences and concerns regarding the use of AI in an academic context. The questionnaire used in this study was developed by adapting items from two validated instruments:

1. Student Attitudes Toward Artificial Intelligence (SATAI) by Suh and Ahn (2022), which measures students’ cognitive, affective, and behavioral attitudes toward AI.

2. Student Perspectives on AI in Higher Education: Student Survey by Chung et al. (2024), which explores students’ knowledge, access, use, and attitudes toward Generative AI (GenAI) in higher education (Table 1).


Table 1. Structure of the questionnaire, sources, and corresponding hypotheses.

The questionnaire items were translated into Albanian by language experts through a double-translation procedure. The instrument was validated through a pilot study with 15 students from two universities (one public and one private) over the course of one week in early February, and was adjusted to increase clarity and relevance. Data collection took place from February 2025 to May 2025, through an online link that allowed students to respond to the questionnaire.

2.4 Data analysis

The collected data were initially checked and cleaned for errors (duplicate responses, missing values, out-of-range maximum and minimum values, etc.). Quantitative data were analyzed in SPSS v.27, using descriptive statistics, non-parametric inferential tests (Spearman, Mann-Whitney U, Kruskal-Wallis), and linear regression. Simultaneously, qualitative data were analyzed through thematic analysis following the approach of Clarke and Braun (2017) and its six steps: (1) familiarization with the data, (2) creation of initial codes, (3) identification of themes, (4) revision of themes, (5) naming and describing themes, and (6) preparation of the final report. This method provided clear and systematic procedures for extracting codes and themes from qualitative data (Clarke and Braun, 2017). To ensure the reliability and rigor of the analysis, two researchers independently coded the open-ended responses. Any discrepancies in coding were discussed and resolved through consensus, while a third researcher verified the coding scheme to ensure consistency and credibility. All themes were reviewed against the entire data set to ensure conceptual coherence.
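
As a non-SPSS illustration of the quantitative pipeline above, the same sequence (normality check, Spearman correlation, group comparison) can be sketched in Python with scipy; all variable names and scores below are simulated for illustration and are not the study's data.

```python
# Sketch of the quantitative pipeline described above, implemented with
# scipy rather than SPSS. All data here are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200

# Simulated subscale means (1-5 Likert averages) for n students.
ai_use = np.clip(rng.normal(4.0, 0.6, n), 1, 5)
attitudes = np.clip(0.8 * ai_use + rng.normal(0.8, 0.3, n), 1, 5)

# Step 1: normality check; a small p-value suggests the distribution is
# non-normal, motivating non-parametric tests as in the study.
_, p_norm = stats.shapiro(ai_use)

# Step 2: Spearman rank correlation between AI use and attitudes.
rho, p_rho = stats.spearmanr(ai_use, attitudes)

# Step 3: a two-group comparison with Mann-Whitney U (the grouping here is
# random, so no real difference is expected; Kruskal-Wallis would handle
# three or more groups analogously).
group = rng.integers(0, 2, n)
u_stat, p_u = stats.mannwhitneyu(ai_use[group == 0], ai_use[group == 1])

print(f"Shapiro p={p_norm:.3g}; Spearman rho={rho:.2f} (p={p_rho:.3g}); U p={p_u:.3f}")
```

Because the simulated attitudes are built from the simulated use scores plus noise, the Spearman coefficient comes out strongly positive, mirroring the direction (though not the exact value) of the correlation reported in this study.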

2.5 Ethical considerations

Rigorous ethical standards were respected throughout the research process. Participants were initially informed of the purpose of the research, received an information document about the process, and a request for their consent. Detailed information was provided about the aims of the study and the potential benefits, with a clear assurance that participation did not involve any risk.

The selection of indicators in the questionnaire was based in part on the constituent elements of the theoretical framework (Ali S. B. et al., 2024), in order to measure perceptions of personal control, trustworthiness of AI, and attitudes toward its ethical impact in the academic context.

Hypotheses of the study:

H1: Students who use Artificial Intelligence (AI) more frequently have more positive attitudes toward the importance of AI in higher education.

H2: Attitudes toward Artificial Intelligence vary depending on the level of study.

H3: Students who have taken AI courses demonstrate higher knowledge and use AI more frequently in their studies.

H4: Students who do not have institutional training or guidance on AI have more ethical concerns and uncertainty about using AI.

H5: Knowledge of institutional policies and guidelines increases students’ confidence and use of AI.

H6: The use of AI positively impacts student motivation in the learning process.

H7: There are differences in the use of AI depending on gender.

H8: Students’ age influences the use of AI in higher education.

3 Findings

3.1 Reliability of the questionnaire

To determine the internal consistency of the instrument, Cronbach’s Alpha coefficients were calculated for each subscale. The results showed satisfactory reliability levels across all dimensions of the questionnaire. Specifically, the subscale “Positive attitudes toward AI” demonstrated a Cronbach’s Alpha of 0.82, indicating high internal consistency among its items. The “Use of AI in studies” subscale showed a coefficient of 0.79, which also reflects good reliability.

The subscale “Training for AI” yielded a Cronbach’s Alpha of 0.73, representing acceptable reliability, while the “Challenges” subscale produced an Alpha of 0.70, which is considered acceptable for exploratory research. The “Policies” subscale resulted in a Cronbach’s Alpha of 0.68, suggesting marginal but still acceptable internal consistency, particularly given the small number of items.

Overall, the general Cronbach’s Alpha for the entire scale was 0.71, indicating that the instrument as a whole possesses an acceptable level of internal consistency.

The reliability of the questionnaire was assessed using Cronbach’s Alpha across all 33 closed-ended items of the instrument (Table 2).


Table 2. Reliability statistics of questionnaire by Cronbach’s alpha for the thirty-three items of the instrument.

The result of the coefficient α = 0.717 indicates a satisfactory level of reliability for research purposes in social sciences. According to the methodological literature of George and Mallery (2003), a value above 0.7 is considered acceptable, indicating that the questionnaire statements demonstrate good internal coherence and can be used for statistical analysis. Therefore, the instrument is reliable to continue with the intended measurements in this study.
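
The internal-consistency coefficient reported above can be computed directly from an item matrix; the sketch below uses numpy only, with simulated correlated Likert items rather than the study's responses.

```python
# Minimal Cronbach's alpha from a respondents-by-items score matrix.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Simulated items: a shared latent trait plus independent item noise, so the
# items are positively correlated and alpha lands well above 0.7.
rng = np.random.default_rng(1)
latent = rng.normal(3.5, 0.8, size=(300, 1))
items = np.clip(np.round(latent + rng.normal(0.0, 0.7, size=(300, 6))), 1, 5)

alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.2f}")
```

With uncorrelated items the same function returns a value near zero, which is the behavior that makes alpha useful as an internal-consistency check.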

3.2 Descriptive statistics of key variables

Descriptive data for the questionnaire sections (Table 3) indicate that students have a positive attitude toward the use of AI in higher education, with an average of 3.95 on a scale of 1–5. Although students report a high use of AI in studies (M = 4.08), they also indicate significant challenges in its use (M = 3.55), with a low perception of the existing institutional policies (M = 2.48).


Table 3. Descriptive statistics for questionnaire subscales.

Normality tests (Kolmogorov-Smirnov and Shapiro-Wilk) showed that both variables, although measured as numerical averages on the scale, deviate significantly from the normal distribution (p < 0.001). The variables also do not meet the linearity condition, so the Spearman correlation (r = 0.813) was calculated to measure the relationship between AI use and positive attitudes toward AI.

The low mean value for institutional policies (M = 2.48) indicates that students perceive a lack of clear guidance, training, and regulations on the ethical use of AI within their universities. This finding highlights a significant policy gap that may reduce students’ confidence in using AI responsibly. It also supports hypotheses H4 and H5, suggesting that limited institutional support and unclear policies are associated with increased challenges and lower engagement in AI-based learning.

3.3 Relationship between AI use and attitudes (H1)

The results confirm a very strong and statistically significant relationship between the frequency of AI use and positive attitudes toward AI (r = 0.813, p < 0.001, N = 544) (Table 4). This result confirms the hypothesis H1 of this study that more frequent use of AI tools is associated with more positive attitudes toward AI by students.


Table 4. Spearman correlation between the use of AI and positive attitudes toward AI.

3.4 Differences by level of study (H2)

To test H2, a Kruskal-Wallis test was performed, which showed a statistically significant difference in students’ attitudes toward Artificial Intelligence depending on the level of studies [H(2) = 8.896, p = 0.012]. Bachelor’s students showed a more positive attitude toward AI than Master’s students (a difference confirmed by the Mann-Whitney U test, p = 0.008). Although PhD students had the highest mean rank (358.67) (Table 5), their difference was not statistically significant in the comparative tests due to the small size of this group (N = 6). Overall, this result supports hypothesis H2.


Table 5. Comparison of students’ attitudes toward AI by level of studies in Kosovo.
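
The omnibus-then-post-hoc procedure used here (Kruskal-Wallis followed by pairwise Mann-Whitney U) can be sketched as follows; the group sizes echo the study's imbalance, but the attitude scores are simulated, not the study's data.

```python
# Kruskal-Wallis omnibus test across three study levels, with pairwise
# Mann-Whitney U post-hoc comparisons. Simulated attitude scores (1-5 scale).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
bachelor = np.clip(rng.normal(4.1, 0.5, 300), 1, 5)  # more positive attitudes
master = np.clip(rng.normal(3.8, 0.5, 150), 1, 5)
phd = np.clip(rng.normal(4.2, 0.5, 6), 1, 5)         # tiny group, low power

h_stat, p_kw = stats.kruskal(bachelor, master, phd)

# Pairwise post-hocs are only interpreted if the omnibus test rejects; with
# N = 6 the PhD comparison has little power regardless of the true effect.
_, p_bm = stats.mannwhitneyu(bachelor, master)
_, p_bp = stats.mannwhitneyu(bachelor, phd)

print(f"H={h_stat:.2f} (p={p_kw:.4f}); bachelor vs master p={p_bm:.4f}, "
      f"bachelor vs PhD p={p_bp:.4f}")
```

The tiny PhD group illustrates the point made in the text: even a group with the highest typical score can fail to reach significance when N is very small.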

3.5 AI Courses and use (H3)

Students who have taken AI courses within their faculty reported a higher frequency of AI use in their studies. As the data were not normally distributed (Shapiro-Wilk, p < 0.001), the Mann-Whitney U test was used. The test results showed a statistically significant difference between the two groups (U = 29,971.50, Z = –2.073, p = 0.038) (Table 6), indicating that taking AI courses is associated with more frequent AI use in education, which supports hypothesis H3.


Table 6. Comparison of AI use in studies according to whether AI courses are offered in the faculty.

3.6 Lack of training and challenges (H4)

The Spearman correlation coefficient was used to assess the relationship between the lack of guidelines or training for AI use and the challenges students experience when integrating AI into their studies. The analysis shows a strong, positive, and statistically significant correlation: students without adequate guidance or training report greater challenges in using AI for study purposes (Table 7). These results highlight the importance of structured training in the effective use of artificial intelligence in higher education, to facilitate its integration into the learning process and reduce perceived barriers.


Table 7. Spearman correlation between the lack of training for AI use and the challenges of using AI in studies.

3.7 Institutional policies and AI use (H5)

Spearman correlation analysis was conducted to examine the relationship between the perception of institutional policies related to AI and the level of its use in studies by students (Table 8), which resulted in a significant positive correlation (r = 0.193, p < 0.001). This means that the more positively students perceive institutional policies on AI, the more they use AI in their academic activities.


Table 8. Spearman correlation between the perception of institutional policies and the use of AI in studies.

Although the strength of the correlation is low, it is significant and suggests that institutional policies may play a facilitating role in students’ integration of AI into the learning process, confirming hypothesis H5.

3.8 AI use and motivation (H6)

To test hypothesis H5 further, a linear regression was also conducted, with AI use in studies as the dependent variable and knowledge of institutional policies as the predictor variable (Table 9).


Table 9. Regression table: use of AI and awareness of institutional policies.

The regression analysis revealed a statistically significant model [F(1, 542) = 22.796, p < 0.001], confirming that familiarity with institutional AI policies significantly affects students’ use of AI [B = 0.13, SE = 0.03, CI (0.08, 0.18), p < 0.001]. This indicates that students who are informed about institutional guidelines and policies tend to use AI more in their studies.
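
The B/SE/CI reporting above corresponds to a simple linear regression; the sketch below, with simulated data (not the study's), shows how the slope, its standard error, and a 95% confidence interval are obtained.

```python
# Simple linear regression of AI use on awareness of institutional policies,
# with a 95% confidence interval for the slope. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 544
policy_awareness = np.clip(rng.normal(2.5, 0.9, n), 1, 5)
ai_use = np.clip(3.7 + 0.13 * policy_awareness + rng.normal(0.0, 0.5, n), 1, 5)

res = stats.linregress(policy_awareness, ai_use)

# 95% CI for the slope: B +/- t_crit * SE, with n - 2 degrees of freedom.
t_crit = stats.t.ppf(0.975, df=n - 2)
ci_low = res.slope - t_crit * res.stderr
ci_high = res.slope + t_crit * res.stderr

print(f"B={res.slope:.2f}, SE={res.stderr:.3f}, "
      f"95% CI=({ci_low:.2f}, {ci_high:.2f}), p={res.pvalue:.2g}")
```

Because the simulated data are generated with a positive true slope, the estimate comes out positive and significant, mirroring the direction of the reported effect without reproducing its exact values.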

Moreover, a Spearman correlation analysis was conducted to examine the relationship between the use of AI and students’ motivation to enhance their engagement in the learning process (Table 10).


Table 10. Spearman correlation between the use of Artificial Intelligence and students’ motivation to engage in the learning process.

The results showed a positive and statistically significant correlation between the use of AI and student motivation (r = 0.393, p < 0.001). This suggests that the more students use AI in their studies, the more motivated they feel to engage actively in the learning process.

3.9 Gender differences (H7)

The Mann-Whitney U test was used to investigate gender differences in attitudes and use of AI. As shown in Table 11, there were no statistically significant differences in positive attitudes toward AI between males and females (Z = -0.822, p = 0.411). Similarly, Table 12 indicates no significant gender-based differences in the use of AI in an academic context (Z = -0.521, p = 0.603). These results confirm that attitudes and use of AI do not differ significantly between genders in this study (p > 0.05).


Table 11. Mann-Whitney U test results for gender differences in attitudes toward AI.


Table 12. Mann-Whitney U test results for gender differences in use of AI in studies.

3.10 Age differences (H8)

Furthermore, the Kruskal-Wallis test was applied to assess whether students’ attitudes toward AI in higher education varied across age groups (Table 13). The age groups included in the analysis were 18–24 years (N = 464, mean rank = 273.65), 25–29 years (N = 74, mean rank = 286.30), and a further group also labeled 25–29 (N = 6, mean rank = 235.42), which appears to be a labeling error or repetition. The group labeled “35 and above” appears with N = 0, indicating no participants in this age group; it was therefore excluded from the analysis. The Kruskal-Wallis test yielded a p-value of 0.641, which exceeds the significance threshold of p < 0.05. This means there are no statistically significant differences in attitudes toward AI between the age groups of students; attitudes are similar across both younger and slightly older students. These results suggest that age does not significantly affect how students perceive or evaluate the use of AI in education. The findings highlight the inclusive nature of AI use in higher education, indicating that AI technologies are similarly accessible and acceptable to a broad spectrum of students, and they align with recent research suggesting that AI technologies in education are becoming increasingly accessible and equitably adopted across diverse student populations (Zawacki-Richter et al., 2019; Holmes et al., 2019). The widespread adoption of AI tools may reflect the growing normalization of such technologies in educational environments, rather than traditional demographic divisions.


Table 13. Kruskal-Wallis test results for differences in attitudes toward AI based on ages of students.

3.11 Thematic analysis of open-ended responses

To complement the statistical findings, students’ open-ended responses were collected and analyzed through thematic analysis (Table 14). Out of 544 total respondents, all open-ended responses were included in the thematic analysis. From this process, four main themes emerged:


Table 14. Codes and themes derived from the thematic analysis of students’ open-ended responses.

“Technical and organizational support,” “Self-learning,” “Inaccuracy,” and “Lack of institutional framework for the use of AI in studies.” These themes reflect the students’ diverse experiences in terms of AI use in an academic context. Representative examples were selected from across the dataset (e.g., respondents 27, 14, 32, 18, and 41) to illustrate the range of perspectives identified through the analysis and to maintain clarity and conciseness in presenting qualitative findings.

The first theme, “Technical and organizational support,” includes codes concerning the use of AI to help students write, structure, and organize their essays. The codes show that AI helps improve quality during research work. For instance, Respondent 27 stated: “It helped me in compiling the essay, it helped me structure the essay and express my thoughts more clearly.”

The theme “Self-learning” reflects students’ perception of AI as a tool that encourages self-study and eases access to various academic resources. Respondent 14 stated: “I used it often to search for scientific articles and to better understand how to structure the literature section.” Similarly, Respondent 32 mentioned: “The help you give me makes me feel more confident about my preparation before the exams.” The theme “Inaccuracy” captures students’ concerns about the reliability of content generated by AI, which is often perceived as general and not reflective of their personal writing style. Respondent 18 noted: “AI provides the answer, but sometimes it is general and does not reflect my writing style.” The last theme, “Lack of institutional framework for the use of AI in studies,” encompasses codes related to the lack of clear guidelines, laws, and regulations from the educational institution regarding AI use. Students reported a lack of specific training and standardized policies, and uncertainty about ethical and legal consequences. Respondent 41 pointed out a lack of institutional guidance: “We haven’t received any instruction from the faculty on how to use AI safely and appropriately.”

This response underscores the absence of structured support or digital literacy policies regarding AI use in higher education settings, which may leave students uncertain or at risk of misuse. Furthermore, the same respondent emphasized the need for institutional development: “I strongly recommend that universities work to ensure academically qualified staff for each subject and officially integrate AI tools into the teaching process.” These results reinforce elements of the theoretical framework that emphasize that lack of institutional support and ethical ambiguity are factors that hinder the sustainable acceptance of AI tools in education (Ali S. B. et al., 2024).

Providing training in the responsible use of AI would also raise the quality of education. While these findings acknowledge the important contribution that AI makes to the learning process, they underscore the pressing need for institutional guidelines and ethical policies to ensure responsible and more effective use of AI in higher education.

4 Discussion and conclusion

This study found that students who had participated in AI courses made greater use of AI technology for academic purposes, which highlights the need to incorporate AI into the higher education curriculum. Such exposure builds students’ awareness and skills, making them more willing to use the technology effectively and creatively. The strong link between positive attitudes and frequent use of AI is also in line with earlier theoretical models of technology acceptance, which emphasize that perceived usefulness and perceived ease of use are among the main factors in the adoption of new technologies (Davis, 1989; Venkatesh et al., 2003).

This finding aligns with international literature, where the use of advanced technologies such as AI is also influenced by prior knowledge and engagement in training programs (Zawacki-Richter et al., 2019; Woolf, 2020). It also underscores the importance of providing appropriate tools and resources to support a responsible and more effective use of AI in education.

Consistent with the theoretical framework, findings indicate that students are more likely to use AI when they consider it trustworthy, safe, and when they feel able to control its use in accordance with academic norms. Contrary to previous studies that highlighted gender differences in the adoption of digital technologies (Morris et al., 2005), this study found no significant differences between genders in perceptions and use of AI.

The challenges identified by students who have not received specific training indicate a significant gap in institutional support. This is consistent with the literature that highlights the need for ongoing training and institutional support for the successful use of AI in higher education (Luckin et al., 2016; Holmes et al., 2019). Institutions should invest in developing the capacities of academic staff and students to take full advantage of AI tools, thereby minimizing technical and perceptual barriers. Simultaneously, students’ positive perception of institutional policies related to AI is an indication that clear and well-communicated policies can positively influence the use of new technologies. This is consistent with studies that highlight the importance of a stable institutional framework for promoting technological innovation in the academic environment (Selwyn, 2016; Redecker, 2017).

Overall, this study contributes to the growing body of research on AI in higher education, providing an overview of the Kosovo context—a transitioning country facing numerous challenges. The study confirms that effective use of AI extends beyond students’ technical knowledge; it is also a process that requires broad institutional support, including clear policies, appropriate training, and sufficient resources. Without adequate policy infrastructure, training, and clarity, the effective use of AI is hindered. These results also align with students’ perceptions of AI in higher education reported by Pitts et al. (2025), where the benefits include support for research and access to information, while the main concerns relate to academic integrity and the loss of critical thinking skills. Moreover, the study indicates that frequent and conscious use of AI is closely linked to positive perceptions and active participation in specific courses, highlighting the importance of including training in academic curricula. No significant gender-based differences were identified in this study, unlike findings from earlier research, which suggests a more equal educational environment for students of both genders. The challenges and opportunities identified here reflect broader patterns found in developing or post-socialist education systems.

To conclude, the results of this study indicate that the use of AI in higher education in Kosovo is still emerging, and that positive perceptions and active student involvement are closely linked to the extent of its use. Although Kosovo presents a unique case, its institutional gaps in AI policies and students’ digital competences mirror those found in other low- and middle-income countries. As such, the findings of this study are valuable for the international higher education community, especially for countries aiming to integrate AI responsibly and inclusively.

The application of the theoretical framework in this study has helped to unravel the main factors influencing the use of AI in an educational system in transition like Kosovo. Finally, the study highlights that institutional policies have a supporting role in improving the use of AI, requiring a comprehensive approach that includes policies, training, and infrastructure – critical steps for higher education institutions that aim to successfully integrate new technologies into teaching and administrative processes.

4.1 Limitations

This study presents certain limitations that should be acknowledged. Firstly, it was conducted exclusively in the context of Kosovo and involved only students from Kosovo; however, this does not necessarily restrict the relevance of the findings to a wider context. Additionally, the data were collected solely through student self-reports, a method that can introduce subjectivity. Future research should extend this investigation by incorporating faculty perspectives, longitudinal tracking of AI-related learning outcomes, and comparative studies between developing and developed educational systems. Qualitative approaches, such as interviews and focus groups, could further illuminate how AI shapes students’ engagement, development, autonomy, and academic identity, particularly in resource-limited environments.

Data availability statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Ethics statement

Ethical approval was not required for the study involving humans in accordance with the local legislation and institutional requirements. Written informed consent to participate in this study was not required from the participants or the participants’ legal guardians/next of kin in accordance with the national legislation and the institutional requirements.

Author contributions

BÇe: Supervision, Validation, Writing – original draft, Conceptualization. KB: Conceptualization, Writing – review & editing, Methodology. BÇi: Writing – review & editing, Software, Project administration. FR: Software, Visualization, Writing – review & editing. FZ: Writing – review & editing, Validation, Formal analysis. LH: Writing – review & editing, Visualization, Validation.

Funding

The author(s) declared that financial support was not received for this work and/or its publication.

Acknowledgments

We would like to thank the students who voluntarily participated in the study and shared their perspectives with openness and honesty. We also acknowledge the support of the university administrative staff who facilitated access to institutional data and helped distribute the survey during the data collection phase. Their cooperation was essential to the successful completion of this research.

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that generative AI was not used in the creation of this manuscript.


Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Adel, A., Ahsan, A., and Davison, C. (2024). ChatGPT promises and challenges in education: Computational and ethical perspectives. Educ. Sci. 14:814. doi: 10.3390/educsci14080814

Adiguzel, T., Kaya, M. H., and Cansu, F. K. (2023). Revolutionizing education with AI: Exploring the transformative potential of ChatGPT. Contemp. Educ. Technol. 15:e429. doi: 10.30935/cedtech/13152

Ali, D., Fatemi, Y., Boskabadi, E., Nikfar, M., Ugwuoke, J., and Ali, H. (2024). ChatGPT in teaching and learning: A systematic review. Educ. Sci. 14:643. doi: 10.3390/educsci14060643

Ali, S. B., Haider, M., Samiullah, D., and Shamsy, S. (2024). Assessing students’ understanding of ethical use of artificial intelligence (AI): A focus group study. Intern. J. Soc. Sci. Entrepreneurship 4, 65–87. doi: 10.58661/ijsse.v4i3.301

Alshamy, A., Al-Harthi, A. S. A., and Abdullah, S. (2025). Perceptions of generative AI tools in higher education: Insights from students and academics at Sultan Qaboos University. Educ. Sci. 15:501. doi: 10.3390/educsci15040501

Atlas, S. (2023). ChatGPT for higher education and professional development: A guide to conversational AI. Available online at: https://digitalcommons.uri.edu/cba_facpubs/548/ (accessed March 15, 2024).

Baidoo-Anu, D., Amoako, I. O., and Owusu-Mensah, F. (2024a). Challenges of AI integration in developing countries. J. Educ. Technol. 18, 112–126.

Baidoo-Anu, D., Asamoah, D., Amoako, I., Owusu, B., and Essuman, D. (2024b). Exploring student perspectives on generative artificial intelligence in higher education learning. Discover Educ. 3:98. doi: 10.1007/s44217-024-00173-z

Barrett, A., and Pack, A. (2023). Not quite eye to A.I.: Student and teacher perspectives on the use of generative artificial intelligence in the writing process. Intern. J. Educ. Technol. Higher Educ. 20:59. doi: 10.1186/s41239-023-00427-0

Becker, S. A., Brown, M., Dahlstrom, E., Davis, A., DePaul, K., Diaz, V., et al. (2018). Horizon report: 2018 higher education edition. Dallas, TX: EDUCAUSE, 1–54.

Bin-Nashwan, S. A., Sadallah, M., and Bouteraa, M. (2023). Use of ChatGPT in academia: Academic integrity hangs in the balance. Technol. Soc. 75:102370. doi: 10.1016/j.techsoc.2023.102370

Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. Intern. J. Educ. Technol. Higher Educ. 20:84. doi: 10.1186/s41239-023-00408-3

Chan, C. K. Y., and Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. Intern. J. Educ. Technol. Higher Educ. 20:43. doi: 10.1186/s41239-023-00411-8

Chiu, T. K. F. (2021). A holistic approach to artificial intelligence (AI) curriculum for K–12 schools. TechTrends 65, 796–807. doi: 10.1007/s11528-021-00637-1

Chung, J., Henderson, M., Pepperell, N., Slade, C., and Liang, Y. (2024). “Student perspectives on AI in higher education: Student survey,” in Student Perspectives on AI in Higher Education Project. doi: 10.26180/27915930

Clarke, V., and Braun, V. (2017). Thematic analysis. J. Positive Psychol. 12, 297–298. doi: 10.1080/17439760.2016.1262613

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quart. 13, 319–340. doi: 10.2307/249008

Delcker, J., Heil, J., Ifenthaler, D., Seufert, S., and Spirgi, L. (2024). First-year students’ AI-competence as a predictor for intended and de facto use of AI-tools for supporting learning processes in higher education. Intern. J. Educ. Technol. Higher Educ. 21:18. doi: 10.1186/s41239-024-00452-7

Delgado, M., García, A., and Pérez, J. (2023). Artificial intelligence in higher education: Opportunities and challenges. Intern. J. Educ. Res. 121:102163.

George, D., and Mallery, P. (2003). SPSS for Windows step by step: A simple guide and reference, 4th Edn. Boston, MA: Allyn & Bacon.

Godwin-Jones, R. (2022). Partnering with AI: Intelligent writing assistance and instructed language learning. Lang. Learn. Technol. 26, 5–24.

Holmes, W., Bialik, M., and Fadel, C. (2019). Artificial intelligence in education: Promise and implications for teaching and learning. Boston, MA: Center for Curriculum Redesign.

Huang, W., Jiang, J., King, R. B., and Fryer, L. K. (2025). Chatbots and student motivation: A scoping review. Intern. J. Educ. Technol. Higher Educ. 22:26. doi: 10.1186/s41239-025-00524-2

Kohnke, L., Moorhouse, B. L., and Zou, D. (2023). Exploring generative artificial intelligence preparedness among university language instructors: A case study. Comp. Educ. Artif. Intell. 5:100156. doi: 10.1016/j.caeai.2023.100156

Krouska, A., Troussas, C., and Sgouropoulou, C. (2022). Artificial intelligence in higher education: A systematic review. Educ. Inform. Technol. 27, 1707–1735.

Kumar, A. H. S. (2023). Analysis of ChatGPT tool to assess the potential of its utility for academic writing in biomedical domain. BEMS Rep. 9, 24–30. doi: 10.5530/bems.9.1.5

Luckin, R. (2017). Towards artificial intelligence-based assessment systems. Nat. Hum. Behav. 1:0028. doi: 10.1038/s41562-016-0028

Luckin, R., Holmes, W., Griffiths, M., and Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education. London: Pearson Education.

Morris, M. G., Venkatesh, V., and Ackerman, P. L. (2005). Gender and age differences in employee decisions about new technology: An extension to the theory of planned behavior. IEEE Trans. Eng. Manag. 52, 69–84. doi: 10.1109/TEM.2004.839967

Pitts, G., Marcus, V., and Motamedi, S. (2025). “Student perspectives on the benefits and risks of AI in education,” in Proceedings of the 2025 ASEE Annual Conference & Exposition (New York, NY: ACM). doi: 10.18260/1-2-57693

Rahman, M. M., and Watanobe, Y. (2023). ChatGPT for education and research: Opportunities, threats, and strategies. Appl. Sci. 13:5783. doi: 10.3390/app13095783

Redecker, C. (2017). European framework for the digital competence of educators: DigCompEdu. Luxembourg: Publications Office of the European Union.

Ruano-Borbalan, J.-C. (2025). The transformative impact of artificial intelligence on higher education: A critical reflection on current trends and future directions. Eur. J. Educ. 60, 123–135. doi: 10.1177/2212585X251319364

Russell, S., and Norvig, P. (2010). Artificial intelligence: A modern approach. London: Pearson Education.

Sallam, M. (2023). ChatGPT utility in healthcare education, research, and practice: Systematic review on the promising perspectives and valid concerns. Healthcare 11:887. doi: 10.3390/healthcare11060887

Selwyn, N. (2016). Education and technology: Key issues and debates, 2nd Edn. London: Bloomsbury Academic.

Slimi, H., Mezni, H., and Bouslama, F. (2025). Ethical issues in the use of AI in academia. Ethics Inform. Technol. 27, 33–48.

Suh, A., and Ahn, J. (2022). Development and validation of a scale measuring student attitudes toward artificial intelligence. SAGE Open 12:21582440221100463. doi: 10.1177/21582440221100463

Sweeney, L. (2003). That’s AI?: A history and critique of the field. Pittsburgh, PA: School of Computer Science, Carnegie Mellon University.

Tauginienė, L., Gaižauskaitė, I., Glendinning, I., Kravjar, J., Ojsteršek, M., Ribeiro, L., et al. (2018). Glossary for academic integrity (revised version). Brno: European Network for Academic Integrity.

Venkatesh, V., Morris, M. G., Davis, G. B., and Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quart. 27, 425–478. doi: 10.2307/30036540

West, D. M. (2018). What is artificial intelligence? Washington, DC: Brookings Institution.

Woolf, B. P. (2020). “AI in education,” in AI for everyone? Critical perspectives, ed. M. F. Wiser (Cambridge, MA: MIT Press), 139–158.

Xia, Q., Chiu, T. K. F., Lee, M., Temitayo, I., Dai, Y., and Chai, C. S. (2022). A self-determination theory design approach for inclusive and diverse artificial intelligence (AI) K–12 education. Comp. Educ. 189:104582. doi: 10.1016/j.compedu.2022.104582

Zawacki-Richter, O., Marín, V. I., Bond, M., and Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? Intern. J. Educ. Technol. Higher Educ. 16:39. doi: 10.1186/s41239-019-0171-0

Keywords: artificial intelligence, higher education, practices, challenges, policies

Citation: Çerkini B, Bajraktari K, Çibukçiu B, Ramadani F, Zejnullahu F and Hajdini L (2025) Artificial intelligence in higher education: student perspectives on practices, challenges, and policies in a transitional context. Front. Educ. 10:1700056. doi: 10.3389/feduc.2025.1700056

Received: 05 September 2025; Revised: 07 November 2025; Accepted: 28 November 2025;
Published: 18 December 2025.

Edited by:

Adan Lopez-Mendoza, Universidad Autónoma de Tamaulipas, Mexico

Reviewed by:

Prudencia Gutiérrez-Esteban, University of Extremadura, Spain
Patricia Vazquez, Monterrey Institute of Technology and Higher Education (ITESM), Mexico

Copyright © 2025 Çerkini, Bajraktari, Çibukçiu, Ramadani, Zejnullahu and Hajdini. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Blerina Çibukçiu, cibukciu@uni-sofia.bg; blerina.cibukciu@uni-pr.edu
