
ORIGINAL RESEARCH article

Front. Educ., 07 January 2026

Sec. Higher Education

Volume 10 - 2025 | https://doi.org/10.3389/feduc.2025.1676900

This article is part of the Research Topic “Artificial Intelligence in Educational and Business Ecosystems: Convergent Perspectives on Agency, Ethics, and Transformation.”

Understanding higher education students’ reluctance to adopt GenAI in learning in Latvia and Ukraine

Anatolijs Prohorovs1, Kristīne Užule2, Olga Tsaryk3*
  • 1Faculty of Business and Economics, RISEBA University of Applied Sciences, Riga, Latvia
  • 2EKA University of Applied Sciences, Riga, Latvia
  • 3Department of Foreign Languages and ICT, West Ukrainian National University, Ternopil, Ukraine

This study investigates the factors driving students’ reluctance to adopt generative artificial intelligence (GenAI) in higher education in Latvia and Ukraine. Although GenAI tools are rapidly diffusing across educational settings, little empirical research has examined why students choose not to use them or how these reasons differ across institutional and demographic contexts. A cross-sectional survey (N = 945) was conducted across three universities, and data were analysed using descriptive statistics and binary logistic regression. The findings show that 22% of students do not use GenAI in their studies, with part-time students, distance learners, older students, and graduate students showing the highest rates of non-use. The leading reasons for non-adoption were disbelief in GenAI effectiveness (46.9%), insufficient information (26.5%), and lack of knowledge about how to use GenAI tools (19.7%). Regression results indicate that learning format, age, and disciplinary affiliation significantly predict GenAI use in Ukraine, whereas only learning format predicts use in Latvia, suggesting that institutional context moderates students’ technological engagement. These findings provide one of the first cross-national, quantitatively grounded analyses of GenAI non-use in Europe and Central and Eastern Europe. They highlight the importance of targeted institutional interventions, including structured training, explicit guidelines, and discipline-specific support, to ensure equitable and informed GenAI adoption and to better prepare students for an AI-transformed labor market.

1 Introduction

As GenAI continues to evolve, it is expanding rapidly across various sectors of the economy (Raza et al., 2024) and within specific industries, establishing itself as a pivotal trend. When applied effectively, it enhances productivity and unlocks new opportunities across diverse fields (Raza et al., 2024). GenAI has transformed higher education (Rasul et al., 2024), which is why it has been increasingly used by university students in their learning processes (Zaim et al., 2024). This reflects a new dynamic in the educational framework, described as the “teacher – AI – student” interaction (Ivanov and Soliman, 2023). With AI becoming more prevalent in the workplace (Simms, 2025), it is vital to encourage students to master GenAI for specific professional and academic purposes. This shift highlights the growing importance of GenAI literacy. For instance, in nursing education, GenAI literacy encompasses understanding and skillfully applying AI tools in learning environments. It extends beyond using platforms like ChatGPT for content generation and assessment to include critically evaluating the accuracy, ethical implications, and educational relevance of AI-generated outputs (Simms, 2025; Chan and Colloton, 2024). Developing such literacy requires students to actively engage with these tools and cultivate their ability to navigate complex ethical, technical, and practical considerations.

Students’ perceptions and use of GenAI tools, and thus the development of GenAI literacy, depend on various factors (Keuning et al., 2024). Some learning contexts have been found to create barriers that negatively shape students’ views of GenAI. While GenAI adoption in the workplace can be hindered by concerns such as data privacy, inaccuracies (“hallucinations”), and limited adaptability across sectors (Raza et al., 2024), in the field of education, barriers include skepticism and distrust of GenAI’s reliability and effectiveness (Yusuf et al., 2024), ethical concerns (Djokic et al., 2024; Prohorovs et al., 2024b), lack of awareness (Griffiths et al., 2024), inadequate skills and technological literacy (Chan and Hu, 2023), inertia and resistance to change (Mittal et al., 2024), educators’ attitudes toward its use (Chocarro et al., 2021) and students’ attitudes toward GenAI (Sallam et al., 2024), technical limitations (Wong and Looi, 2024), and fears that overreliance on GenAI might diminish critical and independent thinking (Li et al., 2024).

Several other factors hindering the adoption of GenAI in education include concerns about plagiarism and text ownership. The advanced nature of GenAI tools makes it difficult to distinguish between original student work and machine-generated content, raising questions about the foundations of academic integrity (Kumar and Mindzak, 2024; Hutson, 2024). For example, a study found that although 75% of students used GenAI, they did not acknowledge its use due to fears of academic consequences, unclear institutional guidelines, inconsistent enforcement of policies, and peer influence, all of which pose significant barriers to declaring AI use (Gonsalves, 2024). Moreover, when both human and AI collaborate in writing, defining authorship and original contribution becomes challenging, which calls into question traditional notions of originality and intellectual integrity, prompting a reexamination of what constitutes plagiarism in the digital age (Hutson, 2024).

In addition, factors such as technostress (Dai, 2025) and the potential overreliance on GenAI may limit student engagement in key human interactions and group work, hindering emotional and creative development (Creely and Blannin, 2025). Both the weaknesses and risks associated with GenAI have been identified as obstacles to its integration into education (Ivanov et al., 2024) and as factors shaping students’ perceptions of it (Sallam et al., 2024). Therefore, achieving a balance between utilizing AI as a tool and maintaining human connection in education is vital (Creely and Blannin, 2025), but this can be difficult, especially when students lack the expertise necessary to adequately assess GenAI-generated content, potentially leading to the spread of misinformation.

Additional barriers include students’ lack of proper prompting skills to communicate effectively with GenAI, the inability to generate satisfactory responses after multiple attempts, unclear value regarding GenAI’s relevance to specific learning goals (such as developing personal professional competencies), and potential disruptions to group work dynamics due to a preference for human interaction over human-AI collaboration (Dai, 2025). Furthermore, there is often a mismatch between GenAI-generated outputs and the specific requirements of group projects, as well as a lack of contextual specificity in the ideas produced, making them too broad or difficult to apply to specific contexts (Dai, 2025).

To overcome these barriers, positive attitudes toward GenAI tools are crucial for their adoption (Chan and Tsi, 2024). Consequently, negative attitudes may explain why some students are hesitant to use GenAI. Ivanov et al. (2024) suggest that improving both lecturers’ and students’ perceptions of GenAI’s benefits can foster more favorable attitudes, stronger subjective norms, and greater perceived behavioral control in using these tools within higher education. Achieving this requires implementing effective strategies to enhance awareness and understanding of GenAI applications in teaching and research.

One challenge in current scientific discussions is the assumption that all students will inevitably adopt GenAI, overlooking their personal choice in deciding whether or not to use it (Dai et al., 2023).

However, despite the rapid growth of GenAI research in higher education, we have not found any empirical studies that systematically examine the specific reasons why students decide not to use GenAI in their studies, nor any studies that quantitatively assess the relative weight of these reasons or evaluate differences between student groups, disciplines, and forms of learning. Existing studies mainly focus on student perceptions, ethical considerations, or intentions to use GenAI, leaving limited data on the actual barriers to its adoption. To our knowledge, this study is one of the first to present a quantitative analysis of the main reasons for not using GenAI, and these reasons are assessed alongside statistically verified differences using logistic regression. Particular attention is paid to Latvia and Ukraine, as these two higher education systems operate in significantly different institutional and digital contexts: Latvia, as an EU member state, benefits from a relatively stable digital infrastructure and consolidated governance, while Ukraine faces fragmented institutional capacity and wartime disruptions. This contrast allows us to explore how institutional ecosystems shape patterns of GenAI non-use and contribute new insights to broader discussions in Europe and Central and Eastern European countries about the integration of artificial intelligence into higher education.

Based on the identified gap in research, the aim of this study was to identify the reasons why Ukrainian and Latvian students might be reluctant to use GenAI in their learning. The aim is justified due to the gaps in empirical research on higher education students’ perceptions of the use of GenAI in Latvia and Ukraine. This issue is of critical importance, as the consistent and appropriate use of GenAI can improve the quality and productivity of student learning, enhance the educational process, and better prepare students for the demands of the job market (Prohorovs et al., 2024a). To this end, the study addresses three main research questions: First, what proportion of students from various universities in two different countries do not use GenAI in their studies? Second, which categories of students do not use GenAI in their academic activities? Third, what are the main reasons for the lack of GenAI use among students from the universities and countries involved in this study, both overall and with a breakdown by student categories?

The findings hold both practical and academic value. From a practical, organizational, and educational-methodological standpoint, the results offer valuable insights for university administrations, faculty members, and regulatory educational bodies regarding the reasons some students hesitate to use GenAI in their learning. Methodologically, we have developed a framework to measure the primary factors influencing why university students do not incorporate GenAI into their educational processes.

From a theoretical perspective, and in line with previous findings on the behaviors and views of engineering students (Dai, 2025), this study provides insights into Latvian and Ukrainian students’ values and perceptions of the limitations of GenAI. The findings will be analyzed through the lenses of the Theory of Planned Behavior (TPB), Self-Efficacy Theory (SET) and Unified Theory of Acceptance and Use of Technology (UTAUT), which have previously been used to explain the adoption of GenAI.

The article is structured as follows: following the Introduction, the second section reviews the relevant literature, the third section presents the research methodology and data, and the fourth section outlines the results of the study. This is followed by a discussion, recommendations, research limitations, and a conclusion.

2 Background

2.1 Problems in students’ application of GenAI

Despite numerous benefits, several challenges impede the effective application of GenAI in educational contexts. A prominent barrier is the lack of digital literacy and technological skills among students. Chan and Hu (2023) emphasize that to fully utilize GenAI tools, students must possess a foundational level of technological proficiency. Those who lack these skills may feel overwhelmed and reluctant to engage with GenAI, ultimately hindering their learning experiences. Educational institutions must prioritize digital literacy programs to equip students with the necessary skills to navigate the evolving technological landscape (Chan and Hu, 2023). Wong and Looi (2024) also highlight the significant technical limitations that students encounter, including inadequate internet connectivity and access to modern devices. These barriers can severely restrict students’ ability to effectively engage with GenAI tools, diminishing the overall impact of these technologies on their educational experiences. Addressing these technical challenges is crucial to ensuring equitable access to GenAI resources across diverse student populations (Wong and Looi, 2024).

GenAI tools such as ChatGPT may also generate incorrect or outdated information, which makes some students reluctant to rely on them for academic tasks. Accuracy is a particular concern in fields where up-to-date knowledge is crucial, such as hospitality management (Ali and OpenAI, 2023).

The role of educators is pivotal in the successful integration of GenAI into teaching practices. Mittal et al. (2024) argue that insufficient understanding and support from instructors can inhibit the effective use of GenAI by students. When educators are not adequately trained to utilize these technologies, students miss out on essential guidance and mentorship. Therefore, professional development opportunities focusing on GenAI are vital for instructors to foster a supportive learning environment that encourages student engagement (Mittal et al., 2024).

Djokic et al. (2024) underscore the ethical implications and privacy concerns associated with the use of GenAI tools in education. Students often express anxiety regarding the security of their personal data and the ethical implications of relying on GenAI technologies for their academic work. This apprehension can lead to reluctance in fully embracing GenAI, as students may fear the potential misuse of their information and the implications of diminished critical thinking skills (Djokic et al., 2024). Yu and Guo (2023) further emphasize that while GenAI has transformative potential, its adoption in educational institutions is often slowed by these ethical dilemmas and concerns about transparency and accountability. They argue that addressing these issues is crucial for fostering trust and wider acceptance of GenAI tools among students. The risk of academic dishonesty and ethical dilemmas is also a major factor. ChatGPT can easily generate responses, essays, and reports, which could lead to plagiarism or reduce the authenticity of students’ work, making students cautious or hesitant to use such tools (Vazquez-Cano et al., 2023; Ali and OpenAI, 2023).

2.2 Why some students do not use GenAI

Despite the extensive benefits offered by GenAI, various reasons contribute to the hesitancy among students to utilize these technologies. A significant barrier is the skepticism surrounding the effectiveness and reliability of GenAI tools. Yusuf et al. (2024) note that this skepticism often arises from previous negative experiences with technology or a general mistrust of GenAI systems. Such attitudes can discourage students from experimenting with GenAI, ultimately limiting their exposure to the potential benefits these technologies offer (Yusuf et al., 2024). Concerns related to academic integrity, including fears of cheating and plagiarism, significantly influence students’ attitudes toward GenAI. Djokic et al. (2024) highlight that many students worry about the implications of using GenAI-generated content for their assignments, fearing potential academic misconduct. This concern can serve as a substantial deterrent to the adoption of GenAI in educational contexts (Djokic et al., 2024). Griffiths et al. (2024) emphasize that many students remain unaware of the functionalities and benefits of GenAI tools. Without adequate exposure and education regarding these technologies, students may not fully appreciate their potential to enhance learning experiences and academic outcomes. Educational institutions should prioritize awareness campaigns to inform students about the advantages of GenAI (Griffiths et al., 2024).

Many students exhibit a preference for traditional study methods and are resistant to adopting new technologies. Mittal et al. (2024) discuss the need for educational institutions to cultivate a culture of innovation and adaptability among students to address this issue. Encouraging students to embrace change and explore new methodologies can enhance their learning experiences and prepare them for the future workforce (Mittal et al., 2024). Chocarro et al. (2021) underscore the importance of educators’ attitudes toward GenAI in shaping students’ willingness to engage with these technologies. If teachers demonstrate resistance or lack confidence in using GenAI tools, it can create an environment where students feel discouraged from utilizing these resources. Thus, fostering a positive attitude among educators is essential for promoting student engagement with GenAI (Chocarro et al., 2021). There is a growing concern that over-reliance on GenAI tools might inhibit students’ ability to think critically and independently. GenAI tools are highly dependent on the quality of input prompts, and students might worry that using these tools could diminish their problem-solving skills (Li et al., 2024). In a study that involved 1,465 students from 512 universities in the USA, UK and Germany, it was found that students from social sciences, arts and humanities had a more distant relationship with GenAI compared to students from computer science and related fields (Bewersdorff et al., 2025). Another reason might relate to how GenAI tools are used in the learning process. In one study of entrepreneurship students’ use of GenAI tools, it was found that as GenAI use becomes more widespread in the learning environment, students use GenAI tools more frequently (Zulfiqar et al., 2025).

2.3 Theoretical perspectives on reluctance to use GenAI by students

2.3.1 Choice of theories

The increasing integration of GenAI into educational contexts is associated not only with benefits and opportunities but also with challenges, among which student reluctance to adopt such tools has emerged as a significant barrier. Understanding this reluctance is crucial for the academic and administrative staff of higher education institutions (HEIs) seeking to promote meaningful engagement with GenAI. While various behavioral theories could potentially explain this reluctance, this study draws on TPB, SET, and UTAUT. Before reviewing these theories, it is important to note that several other theories that could have been used to explain students’ reluctance to adopt GenAI, such as the Theory of Reasoned Action (TRA), Cognitive Load Theory (CLT), Resistance to Change Theory (RCT), and Social Exchange Theory (SocET), are not used in this study, mostly because of the type of data obtained and for the reasons provided below.

TRA, developed by Fishbein and Ajzen (1975), explains how individuals’ attitudes and subjective norms shape their behavioral intentions and actions, assuming rational decision-making where individuals weigh consequences before acting. It emphasizes two key principles: compatibility and behavioral intention (Mishra et al., 2014), with behaviors influenced by a positive congruence between attitudes and intentions (Kong and Wang, 2021). TRA has been widely applied across sectors, such as banking, education, and technology, including promoting programming in schools (Mishra et al., 2014; Kong and Wang, 2021) and cyberspace activities (Wu, 2020). However, TRA assumes complete volitional control, overlooking external factors like access to resources or personal confidence. For instance, it cannot address barriers such as reluctance to use smart home devices due to security concerns (Klobas et al., 2019) or constraints that might limit students’ use of GenAI despite positive intentions (Wu, 2020). For this reason, as well as due to its relatedness to TPB, which was an expansion of TRA, this theory was not considered here. While TRA provides a robust framework for understanding behaviors driven by attitudes and norms, TPB’s inclusion of barriers such as access, policies, and confidence ensures broader applicability. Retaining both theories could be redundant due to their conceptual overlap at some level. Another reason for excluding TRA is that it is already incorporated within the UTAUT model, which is noticeably more powerful in explaining behavior than TRA (Gonzalez-Tamayo et al., 2024; Liu et al., 2023). UTAUT will be considered in this study.

CLT, which addresses the issue of the impact of information processing demands on learning processes and outcomes, does not apply here because this study did not measure cognitive load or any other cognitive processes required to parse, interpret and encode the use of GenAI tools. In earlier studies, for example, CLT was used to examine how the design of the study affected the memory load (Sweller, 2019). At the same time, while CLT suggests that a high cognitive load can deter the use of complex tools, many GenAI platforms are designed to simplify tasks, requiring minimal prior knowledge or effort. For instance, students can quickly generate essay outlines or research summaries using user-friendly interfaces. This undermines the claim that cognitive overload is a significant barrier, as GenAI often reduces rather than increases cognitive demands. More importantly, CLT fails to address motivational or affective barriers, such as students’ skepticism about the ethical implications of using AI for academic tasks or a lack of confidence in their ability to use the tool effectively. These non-cognitive factors play a more significant role in shaping students’ reluctance to adopt GenAI tools.

RCT, which emphasizes fear of the new or attachment to familiar methods, is less relevant since young students, as the primary subjects of this study, are generally more open to experimenting with new technologies than older populations or institutional frameworks. This theory is more effective in explaining resistance to change in established organizational settings, where existing routines and practices are deeply ingrained. RCT posits that students may resist adopting GenAI due to a preference for familiar study methods or fear of disrupting established routines. In higher education, students often face practical and ethical concerns about GenAI use, such as fears of academic misconduct accusations or uncertainty about how AI-generated content aligns with institutional guidelines. These concerns might stem from a lack of clarity or support rather than an inherent aversion to change. Furthermore, if students are unaware of GenAI tools or fail to see their relevance to specific academic tasks, their non-use cannot be attributed to resistance but rather to knowledge gaps or inadequate communication from educators.

RCT also fails to address external influences, such as institutional policies and infrastructure, that significantly affect students’ adoption of GenAI. In many universities, limited access to GenAI tools or inconsistent messaging about their acceptable use might lead students to avoid them—not out of resistance, but due to unclear expectations or lack of support. For example, a student might avoid using GenAI for essay writing if their professor provides no guidance on whether it constitutes plagiarism. Therefore, while RCT highlights the psychological discomfort of change, in some cases, it might also overlook broader systemic and contextual barriers that influence students’ technology adoption in higher education.

SocET relies on the assumption that individuals consciously weigh the potential costs and benefits of their actions before making decisions. However, in the context of students’ disinterest in GenAI, such a structured cost–benefit analysis may not occur. For example, students may avoid GenAI not because they perceive it as costly or low in benefit but because they are unaware of its potential applications or have not been guided on how to integrate it effectively into their academic routines. The study’s focus on subjective responses, such as feelings of uncertainty or lack of confidence, reflects emotional and psychological factors rather than the calculated exchanges central to SocET. These affective responses cannot be neatly categorized as “costs” or “benefits,” making the theory an imperfect lens for understanding such behaviors. Additionally, SocET assumes that individuals act rationally to maximize rewards and minimize losses, but students’ engagement with GenAI often involves factors that transcend rational cost–benefit reasoning. For instance, cultural perceptions of academic integrity, peer influence, or a lack of institutional support might discourage students from using GenAI, even if it could demonstrably save time or improve performance. These barriers reflect social and contextual dynamics rather than the transactional logic central to SocET.

2.3.2 Relevance of theory of planned behavior (TPB)

TPB, developed by Ajzen (1991), offers a reasonable framework for accounting for students’ possible reluctance to use generative AI (GenAI) in higher education. Consistent with TPB, behavior is influenced by three essential factors: attitudes, subjective norms, and perceived behavioral control (Ivanov et al., 2024). In tandem, these factors form or affect behavioral intentions and can be used to predict actual or possible actions. In general, TPB (Gonsalves, 2024; Ivanov et al., 2024) was employed to understand the lack of reported GenAI use, as it incorporates attitudes, subjective norms, and perceived behavioral control (Ivanov et al., 2024), which are key to understanding learning and teaching (Ivanov and Soliman, 2023). More specifically, TPB has been used to account for a wide range of students’ and young people’s behaviors, for example, making calls while riding electric bikes (Liu and Chen, 2023), texting while driving (McBride et al., 2020), wasting food (Akhter et al., 2024), having environmentally friendly intentions toward the environment overall (De Leeuw et al., 2015), oceans (Qu et al., 2023), recycling (Mahmud and Osman, 2010), and bioenergy (Halder et al., 2016), engaging in exercise activities (Lu et al., 2022), safe administration and use of medication (Lapkin et al., 2015), performing cardiopulmonary resuscitation on strangers (Xia et al., 2024), having entrepreneurial intentions (Razi-ur-Rahim et al., 2024), etc. By examining how these components interact in the context of technology adoption, TPB can provide some valuable insights into psycho-social factors that may contribute to students’ reluctance to use GenAI tools. Earlier, TPB was used not only to explain the usage of technology in general, as in cases of technology acceptance for e-commerce among young people (Ruiz-Herrera et al., 2023) and teenagers’ spending time in virtual worlds (Mäntymäki et al., 2014), but also for understanding how GenAI can be used in the study process by students (Ivanov et al., 2024), including cheating intentions (Greitemeyer and Kastenmüller, 2024).

Attitudes affect behavior, including GenAI use. Positive attitudes towards GenAI promote its use among students (Chinene et al., 2024). Students report that GenAI saves time and provides new perspectives (Chinene et al., 2024), improves effectiveness (Law, 2024), and helps to bridge knowledge gaps (Huang et al., 2024). These attitudes are internally connected to such factors as the chosen profession (Chinene et al., 2024), costs and learning needs (Kohnke, 2024), technological proficiency and trust in AI tools (Dai, 2025), privacy concerns (Law, 2024) as well as general views of their assignments (Dai, 2025). These attitudes are externally connected to subjective norms, or the perceived expectations of others. Subjective norms are induced and endorsed by others (Islam et al., 2024) with the focus on the key value of the expected behavior (Sannusi et al., 2024). In the case of technology use, they are connected to perceived usefulness, relevance to work, acceptance, and the purposes of use in the work environment (Islam et al., 2024) as well as the perceived value of technologies (Kleine et al., 2025). Higher education environments are deeply social, and peer or faculty attitudes significantly shape technology adoption. Organizational culture plays a critical role in reinforcing these norms (Islam et al., 2024). Therefore, if students believe their instructors, institutions or peers encourage the use of GenAI or view its use as ethically acceptable, they may feel socially supported in the development and application of their GenAI literacy. At the same time, ambiguous or restrictive university policies on AI usage can create confusion, leading students to act with caution by avoiding GenAI altogether. One such concern is the violation of academic integrity (Rowland, 2023), which might be insufficiently addressed by HEIs. Perceived behavioral control refers to the belief in one’s ability to successfully use a tool or perform a task irrespective of social influences (Mandal et al., 2025). The higher one’s belief in their own behavioral control over a certain action, the higher the likelihood of engagement in such activities, for example, in gaming (Mandal et al., 2025) or general technology use in an educational setting (Ivanov et al., 2024). Behavioral control is critical in shaping and supporting behavioral intentions, eventually leading to behavior (Ivanov et al., 2024).

While TPB provides a valuable lens for understanding the psychological and social dimensions of students’ use of GenAI, it is not without limitations. The theory assumes that individuals make deliberate, rational decisions based on attitudes, norms, and perceived control, which might result in the exclusion of implicit factors shaping behavior.

2.3.3 Relevance of self-efficacy theory (SET)

The Self-Efficacy Theory rests on four components – performance achievements, experiences of others, verbal encouragement, and emotional states (Collie et al., 2024; Kong et al., 2024; Relente and Capistrano, 2025). More recently, the concept of perceived self-efficacy has been broadened in relation to performance achievements and emotional states; specifically, performance achievements have been expanded to encompass events associated with personal experiences, while emotional states have been expanded to include the overall psychological state (for review, see Bayır and Aylaz, 2021). Self-efficacy affects an individual’s selection of activities, level of effort, and persistence, and it also shapes the avoidance of activities that are perceived to exceed one’s capabilities. In other words, self-efficacy refers to an individual’s capability and self-belief, reflecting their confidence in effectively performing a task, and is considered a component of motivation (Bandura, 2001). Increasing self-efficacy is important because the higher it is, the longer people persist in handling challenges and pursuing their goals (Liu et al., 2025; Relente and Capistrano, 2025), which ultimately leads to a change in behavior (Bayır and Aylaz, 2021) through the perception of the capacity to control behavior and by stimulating new choices (Trinh et al., 2025).

In the context of technology, including GenAI, self-efficacy has been empirically identified as a key predictor of acceptance and use of technology (Collie et al., 2024; Kong et al., 2024), for example, the usage of open access online resources (Trinh et al., 2025). Lower self-efficacy levels have been linked to the perceived difficulty of using Internet resources and reluctance to engage with them (Lew et al., 2020; Trinh et al., 2025). Similarly, lower self-efficacy was found to be an impeding factor in the use of facial recognition technology (Liu et al., 2022) and digital entrepreneurship intention (Biu and Duong, 2024). Such lower self-efficacy levels in case of technology use were associated with heightened anxiety (Liu et al., 2022), higher levels of technostress (Biu and Duong, 2024) and higher concern levels over privacy risks (Liu et al., 2022). Consequently, if students perceive tasks as significantly above their skills, their motivation is expected to decrease, which is why the academic environment might be an important factor in either stimulating or hindering the use of GenAI. Therefore, Wallace and Kernozek (2017) suggest that academic staff should attempt to prepare academic activities in engaging ways, implementing experiential learning and ensuring that all four components of self-efficacy shape learning excitement.

However, there are studies suggesting that in academic settings, a higher level of self-efficacy may lead to reduced use of GenAI. For example, in Egypt, university students with higher levels of self-efficacy were more reluctant to resort to ChatGPT assistance, and this was further associated with lower levels of procrastination and higher levels of orientation towards learning achievements among students (Daha and Altelwany, 2025). In other words, higher achievers with higher levels of motivation were more reluctant to use GenAI. The level of knowledge of GenAI might also have been a contributing factor: the use of GenAI may have been perceived as adding complexity to the task, which generally has a negative effect on self-efficacy and, thus, self-satisfaction. This study, however, is not the only one of its kind. In a separate study conducted in the USA with undergraduate agriculture students working on an Arduino programming task, the self-programming group, which did not use GenAI assistance, appeared to achieve better post-training results than the group that relied on it, although it was not easy to identify which component of the training, which also included classroom instruction, contributed more to the differences in results (Johnson et al., 2024). This finding was attributed to the higher level of effort that students in the former group had to invest in learning Arduino programming (Johnson et al., 2024). Considering both studies, it is possible that students who are interested in mastering skills and who are confident in their ability to complete an activity prefer academic self-reliance to GenAI assistance.

Overall, SET provides insight into the internal and external factors that affect learning, including students’ interaction with GenAI technologies. Considering self-efficacy in the context of GenAI use among students might be useful for restructuring the academic environment to account for individual differences in learning and goal orientation. It is also possible that GenAI is particularly useful for students with lower levels of knowledge and interest because, as Yilmaz and Karaoglan Yilmaz (2023) indicate, it can create an adaptive learning environment tailored to specific learner needs.

2.3.4 Relevance of unified theory of acceptance and use of technology (UTAUT)

The Unified Theory of Acceptance and Use of Technology (UTAUT) is a model encompassing eight earlier models (Gonzalez-Tamayo et al., 2024) that focuses on individuals’ perceptions and attitudes toward technology adoption (Michels et al., 2024). Its strong predictive power relative to other theories has made it widespread among researchers explaining technology-related behavior both among groups of users (Liu et al., 2023) and across organizations (Gonzalez-Tamayo et al., 2024). Developed by Venkatesh et al. (2003), the UTAUT model identifies four key constructs affecting an individual’s intention to use technology - performance expectancy, effort expectancy, social influence, and facilitating conditions (Liu et al., 2023; Michels et al., 2024) - which serve as independent variables, with behavioral intention and use behavior as outcome variables, and gender, age, experience, and voluntariness of use as moderating variables (Gonzalez-Tamayo et al., 2024; Idayani and Darmaningrat, 2024). These variables collectively shape how users perceive and decide to adopt new technologies (Michels et al., 2024). As mentioned earlier, the theory has been widely used to explain not only technology-related behavior (Liu et al., 2023) but also actions beyond the technology realm (Gonzalez-Tamayo et al., 2024; Gordon et al., 2024), including farmers’ choice of alternative tractor fuel (Michels et al., 2024), consumer choice of carbon-neutral labels (Liu et al., 2023) and of hydrogen for domestic use (Gordon et al., 2024), as well as the use of Internet- and mobile-based interventions in healthcare services (Philippi et al., 2021) and the acceptance of FinTech services in general (Bajunaied et al., 2023) and in relation to cryptocurrency transactions (Masudin et al., 2023).

The theory has also been used to account for a range of student behavior, including attitudes towards flipped classroom learning (Alyoussef, 2023), development of entrepreneurship intentions (Gonzalez-Tamayo et al., 2024) as well as acceptance of technology, such as the engagement in mobile learning (Gordon et al., 2024), the use of a digital platform (Zedemy) (Idayani and Darmaningrat, 2024), GenAI (Zaim et al., 2024), other AI-driven technologies among nursing students (Kwak et al., 2022), etc. Similar to SET, anxiety was found to be a contributing factor (Kwak et al., 2022). Within UTAUT, students are more likely to use technology, including AI-driven solutions, when the study environment focuses on enhancing performance expectancy, effort expectancy (Idayani and Darmaningrat, 2024; Kwak et al., 2022), self-efficacy to cultivate positive attitudes and reduce anxiety (Kwak et al., 2022), as well as social influence (Idayani and Darmaningrat, 2024; Sallam et al., 2024; Strzelecki, 2024) and facilitating conditions (Strzelecki, 2024). One of the most influential factors is performance expectancy - the stronger students’ belief that using GenAI will enhance their performance, the more likely they are to adopt it (Idayani and Darmaningrat, 2024; Kwak et al., 2022; Nikolic et al., 2024). Additionally, the extended UTAUT2 model highlights the importance of the factor of habit, indicating that the longer students engage with GenAI, the more likely they are to continue its use (Strzelecki, 2024). These findings point to the relationship between behavioral intentions and performed behavior, which is particularly important for the context of education.

Overall, UTAUT might be useful in considering students’ reluctance to use GenAI due to its focus on factors distinct from TPB and SET. While TPB emphasizes attitudes and perceived control, and SET centers on confidence in abilities, UTAUT highlights performance expectancy, effort expectancy, social influence, and facilitating conditions. These factors address external and contextual barriers, providing actionable insights for fostering positive perceptions and reducing resistance to GenAI adoption in educational settings.

2.4 Concluding remarks

In summary, the literature review highlighted the potential benefits and challenges of integrating GenAI into educational contexts. While GenAI has shown promise in enhancing learning outcomes, there is evidence pointing to the reluctance of students to use GenAI in certain academic contexts. To better understand these dynamics, the review established the relevance of TPB, SET, and UTAUT as robust frameworks for analyzing students’ behaviors and attitudes toward GenAI. Together, these theories offer a reasonable foundation for interpreting results and guiding strategies to maximize the effective and ethical use of GenAI in education.

The literature notes that the way students perceive and use GenAI tools may depend on many factors (Keuning et al., 2024). However, we have not found studies that examine the relative weight of the main reasons why university students do not use GenAI in their learning process. We also have not found studies that explore whether there are differences in the reasons for not using GenAI depending on student categories and forms of learning, and whether these differences are statistically significant. Furthermore, students from social sciences, arts, and humanities tend to have a more distant relationship with AI than students from computer science and related fields (Bewersdorff et al., 2025), which suggests that disciplinary affiliation also deserves attention.

3 Methodology

3.1 Sampling and data collection procedure

The methodology included a student survey that was conducted at the West Ukrainian National University (WUNU) and two Latvian universities – Riga Technical University (RTU) and RISEBA University of Applied Sciences (RISEBA).

The empirical basis of the study included two independent data sets collected from students at higher education institutions in Latvia and Ukraine. Data were collected using an online questionnaire developed in Google Forms. In Latvia, the survey was conducted between October 2023 and February 2024, and in Ukraine in May 2024.

The questionnaire was distributed via a university mailing list with an invitation to participate voluntarily. Convenience sampling was used for both countries, which is a common approach in research related to digital educational practices (e.g., Wang and Cheng, 2023; Nguyen et al., 2023). Student participation was anonymous and entirely voluntary, which minimized the potential influence of socially desirable responses and other sources of bias.

The number of respondents from the Ukrainian university who completed the questionnaire was approximately 6% of the total number of students. In Latvia, students from two universities participated in the survey, and the total share of respondents was approximately 2% of the total number of students at these universities.

3.2 Languages and cultural adaptation

To ensure the reliability and accuracy of the questions, the survey was conducted in Latvian and English for the Latvian sample and in Ukrainian for the Ukrainian sample. The wording of the questions was pre-checked for equivalence of meaning in translation and adaptation.

3.3 Tool and validation

The questionnaire included blocks of questions covering demographic characteristics, familiarity with generative artificial intelligence (GenAI), experience of using it in education, trust in technology, and ethical concerns. To increase the reliability of the instrument, pilot testing was conducted to refine the wording and structure of the questions. The selection of a limited number of items in each thematic section was dictated by the need to strike a balance between content completeness and ease of completion, which is in line with the recommendations of contemporary researchers in the field of educational analytics (e.g., Lee et al., 2024).

The Latvian and Ukrainian sample data were analyzed separately, which made it possible to take into account national and institutional differences in students’ perceptions and use of artificial intelligence technologies. This approach is widely used in cross-national studies of educational technology (see, for example, Teo et al., 2019; Scherer et al., 2023) and provides a more accurate identification of cultural and contextual factors influencing students’ readiness to use GenAI.

3.4 Respondents

The study involved 945 higher education students — 260 from Latvia and 685 from Ukraine.

3.4.1 Respondents from Latvian universities

Among Latvian students, 51.5% were male (n = 134) and 45% were female (n = 117). The majority of respondents were under the age of 25 (n = 184; 70.8%), while 27% (n = 70) reported being over 25 years old.

In terms of educational level, 84.6% of participants were enrolled in bachelor’s programs (n = 220) and 12.7% in master’s programs (n = 33).

The most widely represented were students from the faculties of business, economics and finance (63.5%), as well as engineering, manufacturing and construction (13.8%). Smaller groups were represented by social sciences (10.4%) and natural sciences, mathematics and IT (9.6%).

3.4.2 Respondents from a Ukrainian university

Women predominated in the Ukrainian sample (n = 392; 57.2%), while men accounted for 41.4% (n = 284). The vast majority of students were under 25 years of age (n = 649; 94.7%), and only 4.7% (n = 32) reported being over 25 years old.

In terms of educational level, 92.3% of respondents were enrolled in bachelor’s programs (n = 633), and 7.6% were enrolled in master’s programs (n = 52).

The most represented faculties included the Faculty of Law (n = 124; 18.1%), the Faculty of Computer Information Technologies (n = 123; 18.0%), the Faculty of Finance and Accounting (n = 118; 17.2%), the B. D. Gavrilishin Institute of International Relations (n = 102; 14.9%), and the Institute of Innovation, Environmental Management and Infrastructure (n = 99; 14.5%).

The structure of respondents by country and category is presented in Appendix 1.

Reasons for GenAI non-use among student categories at Latvian Universities are presented in Appendix 2.

Reasons for GenAI non-use among student categories at WUNU are presented in Appendix 3.

The results of the survey responses from students at Latvian universities are available in the repository at the following link:

https://docs.google.com/spreadsheets/d/1wpKoTru-qCUoWH7nKHIX189gozRW_iC-/edit?usp=sharing&ouid=108908453539272038804&rtpof=true&sd=true

The results of the survey responses from students at the Ukrainian University (WUNU) are presented in the repository at the following link:

https://docs.google.com/spreadsheets/d/1vaRRj_KSUzMBowPbOpVWr1-SAj8_mhDW/edit?usp=sharing&ouid=108908453539272038804&rtpof=true&sd=true

3.5 Descriptive statistics

To determine the reasons why students do not use GenAI in the educational process, descriptive statistics were used. Groups comprising fewer than 14 respondents were not represented in the charts because such small samples do not provide sufficiently reliable results.

To test for statistically significant differences in the reasons for not using GenAI across the two countries, modes of study, study programs (bachelor’s or master’s), gender, and age groups, chi-squared tests were used.
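For transparency, the sketch below illustrates how such a test can be run on the exported survey data. It is a minimal sketch under stated assumptions: the file name and the column names (“uses_genai”, “study_mode”, “reason”) are hypothetical placeholders, not the actual variable labels used in the questionnaire or the shared spreadsheets.

```python
# Minimal sketch of the chi-squared testing step (hypothetical column and file names).
import pandas as pd
from scipy.stats import chi2_contingency

# Keep only respondents who reported not using GenAI in their learning
df = pd.read_csv("latvia_survey.csv")          # hypothetical export of the survey responses
non_users = df[df["uses_genai"] == "No"]

# Cross-tabulate the stated reason for non-use against a background category
table = pd.crosstab(non_users["study_mode"], non_users["reason"])

# Test whether the distribution of reasons differs across categories
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
```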

3.6 Regression analysis

To identify factors predicting students’ use of generative artificial intelligence (GenAI) tools in the learning process, binary logistic regression was performed.

Since the study included respondents from two different national contexts — Latvia and Ukraine — the data were analyzed separately, and a separate logistic regression model was constructed for each group. This decision was made to ensure methodological rigor and comparability of results, as well as to prevent possible distortions related to contextual differences between countries.

Although the survey instruments and data collection procedures were identical for both samples, students in Latvia and Ukraine differ in terms of higher education systems, levels of digital infrastructure, familiarity with artificial intelligence technologies, and socio-cultural attitudes towards innovation and ethics. These factors may interact differently with predictors (e.g., age, gender, level of education, or awareness of generative AI), which would violate the assumption of model homogeneity when combining data.

The literature notes that even within educational studies or cross-cultural research projects, comparing different national groups requires either direct testing of model equivalence or separate data analysis. For example:

  • A Review of the Application of Logistic Regression in Educational Research: Common Issues, Implications, and Suggestions (Niu, 2020): a review of 130 empirical studies showing that many studies do not take the contextual characteristics of the sample into account and assume that the model is equally suitable for different groups.

  • Mapping the paths between self-construals, English language self-confidence and sociocultural adjustment of international students (Yang et al., 2006): a study showing that cultural context and sample characteristics (in this case, international students) have a significant impact on predictors and models.

These sources support the methodological decision to analyze the Latvian and Ukrainian contexts separately, in line with established research practice.

Conducting separate logistic regression models for both countries made it possible to identify country-specific patterns in terms of factors influencing students’ reluctance to use generative AI tools and provided an opportunity for a meaningful comparison of these factors between contexts without mixing them in a general model.
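As an illustration of this analytical step, the sketch below fits a separate binary logistic regression for each country. It is a hedged sketch, not the exact analysis script: the file names, the outcome coding (1 = uses GenAI in learning, 0 = does not), and the predictor column names (learning format, age group, gender, program level, discipline) are hypothetical placeholders chosen to mirror the respondent categories described above.

```python
# Minimal sketch of country-specific binary logistic regressions (hypothetical names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

datasets = {"Latvia": "latvia_survey.csv", "Ukraine": "wunu_survey.csv"}  # hypothetical files

for country, path in datasets.items():
    data = pd.read_csv(path)
    # Binary outcome: 1 = uses GenAI in learning, 0 = does not use it
    model = smf.logit(
        "uses_genai ~ C(learning_format) + C(age_group) + C(gender)"
        " + C(program_level) + C(discipline)",
        data=data,
    ).fit()
    print(f"--- {country} ---")
    print(model.summary())
    # Odds ratios are often easier to interpret than raw log-odds coefficients
    print(np.exp(model.params).round(2))
```

Fitting the two models in one loop keeps the specification identical across countries while leaving the coefficients free to differ, which is the point of the separate-model strategy described above.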

4 Results

4.1 Categories of students who do not use GenAI

Out of 945 valid survey responses from all universities, 208 students (22.0% of the total) reported not using GenAI in their learning. Among WUNU students, the proportion was 21.5%, while among students from the two Latvian universities it was 23.5%, which represents a minor cross-country difference.

Figure 1 shows a profile of students who do not use GenAI in learning.

Figure 1. Profile of students not using GenAI in learning.

The results of the analysis show that, out of the nine student categories examined, four groups have a significantly higher proportion of students who do not use GenAI in their learning process compared to the overall average. The highest share is observed among students enrolled in part-time programs (41.9%) and distance learning programs (37.7%). Additionally, students over the age of 25 show a high non-usage rate (35.3%). The proportion of students enrolled in master’s programs is also notably higher than the average, at 25.9%, which is nearly four percentage points above the overall rate.

This distribution across student categories can likely be explained by several factors. First, part-time and distance learning programs typically involve fewer contact hours or fewer opportunities for consultation with instructors compared to full-time programs. Second, students enrolled in these formats often have limited availability of time, as they tend to choose such programs due to higher levels of external commitments, for example, many of them work alongside their studies. Third, these students are generally older, which, as our findings indicate, is also a notable characteristic of those who do not use GenAI in their learning process. Regarding the category of master’s students, the higher proportion of those not using GenAI may be explained, first, by the fact that GenAI-related courses were introduced primarily for undergraduate programs, which is a logical curricular decision. Since the integration of GenAI into university curricula only began in 2023–2024, such courses were not yet included in most master’s programs. Additionally, master’s students are generally 3–4 years older than undergraduate students, and, as shown in our findings, age is a relevant factor associated with lower use of GenAI. The shorter duration of master’s programs, particularly the time pressure in the second year due to thesis preparation, also limits opportunities for engaging with GenAI tools. Finally, many master’s students combine study with employment, making time constraints another possible explanation for their lower adoption of GenAI in the learning process.
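The descriptive computation behind this profile can be sketched as follows: the share of non-users is calculated within each student category, and groups smaller than the 14-respondent threshold mentioned in Section 3.5 are excluded. The column and file names are hypothetical placeholders, not the actual labels in the shared data files.

```python
# Minimal sketch of the per-category non-use shares behind Figure 1 (hypothetical names).
import pandas as pd

df = pd.read_csv("combined_survey.csv")        # hypothetical merged file of both samples
df["non_user"] = (df["uses_genai"] == "No").astype(int)

for category in ["learning_format", "age_group", "program_level", "gender"]:
    grouped = df.groupby(category)["non_user"].agg(["size", "mean"])
    grouped = grouped[grouped["size"] >= 14]   # minimum group size noted in Section 3.5
    grouped["share_not_using_%"] = (grouped["mean"] * 100).round(1)
    print(grouped[["size", "share_not_using_%"]], "\n")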

4.2 Reasons why students do not use GenAI

The next step in the analysis was to identify reasons that prevent students from using GenAI in learning (see Figure 2).

Figure 2. Reasons why students do not use GenAI in learning.

The analysis results indicate that 46.9% of the students who do not use GenAI in their learning, nearly half, cite a lack of belief in the effectiveness of GenAI as the primary reason. This finding aligns with Bewersdorff et al.’s (2025) categorization of student attitudes toward AI, particularly those classified as pragmatic observers and cautious critics. The second most significant factor is the lack of information students have about GenAI tools and products. Together, these two factors account for 73.4% of responses, suggesting an opportunity for university administrators and instructors to develop more targeted and focused strategies to support students who are not yet engaging with GenAI in their studies. The third most frequently cited reason is the lack of knowledge about how to use GenAI in academic work. This may indicate that some universities, courses, or academic programs have not yet introduced sufficient instructional components to help students acquire practical skills in using GenAI. This issue appears to be particularly relevant for part-time and distance learning programs, as well as for older students. The “other reasons” category received the lowest share of responses among both RTU and RISEBA students and WUNU students, with a 7.1 percentage point difference from the third most common factor, constituting 73.2% of its value. This may indicate that the study successfully identified the main consolidated reasons why university students do not use GenAI in their learning process.

The main reasons for not using GenAI differed between students from the Latvian universities and WUNU. The primary factor - “lack of belief in its effectiveness” - holds the largest share among both Latvian students (42.4%) and WUNU students (48.9%). In other words, the share of non-users citing disbelief in the effectiveness of GenAI in the learning process is 6.5 percentage points, or 13.3%, lower among students from the Latvian universities included in the study than among WUNU students.

For students at the Latvian universities, the second most common reason for not using GenAI in their learning process is a lack of knowledge on how to use it (22.0%). Among WUNU students, however, this factor ranks third (15.3%), which is 6.7 percentage points (more than 30%) lower than in the Latvian universities. This comparison possibly suggests that either GenAI-related training is better organized at WUNU, or that WUNU students received prior instruction in GenAI use during school or college before entering university, or possibly that WUNU students demonstrate higher motivation for self-directed learning. Given the ongoing war initiated by Russia against Ukraine, the motivation factor among WUNU students, particularly among male students, may possibly represent a significant driver of engagement and self-directed learning.

The share of the third factor among students at the Latvian universities — “lack of information” — is 20.3%. Notably, for WUNU students, this factor ranks second and accounts for 28.5%, which is 8.2 percentage points, or 40.4%, higher than among the Latvian university students included in the study. This difference may possibly indicate either a lower level of informational support at WUNU regarding the relevance and application of GenAI, or a relatively higher degree of passivity among WUNU students in independently seeking out information on GenAI. The share of responses attributing GenAI non-use to “other reasons” was the lowest among both the Latvian universities (RTU and RISEBA) and WUNU. This may suggest that the distribution of the four factors examined is not specific to any single institution but rather reflects a broader pattern across the surveyed universities.
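For clarity, the small sketch below reproduces the comparison arithmetic used in this subsection: an absolute gap expressed in percentage points and a relative gap expressed as a share of the baseline value (here, the Latvian share of 20.3% for “lack of information”, as reported above). It is purely illustrative and uses only figures already cited in the text.

```python
# Worked check of the percentage-point and relative differences reported above.
def compare_shares(share_a: float, share_b: float) -> None:
    """Print the absolute gap in percentage points and the relative gap against share_b."""
    pp_gap = share_a - share_b
    relative_gap = pp_gap / share_b * 100
    print(f"{pp_gap:.1f} pp; {relative_gap:.1f}% relative to the baseline of {share_b}%")

# "Lack of information": 28.5% at WUNU vs 20.3% at the Latvian universities
compare_shares(28.5, 20.3)   # prints "8.2 pp; 40.4% relative to the baseline of 20.3%"
```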

4.3 Analysis of student categories at Latvian universities that do not use GenAI

The next step in the analysis focused on how the factors contributing to GenAI non-use differ among various student categories at Latvian universities (see Figure 3 and Appendix 2).

Figure 3. Reasons for GenAI non-use among student categories at Latvian universities.

As shown in Figure 3, the highest level of skepticism regarding the effectiveness of GenAI in the learning process is observed among full-time students (48.6%), while the lowest level is found among female students (40.9%). The variation in responses across student categories for this factor amounts to 7.7 percentage points. The highest proportion of students who reported not knowing how to use GenAI in their studies is found among male students (29.7%), whereas the lowest is among female students (9.1%). The range of responses for this factor between student categories is 20.6 percentage points. The highest share of students who identified “lack of information” as the reason for not using GenAI corresponds to those aged over 25 (31.6%), while the lowest is among full-time students (13.5%). The variation between these groups is 18.1 percentage points. The factor “other reasons” was most frequently selected by female students (22.7%) and least frequently by students over the age of 25 (10.5%). In all examined student categories, the share of responses associated with each factor varied. The greatest variation between categories was observed for the factor “I do not know how to use GenAI.” It should also be noted that the factor “lack of belief in effectiveness” consistently held the highest proportion across all student groups at RTU and RISEBA universities.

Among students from RTU and RISEBA, part-time students were more likely to select “Lack of information” and less likely than students in other study modes to select “Do not know how to use it” as reasons for not using GenAI in their learning process (p = 0.028). However, it should be noted that the number of part-time students was limited to only 9, and thus these findings should be interpreted with caution. No other specific background factors, such as gender, age group, or program level, were statistically associated with reasons for GenAI non-use. In other words, other differences among Latvian students were not statistically significant.

4.4 Analysis of WUNU student categories who do not use GenAI

The next step in the analysis examined how the factors contributing to GenAI non-use differ among various student categories at WUNU (see Figure 4 and Appendix 3).

Figure 4. Reasons for GenAI non-use among student categories at WUNU.

Similar to the findings for students at RTU and RISEBA (Latvia), the highest level of skepticism toward the effectiveness of GenAI was observed among full-time students at WUNU. The share of this factor was even higher than among students from the Latvian universities and exceeded the combined share of the other three factors. However, the variation across WUNU student categories was more than twice as large. In two WUNU student groups, those aged over 25 and those enrolled in distance learning programs, the proportion selecting “Lack of information” was higher than the proportion indicating “Lack of belief in effectiveness”; a similar tendency was also evident among male students. When comparing the responses of students from RTU and RISEBA with those from WUNU, all student categories from the Latvian universities identified “Lack of belief in the effectiveness of GenAI” as the primary factor for non-use, whereas this was not the dominant factor in two student categories at WUNU. It is therefore concluded that there was no uniformity in views regarding the main reason for GenAI non-use across all student categories at the Latvian universities and WUNU.

Among WUNU students, no specific background factors, such as gender or age group, were statistically associated with a particular reason for not using GenAI. A chi-squared test of association found no statistically significant difference in the distribution of reasons for not using GenAI between the two countries (p = 0.163).
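
To illustrate how such a cross-country comparison can be reproduced, the sketch below shows a chi-squared test of independence on a country-by-reason contingency table. It is a minimal illustration rather than the authors’ analysis script: the counts are hypothetical placeholders, and only the structure of the test is shown.

```python
# Minimal sketch (illustrative counts, not the study data): testing whether
# reasons for GenAI non-use are associated with country using a chi-squared
# test of independence on a country-by-reason contingency table.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: Latvian non-users, WUNU non-users.
# Columns: disbelief, lack of knowledge, lack of information, other reasons.
table = np.array([
    [25, 13, 12,  9],   # hypothetical Latvian counts
    [67, 21, 39, 10],   # hypothetical WUNU counts
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")
print("Smallest expected count:", expected.min())  # validity check for the chi-squared approximation
```

The expected-frequency check in the last line is worth keeping, because very small subgroups (such as the nine Latvian part-time students noted above) can make the chi-squared approximation unreliable.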

4.5 Identifying factors that predict the use of GenAI by students at Latvian universities

To identify factors predicting students’ use of GenAI tools in the learning process, binary logistic regression was performed.

The dependent variable was GenAI use (1 = “Yes,” 0 = “No”). Age, gender, faculty, level of education, form of study, and year of study were considered as predictors.

The model proved to be statistically significant, χ2 (8, N = 240) = [value to be specified], p < 0.05, and correctly classified approximately 76% of cases.

Two predictors showed statistical significance: gender and form of education.

Gender (male): B = −0.73, SE = 0.35, Wald z = −2.10, p = 0.036, OR = 0.48, 95% CI [0.24, 0.95]. This indicates that the odds of using GenAI are approximately 52% lower for men than for women.

Form of education (full-time): B = 1.10, SE = 0.48, Wald z = 2.28, p = 0.023, OR = 3.01, 95% CI [1.17, 7.74]. The odds of using GenAI tools are approximately three times higher for full-time students than for part-time or distance learners.

The remaining variables, namely age (OR = 1.08, p = 0.856), faculty, level of education, and year of study, did not show statistically significant effects (Table 1).
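
For readers who wish to reproduce this type of model, the sketch below shows how such a binary logistic regression could be fitted in Python. The article does not state which software was used; statsmodels is shown here as one possible tool, and the file name, column names, and reference categories are hypothetical placeholders chosen to mirror the coding described above (male vs. female, full-time vs. part-time/distance).

```python
# Minimal sketch (not the authors' script): binary logistic regression of
# GenAI use (1 = "Yes", 0 = "No") on demographic predictors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("latvian_sample.csv")  # hypothetical file

# Reference levels chosen so the reported coefficients keep their sign:
# female is the baseline for gender, part-time/distance for study form.
model = smf.logit(
    "genai_use ~ C(gender, Treatment('female')) "
    "+ C(study_form, Treatment('part_time_or_distance')) "
    "+ age + C(faculty) + C(level) + C(year)",
    data=df,
).fit()

print(model.summary())

# Odds ratios with 95% confidence intervals
or_table = np.exp(model.conf_int())
or_table["OR"] = np.exp(model.params)
print(or_table)

# Share of correctly classified cases at a 0.5 threshold
pred = (model.predict(df) >= 0.5).astype(int)
print("Correctly classified:", (pred == df["genai_use"]).mean())
```

The final lines reproduce the two quantities reported in the text: odds ratios with 95% confidence intervals and the share of correctly classified cases.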

Table 1. Binary logistic regression predicting the use of generative AI tools (N = 240).

4.6 Interpretation

The odds ratios indicate which factors significantly increase or decrease the odds of GenAI use.

An OR value < 1 (for men, OR = 0.48) indicates a negative association: other things being equal, the odds of using GenAI are lower for men than for women.

An OR value > 1 (for full-time students OR = 3.01) indicates a positive association: full-time students use GenAI significantly more often.

Predictors with OR values close to 1 (age, faculty, level of education) do not have a statistically significant impact on the use of GenAI.

Thus, gender and learning context play a decisive role, rather than academic level or field of study.
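
As a consistency check, the odds ratios reported above follow directly from the regression coefficients, since an odds ratio is the exponential of the corresponding logit coefficient B:

$$\mathrm{OR} = e^{B}, \qquad e^{-0.73} \approx 0.48, \qquad e^{1.10} \approx 3.00,$$

which matches the reported values of 0.48 and 3.01 up to rounding of B to two decimal places.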

The results show that students’ use of generative AI tools is mainly influenced by demographic and contextual factors, in particular gender and form of education.

This conclusion is consistent with the results of previous studies in the field of digital pedagogy (e.g., Nguyen et al., 2023; Lee et al., 2024), which note that women demonstrate greater openness to innovative educational technologies and a higher level of engagement in digital learning practices. It can be assumed that female students perceive GenAI as a useful tool to support the learning process, while men are more likely to treat such tools with caution or skepticism, believing that they reduce individual control over learning outcomes.

The significant role of learning format also reflects differences in the educational environment. Full-time students are more likely to work in interactive and technology-rich environments, have access to group interaction, teaching support, and university initiatives to implement GenAI. This increases the likelihood of experimenting with GenAI.

In contrast, distance and part-time students tend to rely on individual and structured forms of work and may have fewer opportunities or incentives to use new technologies (Sullivan et al., 2023).

The absence of significant differences in age, education level, and faculty indicates that the use of GenAI is becoming a widespread and universal practice among students in higher education institutions. This suggests that GenAI adoption is diffusing across student groups and that attitudes toward GenAI are no longer strongly dependent on disciplinary or age-related factors.

Overall, the results emphasize that personal involvement and educational context have a greater impact on GenAI use than traditional demographic characteristics.

4.7 Identification of factors predicting the use of GenAI by WUNU students

A binary logistic regression was performed to identify demographic predictors of GenAI adoption among students of higher education institutions in Ukraine. The model proved to be statistically significant (χ2 = 87.4, p < 0.001), indicating that several variables significantly influenced the likelihood of using generative AI tools.

The strongest predictors were student age and form of study. The odds of using GenAI were approximately four times lower for respondents over 25 years of age than for younger students (OR = 0.26, p = 0.007), and approximately three times lower for those studying remotely or part-time than for full-time students (OR = 0.31, p < 0.001).

Faculty affiliation also played a significant role. The odds of GenAI use were about 2.6 times higher for students in computer science departments (OR = 2.59, p = 0.050), while students in law, finance and accounting, and innovation and ecology departments were less likely to use GenAI (OR ranging from 0.39 to 0.45, p < 0.05).

No significant influence of gender, level of education, or year of study was found. These results show that the use of GenAI is primarily determined by structural and disciplinary contexts rather than individual demographic characteristics (Table 2).
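
Because faculty is a multi-level factor, each faculty odds ratio is read against a chosen baseline department. The sketch below illustrates one way this could be encoded; it is not the authors’ script, and the file name, column names, category labels, and the baseline faculty are hypothetical assumptions.

```python
# Minimal sketch (not the authors' script): entering a multi-level faculty
# factor with an explicit reference category, so each faculty's odds ratio
# is interpreted relative to that baseline.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("wunu_sample.csv")  # hypothetical file

model = smf.logit(
    "genai_use ~ C(age_group, Treatment('under_25')) "
    "+ C(study_form, Treatment('full_time')) "
    "+ C(faculty, Treatment('economics'))",  # baseline faculty is an assumption
    data=df,
).fit()

# Odds ratios per faculty relative to the baseline; values above 1 for
# computer science and below 1 for law would match the pattern reported above.
print(np.exp(model.params).filter(like="faculty"))
```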

Table 2. Binary logistic regression predicting the use of GenAI (N = 685).

4.8 Interpretation

Full-time students and students under the age of 25 were significantly more likely to use GenAI tools. The likelihood of GenAI adoption was also noticeably higher among students in computer science departments, while students in law, finance, and innovation departments showed lower adoption rates.

The comparative analysis identified the factors that significantly predict students’ use of generative artificial intelligence (GenAI) in the educational process in Latvia and Ukraine. Separate binary logistic regression models were constructed for the two countries, which ensured that differences in sample structure and educational contexts were appropriately accounted for.

In the sample of Latvian students (N ≈ 240), the model was statistically significant (χ2 = [approximately 8–10], p < 0.05) and explained about 15–20% of the variance in the dependent variable (Nagelkerke R2 ≈ 0.18). Among the predictors, gender and form of education demonstrated a significant influence. The odds of using GenAI were roughly half as high for men as for women (OR = 0.48, 95% CI [0.30; 0.95], p = 0.036), whereas they were almost three times higher for full-time students than for part-time or distance learners (OR = 3.01, 95% CI [1.17; 7.74], p = 0.023). Age, level of education, and field of study (faculty) did not show statistically significant effects (p > 0.05).

For the Ukrainian sample (N = 685), the model had higher explanatory power (χ2(11) = 87.4, p < 0.001; Nagelkerke R2 = 0.23), indicating more pronounced factors related to technology adoption. The most significant predictors were age, form of education, and faculty. The odds of using GenAI were approximately four times lower for students over 25 years of age (OR = 0.26, 95% CI [0.10; 0.69], p = 0.007) and approximately three times lower for part-time or distance learning students than for full-time students (OR = 0.31, 95% CI [0.16; 0.59], p < 0.001). At the same time, the odds were about 2.6 times higher for computer science students (OR = 2.59, 95% CI [1.00; 6.70], p = 0.050), whereas students in law, finance, and innovation and environmental studies were less likely to use these technologies (OR = 0.39–0.45, p < 0.05). Gender, level of education, and year of study had no significant effect (p > 0.05).
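
Both the model chi-squared and Nagelkerke’s R2 reported for the two samples can be derived from the log-likelihoods of the fitted and intercept-only models. The helper below is a minimal sketch of those formulas; the attribute names in the commented example (llf, llnull, nobs, df_model) are standard statsmodels Logit attributes, while the `result` object itself is assumed from the earlier sketch rather than defined in this article.

```python
# Minimal sketch: model chi-squared (likelihood-ratio test) and Nagelkerke's
# pseudo-R^2 from the log-likelihoods of the fitted model (llf) and the
# intercept-only model (llnull).
import numpy as np
from scipy import stats

def model_fit_summary(llf: float, llnull: float, n: int, df_model: int) -> dict:
    lr_chi2 = 2.0 * (llf - llnull)                       # likelihood-ratio statistic
    p_value = stats.chi2.sf(lr_chi2, df_model)           # upper-tail chi-squared probability
    cox_snell = 1.0 - np.exp(2.0 * (llnull - llf) / n)   # Cox-Snell R^2
    nagelkerke = cox_snell / (1.0 - np.exp(2.0 * llnull / n))
    return {"chi2": lr_chi2, "p": p_value, "nagelkerke_r2": nagelkerke}

# Example call on a fitted statsmodels Logit result (assumed from the earlier sketch):
# print(model_fit_summary(result.llf, result.llnull, int(result.nobs), int(result.df_model)))
```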

A comparative analysis of the two models showed that the form of education is a stable and key factor in both countries: full-time students are significantly more likely to integrate GenAI into their educational activities. This may be due to more frequent interaction with teachers and peers, as well as greater involvement in digital educational practices.

Differences between countries are also evident in other determinants. For Latvian students, gender was a significant factor, while for Ukrainian students, age and disciplinary affiliation were significant. This may reflect cultural and institutional differences in access to technology and perceptions of its role in education.

Overall, the results of the study show that the adoption of GenAI among students is determined not only by personal characteristics but also by structural features of the educational environment. In both cases, part-time and distance learning students, as well as students in humanities and law, remain the most vulnerable to digital inequality.

From a practical point of view, these results highlight the need to develop institutional strategies to support digital literacy and equal access to GenAI technologies. Universities in Latvia and Ukraine should integrate GenAI skills training into all curricula, with a particular focus on disciplines where technological integration is still limited. This will not only improve the quality of the educational process, but also prepare students for a future in which GenAI interaction skills will become part of basic academic literacy.

5 Discussion

This study is probably one of the first quantitative studies of the reasons why university students prefer not to use GenAI tools in their academic work. Despite a growing number of publications on student perceptions, ethical issues, or the potential pedagogical implications of GenAI use, empirical research on the specific reasons for non-use, their relative weight, and their distribution across different student groups remains limited. In our review, we found no studies that quantitatively assess these reasons or examine how they differ by age, gender, learning format, or disciplinary background. This gap is particularly noticeable in the European and Central and Eastern European context, where research has so far focused mainly on attitudes, intentions, and institutional readiness rather than on barriers to implementation.

We identified four main reasons why students do not use GenAI: disbelief in the effectiveness of GenAI (46.9%), insufficient knowledge of how to use these tools (26.5%), limited information or awareness (16.8%), and other concerns (9.7%), such as academic integrity, uncertainty about assessment policies, or fear of over-reliance. These results provide the first quantitative assessment of the prevalence of these barriers and show that they are unevenly distributed among university student groups. To situate these findings within the broader literature, previous studies have similarly identified limited familiarity, low trust, and uncertainty as common barriers to AI adoption among students (Chinene et al., 2024; Chen et al., 2024).

A comparative analysis of responses from WUNU students and two Latvian universities shows that institutional conditions influence how these barriers translate into behavior. Latvian students demonstrate more homogeneous patterns of GenAI use and non-use, which corresponds to a stronger institutional digital infrastructure and clearer pedagogical norms. In Ukraine, on the contrary, institutional fragmentation, variability of digital resources, and wartime disruptions appear to reinforce individual barriers. Logistic regression analysis confirms this interpretation: age, form of study, and faculty affiliation reliably predict GenAI use among Ukrainian students, whereas such patterns are not observed in the Latvian sample. These results are consistent with prior research suggesting that institutional clarity and support define students’ perceived behavioral control and their readiness to adopt new technologies (Islam et al., 2024; Ivanov et al., 2024).

Furthermore, consistent exposure to GenAI in coursework has been shown to increase student confidence and intention to use AI (Zhang and Xu, 2025), which helps explain the stronger adoption patterns observed in Latvia.

The aspect of students’ faculty affiliation also deserves attention. Although previous literature has suggested that students studying social sciences, humanities, and arts tend to interact with GenAI less frequently than students studying computer science or related fields, these claims have rarely been empirically tested. Our results provide quantitative evidence supporting this pattern: students in technical faculties demonstrate a significantly higher degree of acceptance of GenAI, while students in law, finance, and other non-technical majors report greater skepticism and uncertainty. This is in line with studies demonstrating that prior mastery experiences and higher levels of digital exposure strengthen students’ self-efficacy and confidence in using emerging technologies (Collie et al., 2024; Relente and Capistrano, 2025).

5.1 Interpretation through TPB, SET, and UTAUT

Our results can be interpreted using SET, UTAUT, and TPB. Lack of knowledge and uncertainty reflect low self-efficacy (SET) and effort expectancy (UTAUT); skepticism about usefulness is related to performance expectancy (UTAUT); and concerns about academic norms are consistent with subjective norms and perceived behavioral control (TPB). It is important to note that our cross-country comparison shows that these psychological mechanisms are conditioned by the institutional environment, which influences the strength and direction of these predictors.

These interpretations are supported by prior evidence showing that negative attitudes reduce behavioral intentions (Chinene et al., 2024), that students’ expectations of usefulness strongly shape adoption (Michels et al., 2024; Sari et al., 2024), and that insufficient skills or lack of clarity increases perceived effort and complexity (Trinh et al., 2025).

The lower use of GenAI among part-time and distance students further reflects weaker social influence and reduced opportunities for observational learning, consistent with SET and UTAUT (Camilleri, 2024; Liu et al., 2023).

The role of emotional and psychological factors also aligns with previous research demonstrating that technostress or discomfort discourages the use of digital tools (Liu et al., 2022).

Finally, the importance of facilitating conditions corresponds with findings that structured AI-literacy initiatives enhance technology adoption (Bouteraa et al., 2024; Gonzalez-Tamayo et al., 2024).

Overall, the results of the study show that not using GenAI is not simply a matter of individual preference, but rather a phenomenon shaped by the interaction of psychological, structural, and institutional factors. Interpreting these patterns through TPB, SET, and UTAUT underscores that reluctance arises from intertwined mechanisms involving attitudes, efficacy beliefs, expected effort, social norms, and enabling conditions.

5.2 Theoretical implications

The results of this study contribute to existing theoretical discussions about technology adoption and student behavior in higher education. First, the results partially support the assumptions of TPB, SET, and UTAUT, while also revealing important theoretical gaps. Students’ reluctance to use GenAI was primarily due to low perceived effectiveness, limited knowledge, and insufficient information—factors that conceptually correspond to attitudes (TPB), self-efficacy (SET), and performance and effort expectancy (UTAUT). However, since the study did not measure these constructs directly, theoretical interpretations should be considered exploratory. This demonstrates the need for more rigorous operationalization of psychological mechanisms in future studies of GenAI adoption.

Second, cross-country comparisons highlight the role of institutional context as a moderating factor—an aspect that is underrepresented in classical adoption theories. For example, gender and learning format predicted GenAI use among Latvian students, while age and disciplinary affiliation mattered only in Ukraine. These differences suggest that psychological models alone may not be sufficient and that institutional and structural determinants should be integrated into future theoretical models to explain emerging patterns of GenAI adoption.

Finally, the strong influence of learning format (face-to-face or distance learning) indicates that educational engagement and interactive environments may be central predictors of technology adoption. This extends previous research by suggesting that physical and social immersion in the university environment may amplify or suppress the mechanisms described by the TPB, SET, and UTAUT models.

This study extends classical adoption theories by demonstrating that institutional context acts as a moderating factor that shapes how psychological mechanisms translate into actual technology use.

5.3 Practical implications

The findings also have direct practical implications for universities, policymakers, and educators. First, the fact that a significant proportion of students avoid GenAI due to mistrust, lack of knowledge, or limited information suggests that institutional communication and training strategies are not yet fully developed. Universities should provide structured guidance, examples of appropriate use, and clear academic integrity policies to reduce uncertainty and support the responsible integration of GenAI. Given that 46.9% of non-users cited low perceived effectiveness, 26.5% cited insufficient knowledge, and 16.8% cited limited information, training initiatives should be prioritised according to the relative weight of these barriers. This means that universities should focus primarily on building students’ confidence in the usefulness of GenAI, followed by practical skills training and awareness-raising activities.

Second, part-time and distance learning students consistently use GenAI less frequently. This points to the risk of digital inequality in universities, where students with weak institutional ties or less access to peer interaction lag behind in acquiring new digital competencies. Universities may need targeted measures, such as mandatory introductory modules, micro-courses, or orientation materials, to support students who have fewer opportunities for informal learning. Regression results reinforce this need: learning format was one of the strongest predictors of GenAI use among Ukrainian students, with full-time students significantly more likely to use GenAI than their part-time or distance-learning peers. These findings indicate that institutional interventions should prioritise students with reduced campus engagement, as they represent the group most vulnerable to falling behind in the development of AI-related academic competencies.

Third, the observed differences between disciplines indicate that the spread of GenAI is uneven across higher education. Students in technical fields tend to integrate GenAI more easily, while students in law, finance, social sciences, and humanities tend to show greater skepticism and uncertainty. This highlights the need for pedagogical strategies that take into account the specifics of the discipline, where GenAI literacy is adapted to the learning culture, epistemic norms, and assessment approaches of each field. Since disciplinary affiliation was also a significant predictor in the Ukrainian model, institutions should develop differentiated AI literacy pathways, ensuring that non-technical disciplines receive tailored support aimed at reducing uncertainty and addressing field-specific concerns.

Finally, significant cross-national differences show that GenAI adoption is determined not only by psychological factors, but also by institutional stability, digital infrastructure, and national context. Universities in countries with fragmented or disrupted education systems may need additional support, investment in digital technology, and clearer policies to build student trust and competence. Because the Latvian sample showed homogeneous patterns while the Ukrainian sample reflected institutional fragmentation and wartime disruptions, policymakers should consider national-level strategies that strengthen digital readiness, provide universities with stable technological resources, and promote transparent regulatory frameworks concerning GenAI use.

5.4 Ethical considerations (contextual note)

Although this study did not directly address the ethical aspects of GenAI use, such issues are often discussed in the literature and remain highly relevant to the interpretation of students’ interactions with GenAI. Previous studies have highlighted issues related to academic integrity, transparency, authorship, potential over-reliance on automated systems, and unequal access to GenAI-enhanced educational environments. These ethical considerations may indirectly influence students’ decisions to avoid GenAI, as suggested by the ‘other reasons’ category identified in our analysis.

Such concerns are closely aligned with recent findings that students often associate GenAI with risks of misconduct, evaluation uncertainty, and unclear institutional boundaries (Islam et al., 2024; Liu et al., 2022).

Thus, our findings underscore the need for universities to provide clear guidance on the responsible and transparent use of GenAI, support academic integrity structures, and reduce inequalities related to both access to and confidence in digital technologies. Future research should explore how ethical beliefs and institutional policies shape students’ willingness—and reluctance—to incorporate GenAI into their educational practices.

5.5 Limitations

A number of methodological steps were taken to reduce potential systematic error in responses and ensure comparability between the two national samples. Although most of the responses from Ukrainian students were obtained from a single educational institution (West Ukrainian National University), while the Latvian sample included students from two universities (Riga Technical University and RISEBA University of Applied Sciences), this asymmetry primarily reflects differences in the size of the institutions and the distribution of the survey invitation. Identical questionnaires with four closed-ended questions were used in both countries, and the samples included students from different faculties, levels of education, and demographic groups. These design features helped maintain internal consistency and increased the reliability of the cross-country comparison.

Despite these precautions, several limitations should be acknowledged. First, although the study interprets its results using TPB, SET, and UTAUT, the survey instrument did not directly measure constructs related to these frameworks (e.g., attitudes, subjective norms, perceived behavioral control, self-efficacy, performance expectancy, or effort expectancy). Consequently, the theoretical interpretations presented in the Discussion should be viewed as exploratory rather than causal. Future studies should use validated multi-item scales to operationalize these constructs and conduct theory-based statistical tests.

Second, some degree of sampling bias and response bias cannot be ruled out. Participation was voluntary, which may have led to an overrepresentation of students with stronger opinions about GenAI, and the concentration of Ukrainian respondents in a single institution may have introduced contextual effects. Although demographic and disciplinary diversity in the sample mitigates these concerns, broader institutional representativeness would improve external generalizability. Future research should consider stratified or multi-institutional sampling strategies to reduce self-selection and institutional bias.

Third, the study considered only four categories of reasons for non-use—three predetermined categories and one aggregated category, “other.” Although this structure is sufficient for descriptive and comparative analysis, it may not fully reflect the complexity of students’ motivations, constraints, and perceptions. Expanding the set of factors would allow for the development of more nuanced and multidimensional models of GenAI implementation.

Taken together, these limitations indicate that the results should be interpreted with caution, especially with regard to their theoretical generalization and applicability beyond the selected educational institutions.

5.6 Future research

Future research will need to address these limitations in several ways. First, expanding the geographical coverage of data collection will improve cross-country comparisons and clarify whether the patterns observed in Latvia and Ukraine are characteristic of post-Soviet and Central and Eastern European education systems or reflect broader international trends. Studies covering other CEE countries, Western Europe, and non-European regions would allow for an assessment of how cultural, institutional, and technological environments influence students’ interactions with GenAI.

Second, future research should consider a broader range of psychological, pedagogical, and institutional factors. Possible additions include digital self-confidence, perceived academic risks, prior experience with GenAI tools, the explicit integration of GenAI at the course level, and inequalities in access to technology. Including such variables will allow for the development of more comprehensive models and deepen theoretical understanding of GenAI implementation.

Third, qualitative methods such as interviews, focus groups, or classroom observations can complement survey findings by providing more detailed information about student motivation, perceived barriers, and ethical concerns. Mixed-methods and longitudinal studies would help capture changing student attitudes as GenAI technologies become more integrated into teaching and assessment practices.

Together, these approaches will improve both theoretical and empirical understanding of the mechanisms driving GenAI adoption in higher education and contribute to the development of more inclusive, evidence-based institutional strategies.

6 Conclusion

The aim of this study was to identify the reasons why Ukrainian and Latvian students may be reluctant to use generative AI (GenAI) in their learning. The researchers analyzed the proportion, factors, and categories of students from two Latvian universities and one Ukrainian university with respect to the non-use of GenAI in the educational process. Overall, the findings indicate a consistent trend of GenAI non-use among certain student groups, particularly those in part-time or distance learning programs, older students, and those enrolled in graduate-level studies. A lack of trust in the effectiveness of GenAI and limited awareness or understanding of available tools appear to be the main barriers. These trends were largely consistent across both countries, and no clear links were found between demographic characteristics and specific reasons for non-use. This suggests that reluctance to engage with GenAI may stem from the broader educational context and digital readiness rather than from individual factors.

The findings highlight the need for clear guidance and practical support to help students integrate GenAI into their learning. By understanding the key barriers, universities can design targeted interventions that encourage broader adoption. These efforts can enhance the quality and personalization of education while better preparing students for the demands of the modern workforce. Given the rapid evolution of GenAI capabilities and the increasing frequency of its use among students (Keuning et al., 2024), the proportion of students not using GenAI in their learning is expected to decline over time.

As for research limitations, although the study collected responses from a substantial sample of students, the data reflect only the context of three universities across two countries. As such, the findings may primarily represent the situation within these specific institutions and, to some extent, within these national contexts. Nevertheless, the mechanisms identified in this study, particularly the role of institutional conditions, disciplinary differences, and learning formats, are likely relevant for higher education systems beyond the two countries examined. To gain a more comprehensive understanding, future research should expand the scope of analysis to include additional universities, including those located in other regions and continents.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

The study is based on a survey that involved human participants and posed no risk of harm to them, in accordance with the ethical standards of RISEBA University and West Ukrainian National University, as confirmed by letters from the respective institutions.

Author contributions

AP: Conceptualization, Methodology, Writing – original draft. KU: Resources, Writing – original draft. OT: Writing – review & editing.

Funding

The author(s) declared that financial support was received for this work and/or its publication. Publication support was provided by RISEBA University.

Acknowledgments

We thank Levs Fainglozs (RISEBA University) and Jelena Perevozcikova (TSI) for assistance with data processing, and Regina Veckalne (RTU) for assistance in recruiting RTU respondents. We also thank Maksym Gusak and Viktor Slabodukh of the International Center for Culture and Development of WUNU for assistance in recruiting WUNU respondents. We also thank RISEBA University for covering the APC.

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that Generative AI was not used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2025.1676900/full#supplementary-material

References

Ajzen, I. (1991). The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 50, 179–211. doi: 10.1016/0749-5978(91)90020-T

Akhter, S., Rather, M. I., and Zargar, U. R. (2024). Understanding the food waste behaviour in university students: an application of the theory of planned behaviour. J. Clean. Prod. 437:140632. doi: 10.1016/j.jclepro.2024.140632

Ali, F., and OpenAI (2023). Let the devil speak for itself: should ChatGPT be allowed or banned in hospitality and tourism schools? J. Global Hospitality Tour. 2, 1–6. doi: 10.5038/2771-5957.2.1.1016

Alyoussef, I. Y. (2023). Acceptance of a flipped classroom to improve university students’ learning: an empirical study on the TAM model and the unified theory of acceptance and use of technology (UTAUT). Heliyon 8:e12529. doi: 10.1016/j.heliyon.2022.e12529,

Bajunaied, K., Hussin, N., and Kamarudin, S. (2023). Behavioral intention to adopt FinTech services: an extension of unified theory of acceptance and use of technology. J. Open Innov.: Technol. Mark. Complex. 9:100010. doi: 10.1016/j.joitmc.2023.100010

Bandura, A. (2001). Social cognitive theory: an agentic perspective. Annu. Rev. Psychol. 52, 1–26. doi: 10.1146/annurev.psych.52.1.1,

Bayır, B., and Aylaz, R. (2021). The effect of mindfulness-based education given to individuals with substance-use disorder according to self-efficacy theory on self-efficacy perception. Appl. Nurs. Res. 57:151354. doi: 10.1016/j.apnr.2020.151354,

Bewersdorff, A., Hornberger, M., Nerdel, C., and Schiff, D. S. (2025). AI advocates and cautious critics: how AI attitudes, AI interest, use of AI, and AI literacy build university students’ AI self-efficacy. Comput. Educ. Artifi. Int. 8:100340. doi: 10.1016/j.caeai.2024.100340,

Biu, H. N., and Duong, C. D. (2024). ChatGPT adoption in entrepreneurship and digital entrepreneurial intention: a moderated mediation model of technostress and digital entrepreneurial self-efficacy. Equilibrium Q. J. Econ. Policy 19, 391–428. doi: 10.24136/eq.3074

Bouteraa, M., Bin-Nashwan, S. A., Al-Daihani, M., Dirie, K. A., Benlahcene, A., Sadallah, M., et al. (2024). Understanding the diffusion of AI-generative (ChatGPT) in higher education: does students’ integrity matter? Comput. Hum. Behav. Rep. 14:100402. doi: 10.1016/j.chbr.2024.100402

Camilleri, M. A. (2024). Factors affecting performance expectancy and intentions to use ChatGPT: using SmartPLS to advance an information technology acceptance framework. Technol. Forecast. Soc. Change 201:123247. doi: 10.1016/j.techfore.2024.123247

Chan, C. K. Y., and Colloton, T. (2024). Generative AI in higher education: The ChatGPT effect. 1st Edn. London: Routledge.

Chan, C., and Hu, W. (2023). Students’ voices on generative AI: perceptions, benefits, and challenges in higher education. Int. J. Educ. Technol. High. Educ. 20:10. doi: 10.1186/s41239-023-00411-8

Chan, C. K. Y., and Tsi, L. H. Y. (2024). Will generative AI replace teachers in higher education? A study of teacher and student perceptions. Stud. Educ. Eval. 83:101395. doi: 10.1016/j.stueduc.2024.101395

Chen, D., Liu, W., and Liu, X. (2024). What drives college students to use AI for L2 learning? Modeling the roles of self-efficacy, anxiety, and attitude based on an extended technology acceptance model. Acta Psychol. 249:104442. doi: 10.1016/j.actpsy.2024.104442,

Chinene, B., Mudadi, L.-S., Choto, T. A., Soko, N. D., Gonde, L., Mushosho, E. Y., et al. (2024). Insights into GenAI: perspectives of radiography and pharmacy students at a leading institution in Zimbabwe. Radiography 30, S114–S119. doi: 10.1016/j.radi.2024.11.004,

Chocarro, R., Cortiñas, M., and Marcos-Matás, G. (2021). Teachers’ attitudes towards chatbots in education: a technology acceptance model approach considering the effect of social language, bot proactiveness, and users’ characteristics. Educ. Stud. 2, 1–19. doi: 10.1080/25730081.2021.1916312

Collie, R. J., Martin, A. J., and Gasevic, D. (2024). Teachers’ generative AI self-efficacy, valuing, and integration at work: examining job resources and demands. Comput. Educ. Artif. Int. 7:100333. doi: 10.1016/j.caeai.2024.100333,

Creely, E., and Blannin, J. (2025). Creative partnerships with generative AI: possibilities for education and beyond. Think. Skills Creat. 56:101727. doi: 10.1016/j.tsc.2024.101727

Daha, E. S., and Altelwany, A. A. (2025). Exploring the impact of using - ChatGPT in light of goal orientations and academic self-efficacy. Int. J. Instr. 18, 167–184. doi: 10.29333/iji.2025.18210a

Dai, Y. (2025). Why students use or not use generative AI: student conceptions, concerns, and implications for engineering education. Digital Eng. 4:100019. doi: 10.1016/j.dte.2024.100019,

Dai, Y., Liu, A., and Lim, C. P. (2023). Reconceptualizing ChatGPT and generative AI as a student-driven innovation in higher education. Procedia CIRP 119, 84–90. doi: 10.1016/j.procir.2023.05.002

De Leeuw, A., Valois, P., Ajzen, I., and Schmidt, P. (2015). Using the theory of planned behavior to identify key beliefs underlying pro-environmental behavior in high-school students: implications for educational interventions. J. Environ. Psychol. 42, 128–138. doi: 10.1016/j.jenvp.2015.03.005

Djokic, I., Milicevic, N., Djokic, N., Malcic, B., and Kalaš, B. (2024). Students’ perceptions of the use of artificial intelligence in educational service. Amfiteatru Econ. 26:294. doi: 10.24818/EA/2024/65/294

Fishbein, M., and Ajzen, I. (1975). Belief, Attitude, Intention, and Behavior: An Introduction to Theory and Research. Reading, MA: Addison-Wesley.

Gonsalves, C. (2024). Addressing student non-compliance in AI use declarations: implications for academic integrity and assessment in higher education. Assess. Eval. High. Educ. 50, 1–16. doi: 10.1080/02602938.2024.2415654

Gonzalez-Tamayo, L. A., Maheshwari, G., Bonomo-Odizzio, A., and Krauss-Delorme, C. (2024). Successful business behaviour: an approach from the unified theory of acceptance and use of technology (UTAUT). Int. J. Manag. Educ. 22:100979. doi: 10.1016/j.ijme.2024.100979

Gordon, J. A., Balta-Ozkan, N., and Nabavi, S. A. (2024). Towards a unified theory of domestic hydrogen acceptance: an integrative, comparative review. Int. J. Hydrog. Energy 56, 498–524. doi: 10.1016/j.ijhydene.2023.12.167

Greitemeyer, T., and Kastenmüller, A. (2024). A longitudinal analysis of the willingness to use ChatGPT for academic cheating: applying the theory of planned behavior. Technol. Mind Behav. 5, 1–8. doi: 10.1037/tmb0000133

Griffiths, D., Frías-Martínez, E., Tlili, A., and Burgos, D. (2024). A cybernetic perspective on generative AI in education: from transmission to coordination. Int. J. Interact. Multimed. Artif. Intell. 8, 15–25. doi: 10.31449/ijimai.v8i5.3430

Halder, P., Pietarinen, J., Havu-Nuutinen, S., Pöllänen, S., and Pelkonen, P. (2016). The theory of planned behavior model and students' intentions to use bioenergy: a cross-cultural perspective. Renew. Energy 89, 627–635. doi: 10.1016/j.renene.2015.12.023

Huang, D., Huang, Y., and Cummings, J. J. (2024). Exploring the integration and utilisation of generative AI in formative e-assessments: a case study in higher education. Australas. J. Educ. Technol. 40, 1–19. doi: 10.14742/ajet.9467

Hutson, J. (2024). Integrating art and AI: evaluating the educational impact of AI tools in digital art history learning. Forum Art Stud. 1, 1–12. Available online at: https://digitalcommons.lindenwood.edu/faculty-research-papers/578?utm_source=digitalcommons.lindenwood.edu%2Ffaculty-research-papers%2F578&utm_medium=PDF&utm_campaign=PDFCoverPages

Idayani, R. W., and Darmaningrat, E. W. T. (2024). Evaluation of factors affecting student acceptance of Zedemy using the unified theory of acceptance and use of technology (UTAUT). Proc. Comput. Sci. 234, 1276–1287. doi: 10.1016/j.procs.2024.03.125

Islam, M. S., Tan, C. C., Sinha, R., and Selem, K. M. (2024). Gaps between customer compatibility and usage intentions: the moderation function of subjective norms towards chatbot-powered hotel apps. Int. J. Hosp. Manag. 123:103910. doi: 10.1016/j.ijhm.2024.103910

Ivanov, S., and Soliman, M. (2023). Game of algorithms: ChatGPT implications for the future of tourism education and research. J. Tour. Futures 9, 214–221. doi: 10.1108/JTF-02-2023-0038

Ivanov, S., Soliman, M., Tuomi, A., Alkathiri, N. A., and Al-Alawi, A. N. (2024). Drivers of generative AI adoption in higher education through the lens of the theory of planned behaviour. Technol. Soc. 77:102521. doi: 10.1016/j.techsoc.2024.102521

Johnson, D. M., Doss, W., and Estepp, C. M. (2024). Using ChatGPT with novice Arduino programmers: effects on performance, interest, self-efficacy, and programming ability. J. Res. Tech. Careers 8:1152. doi: 10.9741/2578-2118.1152

Keuning, H., Alpizar-Chacon, I., Lykourentzou, I., Beehler, L., Köppe, C., de Jong, I., et al. (2024). Students’ perceptions and use of generative AI tools for programming across different computing courses. In Proceedings of the 24th Koli Calling International Conference on Computing Education Research, 1–12. doi: 10.1145/3699538.3699546

Kleine, A.-K., Schaffernak, I., and Lermer, E. (2025). Exploring predictors of AI chatbot usage intensity among students: within- and between-person relationships based on the technology acceptance model. Comput. Hum. Behavi. Artif. Hum. 3:100113. doi: 10.1016/j.chbah.2024.100113,

Klobas, J. E., McGill, T., and Wang, X. (2019). How perceived security risk affects intention to use smart home devices: a reasoned action explanation. Comput. Secur. 87:101571. doi: 10.1016/j.cose.2019.101571

Kohnke, L. (2024). Exploring EAP students’ perceptions of GenAI and traditional grammar-checking tools for language learning. Comput. Educat. Artif. Int. 7:100279. doi: 10.1016/j.caeai.2024.100279,

Kong, S.-C., and Wang, Y.-Q. (2021). Investigating primary school principals’ programming perception and support from the perspective of reasoned action: a mixed methods approach. Comput. Educ. 172:104267. doi: 10.1016/j.compedu.2021.104267

Kong, S. C., Yang, Y., and Hou, C. (2024). Examining teachers’ behavioural intention of using generative artificial intelligence tools for teaching and learning based on the extended technology acceptance model. Comput. Educ. Artif. Int. 7:100328. doi: 10.1016/j.caeai.2024.100328,

Kumar, R., and Mindzak, M. (2024). Who wrote this? Detecting artificial intelligence–generated text from human-written text. Canadian Persp. Acad. Integrity 7:77675. doi: 10.55016/ojs/cpai.v7i1.77675

Kwak, Y., Seo, Y. H., and Ahn, J.-W. (2022). Nursing students' intent to use AI-based healthcare technology: path analysis using the unified theory of acceptance and use of technology. Nurse Educ. Today 119:105541. doi: 10.1016/j.nedt.2022.105541,

Lapkin, S., Levett-Jones, T., and Gilligan, C. (2015). Using the theory of planned behaviour to examine health professional students' behavioural intentions in relation to medication safety and collaborative practice. Nurse Educ. Today 35, 935–940. doi: 10.1016/j.nedt.2015.03.018,

Law, L. (2024). Application of generative artificial intelligence (GenAI) in language teaching and learning: a scoping literature review. Comput. Educ. Open 6:100174. doi: 10.1016/j.caeo.2024.100174

Lee, S., Jones-Jang, S. M., Chung, M., Kim, N., and Choi, J. (2024). Who is using ChatGPT and why? Extending the unified theory of acceptance and use of technology (UTAUT) model. Inf. Res. Int. Electr. J. 29, 54–72. doi: 10.47989/ir291647

Lew, S., Tan, G. W.-H., Loh, X.-M., Hew, J.-J., and Ooi, K.-B. (2020). The disruptive mobile wallet in the hospitality industry: an extended mobile technology acceptance model. Technol. Soc. 63:101430. doi: 10.1016/j.techsoc.2020.101430,

Li, J., Dada, A., Puladi, B., Kleesiek, J., and Egger, J. (2024). ChatGPT in healthcare: a taxonomy and systematic review. Comput. Methods Prog. Biomed. 245:108013. doi: 10.1016/j.cmpb.2024.108013,

Liu, J., and Chen, X. (2023). Analysis of college students’ phone call behavior while riding e-bikes: an application of the extended theory of planned behavior. J. Transp. Health 31:101635. doi: 10.1016/j.jth.2023.101635

Liu, J., Somjaivong, B., Panpanit, L., and Zhang, L. (2025). Effect of a self-efficacy-promoting program on pain management among patients with cancer: a quasi-experimental study. Pain Manag. Nurs. 26, e194–e200. doi: 10.1016/j.pmn.2024.10.009,

Liu, A., Urquía-Grande, E., López-Sánchez, P., and Rodríguez-López, Á. (2022). How technology paradoxes and self-efficacy affect the resistance of facial recognition technology in online microfinance platforms: evidence from China. Technol. Soc. 70:102041. doi: 10.1016/j.techsoc.2022.102041

Liu, C., Xiong, K., Chen, X., and Huang, X. (2023). Consumer behavior patterns of carbon neutral label using the unified theory of acceptance and use of technology. Chin. J. Popul. Resour. Environ. 21, 137–144. doi: 10.1016/j.cjpre.2023.09.002

Lu, Y.-J., Lai, H.-R., Lin, P.-C., Kuo, S.-Y., Chen, S.-R., and Lee, P.-H. (2022). Predicting exercise behaviors and intentions of Taiwanese urban high school students using the theory of planned behavior. J. Pediatr. Nurs. 62, e39–e44. doi: 10.1016/j.pedn.2021.07.001,

Mahmud, S. N. D., and Osman, K. (2010). The determinants of recycling intention behavior among the Malaysian school students: an application of theory of planned behaviour. Procedia. Soc. Behav. Sci. 9, 119–124. doi: 10.1016/j.sbspro.2010.12.123

Mandal, S., Dubey, R. K., Basu, B., and Tiwari, A. (2025). Exploring the orientation towards metaverse gaming: contingent effects of VR tools usability, perceived behavioural control, subjective norms and age. J. Innov. Knowl. 10:100632. doi: 10.1016/j.jik.2024.100632

Mäntymäki, M., Merikivi, J., Verhagen, T., Feldberg, F., and Rajala, R. (2014). Does a contextualized theory of planned behavior explain why teenagers stay in virtual worlds? Int. J. Inf. Manag. 34, 567–576. doi: 10.1016/j.ijinfomgt.2014.05.003

Masudin, I., Restuputri, D. P., and Syahputra, D. B. (2023). Analysis of financial technology user acceptance using the unified theory of acceptance and use of technology method. Procedia Comput. Sci. 227, 563–572. doi: 10.1016/j.procs.2023.10.559

McBride, M., Carter, L., and Phillips, B. (2020). Integrating the theory of planned behavior and behavioral attitudes to explore texting among young drivers in the US. Int. J. Inf. Manag. 50, 365–374. doi: 10.1016/j.ijinfomgt.2019.09.003

Michels, M., Bonke, V., Wever, H., and Musshoff, O. (2024). Understanding farmers’ intention to buy alternative fuel tractors in German agriculture applying the unified theory of acceptance and use of technology. Technol. Forecast. Soc. Change 203:123360. doi: 10.1016/j.techfore.2024.123360

Mishra, D., Akman, I., and Mishra, A. (2014). Theory of reasoned action application for green information technology acceptance. Comput. Human Behav. 36, 29–40. doi: 10.1016/j.chb.2014.03.030

Mittal, U., Sai, S., Chamola, V., and Sangwan, D. (2024). A comprehensive review on generative AI for education. IEEE Access 12, 142733–142759. doi: 10.1109/ACCESS.2024.3468368

Nguyen, A., Ngo, H. N., Hong, Y., Dang, B., and Nguyen, B. P. T. (2023). Ethical principles for artificial intelligence in education. Educ. Inf. Technol. 28, 4221–4241. doi: 10.1007/s10639-022-11316-w

Nikolic, S., Wentworth, I., Sheridan, L., Moss, S., Duursma, E., Jones, R. A., et al. (2024). A systematic literature review of attitudes, intentions and behaviours of teaching academics pertaining to AI and generative AI (GenAI) in higher education: an analysis of GenAI adoption using the UTAUT framework. Australas. J. Educ. Technol. 40, 56–75. doi: 10.14742/ajet.9643

Niu, L. (2020). A review of the application of logistic regression in educational research: common issues, implications, and suggestions. Educ. Rev. 72, 41–67. doi: 10.1080/00131911.2018.1483892

Philippi, P., Baumeister, H., Apolinário-Hagen, J., Ebert, D. D., Hennemann, S., Kott, L., et al. (2021). Acceptance towards digital health interventions – model validation and further development of the unified theory of acceptance and use of technology. Internet Interv. 26:100459. doi: 10.1016/j.invent.2021.100459,

Prohorovs, A., Tsaryk, O., and Fainglozs, L. (2024a). “Employers’ expectations of students’ generative AI skills: a student perspective,” in 2024 14th International Conference on Advanced Computer Information Technologies (ACIT) (IEEE), 809–814.

Prohorovs, A., Tsaryk, O., and Frick, W. C. (2024b). Ethical Aspects of GenAI Use by University Students: An International Survey. Cham: Springer.

Qu, Y., He, S., Tao, D., Yu, W., and Hu, X. (2023). Dissecting Ocean-friendly behavioral intention among college students: incorporating ocean literacy and diversified incentive mechanism with the theory of planned behavior. Ocean Coast. Manage. 235:106494. doi: 10.1016/j.ocecoaman.2023.106494

Rasul, T., Nair, S., Kalendra, D., Balaji, M. S., de Oliveira Santini, F., Ladeira, W. J., et al. (2024). Enhancing academic integrity among students in GenAI era: a holistic framework. Int. J. Manag. Educ. 22:101041. doi: 10.1016/j.ijme.2024.101041

Raza, M. M., Venkatesh, K. P., and Kvedar, J. C. (2024). Generative AI and large language models in health care: pathways to implementation. NPJ Digital Medicine 7:62. doi: 10.1038/s41746-023-00988-4,

Razi-ur-Rahim, M., Uddin, F., Dwivedi, P., and Pandey, D. K. (2024). Entrepreneurial intentions among polytechnic students in India: examining the theory of planned behaviour using PLS-SEM. Int. J. Manag. Educ. 22:101020. doi: 10.1016/j.ijme.2024.101020

Relente, A. R. R., and Capistrano, E. P. S. (2025). Innovation self-efficacy, theory of planned behavior, and entrepreneurial intentions: the perspective of young Filipinos. Asia Pac. Manag. Rev. 30:100350. doi: 10.1016/j.apmrv.2024.100350

Rowland, D. R. (2023). Two frameworks to guide discussions around levels of acceptable use of generative AI in student academic research and writing. J. Acad. Lang. Learn. 17, T31–T69. Available online at: https://journal.aall.org.au/index.php/jall/article/view/915

Ruiz-Herrera, L. G., Valencia-Arias, A., Gallegos, A., Benjumea-Arias, M., and Flores-Siapo, E. (2023). Technology acceptance factors of e-commerce among young people: an integration of the technology acceptance model and theory of planned behavior. Heliyon 9:e16418. doi: 10.1016/j.heliyon.2023.e16418,

Sallam, M., Elsayed, W., Al-Shorbagy, M., Barakat, M., El Khatib, S., Ghach, W., et al. (2024). ChatGPT usage and attitudes are driven by perceptions of usefulness, ease of use, risks, and psycho-social impact: a study among university students in the UAE. Front. Educ. 9:1414758. doi: 10.3389/feduc.2024.1414758

Sanusi, I. T., Agbo, F. J., Dada, O. A., Yunusa, A. A., Aruleba, K. D., Obaido, G., et al. (2024). Stakeholders’ insights on artificial intelligence education: perspectives of teachers, students, and policymakers. Comput. Educ. Open 7:100212. doi: 10.1016/j.caeo.2024.100212

Sari, N. P. W. P., Duong, M.-P. T., Li, D., Nguyen, M.-H., and Vuong, Q.-H. (2024). Rethinking the effects of performance expectancy and effort expectancy on new technology adoption: evidence from Moroccan nursing students. Teach. Learn. Nurs. 19, e557–e565. doi: 10.1016/j.teln.2024.04.002

Scherer, R., Siddiq, F., Howard, S. K., and Tondeur, J. (2023). The more experienced, the better prepared? New evidence on the relation between teachers’ experience and their readiness for online teaching and learning. Comput. Hum. Behav. 139:107530. doi: 10.1016/j.chb.2022.107530

Simms, R. C. (2025). Generative artificial intelligence (AI) literacy in nursing education: a crucial call to action. Nurse Educ. Today 146:106544. doi: 10.1016/j.nedt.2024.106544

Strzelecki, A. (2024). Students’ acceptance of ChatGPT in higher education: an extended unified theory of acceptance and use of technology. Innov. High. Educ. 49, 223–245. doi: 10.1007/s10755-023-09686-1

Sullivan, M., Kelly, A., and McLaughlan, P. (2023). ChatGPT in higher education: considerations for academic integrity and student learning. J. Appl. Learn. Teaching 6, 31–40. doi: 10.37074/jalt.2023.6.1.7

Sweller, J. (2020). Cognitive load theory and educational technology. Educ. Technol. Res. Dev. 68, 1–16. doi: 10.1007/s11423-019-09701-3

Teo, T., Doleck, T., Bazelais, P., and Lemay, D. J. (2019). Exploring the drivers of technology acceptance: a study of Nepali school students. Educ. Technol. Res. Dev. 67, 495–517. doi: 10.1007/s11423-019-09654-7

Trinh, L. T. T., Hang, N. T. T., Cuong, L. M., Dinh, N. V., Linh, H. K., Trinh, D. T., et al. (2025). State-of-the-arts methods for studying factors driving the utilization of open science resources. MethodsX 14:103125. doi: 10.1016/j.mex.2024.103125

Vazquez-Cano, E., Ramírez-Hurtado, J., Sáez-López, J.-M., and Meneses, E. (2023). ChatGPT: the brightest student in the class. Think. Skills Creat. 49:101380. doi: 10.1016/j.tsc.2023.101380

Venkatesh, V., Morris, M. G., Davis, G. B., and Davis, F. D. (2003). User acceptance of information technology: toward a unified view. MIS Q. 27, 425–478. doi: 10.2307/30036540

Wallace, B., and Kernozek, T. (2017). Self-efficacy theory applied to undergraduate biomechanics instruction. J. Hosp. Leis. Sport Tour. Educ. 20, 10–15. doi: 10.1016/j.jhlste.2016.11.001

Wong, L.-H., and Looi, C.-K. (2024). Advancing the generative AI in education research agenda: insights from the Asia-Pacific region. Asia Pac. J. Educ. 44, 1–7. doi: 10.1080/02188791.2024.2315704

Wu, D. (2020). Empirical study of knowledge withholding in cyberspace: integrating protection motivation theory and theory of reasoned behavior. Comput. Hum. Behav. 105:106229. doi: 10.1016/j.chb.2019.106229

Xia, L., Zhang, K., Huang, F., Jian, P., and Yang, R. (2024). The intentions and factors influencing university students to perform CPR for strangers based on the theory of planned behavior study. Heliyon 10:e38135. doi: 10.1016/j.heliyon.2024.e38135

Yang, R. P. J., Noels, K. A., and Saumure, K. D. (2006). Multiple routes to cross-cultural adaptation for international students: mapping the paths between self-construals, English language confidence, and adjustment. Int. J. Intercult. Relat. 30, 487–506. doi: 10.1016/j.ijintrel.2005.11.010

Yilmaz, R., and Karaoglan Yilmaz, F. G. (2023). Augmented intelligence in programming learning: examining student views on the use of ChatGPT for programming learning. Comput. Hum. Behav. Artif. Hum. 1:100005. doi: 10.1016/j.chbah.2023.100005

Yu, H., and Guo, Y. (2023). Generative artificial intelligence empowers educational reform: current status, issues, and prospects. Front. Educ. 8:1183162. doi: 10.3389/feduc.2023.1183162

Yusuf, A., Pervin, N., and Román-González, M. (2024). Generative AI and the future of higher education: a threat to academic integrity or reformation? Evidence from multicultural perspectives. Int. J. Educ. Technol. High. Educ. 21:10. doi: 10.1186/s41239-024-00453-6

Zaim, M., Arsyad, S., Waluyo, B., Ardi, H., Al Hafizh, M., Zakiyah, M., et al. (2024). AI-powered EFL pedagogy: integrating generative AI into university teaching preparation through UTAUT and activity theory. Comput. Educ. Artif. Int. 7:100335. doi: 10.1016/j.caeai.2024.100335

Zhang, L., and Xu, J. (2025). The paradox of self-efficacy and technological dependence: unraveling generative AI’s impact on university students’ task completion. Internet High. Educ. 65:100978. doi: 10.1016/j.iheduc.2024.100978

Zulfiqar, S., Sarwar, B., Huo, C., Zhao, X., and Mahasbi, H. U. (2025). AI-powered education: driving entrepreneurial spirit among university students. Int. J. Manag. Educ. 23:101106. doi: 10.1016/j.ijme.2024.101106

Keywords: GenAI application, university students, higher education, generative artificial intelligence, GenAI in HEI setting

Citation: Prohorovs A, Užule K and Tsaryk O (2026) Understanding higher education students’ reluctance to adopt GenAI in learning in Latvia and Ukraine. Front. Educ. 10:1676900. doi: 10.3389/feduc.2025.1676900

Received: 31 July 2025; Revised: 18 November 2025; Accepted: 15 December 2025;
Published: 07 January 2026.

Edited by:

Charity M. Dacey, Touro University Graduate School of Education, United States

Reviewed by:

Walter Alexander Mata López, University of Colima, Mexico
Alsaeed Alshamy, Sultan Qaboos University, Oman

Copyright © 2026 Prohorovs, Užule and Tsaryk. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Olga Tsaryk, tsarykolga@gmail.com; o.tsaryk@wunu.edu.ua

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.