- 1 Universidad de Alcalá, Alcalá de Henares, Spain
- 2 GES Department, Universidad Galileo, Guatemala City, Guatemala
Introduction: The rapid expansion of Generative Artificial Intelligence (GAI) is reshaping pedagogical practices and educational policies worldwide. One of its most notable contributions is its capacity to deliver personalized feedback, which has the potential to enhance student learning and academic performance. This study aims to propose and validate a conceptual model that examines the factors influencing student behavior in response to GAI-mediated feedback in online learning environments.
Methods: A Massive Open Online Course (MOOC) titled “Transforming Education with AI: ChatGPT” was designed within a university setting, in which students received feedback on their activities through the GAI tool ChatGPT. Data were collected through a survey completed by 161 participants. The proposed model was evaluated and validated using Partial Least Squares Structural Equation Modeling (PLS-SEM).
Results: Findings indicate that students hold a positive perception of GAI as a tool for receiving feedback within their learning process. Although concerns related to privacy and security remain, these factors do not exert a significant influence on students’ overall satisfaction with GAI-mediated feedback.
Discussion: The results suggest that GAI-mediated feedback is well-received by students and can be integrated effectively into online learning environments. While issues surrounding privacy and security should not be overlooked, they do not appear to hinder students’ acceptance or satisfaction. These insights contribute to the development of evidence-based strategies for the pedagogical incorporation of GAI in higher education.
1 Introduction
In recent decades, online teaching and learning processes have experienced sustained growth, which accelerated significantly in recent years as a response to the challenges posed by the COVID-19 pandemic for higher education institutions worldwide (Estriegana et al., 2024; Govindaraju et al., 2023). To address this shift, most institutions adopted hybrid teaching models, which enabled them to respond to a new educational paradigm grounded in the flexibility and adaptability of learning environments (Bakar et al., 2023; Raes, 2022). Within this context, Artificial Intelligence (AI) has played a fundamental role by enabling the analysis of large volumes of data to identify performance patterns (Dhara et al., 2022) and by offering specific recommendations that enhance both students’ conceptual understanding and the personalization of teaching strategies (Boscardin et al., 2024; Chen M. et al., 2024; Chen X. et al., 2024).
The development of Generative Artificial Intelligence (GAI) has further expanded these possibilities, establishing itself as a tool capable of transforming pedagogical methods and shaping educational policy worldwide (Canabal and Margalef, 2017). One of its main contributions lies in the generation of personalized educational content, such as activities and assessments tailored to the level and pace of each student, thereby strengthening learning personalization and fostering more inclusive and equitable access to education (Morales-Chan et al., 2024). Nonetheless, the integration of these technologies also raises considerable ethical challenges, including issues related to data privacy, the detection and correction of algorithmic biases, and the need to guarantee equity in access to digital resources (Barrett and Pack, 2023).
Among the emerging challenges, one of the most pressing concerns is how GAI can contribute to optimizing feedback processes in online courses in ways that not only promote deep and meaningful learning but also enhance student motivation and satisfaction (Wongvorachan and Bulut, 2022). The ability of these technologies to generate real-time, personalized feedback constitutes a valuable resource that can directly influence the improvement of students’ academic performance (Lin et al., 2022).
Despite these advances, the academic literature still lacks a systematic model for comprehensively assessing the impact of GAI on feedback processes and, consequently, on student satisfaction in online education environments (Boscardin et al., 2024). This gap underscores the need for empirical studies that examine the factors involved in such impact. In this regard, the present study proposes an evaluation model based on seven key dimensions: feedback, personal predisposition, privacy, attitude toward GAI, trust, security, and student satisfaction. These dimensions will be tested through an empirical analysis using the PLS-SEM technique, with the aim of validating the proposed hypothetical relationships and contributing evidence to strengthen the academic debate on the application of GAI in contemporary higher education (Govindaraju et al., 2023).
This article is structured as follows. Section 2 presents the literature review and theoretical framework: it establishes the importance of feedback and its challenges, introduces generative artificial intelligence, and identifies key studies that inform a specific model of the impact of generative artificial intelligence on feedback in online courses. Section 3 presents the model, its components, and the hypotheses. Section 4 describes the research methodology, the instrument used, the participants and data collection, and reports the data analysis and results. Section 5 contains the discussion. Finally, the article closes with the conclusions, limitations, and future work drawn from the study.
2 Literature review
2.1 Foundations of feedback in education
Feedback constitutes an essential process within teaching and learning activities, as it guides students toward achieving the objectives, outcomes, and educational goals established by the teaching team (Mamoon et al., 2016). This process acquires particular relevance in higher education, since it provides concrete information on academic performance, enabling students to adjust their study strategies and address areas in need of improvement (Aguilar et al., 2016).
Beyond the evaluation of results, feedback fosters the development of analytical, writing, and presentation skills, thereby preparing students to face challenges in both academic and professional contexts. Positive reinforcement acts as a motivating factor, while the learning derived from mistakes becomes an opportunity for growth and preparation for future challenges (Viciana et al., 2023).
In the context of Massive Open Online Courses (MOOCs), the literature has highlighted the importance of implementing formative feedback mechanisms capable of addressing the needs of large and diverse student populations (Barrett and Pack, 2023; Steiss et al., 2024). Although peer feedback has emerged as a viable alternative, a significant portion of students expresses a preference for more detailed and personalized comments (Suen, 2014; Floratos et al., 2017).
Studies have also identified three major challenges. The first is the lack of personalization, since generic feedback rarely responds to the specific needs of each student (Sunar et al., 2016). The second relates to delays in delivery or low levels of interaction, factors that undermine motivation and reduce the effectiveness of the learning process (Laaser, 2014; Khe and Wing, 2014). Finally, the low quality of comments directly affects students’ perception of their usefulness, which becomes particularly critical in large-scale educational initiatives such as MOOCs (Segovia, 2021).
2.2 The role of generative AI in educational feedback
Generative Artificial Intelligence (GAI) is profoundly reshaping the contemporary educational landscape, eliciting divergent responses across academic institutions that range from restricting its use to actively integrating it into teaching practices (Samala et al., 2025). This polarization reflects both the novelty and the disruptive potential of the technology (Ahmad et al., 2023). Its influence is evident in significant transformations of pedagogical methodologies and in the redefinition of teaching and learning practices (Bahroun et al., 2023; Bower et al., 2024). In this regard, the impact of GAI is often compared to earlier technological revolutions, such as the advent of the Internet or smartphones, due to its capacity to alter already consolidated structures (Ooi et al., 2023).
The integration of GAI into educational environments fosters new dynamics of collaborative learning and promotes teaching innovation, establishing itself as an essential resource for professional development in digital education (Alammari, 2024). However, its full potential can only be realized if faculty adapt their teaching and assessment strategies, ensuring ethical use and preventing malpractice (Ali et al., 2024; Alshaikh et al., 2024). Its utility extends to diverse domains such as university admissions, assessment, and educational research (Boscardin et al., 2024), while simultaneously raising ethical and methodological dilemmas that demand critical attention (Mao et al., 2024).
The effectiveness of these tools has been examined in empirical studies. For example, ChatGPT has demonstrated its usefulness in addressing conceptual questions in disciplines such as medical physiology (Agarwal et al., 2023), while comparisons with other models like Claude-2 have revealed differences in accuracy and relevance (Banerjee et al., 2023). Likewise, its capacity to provide contextualized and valuable information to learners in educational settings has been highlighted (Almagazzachi et al., 2024).
GAI also supports knowledge construction by enabling immersive experiences through virtual and augmented reality, facilitating the creation of realistic and educationally valuable simulations (Carlson, 2023; Vaughn et al., 2024). This potential requires innovative pedagogical environments capable of redesigning active learning experiences while addressing emerging risks such as plagiarism and the erosion of academic integrity (Salinas-Navarro et al., 2024).
At the student outcome level, evidence shows that GAI can contribute to learning when used as a virtual tutor, fostering student confidence despite the possibility of inaccurate responses (Ding et al., 2023). Research in higher education further confirms that the influence of tools such as ChatGPT is shaped by the technological design itself (Chen et al., 2023). It has also been shown to enhance early stages of critical thinking (Essien et al., 2024), improve performance through conversational assistance and problem-solving support (French et al., 2023), and encourage self-regulation in academic writing by means of innovative pedagogical models (Kong et al., 2024).
Collaborative learning in “human-human” and “human-machine” modalities presents distinct nuances: while interactions with GAI reduce cognitive load, they also encourage more systematic thinking (Li et al., 2024). These benefits are reinforced in iterative processes of AI-assisted academic writing, where students perform better when working jointly with the technology (Le et al., 2024). From the teaching perspective, GAI has become a widely adopted resource in both course material preparation and direct instruction (Vera, 2024).
At the institutional level, although these technologies enrich course content, they also increase demands for updates and maintenance, adding to the workload of faculty (Ilieva et al., 2023). The growing integration of GAI requires higher education institutions to strengthen policies that safeguard academic integrity (Song, 2024). Ethical use is thus framed as an indispensable requirement (De Gagne et al., 2023), accompanied by curricular integration proposals designed to maximize benefits while minimizing risks (Gosak et al., 2024). Along these lines, controlled adoption strategies in management programs have been explored to balance innovation with caution (Hyde et al., 2024).
Therefore, the introduction of GAI challenges traditional notions of authorship and academic norms, generating the need for clear regulatory frameworks to guide its use (Duah and McGivern, 2024). The absence of such guidelines can lead to ethical ambiguities; hence, several universities have begun establishing specific policies to regulate its application in teaching, research, and learning contexts (Spivakovsky et al., 2023).
3 Research model and hypotheses
A theoretical model was constructed to understand students’ attitudes towards the use of GAI in the feedback process in online courses. Each hypothesis presented below corresponds to a path in the Structural Equation Modeling (SEM) depicted in Figure 1.
3.1 Feedback
Feedback in teaching and learning processes is conceived as the guidance provided by instructors to students in order to achieve the intended learning outcomes and objectives (Suen, 2014; Floratos et al., 2017). In the university context, this task becomes particularly complex due to the volume and diversity of content, which makes it difficult to deliver timely and high-quality responses (Hujala et al., 2020).
Information technologies have transformed the educational ecosystem, reshaping the ways in which knowledge is accessed and assimilated (Estriegana et al., 2021). These tools turn the classroom into a dynamic and participatory environment, within which Generative Artificial Intelligence (GAI) plays a central role by providing personalized explanations and feedback for each student (Ayoubi, 2024; Chen M. et al., 2024; Chen X. et al., 2024).
The need to address the diversity of learning styles requires flexible and scalable training approaches. Traditional methods show limitations in this regard, whereas tools such as ChatGPT or Copilot allow for productivity optimization, though they remain constrained in terms of personalization (Shaka et al., 2023; Patil, 2024). Within this framework, the use of language models has expanded to innovative experiences, such as the design of educational board games, where ChatGPT supports educators through the stages of ideation, customization, and feedback (Junior et al., 2023).
In the field of academic writing, GAI demonstrates significant potential. For instance, it has been noted that ChatGPT facilitates the development of electronic portfolios by providing automatic feedback and suggestions for improvement throughout the reflective process (Le et al., 2024). Similarly, it is recognized as an emerging teaching support resource, particularly in mathematics education, where it is used as a complement to formative assessment, with both benefits and limitations (Lee et al., 2024; Téllez et al., 2024).
The literature shows that ChatGPT can contribute to strengthening writing skills in higher education, functioning as a valuable support alternative (Escalante et al., 2023; Mahapatra, 2024; Seetharaman, 2023). Nevertheless, when compared with teacher-provided feedback, the latter offers a more contextualized and empathetic analysis, while ChatGPT, though reliable in certain aspects, lacks that personal dimension (Wang et al., 2024).
Perceptions of ChatGPT’s usefulness in formative feedback reflect both opportunities and cautions. Research indicates that although expert teachers’ comments often surpass in quality those generated by GAI, the latter proves especially useful in resource-constrained contexts, such as Massive Open Online Courses (MOOCs), where the scale hinders individualized attention (Barrett and Pack, 2023; Steiss et al., 2024; Zheng et al., 2024; Liu, 2025). In this scenario, GAI emerges as an alternative to ensure support, particularly in the early stages of learning. Finally, it has been highlighted that ChatGPT also makes a significant contribution in specific areas such as programming education, by providing automated feedback tailored to students’ levels (Phung et al., 2024).
Therefore, our hypothesis suggests that feedback received by students via GAI (ChatGPT) positively impacts their attitude towards it (ATT) (H6).
3.2 Trust (TRU)
Trust in the use of emerging technologies such as ChatGPT constitutes a decisive element in the adoption and effectiveness of these tools in educational settings, as it reflects students’ sense of security and certainty when employing them to achieve their learning goals (Yang et al., 2023). Several studies emphasize that trust is a determining factor in the success of technological innovations, since its presence promotes both acceptance and continued use of such tools (Loh et al., 2021). In this regard, evidence shows that students’ trust is positively related to their intention to use ChatGPT consistently, reinforcing its role as a key mediator in technological adoption (Salifu et al., 2024). This impact is further confirmed by research highlighting that the degree of trust influences not only the initial use of ChatGPT, but also its long-term integration as a learning resource (Ayoub et al., 2024). Likewise, trust functions as a relevant predictor in the acceptance of generative artificial intelligence (GAI), increasing students’ expectations regarding its use (Tanantong and Wongras, 2024).
The effects of trust on student behavior extend beyond mere technological adoption. This variable has been positively associated with perceptions of security, self-confidence, and favorable attitudes toward ChatGPT, thereby enhancing satisfaction with the digital learning experience (Salah et al., 2024). From another perspective, the relationship between trust and perceived productivity reveals both benefits and risks: while these tools can optimize performance, they also raise concerns about potential threats in their implementation (Kuhail et al., 2024).
In the context of higher education, trust has been linked to processes of pedagogical adaptation and to the need for instructors to reformulate their practices in order to critically integrate GAI. It is recognized that, in addition to trust, factors such as critical thinking and self-regulation strategies are essential for the effective use of ChatGPT (Abdelhalim, 2024). Moreover, research has shown that the level of acceptance of educational chatbots depends on a balance between perceived trust, performance expectations, and social influence, reflecting the multifactorial nature of their integration into learning environments (Al Shakhoor et al., 2024).
Recent literature has also examined trust in ChatGPT as a virtual tutor in fields such as STEM, showing that students consider its responses and academic support to be reliable (Ding et al., 2023). However, concerns persist regarding excessive dependence, since the uncritical use of these systems may hinder the autonomous development of skills and the validation of information (Kiryakova and Angelova, 2023). In this sense, the need has been raised to implement strategies that mitigate AI-generated hallucinations in order to preserve and strengthen students’ trust in the educational use of these tools (Leiser et al., 2023).
Therefore, our hypotheses suggest that trust in the use of GAI (ChatGPT) positively impacts behavior change (BC) (H12), feedback (FEEDBACK) (H13), student attitudes towards it (ATT) (H11), perception of security (PS) (H14), and satisfaction (SAT) (H15).
3.3 Behavior change (BC)
Behavioral change is understood as the process through which an individual, group, or community intentionally modifies actions or habits, influenced by factors such as education, motivation, persuasion, social influence, or the surrounding environment (Zhu et al., 2024). In the educational context, the acceptance and use of generative artificial intelligence (GAI) tools, such as ChatGPT, are linked to determinants such as performance expectancy, motivation, and perceived ease of use, which encourage students’ willingness to adjust their learning practices (Sabraz Nawaz et al., 2024).
Trust constitutes a decisive element in the adoption of these technologies, as it enhances students’ readiness to modify their behaviors in relation to their use (Jo, 2023). From a theoretical perspective, the application of the Unified Theory of Acceptance and Use of Technology confirms that behavioral change toward ChatGPT is a key factor in understanding its integration into academic environments (Strzelecki, 2023). Likewise, the use of GAI techniques and applications significantly influences students’ cognitive performance, strengthening both their learning capacity and behavioral patterns (Jaboob et al., 2024).
The impact of digital technology based on artificial intelligence is not confined to the academic sphere, but also extends to quality of life, where performance and effort expectations determine the way users integrate these tools into their routines (Kosasi et al., 2023). Along these lines, the intention to adopt ChatGPT is reinforced when its performance benefits are emphasized, trust conditions are consolidated, and favorable environments are created for its professional and academic use (Emon et al., 2023).
Perceptions of the benefits, risks, and weaknesses of GAI differ between students and educators, yet perceived strengths exert a positive effect on attitudes, subjective norms, and perceived behavioral control, which directly influence behavioral change (Ivanov et al., 2024). Similarly, the use of ChatGPT among students is conditioned by variables such as performance expectancy, social influence, educational and technological self-efficacy, and personal anxiety, while academic integrity may act as a barrier to its adoption (Bouteraa et al., 2024).
Other studies confirm that perceived usefulness and ease of use are positively associated with behavioral intention and actual usage behavior, validating the applicability of the Technology Acceptance Model in this field (Ma et al., 2024). In sectors beyond education, such as healthcare, the combination of TAM and the Theory of Planned Behavior has demonstrated that technological and attitudinal factors exert a positive influence on the intention to use, except for subjective norms, which exhibit a contrary effect (Dhara et al., 2023).
In the university context, the adoption of educational chatbots confirms that trust, performance expectancy, and student habits act as predictors of behavioral intention (Rahim et al., 2022). Similarly, in e-commerce, perceptions of accuracy and interaction experience have been shown to directly influence online purchasing behaviors, which highlights the applicability of these models across diverse digital environments (Adwan and Aladwan, 2022).
Therefore, our hypotheses suggest that behavior change due to the use of GAI (ChatGPT) positively impacts attitude (ATT) (H2), feedback (FEEDBACK) (H3), privacy perception (PP) (H4) and satisfaction (SAT) (H5).
3.4 Attitude (ATT)
Students’ attitudes toward the use of Generative Artificial Intelligence (GAI) constitute a decisive factor in their acceptance of, and satisfaction with, its implementation in online learning environments. This disposition is influenced by perceptions of usefulness, ease of use, and prior experiences, which determine the degree of openness to incorporating such tools into educational processes (Cao et al., 2023). Evidence indicates that evaluations of GAI are heterogeneous and depend on sociodemographic variables, such as age, which condition the willingness to integrate these tools into educational practice (Moravec et al., 2024).
In teaching contexts, the acceptance of ChatGPT has been associated with predominantly positive attitudes, as it helps address structural limitations in the teaching–learning process (Mukred et al., 2023). In specialized university settings, such as health sciences, perceived risk, perceived usefulness, and ease of use have been identified as key factors shaping favorable attitudes toward this technology (Sallam et al., 2023). Nevertheless, a persistent tension remains between recognizing its benefits and expressing doubts about the quality and accuracy of its outputs, reflecting an ambivalent attitude oscillating between enthusiasm and caution (Weber et al., 2024).
Several studies confirm that students acknowledge both the opportunities and risks associated with the use of ChatGPT. For instance, while some highlight its potential to enhance productivity, they also warn of the risk of unethical academic practices (Rogers et al., 2024). The use of ChatGPT in the creation of learning scenarios has been shown to increase intrinsic motivation, academic performance, and positive attitudes toward its integration into training programs (Bai et al., 2024). Similarly, initial student interactions with these tools can shift neutral or cautious attitudes toward more enthusiastic perceptions following firsthand practical experiences (Šedlbauer et al., 2024).
Attitudes also vary across disciplinary and cultural contexts. In technical universities, for example, students expressed greater openness to using ChatGPT in English classes, whereas instructors maintained more neutral positions (Synekop et al., 2024). Complementarily, research shows that English as a Foreign Language (EFL) students with positive attitudes toward the usefulness of ChatGPT demonstrate a stronger intention to incorporate it into their learning processes outside the classroom, thereby consolidating a clear link between attitude and behavior (Liu and Ma, 2024).
A positive perception of ChatGPT is not limited to higher education. In early education, teachers have emphasized its value as an effective pedagogical resource to enhance second-language acquisition, underlining its utility in foundational learning contexts (Allehyani and Algamdi, 2023). Likewise, ChatGPT has been documented as functioning as an intelligent learning assistant, fostering personalized learning, increasing student engagement, and stimulating creativity (Kiryakova and Angelova, 2023).
Finally, prior knowledge of AI directly influences students’ attitudes. Those with greater understanding of the technology tend to recognize its appropriate use and display more favorable attitudes toward its implementation (Iwasawa et al., 2023). However, gaps remain in the literature regarding students’ attitudes and behavioral intentions, highlighting the need for further inquiry into how perceived usefulness and ease of use affect their full acceptance of these tools (Rahman et al., 2023).
Therefore, our hypothesis suggests that students’ attitude towards the use of GAI (ChatGPT) positively impacts their satisfaction with GAI (SAT) (H1).
3.5 Privacy (PP)
In the educational domain, privacy is understood as the protection of the personal and sensitive information of students, faculty, and administrative staff, ensuring that such data is neither misused nor disclosed without consent (Crompton and Burke, 2024). This concept also entails compliance with legal and regulatory frameworks designed to safeguard the rights and intimacy of individuals in learning contexts mediated by digital technologies (Yang and Beil, 2024). In the case of the implementation of generative artificial intelligence (GAI), such as ChatGPT, ensuring data confidentiality becomes indispensable, as the exposure or misuse of information constitutes one of the main perceived risks (Polyportis and Pahos, 2024).
The incorporation of ChatGPT in university settings raises ethical concerns regarding data security and the responsible use of technology. Consequently, both public and private educational institutions must establish clear policies to guide the implementation of GAI-based tools and guarantee minimum standards of privacy protection (Rejeb et al., 2024). Empirical studies have shown that privacy is one of the most decisive factors in the acceptance of ChatGPT, ranking above other elements such as security, trust, or social influence in adoption models (Albayati, 2024). These findings reinforce the need to address the concerns arising from its use, as the absence of preventive measures may heighten risks such as plagiarism, misinformation, or academic fraud (Crompton and Burke, 2024).
Likewise, several analyses have emphasized that although ChatGPT holds significant potential to enrich educational processes, it also generates risks linked to data privacy and biases, which must be subjected to ongoing ethical scrutiny (Srishti, 2024). Complementarily, it has been argued that further research is crucial to better understand the scope and limitations of its implementation, particularly in relation to the protection of user information (Samala et al., 2024).
Other studies have warned that the application of ChatGPT in education and the labor market can optimize knowledge transmission and invigorate training systems, although this entails legal and ethical implications, including privacy violations (Chen M. et al., 2024; Chen X. et al., 2024). Similarly, specific risks associated with the use of GAI in academic environments have been identified, with the exposure of sensitive data emerging as one of the most pressing concerns (Gonçalves and Gonçalves, 2024).
Documented experiences in diverse teaching scenarios have also highlighted the emergence of issues such as information manipulation, deceptive privacy, and lack of transparency, which undermine trust in these technologies (Tlili et al., 2023). In this regard, critical challenges have been described, including the reliability of responses, algorithmic biases, and the need for system interpretability—all of which demand careful attention to mitigate risks related to privacy (Chen M. et al., 2024; Chen X. et al., 2024).
From a broader perspective, the literature has proposed mitigation strategies aimed at reducing threats to privacy and security, while simultaneously warning about the potential misuse of ChatGPT in illicit activities such as cyberattacks (Alawida et al., 2023). Other works stress that although this technology may deliver substantial benefits, its application also entails negative societal consequences arising from biases, misinformation, or privacy violations (Dwivedi et al., 2023).
Finally, the use of virtual assistants such as ChatGPT in public and educational services introduces challenges related to the transfer of personal data, transparency in decision-making, and risks of bias, which call for a rigorous ethical approach (Piñeiro-Martín et al., 2023). In this sense, ensuring user trust largely depends on the capacity of institutions to effectively address concerns regarding the security and privacy of information (Yang et al., 2023).
Therefore, our hypotheses suggest that students’ perception of privacy due to the use of GAI (ChatGPT) positively impacts their attitude towards it (ATT) (H7) and their satisfaction (SAT) (H8).
3.6 Security (PS)
The perception of security in the use of Generative Artificial Intelligence (GAI) tools, such as ChatGPT, constitutes a decisive factor in the acceptance and satisfaction of students in online learning environments (Baig and Yadegaridehkordi, 2025; Shahzad et al., 2025). Security is linked to the protection of personal data and the assurance that information generated or shared through these platforms is handled with integrity and confidentiality (Crompton and Burke, 2024). In this regard, ensuring compliance with data protection regulations and policies is essential to strengthen users’ trust and their willingness to interact with these technologies (Chen M. et al., 2024; Chen X. et al., 2024).
The use of ChatGPT in educational settings raises ethical and legal concerns related to privacy, which directly affect students’ perception of security (Polyportis and Pahos, 2024). To address this, institutions must implement clear guidelines that promote responsible use of GAI and include preventive measures against risks such as plagiarism, academic fraud, or the manipulation of information (Rejeb et al., 2024). Likewise, recent studies highlight that security and privacy factors, along with trust and social influence, are key determinants for the sustained adoption of ChatGPT in education (Albayati, 2024).
From a critical perspective, the potential of ChatGPT not only offers advantages in terms of personalization and learning enhancement, but also generates concerns regarding the loss of skills and technological dependency, which impact the perception of security and the ethical evaluation of its use (Srishti, 2024). These concerns are reinforced by research stressing the need to analyze risks and limitations related to privacy and the exposure of personal data (Samala et al., 2024).
Security is also associated with the ability of institutions to ensure the ethical and transparent use of these tools (Akor et al., 2024). Although the implementation of GAI may improve efficiency and add value to educational processes, it is not exempt from threats such as privacy breaches and the exposure of sensitive data (Gonçalves and Gonçalves, 2024). Research has identified scenarios in which issues emerge concerning the accuracy of responses, information manipulation, and misleading privacy, posing a constant challenge in consolidating a secure environment (Tlili et al., 2023).
In this context, mitigation strategies become particularly relevant to minimize risks associated with cyberattacks and algorithmic biases, aiming to ensure that the educational experience does not compromise students’ security (Alawida et al., 2023). Nevertheless, the adoption of these technologies requires a critical evaluation of their ethical and legal limitations, as well as of the potential impact of misinformation and misuse on the perception of security (Dwivedi et al., 2023).
Furthermore, perceived security not only influences the trust placed in these tools but also determines the continuity of their use (Shahzad et al., 2025). Research shows that transparency in data management and protection against biases or risks derived from automated processing are essential conditions for consolidating student satisfaction (Piñeiro-Martín et al., 2023). Similarly, ensuring robust standards of information security is an indispensable requirement for maintaining users’ trust and fostering the sustained adoption of GAI in education (Yang et al., 2023).
Therefore, our hypotheses suggest that students’ perception of security regarding the use of GAI (ChatGPT) positively impacts their attitude towards it (ATT) (H9) and their satisfaction with the feedback provided through GAI (SAT) (H10).
3.7 Satisfaction (SAT)
Satisfaction in online learning contexts is understood as the state in which students perceive that their academic expectations have been adequately fulfilled (Mireles and García, 2022). This construct is particularly relevant because it directly influences behavioral intentions associated with the acceptance and use of emerging educational technologies (Alqurashi, 2019). In this regard, satisfaction not only reflects a subjective evaluation of the learning experience but also represents a decisive factor in the continuity of learning mediated by generative artificial intelligence (GAI) systems.
Several studies have documented that usability, enjoyment, and the perceived responsiveness of platforms such as ChatGPT are critical determinants in increasing student satisfaction and their willingness to continue using these tools (Kim et al., 2024). Likewise, recent research indicates that the perceived usefulness of GAI and the quality of the outputs it generates have a direct effect on satisfaction, thereby reinforcing its potential to enhance the educational experience and optimize academic outcomes (Boubker, 2024).
Empirical evidence also shows that the impact of GAI on generating academic recommendations enhances both the quality of student work and the satisfaction associated with the learning process (Neyem et al., 2024). At the level of digital service management, factors such as perceived intelligence and service quality are crucial to consolidating perceived usefulness and overall user satisfaction (Jo, 2024). In parallel, it has been observed that literacy in the use of GAI tools significantly increases students’ satisfaction and trust in these systems (Lee and Park, 2023).
Within learning management platforms, the integration of ChatGPT has been shown to raise student satisfaction by providing personalized diagnoses of weaknesses and suggestions for improvement (Yasniy et al., 2023). Additionally, when students acknowledge the usefulness of ChatGPT in their learning process, they not only report higher levels of satisfaction but also demonstrate a favorable attitude toward its continued adoption (Ngo et al., 2024).
Satisfaction is also linked to the intention to recommend and promote the use of GAI among peers, reinforcing positive attitudes that facilitate the continuity of learning mediated by these technologies (Pasupuleti and Thiyyagura, 2024). Finally, recent studies have shown that knowledge acquisition through support systems based on ChatGPT has a direct effect on motivation, satisfaction, and the perceived effectiveness of learning (Hu et al., 2023).
Based on the aspects discussed above, it can be concluded that satisfaction constitutes a determining factor in the use of Generative Artificial Intelligence (GAI) tools. This satisfaction does not emerge in isolation but is shaped by multiple dimensions, among which privacy, security, attitude, trust, and behavioral change are particularly significant.
4 Methodology
The following presents a learning and formative assessment experience that incorporates key elements such as feedback, behavioral change, attitude, privacy, security, and student satisfaction, all of which play a central role in the process. The implementation of this experience is grounded in the previously discussed literature review and a theoretical study that enabled the adaptation of the topic to integrate the use of generative artificial intelligence in the feedback provided to students on their assignments and activities, with particular emphasis on its impact on satisfaction within online learning environments.
4.1 Formative assessment experience
The incorporation of feedback strategies supported by GAI was carried out through the development of a MOOC titled “Transforming Education with AI: ChatGPT,” which ran from March 27 to May 11, 2023. The course attracted 5,482 students interested in exploring the use of GAI in the educational field and was designed to cater to a diverse group of learners, from educators to curious enthusiasts about AI’s potential in teaching.
The curriculum was divided into four comprehensive lessons, each designed to progressively deepen participants’ understanding and skills regarding the role of AI in education. The first lesson introduced ChatGPT, focusing on its capabilities and potential to revolutionize educational practices. Next, practical aspects of integrating ChatGPT into teaching and learning processes were addressed, providing hands-on experience and insights into effective implementation strategies—from planning, curriculum design, and development of learning activities to the assessment process. Additionally, ethical considerations in the use of AI in education were discussed, equipping participants with knowledge to navigate the complex ethical landscape surrounding GAI and emphasizing the importance of responsible use.
The course content delivery primarily utilized educational videos generated with GAI tools such as Heygen and Elevenlabs, showcasing their practical application in creating educational content and serving as an innovative teaching method. Within the MOOC, various learning activities were implemented, both formative and summative. Formative activities were designed to facilitate continuous learning and skill development, while summative activities, including one specifically designed for assessment using GAI, focused on measuring students’ assimilation of knowledge and competencies attained.
One of these summative activities was integrated with an innovative educational bot called “GESfeedback,” developed to provide constructive and personalized feedback to students.
The development and implementation of the “GESFeedback” bot were based on an in-house prototype specifically designed to support the formative assessment experience described in this study; Figure 2 presents the AI architecture of the prototype.
The system receives each student’s written response through the MOOC activity interface and processes it using a structured evaluation rubric embedded in a prompt that guides the generation of individualized feedback. The personalization mechanism is grounded in the student’s actual submission rather than in pre-defined learner profiles or preferences: each response is analyzed semantically, and the feedback returned is dynamically adapted to the content provided by the learner, as shown in Figure 3.
From a technical standpoint, the prototype was implemented using LangChain 0.0.158 and the OpenAI GPT-4 API (model: gpt-4-0613) (OpenAI, 2023). It employed a SequentialChain that combined a PromptTemplate (presented in Table 1), an LLMChain, and a simple agent responsible for interpreting rubric criteria and structuring the response. The system used deterministic parameters (temperature = 0.3; max_tokens = 800; top_p = 1.0) to ensure consistency and comparability across student outputs. The system instructions within the prompt explicitly defined the expected tone, length, and evaluative focus. The entire workflow was orchestrated through a Python-based pipeline that handled task retrieval, text preprocessing (tokenization and cleaning), rubric mapping, and response post-processing prior to delivery.
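To make this configuration more concrete, the following minimal sketch shows how such a rubric-guided feedback chain could be assembled with LangChain 0.0.x and the OpenAI GPT-4 API. The prompt wording, rubric text, and variable names are illustrative placeholders rather than the production configuration (the actual template appears in Table 1), and the rubric-interpreting agent is omitted for brevity.

```python
# Illustrative reconstruction of a rubric-guided feedback chain (not the production code).
# Requires the OPENAI_API_KEY environment variable to be set.
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SequentialChain

# Generation settings mirroring those reported in the text.
llm = ChatOpenAI(model_name="gpt-4-0613", temperature=0.3, max_tokens=800,
                 model_kwargs={"top_p": 1.0})

# Placeholder prompt: the real template defines the expected tone, length, and evaluative focus.
feedback_prompt = PromptTemplate(
    input_variables=["rubric", "student_response"],
    template=(
        "You are an instructor giving constructive, personalized feedback.\n"
        "Evaluate the student's answer against this rubric:\n{rubric}\n\n"
        "Student answer:\n{student_response}\n\n"
        "Return the strengths, the areas for improvement, and a suggested next step."
    ),
)

feedback_chain = LLMChain(llm=llm, prompt=feedback_prompt, output_key="feedback")

# A SequentialChain orchestrates the workflow (here reduced to a single step).
pipeline = SequentialChain(
    chains=[feedback_chain],
    input_variables=["rubric", "student_response"],
    output_variables=["feedback"],
)

result = pipeline({
    "rubric": "Clarity (1-5); Accuracy (1-5); Depth of reflection (1-5)",
    "student_response": "ChatGPT can help teachers design formative quizzes because...",
})
print(result["feedback"])
```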
The main objective of “GESfeedback” was to significantly enhance both learning and academic performance of each student by providing constant and specific feedback that complemented and enriched the educational process. This bot stood out for its ability to meticulously analyze students’ responses and work to offer a deep analysis of their performance. It tailored its feedback to each student’s individual needs, highlighting their strengths and suggesting improvements in specific areas to strengthen their academic performance. To achieve this, an assessment rubric was used that clearly expressed the criteria and levels of evaluation.
In the context of the MOOC, Figure 3 presents how “GESfeedback” operates as follows: (a) Task Submission: The student submitted the task through the course platform. (b) Task Receipt: The task was received through the course platform, then downloaded for further processing. (c) Task Analysis: Advanced algorithms were used to assess the quality and understanding of the student’s work. (d) Feedback Generation: Based on the analysis and assessment rubric, the bot generated personalized feedback addressing both strengths and areas for improvement. (e) Feedback Delivery: The feedback was sent to the student via email. (f) Student Reception: The student received the feedback and could use it to enhance their learning and prepare for future assessments. Immediately afterward, students were invited to complete the perception instrument described in Section 4.2, which collected data for validating the proposed model.
It should be noted that this version of GESFeedback was developed as an experimental prototype, and more robust iterations are currently under development with improved pipelines for data validation, rubric versioning, and adaptive prompt optimization. To replicate this experience, researchers may follow a comparable configuration by (a) defining a clear evaluation rubric aligned with the constructs under analysis (an example of the GESFeedback Prompt Template is presented in Table 2), (b) implementing a sequential chain in LangChain that links a prompt template to the LLM API call, and (c) recording feedback interactions for subsequent perception analysis. This design can be reproduced with open-source components and minimal computational resources, facilitating adaptation to different online learning environments and research contexts.
4.2 Instrument
The items for each variable in the study were adapted from validated scales in previous studies. Thus, questions regarding the feedback received by students through the use of GAI were adapted from Lizzio and Wilson (2008) and Gan et al. (2021). The scales on changes in student behavior in the use of GAI-based feedback were adapted from Estriegana et al. (2021, 2024) and Chang (2013).
The scales for trust towards the use of GAI in feedback were measured using items adapted from Tang et al. (2022). Similarly, attitudes were adapted from items proposed by Lim et al. (2006) and also from Ibrahim et al. (2011).
On the other hand, scales measuring students’ perception of privacy towards the use of feedback through GAI were measured using items adapted from Aleroud et al. (2020). Regarding the scales measuring students’ perception of security in the use of GAI in the feedback process, items proposed by Charles et al. (2022) were used.
Finally, questions to evaluate students’ satisfaction with the feedback process through the use of GAI were adapted from Wirani et al. (2022) and also from Jang and Hsieh (2021).
To test our hypotheses, student data were collected using an online questionnaire designed following several guiding criteria and adapted from other reviewed models, as recommended by O’Leary (2017).
The questionnaire used a 5-point Likert scale (Likert, 1932), the standard method for measuring variables that are not directly quantifiable (Hair et al., 2013), with responses ranging from 1 (completely disagree) to 5 (completely agree). To minimize response errors, the questionnaire used simple questions and easy-to-understand language. The responses were subsequently analyzed.
4.3 Participants and data collection
The activities targeted 207 participants in the pilot. A reviewing panel of educators determined that the responses were suitable and adequate. Participants were then issued a study questionnaire, which was completed by 161 individuals, whose responses were recorded and analyzed in depth.
4.4 Data analysis
This study employed a regression analysis of latent variables based on the partial least squares (PLS) optimization technique to build the model, using SmartPLS 4.1.0.2. PLS is a multivariate technique for testing structural models that estimates the model parameters minimizing the residual variance of the dependent variables of the whole model (Hair et al., 2013). It does not require any parametric assumptions and is recommended for small samples (Hulland, 1999).
4.4.1 Justification of number of cases
Hair et al. (2017) suggest using software such as GPower 3.0 (Institut für Experimentelle Psychologie, 2007) for conducting power analyses tailored to the model specifications. To determine the sample size, it is necessary to specify the effect size (ES), the significance level (α), and the statistical power (1 − β). A significance level of α = 0.05 and a power of 80% are generally accepted. In this case, the calculation assumed a multiple regression with four predictors, a medium effect size (f² = 0.15), an alpha of 0.05, and a power of 0.95 (following Cohen, 1992), in order to ascertain the required sample size. The a priori analysis yields N = 129 subjects.
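For readers who wish to verify this a priori calculation without GPower, the following sketch reproduces it in Python under the standard GPower parameterization of the F-test for multiple regression (noncentrality λ = f²·N); it is an illustrative check rather than part of the original analysis, and the function name is ours.

```python
from scipy.stats import f as f_dist, ncf

def regression_power(n, n_predictors, f2, alpha):
    """Power of the F-test that R^2 = 0 in multiple regression,
    using the GPower parameterization (noncentrality = f^2 * n)."""
    df1 = n_predictors
    df2 = n - n_predictors - 1
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    return 1 - ncf.cdf(f_crit, df1, df2, f2 * n)

# A priori sample size: smallest n that reaches the target power of 0.95.
n = 6  # smallest n with a positive denominator df for four predictors
while regression_power(n, n_predictors=4, f2=0.15, alpha=0.05) < 0.95:
    n += 1
print(n)  # 129, matching the GPower result reported above
```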
The available sample for this analysis consists of 161 valid cases, which comfortably exceeds this requirement for conducting the measurement and structural model analyses.
4.4.2 Measurement model evaluation
The results show that all standardized loadings (λ) exceed the threshold of 0.707, supporting the adequate individual reliability of the items (Carmines and Zeller, 1979). Moreover, these outer loadings, which represent the association between latent variables and their observed indicators, reinforce the validity of the model, as presented in Table 3.
The simple reliability of the measurement scales was calculated using Cronbach’s alpha, and all values were above 0.70 (Nunnally and Bernstein, 1994). All composite reliability values were also greater than 0.70 (Werts et al., 1974), demonstrating a high level of internal consistency among the latent variables.
In the analysis of variance, all values of the average variance extracted (AVE) were above 0.50 (Fornell and Larcker, 1981), exceeding the minimum acceptable threshold for convergent validity (Table 4).
Table 4. Cronbach’s alpha coefficients, Rho_A, construct reliability, and average variance extracted (AVE).
Additionally, Fornell and Larcker (1981) suggest that the square root of the AVE of each latent variable can be used to establish discriminant validity: to confirm discriminant validity among the constructs, the square root of the AVE must be greater than the correlations between the constructs. Table 5 presents the square roots of the AVE on the diagonal and the correlations among the constructs. In every case this value is larger than the correlations with the other latent variables, indicating adequate discriminant validity of the measurements.
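To illustrate how these indices follow from the standardized outer loadings, the sketch below computes composite reliability, AVE, and the Fornell-Larcker comparison for two hypothetical constructs; the loadings and the correlation value are invented for demonstration and do not reproduce the values in Tables 3-5.

```python
import numpy as np

def composite_reliability(loadings):
    """Composite reliability (rho_c) from standardized outer loadings."""
    loadings = np.asarray(loadings)
    num = loadings.sum() ** 2
    return num / (num + (1 - loadings ** 2).sum())

def ave(loadings):
    """Average variance extracted: mean of the squared standardized loadings."""
    loadings = np.asarray(loadings)
    return (loadings ** 2).mean()

# Hypothetical standardized loadings for two constructs (not the study's data).
trust_loadings = [0.82, 0.79, 0.88, 0.75]
satisfaction_loadings = [0.91, 0.86, 0.80]

print(composite_reliability(trust_loadings))  # should exceed 0.70
print(ave(trust_loadings))                    # should exceed 0.50

# Fornell-Larcker criterion: sqrt(AVE) of each construct must exceed its
# correlation with every other construct (0.55 is an illustrative value).
corr_trust_satisfaction = 0.55
print(np.sqrt(ave(trust_loadings)) > corr_trust_satisfaction)
print(np.sqrt(ave(satisfaction_loadings)) > corr_trust_satisfaction)
```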
In addition, Table 6 reports discriminant validity measured with the heterotrait-monotrait (HTMT) method (Henseler et al., 2014), which relates the mean of the heterotrait-heteromethod correlations to the geometric mean of the average monotrait-heteromethod correlations of the two variables. The HTMT ratio for every pair of constructs was below the 0.95 cutoff recommended for conceptually close constructs (Henseler et al., 2014).
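The HTMT ratio itself can be computed directly from the item correlation matrix, as in the following sketch; the five-item correlation matrix and the item-to-construct assignment are hypothetical and serve only to illustrate the calculation.

```python
import numpy as np

def htmt(item_corr, items_a, items_b):
    """Heterotrait-monotrait ratio (Henseler et al., 2014): the mean absolute
    heterotrait-heteromethod correlation divided by the geometric mean of the
    average monotrait-heteromethod correlations of the two constructs."""
    item_corr = np.asarray(item_corr)
    hetero = np.abs(item_corr[np.ix_(items_a, items_b)]).mean()

    def mean_monotrait(items):
        block = np.abs(item_corr[np.ix_(items, items)])
        return block[np.triu_indices_from(block, k=1)].mean()

    return hetero / np.sqrt(mean_monotrait(items_a) * mean_monotrait(items_b))

# Hypothetical correlations: items 0-2 measure construct A, items 3-4 construct B.
R = np.array([
    [1.00, 0.62, 0.58, 0.31, 0.29],
    [0.62, 1.00, 0.55, 0.27, 0.33],
    [0.58, 0.55, 1.00, 0.30, 0.28],
    [0.31, 0.27, 0.30, 1.00, 0.66],
    [0.29, 0.33, 0.28, 0.66, 1.00],
])
print(htmt(R, items_a=[0, 1, 2], items_b=[3, 4]))  # ~0.48, well below the 0.95 cutoff
```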
4.4.3 Structural model analysis
The model shown in Figure 1 has been elaborated from the reviewed literature and its analysis.
The PLS program can generate T-statistics for significance testing of both the inner and outer model using a procedure called bootstrapping (Chin, 1998b). In this procedure, a large number of subsamples (10,000) are drawn from the original sample with replacement to obtain bootstrap standard errors, which in turn yield approximate T-values for significance testing of the structural paths.
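The resampling logic can be illustrated with a minimal bootstrap of a single standardized path coefficient. For simplicity, the sketch below approximates the coefficient with an ordinary standardized regression slope on simulated scores, whereas SmartPLS re-estimates the full PLS model on every resample; the data and the resulting coefficient are therefore illustrative only.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated scores standing in for two latent variables (not the study's data).
n = 161
trust = rng.normal(size=n)
behavior_change = 0.54 * trust + rng.normal(scale=0.8, size=n)

def path_coefficient(x, y):
    """Standardized slope, used here as a stand-in for a PLS path coefficient."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return np.polyfit(x, y, 1)[0]

original = path_coefficient(trust, behavior_change)

# 10,000 resamples with replacement, as in the bootstrapping procedure above.
boot = np.empty(10_000)
for b in range(boot.size):
    idx = rng.integers(0, n, size=n)
    boot[b] = path_coefficient(trust[idx], behavior_change[idx])

t_value = original / boot.std(ddof=1)               # approximate T-statistic
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])  # 95% percentile confidence interval
print(round(original, 3), round(t_value, 2), (round(ci_low, 3), round(ci_high, 3)))
# A path is taken as supported when t > 1.64 (one-tailed) and the interval excludes zero.
```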
Once the bootstrapping procedure is completed, the results can be interpreted as follows. R-squared values range from 0 to 1; the higher the value, the greater the model’s predictive capacity for that variable, and R-squared must be high enough for the model to reach a minimum level of explanatory power. All R-squared values are greater than 0.10, with a significance of t > 1.64 (Falk and Miller, 1992).
Figure 4 and Table 7 show the explained variance (R squared) in the dependent constructs and the path coefficients for the model.
The standardized regression coefficients provide the estimates of the structural model relationships, that is, the hypothesized relationships between constructs. For each path we examined the algebraic sign, the magnitude, and the statistical significance, requiring a T-statistic greater than 1.64 (t(9999), one-tailed test). The hypotheses were then checked and validated; the relationships were positive, most of them with high significance (Table 8).
Moreover, when a percentile bootstrap is applied to generate 95% confidence intervals using 10,000 resamples, H1 to H6 and H12 to H14 are supported because their confidence intervals do not include zero (Table 5). These hypotheses are therefore adopted. Together, these results complete a basic PLS-SEM analysis for our research; the PLS-SEM result is shown in Figure 4.
Finally, Table 9 shows the amount of variance that each antecedent variable explains in each endogenous construct. The cross-validated redundancy measures thus show that the theoretical/structural model has predictive relevance.
5 Discussion
Based on the results, the proposed model performed highly satisfactorily. The reliability of each item, along with the Cronbach’s alpha and composite reliability values, met acceptable standards, demonstrating a high level of internal consistency among the latent variables. The convergent and discriminant validity of the measures were also within acceptable ranges. Moreover, the relationships between the variables were predominantly significant, supporting most of the proposed hypotheses.
According to Table 8, trust in the use of GAI (TRU) shows a significant positive correlation with students’ behavioral change towards GAI use (BC) (H12), explaining 29.16% of the variance, consistent with findings by Sabraz Nawaz et al. (2024) and Jo (2023). Furthermore, trust in the use of GAI (TRU) correlates significantly and positively with the feedback received through GAI (FEEDBACK) (H13), explaining 19.55% of the variance, as indicated by Shaka et al. (2023), Téllez et al. (2024), and Barrett and Pack (2023).
Additionally, trust in the use of GAI (TRU) shows a significant positive correlation with students’ perception of security (PS) in receiving GAI-based feedback (H14), explaining 24.30% of the variance, in line with findings by Kuhail et al. (2024), Hannon et al. (2024), and Kiryakova and Angelova (2023). This is critical, as there is a risk that students may blindly trust GAI without verifying the authenticity of the generated texts, potentially harming their acquisition of knowledge and skills.
On the other hand, trust in the use of GAI (TRU) does not directly influence attitude (ATT) (H11) or satisfaction (SAT) (H15), but it does so indirectly through behavioral change, explaining 4.98% of the variance.
Behavioral change (BC) has a significant direct positive impact on students’ perception of privacy (PP) in GAI use (H4), explaining 4.88% of the variance, consistent with findings by Chen M. et al. (2024), Chen X. et al. (2024), Albayati (2024), and Samala et al. (2024).
On the other hand, behavioral change (BC) has a significant direct positive impact on feedback (FEEDBACK) (H3), on attitude (ATT) (H2), and on satisfaction with GAI use (SAT) (H5), explaining 21.81, 15.76, and 13.80% of the variance, respectively. These findings are consistent with Téllez et al. (2024) and Phung et al. (2024) regarding feedback, Mukred et al. (2023) and Sallam et al. (2023) regarding attitude, and Ayoubi (2024), Boubker et al. (2024), and Lee and Park (2023) regarding satisfaction.
Furthermore, we can observe that students’ attitude toward the feedback received through GAI (ATT) has a significant positive impact on satisfaction (SAT) (H1), explaining 47.30% of the variance. When satisfaction is high, there is a strong willingness to continue using the tool in the future, as indicated by Ayoubi (2024) and Neyem et al. (2024).
Additionally, feedback (FEEDBACK) has a significant positive correlation with students’ attitude (ATT) (H6), explaining 38.21% of the variance, in line with the works of Escalante et al. (2023), Mahapatra (2024), Seetharaman (2023), Wang et al. (2024), and Steiss et al. (2024).
We can also observe that, although students report a high perception of privacy (PP), this perception does not significantly influence their attitude toward the use of GAI (H7), nor their satisfaction with the feedback received through GAI (H8).
Similarly, students’ perception of security (PS) does not significantly influence their attitude toward the use of GAI (H9), nor their satisfaction with the feedback received through GAI (H10). Therefore, it is evident that students’ attitude toward the use of GAI in their learning and feedback process depends on the trust provided by the tool and the behavioral change that occurs during its use, thereby resulting in greater satisfaction with it.
Based on the results, we can affirm that students show a positive attitude toward the use of GAI in feedback, receiving timely feedback on their assignments and practices, which facilitates the flow of learning for students, in line with Lee and Park (2023), Yasniy et al. (2023), and Ngo et al. (2024).
The integration of GAI in the learning process of MOOC courses facilitates improvements in learning outcomes and student attitudes by encouraging their active participation and providing quick responses to their assignments and tasks, thereby reducing dropout rates.
Furthermore, it enhances knowledge acquisition, particularly in subjects like programming and sciences, by offering interactive and hands-on learning experiences, as indicated by Blackie and Luckett (2024), Rogers et al. (2024), Iwasawa et al. (2023), and Jo (2024).
On the other hand, the findings of this study also invite reflection on the implications of Generative Artificial Intelligence (GAI) for inclusive education. Within online learning environments, GAI-mediated feedback can serve as a mechanism to support learners with diverse needs by offering personalized and adaptive responses that accommodate varying levels of prior knowledge, learning pace, and linguistic competence (Barrett and Pack, 2023; Steiss et al., 2024). This capacity for personalization aligns with the principles of Universal Design for Learning, which emphasize flexibility and accessibility in instructional design. Consequently, GAI feedback systems such as GESfeedback can contribute to fostering equitable participation, particularly in large-scale settings like MOOCs, where instructor-led individualized feedback is often unfeasible (Floratos et al., 2017).
Moreover, the integration of GAI tools into online courses has the potential to mitigate barriers faced by marginalized learners and students with disabilities. By providing multimodal feedback—through text, voice, or visual explanations—these systems enhance accessibility for learners who might otherwise be excluded from traditional online formats (Chen M. et al., 2024; Chen X. et al., 2024; Alammari, 2024). For instance, feedback generated via natural language processing can be adapted to different reading levels or translated automatically, thus facilitating participation for students from diverse linguistic backgrounds (Canabal and Margalef, 2017). Such functionalities align with broader efforts to promote digital inclusion in higher education and to ensure that AI adoption does not widen existing educational inequalities (Bower et al., 2024).
In addition, the model proposed in this study underscores that trust and behavioral change—two variables significantly associated with student satisfaction—can act as mediating factors in inclusive practices. When students perceive GAI feedback as trustworthy and supportive, they are more likely to engage actively and persist in their learning, even when facing socio-economic or cognitive barriers (Kiryakova and Angelova, 2023; Jo, 2023). This dynamic suggests that inclusion is not solely a matter of access, but also of sustained participation and motivation within digital learning environments enhanced by GAI. Therefore, trust-based feedback mechanisms represent a valuable pathway toward more inclusive and equitable forms of online education.
Finally, it is essential to recognize that the ethical deployment of GAI in education must explicitly address the challenges of fairness, bias mitigation, and accessibility. Future implementations should ensure that the algorithms used for generating feedback are transparent and sensitive to cultural, linguistic, and cognitive diversity (Dwivedi et al., 2023; Song, 2024). By embedding inclusivity as a core design principle rather than an afterthought, GAI systems like GESfeedback can evolve from being mere technological aids to becoming transformative instruments for social equity in education. This perspective broadens the contribution of our study, linking GAI-mediated feedback not only to learning effectiveness and satisfaction but also to the advancement of inclusive educational practices.
6 Conclusions, limitations and future work
The integration of GAI-based technologies like ChatGPT in education is revolutionizing learning processes and altering methodologies. In many cases, it represents a paradigm shift where finding a balance between automation and the human factor is crucial. Therefore, understanding students’ attitudes toward these applications is fundamental for enhancing the learning process.
While automation ensures efficiency and scalability, it is equally important to recognize the situations in which human intervention is crucial, since certain educational interactions require precision, empathy, and personalized guidance. Otherwise, there is a risk of dehumanizing the learning process, in which individual needs and differences may not be adequately considered. Achieving this balance allows the capabilities of GAI to be used optimally while maintaining the human-centered qualities essential for effective teaching and learning.
The results show that students hold positive attitudes toward GAI-based feedback and find its use satisfactory, largely because of its speed and continuous availability, which enable them to correct errors and improve their skills more efficiently and thereby enhance the learning process. Furthermore, although perceptions of security and privacy matter to students, as indicated by the values of H4 and H14, students do not consider these perceptions relevant to their attitudes toward GAI or to their satisfaction with the feedback received through it. Rather, GAI tools provide confidence and foster behavioral change in students.
The study results will contribute to defining guidelines aimed at developing GAI-based feedback processes, especially regarding the integration of GAI tools into higher education curricula. This could lead to a rethinking of how soft skills are taught and assessed.
Despite the observed benefits, the study also recognizes technical challenges and limitations that may affect the effectiveness of GAI-based feedback. Chief among these are ethical considerations, which require a firm commitment to ensuring fairness, mitigating biases, and safeguarding data privacy as integral aspects of responsible AI use.
Addressing these ethical considerations not only upholds our standards of integrity and equity but also establishes a foundation of trust among students, educators, and stakeholders. Ethical implementation is not just a regulatory requirement but a fundamental principle for fostering a positive and responsible educational environment driven by AI. This underscores the need for institutional support, clear policies, and educator training to integrate these technologies effectively in the classroom.
However, this study presents some limitations. Although certain demographic data were collected during the implementation of the MOOC, these were not included in the online questionnaire, which limits the possibility of conducting a more detailed analysis of the responses. Moreover, since the course focused specifically on generative artificial intelligence in education, a potential bias may have been introduced, as participants interested in this topic might share similar predispositions or perceptions. Additionally, the use of self-reported data may involve biases and methodological variations that should be taken into account when interpreting the results. Likewise, the proportion of variance explained in the dependent variables is not exhaustive, suggesting that some relevant predictors might not have been included in the analysis. Finally, it is recommended that future research expand the sample to obtain more representative and statistically robust data, thereby strengthening the reliability of the findings and enhancing their generalizability across different educational contexts.
Another limitation of this study relates to the level of technical detail provided regarding the GESfeedback prototype. The article intentionally prioritized the validation of the conceptual model and its behavioral constructs over a full technical exposition of the prototype’s configuration. Consequently, implementation specifics such as pseudocode, API parameters, or LangChain component architecture were only outlined at a conceptual level and referenced to our previous publication (Morales-Chan et al., 2024). This decision aligns with the paper’s primary objective, which is to empirically validate the proposed model rather than to present a system design study. Future research may expand on these aspects by offering open-access repositories or technical appendices that facilitate replicability and comparative studies across similar educational contexts.
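As an orientation for readers unfamiliar with this kind of architecture, the minimal sketch below shows how a LangChain pipeline can call the OpenAI API to produce rubric-based feedback on a student submission. It is illustrative only and does not reproduce the GESfeedback configuration documented in Morales-Chan et al. (2024); the prompt wording, model choice, and parameters are assumptions.

```python
# Illustrative sketch only: a minimal LangChain + OpenAI feedback pipeline.
# The prompt, model name, and parameters are assumptions, not the GESfeedback
# prototype's actual configuration (see Morales-Chan et al., 2024).
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are a teaching assistant for a MOOC. Give constructive, personalized "
     "feedback on the student's submission, following the activity rubric."),
    ("human", "Rubric:\n{rubric}\n\nStudent submission:\n{submission}"),
])

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.3)   # hypothetical model choice
feedback_chain = prompt | llm | StrOutputParser()


def generate_feedback(rubric: str, submission: str) -> str:
    """Return GAI-generated formative feedback for one student activity."""
    return feedback_chain.invoke({"rubric": rubric, "submission": submission})


# Example call (requires OPENAI_API_KEY in the environment):
# print(generate_feedback("Clarity, use of examples, correct AI terminology",
#                         "My essay on how ChatGPT can support formative assessment..."))
```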
Therefore, the following lines of research are proposed. First, further research should examine how GAI-based feedback impacts skill acquisition, which could provide deeper insights into the effectiveness of GAI-driven educational tools. Second, it would be pertinent to explore how students’ demographic and gender variables may influence their attitudes toward feedback delivered through GAI. Third, the impact of the bot’s use should be analyzed in MOOCs that are not specifically focused on generative artificial intelligence, in order to avoid potential predispositions or homogeneous perceptions among participants. Finally, future work should analyze how factors such as privacy and security perceptions are affected by the social influence of rapid advances in GAI, particularly in fields where technological factors may be decisive, such as the social sciences, health sciences, or engineering. These findings would contribute to a better understanding of the implications of GAI use in the learning and development processes of students and educators.
Data availability statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Author contributions
JM: Supervision, Investigation, Conceptualization, Writing – review & editing, Validation, Software, Methodology, Resources, Formal analysis, Writing – original draft, Visualization, Data curation. MM: Visualization, Resources, Writing – original draft, Project administration, Funding acquisition, Formal analysis, Validation, Methodology, Investigation, Data curation, Supervision, Software, Writing – review & editing, Conceptualization. RB: Visualization, Methodology, Conceptualization, Software, Validation, Writing – original draft, Formal analysis, Writing – review & editing, Investigation. HA-S: Conceptualization, Writing – review & editing, Investigation, Software, Writing – original draft, Visualization, Formal analysis, Data curation, Validation. RH-R: Writing – original draft, Writing – review & editing, Investigation, Formal analysis, Visualization, Data curation, Validation, Conceptualization.
Funding
The author(s) declared that financial support was not received for this work and/or its publication.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that Generative AI was not used in the creation of this manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Abdelhalim, S. M. (2024). Using ChatGPT to promote research competency: English as a foreign language undergraduates' perceptions and practices across varied metacognitive awareness levels. J. Comput. Assist. Learn. 40, 82–103. doi: 10.1111/jcal.12948
Adwan, A. A., and Aladwan, R. (2022). Use of artificial intelligence system to predict consumers’ behaviors. Int. J. Data Netw. Sci. 6, 1223–1232. doi: 10.5267/j.ijdns.2022.6.011
Agarwal, M., Goswami, A, and Sharma, P. (2023). Evaluating ChatGPT-3.5 and Claude-2 in Answering and Explaining Conceptual Medical Physiology Multiple-Choice Questions. Cureus Journal of Medical Science. 15.
Aguilar, E., Rodríguez, A., Baeza, L., and Méndez, N. (2016). La retroalimentación constructiva en el desarrollo de habilidades comunicativas escritas e investigativas en dos generaciones de alumnos de medicina en Yucatán, México. An. Fac. Med. 77, 137–142. doi: 10.15381/anales.v77i2.11818
Ahmad, N., Murugesan, S., and Kshetri, N. (2023). “Generative Artificial Intelligence and the Education Sector.” Computer, 56, 72–76.
Akor, S. O., Nongo, C., Udofot, C., and Oladokun, B. D. (2024). Cybersecurity awareness: leveraging emerging technologies in the security and management of libraries in higher education institutions. South. Afr. J. Secur. 2:14. doi: 10.25159/3005-4222/16671,
Al Shakhoor, F., Alnakal, R., Mohamed, O., and Sanad, Z. (2024). “Exploring business faculty’s perception about the usefulness of Chatbots in higher education” in Studies in systems, decision and control, Cham: Springer, vol. 503, 231–244. doi: 10.1007/978-3-031-43490-7_17
Alammari, A. (2024). Evaluating generative AI integration in Saudi Arabian education: a mixed-methods study. PeerJ Comput. Sci. 10:e1879. doi: 10.7717/peerj-cs.1879,
Alawida, M., Mejri, S., Mehmood, A., Chikhaoui, B., and Isaac Abiodun, O. (2023). A comprehensive study of ChatGPT: advancements, limitations, and ethical considerations in natural language processing and cybersecurity. Information 14:462. doi: 10.3390/info14080462
Albayati, H. (2024). Investigating undergraduate students' perceptions and awareness of using ChatGPT as a regular assistance tool: a user acceptance perspective study. Comput. Educ. Artif. Intell. 6:100203. doi: 10.1016/j.caeai.2024.100203,
Almagazzachi, A., Mustafa, A., Eighaei Sedeh, A., Vazquez Gonzalez, A. E., Polianovskaia, A., Abood, M., et al. (2024). Generative Artificial Intelligence in Patient Education: ChatGPT Takes on Hypertension Questions. Cureus Journal of Medical Science. 16.
Aleroud, A., Alazab, M., Venkatraman, S., Alazab, A., Alazab, M., and Gandotra, V. (2020). An examination of susceptibility to spear phishing cyber attacks in non-English speaking communities. J. Inf. Secur. Appl. 55:102614. doi: 10.1016/j.jisa.2020.102614
Ali, K., Barhom, N., Tamimi, F., and Duggal, M. (2024). ChatGPT-A double-edged sword for healthcare education? Implications for assessments of dental students. European Journal of Dental Education, 28, 206–211.
Allehyani, S. H., and Algamdi, M. A. (2023). Digital Competences: Early Childhood Teachers’ Beliefs and Perceptions of ChatGPT Application in Teaching English as a Second Language (ESL). International Journal of Learning, Teaching and Educational Research 20, 343–363. doi: 10.26803/ijlter.22.11.18
Alshaikh, R., Al-Malki, N., and Almasre, M. (2024). The implementation of the cognitive theory of multimedia learning in the design and evaluation of an AI educational video assistant utilizing large language models. Heliyon. 10.
Alqurashi, E. (2019). Predicting student satisfaction and perceived learning within online learning environments. Distance Educ. 40, 133–148. doi: 10.1080/01587919.2018.1553562
Ayoub, D., Metawie, M., and Fakhry, M. (2024). AI -ChatGPT usage among users: factors affecting intentions to use and the moderating effect of privacy concerns. MSA-Manage. Sci. J., 120–152. doi: 10.21608/msamsj.2024.265212.1054
Ayoubi, K. (2024). Adopting ChatGPT: pioneering a new era in learning platforms. Int. J. Data Netw. Sci. 8, 1341–1348. doi: 10.5267/j.ijdns.2023.11.001
Bahroun, Z., Anane, C., Ahmed, V., and Zacca, A. (2023). Transforming education: a comprehensive review of generative artificial intelligence in educational settings through bibliometric and content analysis. Sustainability 15:12983. doi: 10.3390/su151712983
Bai, S., Gonda, D. E., and Hew, K. F. (2024). Write-curate-verify: a case study of leveraging generative AI for scenario writing in scenario-based learning. IEEE Trans. Learn. Technol. 17, 1301–1312. doi: 10.1109/TLT.2024.3378306,
Baig, M. I., and Yadegaridehkordi, E. (2025). Factors influencing academic staff satisfaction and continuous usage of generative artificial intelligence (GenAI) in higher education. Int. J. Educ. Technol. High. Educ. 22:5. doi: 10.1186/s41239-025-00506-4
Bakar, N., Mohamad Rejeni, N. A., and Nyuak, A. (2023). Machine learning for predicting students' academic achievement based on learning style and academic results. Int. J. Innov. Ind. Revolution 5, 120–130. doi: 10.35631/IJIREV.515013
Banerjee, A., Ahmad, A., Bhalla, P., and Goyal, K. (2023). Assessing the Efficacy of ChatGPT in Solving Questions Based on the Core Concepts in Physiology. Cureus Journal of Medical Science. 15.
Barrett, A., and Pack, A. (2023). Not quite eye to A.I.: student and teacher perspectives on the use of generative artificial intelligence in the writing process. Int. J. Educ. Technol. High. Educ. 20:59. doi: 10.1186/s41239-023-00427-0
Blackie, M. Y., and Luckett, K. (2024). Conversaciones críticas sobre conocimiento, currículo y justicia epistémica: compromiso con el legado de Suellen Shay. Taylor and Francis.
Boscardin, C. K., Gin, B., Golde, P. B., and Hauer, K. E. (2024). ChatGPT and generative artificial intelligence for medical education: potential impact and opportunity. Academic Medicine 99, 22–27. doi: 10.1097/Acm.0000000000005439
Boubker, O. (2024). From chatting to self-educating: can AI tools boost student learning outcomes? Expert Syst. Appl. 238:121820. doi: 10.1016/j.eswa.2023.121820,
Boubker, O., Ben-Saghroune, H., Bourassi, J. E., Abdessadek, M., and Sabbahi, R. (2024). Examining the impact of OpenAI’s ChatGPT on PhD student achievement. Int. J. Inf. Educ. Technol. 14, 443–451. doi: 10.18178/ijiet.2024.14.3.2065
Bouteraa, M., Bin-Nashwan, S. A., Al-Daihani, M., Dirie, K. A., Benlahcene, A., Sadallah, M., et al. (2024). Understanding the diffusion of AI-generative (ChatGPT) in higher education: does students' integrity matter? Comput. Human Behav. Rep. 14:100402. doi: 10.1016/j.chbr.2024.100402,
Bower, M., Torrington, J., Lai, J. W., Petocz, P., and Alfano, M. (2024). How should we change teaching and assessment in response to increasingly powerful generative artificial intelligence? Outcomes of the ChatGPT teacher survey. Educ. Inf. Technol. 29, 15403–15439. doi: 10.1007/s10639-023-12405-0,
Canabal, C., and Margalef, L. (2017). La retroalimentación: La clave para una evaluación orientada al aprendizaje. Profesorado Rev. Currículum Form. Profr. 21, 149–170. doi: 10.30827/profesorado.v21i2.10329
Cao, Y., Aziz, A. A., and Arshard, W. N. R. M. (2023). University students’ perspectives on artificial intelligence: a survey of attitudes and awareness among interior architecture students. Int. J. Educ. Res. Innov., 1–21. doi: 10.46661/ijeri.8429
Carlson, C. G. (2023). Virtual and Augmented Simulations in Mental Health. Current Psychiatry Reports 25, 365–371.
Carmines, E. G., and Zeller, R. A. (1979). Reliability and validity assessment. Newbury Park, CA: Sage Publications.
Chang, C. C. (2013). Examining users′ intention to continue using social network games: a flow experience perspective. Telemat. Inform. 30, 311–321. doi: 10.1016/j.tele.2012.10.006
Charles, C., Cutshall, R., and Changchit, C. (2022). “Determinants of students’ intention to learn cloud computings,” J. Int. Technol. Inf. Manage. 31. doi: 10.58729/1941-6679.1510
Chen, M. S., Hsu, T. P., and Hsu, T. C. (2024), GAI-assisted personal discussion process analysis. In International Conference on Innovative Technologies and Learning (pp. 194–204). Cham: Springer Nature Switzerland.
Chen, S., Xu, X., Zhang, H., and Zhang, Y. (2023). Roles of ChatGPT in virtual teaching assistant and intelligent tutoring system: Opportunities and challenges. ACM International Conference Proceeding Series, pp. 201–206
Chen, X., Yang, N., Zhou, Y., and Cao, W. (2024). AIGC affecting education and employment in era of digital economy — take ChatGPT as an example. Syst. Eng. Theory Pract. 44, 260–271. doi: 10.12011/SETP2023-1708
Chin, W. W. (1998b). “The partial least squares approach to structural equation modelling” in Modern methods for business research. ed. G. A. Marcoulides (Mahwah, NJ: Lawrence Erlbaum), 295–336.
Crompton, H., and Burke, D. (2024). The educational affordances and challenges of ChatGPT: state of the field. TechTrends 68, 380–392. doi: 10.1007/s11528-024-00939-0
De Gagne, J. C., Hwang, H., and Jung, D. (2023). Cyberethics in nursing education: Ethical implications of artificial intelligence. Nursing Ethics.
Dhara, S., Chatterjee, S., Chaudhuri, R., Goswami, A., and Ghosh, S. K. (2022). “Artificial intelligence in assessment of students' performance” in Artificial intelligence in higher education (Boca Raton, FL, USA: CRC Press), 153–167.
Dhara, S. K., Giri, A., Santra, A., and Chakrabarty, D. (2023). “Measuring the Behavioral Intention Toward the Implementation of Super Artificial Intelligence (Super-AI) in Healthcare Sector: An Empirical Analysis with Structural Equation Modeling (SEM)” in ICT Infrastructure and Computing. ICT4SD 2023. eds. M. Tuba, S. Akashe, and A. Joshi, Lecture Notes in Networks and Systems, vol. 754 (Singapore: Springer). doi: 10.1007/978-981-99-4932-8_42
Ding, L., Li, T., Jiang, S., and Gapud, A. (2023). Students’ perceptions of using ChatGPT in a physics class as a virtual tutor. Int. J. Educ. Technol. High. Educ. 20:63. doi: 10.1186/s41239-023-00434-1
Duah, J. E., and McGivern, P. (2024). How generative artificial intelligence has blurred notions of authorial identity and academic norms in higher education, necessitating clear university usage policies. International Journal of Information and Learning Technology.
Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., et al. (2023). “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int. J. Inf. Manag. 71:art. no. 102642. doi: 10.1016/j.ijinfomgt.2023.102642,
Emon, M. M. H., Hassan, F., Nahid, M. H., and Rattanawiboonsom, V. (2023). Predicting adoption intention of artificial intelligence – a study on ChatGPT. AIUB J. Sci. Eng. 22, 189–196. doi: 10.53799/AJSE.V22I2.797
Escalante, J., Pack, A., and Barrett, A. (2023). AI-generated feedback on writing: insights into efficacy and ENL student preference. Int. J. Educ. Technol. High. Educ. 20:57. doi: 10.1186/s41239-023-00425-2
Essien, A., Bukoye, O. T., O’Dea, X., and Kremantzis, M. (2024). The influence of AI text generators on critical thinking skills in UK business schools. Studies in Higher Education.
Estriegana, R., Medina, J. A., Robina-Ramirez, R., and Barchino, R. (2021). Analysis of cooperative skills development through relational coordination in a gamified online learning environment. Electronics 10. doi: 10.3390/electronics10162032
Estriegana, R., Teixeira, A. M., Robina-Ramirez, R., Medina-Merodio, J. A., and Otón, S. (2024). Impact of communication and relationships on student satisfaction and acceptance of self- and peer-assessment. Educ. Inf. Technol. doi: 10.1007/s10639-023-12276-5
Floratos, N., Guasch, T., and Espasa, A. (2017). Student engagement in Moocs with appropriate formative assessment and feedback practices. EDULEARN17 Proceedings, pp. 1604–1612.
Fornell, C., and Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 181, 39–50.
French, F., Levi, D., Maczo, C., Simonaityte, A., Triantafyllidis, S., and Varda, G. (2023). Creative Use of OpenAI in Education: Case Studies from Game Development. Multimodal Technologies and Interaction. 7.
Gan, Z., An, Z., and Liu, F. (2021). Teacher feedback practices, student feedback motivation, and feedback behavior: how are they associated with learning outcomes? Front. Psychol. 12:697045. doi: 10.3389/fpsyg.2021.697045,
Gosak, L., Pruinelli, L., Topaz, M., and Štiglic, G. (2024). The ChatGPT effect and transforming nursing education with generative AI: Discussion paper. Nurse Education in Practice. 75.
Gonçalves, B.F., and Gonçalves, V. (2024). Artificial intelligence language models: the path to development or regression for education?. Lecture Notes in Networks and Systems, Cham: Springer. 773. pp. 56–65. doi: 10.1007/978-3-031-44131-8_6
Govindaraju, V., Seruji, Z., and Yeng, S. K. (2023). Teaching approaches and methodologies: a review of post COVID-19. Hong Kong J. Soc. Sci. doi: 10.55463/hkjss.issn.1021-3619.61.22
Hair, J. F., Hult, G. T. M., Ringle, C. M., and Sarstedt, M. (2017). A primer on partial least squares structural equation modeling (PLS-SEM). Thousand Oaks, CA: Sage.
Hair, J. F., Ringle, C. M., and Sarstedt, M. (2013). Partial least squares structural equation modeling: rigorous applications, better results and higher acceptance. Long Range Plan. 46, 1–12. doi: 10.1016/j.lrp.2013.01.001
Hannon, B., Kumar, Y., Gayle, D., Li, J. J., and Morreale, P. (2024). Robust testing of AI language model resiliency with novel adversarial prompts. Electronics 13:842. doi: 10.3390/electronics13050842
Henseler, J., Ringle, C. M., and Sarstedt, M. (2014). A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Mark. Sci. 43, 115–135. doi: 10.1007/s11747-014-0403-8
Hu, J.-M., Liu, F.-C., Chu, C.-M., and Chang, Y.-T. (2023). Health care trainees’ and professionals’ perceptions of ChatGPT in improving medical knowledge training: rapid survey study. J. Med. Internet Res. 25:e49385. doi: 10.2196/49385,
Hujala, M., Knutas, A., Hynninen, T., and Arminen, H. (2020). Improving the quality of teaching by utilising written student feedback: a streamlined process. Comput. Educ. 157:103965. doi: 10.1016/j.compedu.2020.103965
Hulland, J. (1999). Use of partial least squares (PLS) in strategic management research: a review of four recent. Strateg. Manage. J. 20, 195–204. doi: 10.1002/(SICI)1097-0266(199902)20:2<195::AID-SMJ13>3.0.CO;2-7
Hyde, S. J., Busby, A., and Bonner, R. L. (2024). Tools or Fools: Are We Educating Managers or Creating Tool-Dependent Robots? Journal of Management Education.
Ibrahim, R., Yusoff, R. C. M., Khalil, K., and Jaafar, A. (2011). Factors affecting undergraduates’ acceptance of educational game: An application of technology acceptance model (TAM). In H. B. Zaman, et al. Lecture Notes in Computer Science, (Berlin, Heidelberg: Springer) 7067, 135–146. doi: 10.1007/978-3-642-25200-6_14
Ilieva, G., Yankova, T., Klisarova-Belcheva, S., Dimitrov, A., Bratkov, M., and Angelov, D. (2023). Effects of Generative Chatbots in Higher Education. Information. 14.
Institut für Experimentelle Psychologie (2007). G*Power 3.1 [computer software]. Available online at: https://www.psychologie.hhu.de/arbeitsgruppen/allgemeine-psychologie-und-arbeitspsychologie/gpower (Accessed October 10, 2025).
Ivanov, S., Soliman, M., Tuomi, A., Alkathiri, N. A., and Al-Alawi, A. N. (2024). Drivers of generative AI adoption in higher education through the lens of the theory of planned behaviour. Technol. Soc. 77:102521. doi: 10.1016/j.techsoc.2024.102521,
Iwasawa, M., Kobayashi, M., and Otori, K. (2023). Knowledge and attitudes of pharmacy students towards artificial intelligence and the ChatGPT. Pharm. Educ. 23, 665–675. doi: 10.46542/pe.2023.231.665675
Jaboob, M., Hazaimeh, M., and Al-Ansi, A. M. (2024). Integration of generative AI techniques and applications in student behavior and cognitive achievement in Arab higher education. Int. J. Hum.-Comput. Interact. 41, 353–366. doi: 10.1080/10447318.2023.2300016
Jang, Y. T., and Hsieh, P. S. (2021). Understanding consumer behavior in the multimedia context: incorporating gamification in VR-enhanced web system for tourism e-commerce. Multimed. Tools Appl. 80, 29339–29365. doi: 10.1007/s11042-021-10989-4,
Jo, H. (2023). Understanding AI tool engagement: a study of ChatGPT usage and word-of-mouth among university students and office workers. Telemat. Inform. 85, 994–1009. doi: 10.1016/j.tele.2023.102067
Jo, H. (2024). Uncovering the reasons behind willingness to pay for ChatGPT-4 premium. Int. J. Hum.-Comput. Interact. 41. doi: 10.1080/10447318.2024.2307692
Junior, W. G., Marasco, E., Kim, B., Behjat, L., and Eggermont, M. (2023). How ChatGPT can inspire and improve serious board game design. Int. J. Serious Games 10, 33–54. doi: 10.17083/ijsg.v10i4.645
Khe, F. H., and Wing, S. C. (2014). Students’ and instructors’ use of massive open online courses (MOOCs): motivations and challenges. Educ. Res. Rev. 12, 45–58. doi: 10.1016/j.edurev.2014.05.001
Kim, J. S., Kim, M., and Baek, T. H. (2024). Enhancing user experience with a generative AI chatbot. Int. J. Hum.-Comput. Interact. 41, 651–663. doi: 10.1080/10447318.2024.2311971,
Kiryakova, G., and Angelova, N. (2023). ChatGPT—A challenging tool for the university professors in their teaching practice. Educ. Sci. 13. doi: 10.3390/educsci13101056,
Kong, S., Lee, L. C., and Tsang, O. (2024). A pedagogical design for self-regulated learning in academic writing using text-based generative artificial intelligence tools: 6-P pedagogy of plan, prompt, preview, produce, peer-review, portfolio-tracking. Research and Practice in Technology Enhanced Learning. 19.
Kosasi, S., Lukita, C., Rizachakim, M. H., Faturahman, A., and Kusumawardhani, D. A. R. (2023). The influence of digital artificial intelligence technology on quality of life with a global perspective. APTISI Trans. Technopreneursh. 5, 240–250. doi: 10.34306/att.v5i3.354
Kuhail, M. A., Mathew, S. S., Khalil, A., Berengueres, J., and Shah, S. J. H. (2024). “Will I be replaced?” assessing ChatGPT's effect on software development and programmer perceptions of AI tools. Sci. Comput. Program. 235:103111. doi: 10.1016/j.scico.2024.103111
Laaser, W. (2014). Ascenso y caída de los Cursos Masivos Abiertos y en Línea. Virtualidad Educ. Cienc. 5, 78–89. doi: 10.60020/1853-6530.v5.n9.9552
Le, A.N.-N., Nguyen, V. N., Nguyen, M. T.-X., and Bo, L. K. (2024). Exploring the use of ChatGPT as a tool for developing Eportfolios in ESL classrooms. EAI/Springer Innovations in Communication and Computing, part F2195, pp. 51–76
Lee, H.-Y., Chen, P.-H., Wang, W.-S., Huang, Y.-M., and Wu, T.-T. (2024). Empowering ChatGPT with guidance mechanism in blended learning: effect of self-regulated learning, higher-order thinking skills, and knowledge construction. Int. J. Educ. Technol. High. Educ. 21:16. doi: 10.1186/s41239-024-00447-4
Lee, S., and Park, G. (2023). Exploring the impact of ChatGPT literacy on user satisfaction: the mediating role of user motivations. Cyberpsychol. Behav. Soc. Netw. 26, 913–918. doi: 10.1089/cyber.2023.0312,
Leiser, F., Eckhardt, S., Knaeble, M., Maedche, A., Schwabe, G., and Sunyaev, A. (2023). From ChatGPT to FactGPT: A participatory design study to mitigate the effects of large language model hallucinations on users. ACM International Conference Proceeding Series, pp. 81–90
Li, T., Ji, Y., and Zhan, Z. (2024). Expert or machine? Comparing the effect of pairing student teacher with in-service teacher and ChatGPT on their critical thinking, learning performance, and cognitive load in an integrated-STEM course. Asia Pacific Journal of Education 44, 45–60.
Lim, K. H., Sia, C. L., Lee, M. K. O., and Benbasat, I. (2006). Do I trust you online, and if so, will I buy? An empirical study of two trust-building strategies. J. Manag. Inf. Syst. 23, 233–266. doi: 10.2753/MIS0742-1222230210
Lin, J., Sha, L., Li, Y., Gasevic, D., and Chen, G. (2022). Establishing trustworthy artificial intelligence in automated feedback. doi: 10.35542/osf.io/5efxn
Liu, X. (2025). Integration or hesitation: unraveling factors affecting teachers’ inclination towards personal and students’ adoption of generative AI in language instruction. Int. J. Technol. Educ. 8, 502–520. doi: 10.46328/ijte.1103
Liu, G., and Ma, C. (2024). Measuring EFL learners’ use of ChatGPT in informal digital learning of English based on the technology acceptance model. Innov. Lang. Learn. Teach. 18, 125–138. doi: 10.1080/17501229.2023.2240316
Lizzio, A., and Wilson, K. (2008). Feedback on assessment: students' perceptions of quality and effectiveness. Assess. Eval. High. Educ. 33, 263–275. doi: 10.1080/02602930701292548
Loh, X. M., Lee, V. H., Tan, G. W. H., Ooi, K. B., and Dwivedi, Y. K. (2021). Switching from cash to mobile payment: what is the hold–up? Internet Res. 31, 376–399. doi: 10.1108/INTR-04-2020-0175,
Ma, J., Wang, P., Li, B., Wang, T., Pang, X. S., and Wang, D. (2024). Exploring user adoption of ChatGPT: a technology acceptance model perspective. Int. J. Hum.-Comput. Interact. 41, 1431–1445. doi: 10.1080/10447318.2024.2314358
Mahapatra, S. (2024). Impact of ChatGPT on ESL students’ academic writing skills: a mixed methods intervention study. Smart Learn. Environ. 11:9. doi: 10.1186/s40561-024-00295-9
Mamoon, A.B., Rezaul, K., and Rahman, I. (2016). The value and effectiveness of feedback in improving students' learning and professionalizing teaching in higher education. J. Educ. Pract., 7, 38–41. Available online at: https://eric.ed.gov/?id=EJ1105282
Mao, J., Chen, B, and Liu, J. C. (2024). Generative Artificial Intelligence in Education and Its Implications for Assessment. Techtrends. 68, 58–66.
Mireles, M. G., and García, J. A. (2022). Satisfacción estudiantil en universitarios: una revisión sistemática de la literatura. Rev. Educ. 46. doi: 10.15517/revedu.v46i2.47621
Morales-Chan, M., Amado-Salvatierra, H. R., Medina, J. A., Barchino, R., Hernández-Rizzardini, R., and Moreira Teixeira, A. (2024). Personalized feedback in massive open online courses: harnessing the power of LangChain and OpenAI API. Electronics 13:1960. doi: 10.3390/electronics13101960
Moravec, V., Hynek, N., Skare, M., Gavurova, B., and Kubak, M. (2024). Human or machine? The perception of artificial intelligence in journalism, its socio-economic conditions, and technological developments toward the digital future. Technol. Forecast. Soc. Change 200:123162. doi: 10.1016/j.techfore.2023.123162
Mukred, M., Mokhtar, U. A., and Hawash, B. (2023). Exploring the acceptance of ChatGPT as a learning tool among academicians: a qualitative study. Jurnal Komun. Malays. J. Commun. 39, 306–323. doi: 10.17576/JKMJC-2023-3904-16
Neyem, A., Alcocer, J. P. S., Mendoza, M., Centellas-Claros, L., Gonzalez, L. A., and Paredes-Robles, C. (2024). Exploring the impact of generative AI for StandUp report recommendations in software capstone project development. SIGCSE 2024. Proceedings of the 55th ACM Technical Symposium on Computer Science Education, Vol. 1, pp. 951–957
Ngo, T. T. A., Tran, T. T., An, G. K., and Nguyen, P. T. (2024). ChatGPT for educational purposes: investigating the impact of knowledge management factors on student satisfaction and continuous usage. IEEE Trans. Learn. Technol. 17, 1341–1352. doi: 10.1109/TLT.2024.3383773,
O’Leary, Z. (2017). The essential guide to doing your research project. London: SAGE Publications Ltd.
Ooi, K. B., Wei-Han Tan, G., Al-Emran, M., Al-Sharafi, M. A, Capatina, A., Chakraborty, A., et al. (2023). “The Potential of Generative Artificial Intelligence Across Disciplines: Perspectives and Future Directions,” Journal of Computer Information Systems.
OpenAI (2023). Introducing ChatGPT. Available online at: https://openai.com/blog/chatgpt (Accessed July 5, 2023).
Pasupuleti, R. S., and Thiyyagura, D. (2024). An empirical evidence on the continuance and recommendation intention of ChatGPT among higher education students in India: an extended technology continuance theory. Educ. Inf. Technol. 29, 17965–17985. doi: 10.1007/s10639-024-12573-7
Patil, D. (2024). Human-artificial intelligence collaboration in the modern workplace: Maximizing productivity and transforming job roles. doi: 10.2139/ssrn.5057414
Phung, T., Pǎdurean, V.-A., Singh, A., Brooks, C., Cambronero, J., Gulwani, S., et al. (2024). Automating human tutor-style programming feedback: Leveraging GPT-4 tutor model for hint generation and GPT-3.5 student model for hint validation. ACM International Conference Proceeding Series, pp. 12–23
Piñeiro-Martín, A., García-Mateo, C., Docío-Fernández, L., and López-Pérez, M. D. C. (2023). Ethical challenges in the development of virtual assistants powered by large language models. Electronics 12:3170. doi: 10.3390/electronics12143170
Polyportis, A., and Pahos, N. (2024). Navigating the perils of artificial intelligence: a focused review on ChatGPT and responsible research and innovation. Humanit. Soc. Sci. Commun. 11:107. doi: 10.1057/s41599-023-02464-6
Raes, A. (2022). Exploring student and teacher experiences in hybrid learning environments: does presence matter? Postdigital Science and Education 4, 138–159. doi: 10.1007/s42438-021-00274-0,
Rahim, N. I. M., Iahad, N. A., Yusof, A. F., and Al-Sharafi, M. A. (2022). AI-based chatbots adoption model for higher-education institutions: a hybrid PLS-SEM-neural network modelling approach. Sustainability 14:12726. doi: 10.3390/su141912726
Rahman, M. S., Sabbir, M. M., Zhang, J., Moral, I. H., and Hossain, G. M. S. (2023). Examining students’ intention to use ChatGPT: does trust matter? Australas. J. Educ. Technol. 39, 51–71. doi: 10.14742/ajet.8956
Rejeb, A., Rejeb, K., Appolloni, A., Treiblmaier, H., and Iranmanesh, M. (2024). Exploring the impact of ChatGPT on education: a web mining and machine learning approach. Int. J. Manag. Educ. 22, 1–14. doi: 10.1016/j.ijme.2024.100932
Rogers, M. P., Hillberg, H. M., and Groves, C. L. (2024). Attitudes towards the use (and misuse) of ChatGPT: a preliminary study. SIGCSE 2024 - Proceedings of the 55th ACM Technical Symposium on Computer Science Education, Vol. 1, pp. 1147–1153
Sabraz Nawaz, S., Fathima Sanjeetha, M. B., Al Murshidi, G., Mohamed Riyath, M. I., Mat Yamin, F. B., and Mohamed, R. (2024). Acceptance of ChatGPT by undergraduates in Sri Lanka: a hybrid approach of SEM-ANN. Interact. Technol. Smart Educ. 21, 546–570. doi: 10.1108/ITSE-11-2023-0227
Salah, M., Alhalbusi, H., Ismail, M. M., and Abdelfattah, F. (2024). Chatting with ChatGPT: decoding the mind of Chatbot users and unveiling the intricate connections between user perception, trust and stereotype perception on self-esteem and psychological well-being. Curr. Psychol. 43, 7843–7858. doi: 10.1007/s12144-023-04989-0
Salifu, I., Arthur, F., Arkorful, V., Nortey, S. A., and Osei-Yaw, R. S. (2024). Economics students’ behavioural intention and usage ofChatGPT in higher education: a hybrid structural equation modelling-artificial neural network approach. Cogent Soc. Sci. 10:2300177. doi: 10.1080/23311886.2023.2300177
Salinas-Navarro, D. E., Vilalta-Perdomo, E., Michel-Villarreal, R., and Montesinos, L. (2024). Using Generative Artificial Intelligence Tools to Explain and Enhance Experiential Learning for Authentic Assessment. Education Sciences. 14.
Sallam, M., Salim, N. A., Barakat, M., Al-Mahzoum, K., Al-Tammemi, A. B., Malaeb, D., et al. (2023). Assessing health students' attitudes and usage of ChatGPT in Jordan: validation study. JMIR Med. Educ. 9:e48254. doi: 10.2196/48254,
Samala, A. D., Rawas, S., Wang, T., Reed, J. M., Kim, J., Howard, N. J., et al. (2025). Unveiling the landscape of generative artificial intelligence in education: a comprehensive taxonomy of applications, challenges, and future prospects. Educ. Inf. Technol. 30, 3239–3278. doi: 10.1007/s10639-024-12936-0
Samala, A. D., Zhai, X., Aoki, K., Bojic, L., and Zikic, S. (2024). An in-depth review of ChatGPT’s pros and cons for learning and teaching in education. Int. J. Interact. Mob. Technol. 18, 96–117. doi: 10.3991/ijim.v18i02.46509
Šedlbauer, J., Činčera, J., Slavík, M., and Hartlová, A. (2024). Students' reflections on their experience with ChatGPT. J. Comput. Assist. Learn. 40, 1526–1534. doi: 10.1111/jcal.12967
Seetharaman, R. (2023). Revolutionizing medical education: can ChatGPT boost subjective learning and expression? J. Med. Syst. 47:61. doi: 10.1007/s10916-023-01957-w,
Segovia, G. (2021). Criterios de calidad de un MOOC basado en la valoración de los estudiantes. Bordón. Rev. Pedag. 73, 145–160. doi: 10.13042/Bordon.2021.87938
Shahzad, M. F., Xu, S., and Asif, M. (2025). Factors affecting generative artificial intelligence, such as ChatGPT, use in higher education: an application of technology acceptance model. Br. Educ. Res. J. 51, 489–513. doi: 10.1002/berj.4084
Shaka, M., Carraro, D., and Brown, K. N. (2023). Personalised programming education with knowledge tracing. ACM International Conference Proceeding Series, p. 47
Spivakovsky, O. V., Omelchuk, S. A., Kobets, V. V., Valko, N. V., and Malchykova, D. S. (2023). Institutional Policies on Artificial Intelligence in University Learning, Teaching and Research. Information Technologies and Learning Tools. 97, 181–202.
Song, N. Y. (2024). Higher education crisis: Academic misconduct with generative AI. Journal of Contingencies and Crisis Management 32.
Srishti, R. (2024). “ChatGPT in education: augmenting learning experience or dehumanizing education?” in Educational perspectives on digital Technologies in Modeling and Management, 114–128. doi: 10.4018/979-8-3693-2314-4.ch005
Steiss, J., Tate, T., Graham, S., Cruz, J., Hebert, M., Wang, J., et al. (2024). Comparing the quality of human and ChatGPT feedback of students’ writing. Learn. Instr. 91:101894. doi: 10.1016/j.learninstruc.2024.101894,
Strzelecki, A. (2023). Students’ acceptance of ChatGPT in higher education: an extended unified theory of acceptance and use of technology. Innov. High. Educ. 49, 223–245. doi: 10.1007/s10755-023-09686-1
Suen, H. (2014). Peer assessment for massive open online courses (MOOCs). Int. Rev. Res. Open Distrib. Learn. 15, 312–327. doi: 10.19173/irrodl.v15i3.1680,
Sunar, A. S., Abdullah, N. A., White, S., and Davis, H. (2016). “Personalisation in MOOCs: A critical literature review” in Computer supported education. CSEDU 2015. Communications in Computer and Information Science. eds. S. Zvacek, M. Restivo, J. Uhomoibhi, and M. Helfert, vol. 583 (Cham: Springer).
Synekop, O., Lytovchenko, I., Lavrysh, Y., and Lukianenko, V. (2024). Use of chat GPT in English for engineering classes: are students’ and teachers’ views on its opportunities and challenges similar? Int. J. Interact. Mob. Technol. 18, 129–146. doi: 10.3991/ijim.v18i03.45025
Tanantong, T., and Wongras, P. (2024). A UTAUT-based framework for analyzing users’ intention to adopt artificial intelligence in human resource recruitment: A case study of Thailand. Systems 12:28. doi: 10.3390/systems12010028,
Tang, J., Zhang, B., and Xiao, S. (2022). Examining the intention of authorization via apps: personality traits and expanded privacy calculus perspectives. Behav. Sci. 12:218. doi: 10.3390/bs12070218,
Téllez, N. R., Villela, P. R., and Bautista, R. B. (2024). Evaluating ChatGPT-generated linear algebra formative assessments. Int. J. Interact. Multimed. Artif. Intell. 8, 75–82. doi: 10.9781/ijimai.2024.02.004
Tlili, A., Shehata, B., Adarkwah, M. A., Bozkurt, A., Hickey, D. T., Huang, R., et al. (2023). What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learn. Environ. 10:15. doi: 10.1186/s40561-023-00237-x
Vaughn, J., Ford, S. H., Scott, M., and Levinski, A. (2024). Enhancing Healthcare Education: Leveraging ChatGPT for Innovative Simulation Scenarios. Clinical Simulation in Nursing. 87.
Vera, M. D. S. (2024). Artificial Intelligence as a teaching resource: uses and possibilities for teachers. Educar 60, 33–47.
Viciana, J., Cervelló, E., Ramírez, J., San-Matías, J., and Requena, B. (2023). Influencia del feedback positivo y negativo en alumnos de secundaria sobre el clima ego-tarea percibido, la valoración de la educación física y la preferencia en la complejidad de las tareas de clase. Motricidad, 10, 99–116. Available online at: https://www.redalyc.org/pdf/2742/274220877005.pdf
Wang, L., Chen, X., Wang, C., Xu, L., Shadiev, R., and Li, Y. (2024). Chatgpt's capabilities in providing feedback on undergraduate students’ argumentation: a case study. Think. Skills Creat. 51:101440. doi: 10.1016/j.tsc.2023.101440
Weber, J. L., Martinez Neda, B., Carbajal Juarez, K., Wong-Ma, J., Gago-Masague, S., and Ziv, H. (2024). Measuring CS student attitudes toward large language models. SIGCSE 2024 – Proceedings of the 55th ACM Technical Symposium on Computer Science Education, Vol. 2, pp. 1846–1847
Werts, C. E., Linn, R. L., and Jöreskog, K. G. (1974). Intraclass reliability estimates: testing structural assumptions. Educ. Psychol. Meas. 34, 25–33. doi: 10.1177/001316447403400104
Wirani, Y., Nabarian, T., and Romadhon, M. S. (2022). Evaluation of continued use on Kahoot! As a gamification-based learning platform from the perspective of Indonesia students. Proc. Comput. Sci. 197, 545–556. doi: 10.1016/j.procs.2022.01.153
Wongvorachan, T., and Bulut, O. (2022). Feedback generation through artificial intelligence. Open/Technology in Education, Society, and Scholarship Association Conference Proceedings, Vol. 2, 1–9
Yang, E., and Beil, C. (2024). Ensuring data privacy in AI/ML implementation. N. Dir. High. Educ. 2024, 63–78. doi: 10.1002/he.20509
Yang, J., Chen, Y.-L., Por, L. Y., and Ku, C. S. (2023). A systematic literature review of information security in Chatbots. Appl. Sci. (Switzerland) 13:6355. doi: 10.3390/app13116355,
Yasniy, O., Mykytyshyn, A., Didych, I., Kubashok, V., and Boiko, A. (2023). Application of artificial intelligence to improve the work of educational platforms. CEUR Workshop Proceedings, 3628, pp. 433–439. Available online at: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85184367623&partnerID=40&md5=e2d366108f73ad7082affc2f37bc7ff3
Zheng, W., Ma, Z., Sun, J., Wu, Q., and Hu, Y. (2024). Exploring factors influencing continuance intention of pre-service teachers in using generative artificial intelligence. Int. J. Hum.–Comput. Interact., 13, 10325–10338. doi: 10.1080/10447318.2024.2433300
Zhu, W., Huang, L., Zhou, X., Li, X., Shi, G., Ying, J., et al. (2024). Could AI ethical anxiety, perceived ethical risks and ethical awareness about AI influence university students’ use of generative AI products? An ethical perspective. Int. J. Hum.-Comput. Interact. 41, 742–764. doi: 10.1080/10447318.2024.2323277
Keywords: feedback, behavioral change, generative AI, attitude, privacy, security, satisfaction
Citation: Medina Merodio JA, Morales Chan M, Barchino Plata R, Amado-Salvatierra HR and Hernandez-Rizzardini R (2026) Impact of generative artificial intelligence feedback on online student satisfaction. Front. Comput. Sci. 7:1708114. doi: 10.3389/fcomp.2025.1708114
Edited by:
Mary Sánchez-Gordón, Østfold University College, Norway
Reviewed by:
Vladimir Robles-Bykbaev, Salesian Polytechnic University, Ecuador
Martín López Nores, University of Vigo, Spain
Copyright © 2026 Medina Merodio, Morales Chan, Barchino Plata, Amado-Salvatierra and Hernandez-Rizzardini. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Jose Amelio Medina Merodio, josea.medina@uah.es