AUTHOR=Shvets Oleg, Murtazin Kristina, Piho Gunnar, Meeter Martijn TITLE=Experiment with ChatGPT: methodology of first simulation JOURNAL=Frontiers in Education VOLUME=10 YEAR=2025 URL=https://www.frontiersin.org/journals/education/articles/10.3389/feduc.2025.1624516 DOI=10.3389/feduc.2025.1624516 ISSN=2504-284X ABSTRACT=Providing timely and effective feedback is a crucial element of the educational process, directly impacting student engagement, comprehension, and academic achievement. However, even within small groups, delivering personalized feedback presents a significant challenge for educators, especially when opportunities for individual interaction are limited. As a result, there is growing interest in AI-based feedback systems as a potential solution to this problem. This study examines the impact of AI-generated feedback, specifically from ChatGPT 3.5, compared with traditional feedback provided by a supervisor. The aim of the research is to assess students’ perceptions of both types of feedback, their satisfaction levels, and the effectiveness of each in supporting academic progress. As part of our broader research agenda, we also aim to evaluate the relevance of the domain model currently under development for supporting automated feedback. This model is intended, among other functions, to facilitate the integration of AI-driven mechanisms with student-centered feedback in order to enhance the quality of learning. At this stage, the domain model is employed at a conceptual level to define key actors in the educational process and the relationships between them, to describe the feedback process within a course, and to structure assignment content and assessment criteria. The experiment presented in this study serves as a preparatory step toward the implementation and integration of the model into the educational process, highlighting its function as a conceptual framework for feedback design.
Our results indicate that both types of feedback were generally perceived positively, but differences emerged in how their quality was evaluated. In one group, supervisor-provided feedback received higher ratings for clarity, depth, and relevance. In the other group, students showed a slight preference for feedback from ChatGPT 3.5, particularly in terms of improving their understanding of the assignment topics. The speed and consistency of AI-generated feedback were highlighted as key advantages, indicating its potential value in educational settings where personalized feedback from instructors is limited.