EDITORIAL article

Front. Psychol., 06 October 2025

Sec. Educational Psychology

Volume 16 - 2025 | https://doi.org/10.3389/fpsyg.2025.1707902

This article is part of the Research Topic: Promoting Innovative Learning-oriented Feedback Practice: Engagement, Challenges, and Solutions.

Editorial: Promoting innovative learning-oriented feedback practice: engagement, challenges, and solutions

Wei Wei*

  • Faculty of Applied Sciences, Macao Polytechnic University, Macao, China

Introduction

Research on feedback has undergone several fundamental shifts: from identifying the features of effective feedback to exploring how it is delivered and constructed; from focusing on feedback givers (teachers, peers, computers, and, more recently, GenAI tools) to its receivers or users; and from portraying feedback as a one-way transmission from givers to receivers to a more collaborative process between them that emphasizes learners' feedback-seeking initiatives. Throughout these shifts, students' engagement patterns with feedback have remained at the center of discussion. In technology-assisted settings, engagement patterns refer to how learners interact with and respond to feedback provided through digital tools, encompassing their cognitive processes, emotional responses, and behaviors when receiving and applying feedback.

Without exception, the articles in this Research Topic offer an excellent summary of these shifts from multiple perspectives. The collection assembles seven scholarly articles from diverse international contexts, including Mainland China, Vietnam, Norway, and Sweden, which explore and examine different aspects of innovative feedback practice across a wide range of disciplines, such as mathematics, languages, engineering, and medicine. Moving beyond the traditional view of feedback as the transmission of information from giver to receiver to narrow the gap between current performance and expected outcomes, these articles highlight key themes in designing and promoting innovative feedback practice, which can be broadly grouped into three closely interlinked areas: learners' characteristics, interactions and engagement with feedback, and scaffolding and learning environments.

The characteristics of learners

A frequently cited framework in the design and analysis of feedback studies is the student-feedback interaction model (Lipnevich and Smith, 2022). This model identifies six learner characteristics that shape students' subsequent intentions and level of engagement with received feedback: (1) ability, (2) receptivity, (3) expectations, (4) self-efficacy, (5) motivation, and (6) personality. These individual differences influence how a learner engages with and responds to feedback, thereby shaping the effectiveness of the feedback process. In this Research Topic, Söderström et al. found that mastery goals (which focus on the development of competency and skills) have a direct positive influence on students' perceived usefulness of received feedback, whereas performance goals (which reflect a motivation to stand out from peers competitively) show no comparable effect. Furthermore, rather than highlighting a direct impact of self-efficacy on feedback engagement, Zhang et al.'s study reported a mediation model: self-efficacy mediates the influence of time pressure, specifically challenge and hindrance time pressure, on students' self-reported innovative behaviors in postgraduate study, including communicating about and engaging with feedback from both peers and supervisors.

The interactions and engagement

Regarding the second emerging theme of this Research Topic, research on learners' interaction and engagement patterns with feedback now centers on students' cognitive, affective, and behavioral processing of it, and on the interplay among these processes. In this Research Topic, Yang et al. went beyond the traditional view of engaging with feedback from outside sources (i.e., peers, teachers, and learning analytics) and set out to validate a research instrument that measures learners' self-feedback behaviors. The three confirmed dimensions (seeking, processing, and using feedback) can greatly benefit researchers and teachers seeking to better understand and design pedagogical activities that scaffold self-feedback practice. Moreover, in the context of self-regulated online learning, Dao et al. placed more emphasis on the role of cognitive engagement. Their study involved designing an online laboratory system in which learners can analyze and set learning goals, discuss them with peers, and resubmit their work. These activities and mechanisms embedded within the laboratory system serve not only as reminders but, more importantly, as scaffolding that helps Asian learners overcome their emotional and cognitive barriers to engaging with formative peer feedback before the final submission of their work.

Scaffolding and learning environments

Last but not least, previous frameworks have not elaborated on the role of technology in providing sufficient scaffolding for feedback. Recently, generative AI has been identified as capable of offering such feedback, including instructions for learning, developing learning plans, and recommending learning strategies (Chen et al., 2025a). However, this capability requires a supportive learning environment to be effective. As noted by Chen et al. (2025b), when AI-generated feedback does not align with the expectations or preferences of peers and teachers, students are less likely to accept it. In this Research Topic, three studies offered more detail on how a facilitating environment may promote effective feedback practice. For instance, Zhang et al.'s study of PhD students revealed that supervisors' support leads to more innovative learning behaviors at the postgraduate level, such as discussing work more frequently and actively seeking feedback from both peers and supervisors. Moreover, although entrenched learning routines and established teaching practices are difficult to challenge, Chen et al. introduced a new pedagogy to counteract their negative consequences: learners were given a degree of autonomy in discussing, learning from others' contributions, and summarizing their learning progress independently. Finally, Lu et al. studied collaborative problem-solving (CPS) skills in the mathematics classroom and argued that identifying, diagnosing, and visualizing cognitive conflicts during the CPS process can help teachers adopt a learning-oriented (rather than evaluative) approach and create opportunities for learners to provide timely peer feedback.

As for future studies, the arrival of GenAI has revolutionized feedback generation and feedback engagement on a large scale. However, researchers have raised a growing number of concerns, as working with GenAI requires effective prompting, which presents another challenge for learners. On the bright side, those who understand the expected outcomes, have a higher level of AI literacy, and are disposed to explore and evaluate GenAI-generated feedback may be better placed to benefit from GenAI-human collaboration. However, metacognitive laziness and overreliance, which result from accepting GenAI feedback uncritically without conducting task analysis or monitoring the alignment between AI output and task demands, have already begun to erode learners' academic achievement (Fu et al., 2025). Strategies for developing AI literacy, and solutions for combating these undesired consequences of GenAI, are urgently needed by both learners and educators.

Author contributions

WW: Writing – original draft, Writing – review & editing.

Conflict of interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that Gen AI was used in the creation of this manuscript, to proofread the language.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Chen, Z., Wei, W., Zhu, X., and Yao, Y. (2025b). Unpacking the rejection of L2 students toward ChatGPT-generated feedback: an explanatory research. ECNU Rev. Educ. 1–20. doi: 10.1177/20965311241305140

Chen, Z., Wei, W., and Zou, D. (2025a). Generative AI technology and language learning: global language learners' responses to ChatGPT videos in social media. Interact. Learn. Environ. 1–14. doi: 10.1080/10494820.2025.2511248

Fu, Y., Tang, L., Le, H., Shen, K., Tan, S., Zhao, Y., et al. (2025). Beware of metacognitive laziness: effects of generative artificial intelligence on learning motivation, processes, and performance. Br. J. Educ. Technol. 56, 489–530. doi: 10.1111/bjet.13544

Lipnevich, A. A., and Smith, J. K. (2022). Student–feedback interaction model: revised. Stud. Educ. Eval. 75:101208. doi: 10.1016/j.stueduc.2022.101208

Keywords: innovative feedback, feedback engagement, self-feedback, GenAI feedback, learning-oriented assessment

Citation: Wei W (2025) Editorial: Promoting innovative learning-oriented feedback practice: engagement, challenges, and solutions. Front. Psychol. 16:1707902. doi: 10.3389/fpsyg.2025.1707902

Received: 18 September 2025; Accepted: 22 September 2025;
Published: 06 October 2025.

Edited and reviewed by: Daniel H. Robinson, The University of Texas at Arlington College of Education, United States

Copyright © 2025 Wei. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Wei Wei, weiweitesting@hotmail.com
