
ORIGINAL RESEARCH article

Front. Educ., 12 January 2026

Sec. Digital Learning Innovations

Volume 10 - 2025 | https://doi.org/10.3389/feduc.2025.1736446

This article is part of the Research Topic: Harnessing AI to Support Self-Regulated Learning in Educational and Workplace Settings.

Fostering feedback literacy by scaffolding self-regulated feedback: a comparative study of GenAI and human peers

  • Department of Curriculum and Instruction, The Education University of Hong Kong, Hong Kong, Hong Kong SAR, China

Student feedback literacy is vital for effective use of feedback. While traditional peer review activities provide opportunities for students to practice giving and receiving feedback, their effectiveness is sometimes undermined because of interpersonal factors, such as friendship and psychological safety. Generative artificial intelligence (GenAI) offers a promising new avenue by providing adaptive and instant feedback; however, its effectiveness compared to traditional peer interaction and the underlying mechanisms remain underexplored and warrant further investigation. This study used a mixed-methods design with first-year undergraduates to explore the effect of GenAI and human peer feedback on student feedback literacy development. The study also analyzed the role of students’ self-regulated learning (SRL) as a mechanism explaining how these two feedback sources contribute to enhancing feedback literacy. The results revealed that GenAI yielded a small but significant improvement in developing feedback literacy compared to human peers. Qualitative analysis clarified this finding by uncovering behavioral differences between the two groups, highlighting GenAI’s specific support for the SRL process, especially in goal setting, planning, critical evaluation, and immediate self-reflection. These findings suggest that GenAI is powerful in fostering feedback literacy because it facilitates self-regulatory behaviors essential for effective interaction with feedback. Educators can strategically integrate GenAI in classroom activities to scaffold self-regulatory behaviors, fostering student feedback literacy development.

1 Introduction

Feedback literacy refers to students’ “understandings, capacities and dispositions needed to make sense of information and use it to enhance work or learning strategies” (Carless and Boud, 2018, p. 1316). Feedback-literate students can make good use of feedback, which is essential to their learning. Since students are not born with the capacity to effectively interact with feedback, they need to engage with well-designed activities to develop feedback literacy. Traditional peer review activities can enhance student feedback literacy by giving students opportunities to make judgments and act on feedback (Carless, 2022); however, peer feedback is not always available, and students may hesitate due to concerns about their peers’ different levels of proficiency (Panadero et al., 2023). The recent development of generative artificial intelligence (GenAI) presents new opportunities by providing students with immediate and personalized feedback, potentially facilitating student feedback literacy (Oktarina et al., 2024). However, the comparative effect between GenAI and human peers in student feedback literacy development is underexplored, and more importantly, the underlying mechanism of how they make feedback effective remains a critical question.

One of the factors that is vital for effective interaction with feedback and feedback literacy development lies in the students’ own self-regulated learning (SRL) (Khuder, 2025; Panadero and Broadbent, 2018). Self-regulated learners set learning goals, apply learning strategies, monitor the learning process, and process and enact feedback to achieve the goals and final learning outcomes (Butler and Winne, 1995). Emerging research suggests GenAI can effectively scaffold students’ SRL processes (Qi et al., 2025); hence, it is expected that the use of GenAI will promote student feedback literacy as they become increasingly self-regulated learners. Taking a more nuanced view, Zhan et al. (2025) conceptualized students’ processes in regulating their interactions with GenAI feedback, including feedback forethought, feedback control, and feedback retrospect. However, limited empirical evidence exists, particularly in contrast to their interactions with peer feedback. Understanding these different self-regulatory behaviors could provide valuable insights into the distinct ways in which GenAI and peer feedback contribute to the development of student feedback literacy.

By conducting a mixed-method study with first-year undergraduates in China, the current study aims to explore how GenAI can foster student feedback literacy and support students’ self-regulated feedback behaviors in comparison to human peers.

2 Literature review

2.1 Student feedback literacy

Student feedback literacy is vital for effective feedback use, as it is students themselves who take actions to improve learning (Carless and Boud, 2018). Sutton (2012) defined feedback literacy from an academic literacy perspective as the ability to read, interpret, and use written feedback. Building on this, Carless and Boud (2018) extended the definition and suggested four features of feedback-literate students: (1) appreciate feedback, recognizing its value and understanding their active role in feedback processes; (2) make judgments about the quality of their own work and others’; (3) manage affect in the feedback process; and (4) act on feedback information they have received. The first three features are interrelated and, together, positively influence students’ behavior in taking action on feedback.

Student feedback literacy could be developed through formative assessment practice. Self-assessment activities provide students with a chance to enhance feedback literacy as they need to engage with external feedback for effective self-assessment (Yan and Carless, 2022). Peer assessment is another important learning activity for advancing student feedback literacy (Hoo et al., 2022; Ketonen et al., 2020). During peer assessment, students learn to apply assessment criteria to make judgments on peers’ work and their own work (Nicol et al., 2014), through which they internalize standards of quality, transfer what they learned from others’ work to their own tasks, and develop evaluative judgment (Tai et al., 2018). Providing peer feedback is often more beneficial than receiving it, as it is more cognitively demanding, involving higher-order processes such as identifying errors in a peer’s work, correcting those errors, and monitoring whether similar issues exist in their own work (Carless and Boud, 2018; Sato, 2017). However, students’ challenges in peer assessment, particularly distrust and friendship bias (Panadero et al., 2023), may lead to students’ disengagement from the peer review process, reducing the effect of peer assessment on student feedback literacy development. Training for peer review needs to be provided to students; otherwise, the expected gains may not occur (Tai et al., 2016). Only when students value peer feedback and are scaffolded in the process can peer review activities realize their potential for developing student feedback literacy (Carless and Boud, 2018). Hence, innovative approaches are required to better foster feedback literacy.

2.2 Student feedback literacy development in the GenAI context

To address these challenges associated with peer feedback, researchers have turned to integrating technology in feedback practices, which has been shown to effectively develop student feedback literacy, as it can provide timely and contextualized feedback that encourages greater student participation in the feedback process (Wood, 2021). The emergence and development of GenAI further transform feedback processes. GenAI uses machine learning models “to learn the patterns and relationships in a dataset of human-created content” and “use the learned patterns to generate new content” (Google, 2023, How does generative AI work? section, para. 1). Its capacity to generate new content makes it distinctive from traditional AI technology. GenAI can provide students with instant and adaptive feedback, which is vital for their continuous engagement with learning (Escalante et al., 2023), thereby bearing the potential to promote their feedback literacy.

Recent empirical studies have shown that GenAI generally exerts a positive effect on students’ development of feedback literacy. After using ChatGPT, 18 English as a Foreign Language (EFL) students in Indonesia improved their feedback literacy, particularly in the dimension “feedback processing” (Gozali et al., 2024). Similarly, Zhan and Yan (2025) discovered that, after receiving ChatGPT feedback on their writing, students constantly made judgments about the feedback quality by comparing it with previous teacher feedback and self-assessment, a characteristic of feedback-literate students. However, the integration of GenAI is not without its pedagogical challenges. When investigating how ChatGPT feedback may shape students’ evaluative judgment compared to peer feedback, Xie et al. (2025) found that doctoral students in a statistical analysis course reported a lack of confidence in verifying the accuracy of ChatGPT feedback. Researchers also express concerns that GenAI’s responses are not always reliable and accurate (Lodge et al., 2023). Overreliance on technology may also harm students by increasing cognitive offloading (Ma et al., 2025). This externalization of mental effort can impede the development of critical thinking skills (Sardi et al., 2025) and lead to metacognitive laziness (Fan et al., 2025). It may also threaten feedback literacy, as students may skip the evaluative judgment making process required to become feedback literate. Moreover, GenAI may also not fully replicate the human feedback interaction process because of the importance of mutual recognition (Corbin et al., 2025) and student assessors’ context-specific awareness (Usher, 2025) in effective feedback. Therefore, it is crucial to understand not only the comparative effectiveness of GenAI and human peers but also how students’ self-regulated interactions with these two feedback types differ.

2.3 Self-regulated learning as a mechanism in feedback literacy development

Student feedback literacy is closely linked to self-regulated learning (SRL), which can be viewed as either a cognitive skill or a self-directed process (Qi et al., 2025). Despite differences in their conceptualizations, SRL models typically share similar phases or processes, such as goal setting, self-monitoring, and adjusting learning strategies to align with learning goals (Panadero, 2017). As one of the most influential models in the field of SRL, Zimmerman’s (2000) SRL model depicts SRL as a cyclical process including three interrelated phases: (1) a forethought phase, where students set goals and plan for learning; (2) a performance phase, where students self-monitor and adjust their behaviors to align with learning goals; and (3) a self-reflection phase, where students critically evaluate their performance after completing the task. During this process, students need to employ a variety of (meta)cognitive and motivational regulation skills to adapt their learning and achieve the intended outcomes.

While student feedback literacy could influence their SRL process (Jin et al., 2025), students’ self-regulated learning ability is crucial for their sustained interaction with feedback over time (Carless et al., 2011; Winstone et al., 2017). Molloy et al. (2020) emphasized that students’ engagement with goal setting, planning, and monitoring is necessary for effective use of external feedback. Previous studies have shown that using SRL strategies can foster student feedback literacy, including abilities such as making evaluative judgements (Panadero and Broadbent, 2018) and seeking feedback (Khuder, 2025). Chen et al.’s (2025) quantitative data from 1,975 Chinese university students explicitly revealed that SRL can support feedback literacy in blended learning environments through various links; for instance, students’ self-evaluation predicts their appreciation of feedback and the ability to make evaluative judgements, while goal setting and task strategies predict feedback uptake. Hence, fostering student self-regulated learning can enhance students’ capacity to utilize feedback independently and effectively.

Existing research has shown that GenAI can support student self-regulated learning. Though not typically programmed to facilitate student SRL, GenAI can be prompted to provide feedback for co-regulation (Lodge et al., 2023). Sardi et al.’s (2025) review revealed an overall influence of GenAI on SRL, with the majority of studies reporting AI’s positive effect on SRL through personalized learning, metacognitive scaffolding, and adaptive feedback. Qi et al. (2025) conducted a systematic review that further explored specific processes and found that GenAI can support SRL throughout the three phases: particularly, searching for information in the forethought phase, seeking strategies for problem-solving in the performance phase, and gaining feedback and carrying out self-assessments in the self-reflection phase. Chiu’s (2024) case study also found that GenAI could foster students’ SRL through conducting experiments and obtaining feedback. GenAI’s support for student self-regulated learning shows potential to foster student feedback literacy development.

While these studies provide evidence for GenAI’s broad support for SRL, Zhan et al. (2025) proposed a more focused model to understand the detailed process involved when students self-regulate their interactions specifically with feedback. Based on Zimmerman’s (2000) SRL model, Zhan et al.’s (2025) self-regulated feedback model includes three phases: (1) feedback forethought phase, where students set goals and plans based on their needs and understand GenAI’s capabilities and limitations; (2) feedback control phase, where students monitor and regulate the interaction with feedback; and (3) feedback retrospect phase, where students reflect on this process. In this cyclical self-regulation feedback model, students actively engage with GenAI feedback, improving learning outcomes and feedback literacy. This framework explains how GenAI not only supports general SRL but also specifically enhances self-regulated feedback behaviors, which are crucial for developing feedback literacy. Therefore, this study used Zhan et al.’s (2025) self-regulation feedback model to explore students’ self-regulation in different feedback interactions. Although this framework was developed in the GenAI context, it can be applied in broader contexts where students interact with external feedback.

2.4 Research aims and questions

Despite the potential of GenAI in enhancing student feedback literacy, few studies have compared this novel technology with peer feedback and explored how students’ self-regulated feedback interactions may influence their feedback literacy development. This study aims to address these gaps. The main objective of this research is to examine how students’ self-regulated interactions with Generative AI and peer feedback contribute to the development of their feedback literacy. The specific research questions (RQs) are as follows:

RQ1: How does the development of student feedback literacy differ when students’ self-regulated learning is scaffolded by GenAI versus human peers?

RQ2: How do students’ self-regulated feedback interactions differ when engaging with GenAI versus human peers across the forethought, control, and retrospect phases?

3 Methodology

3.1 Contexts and participants

This study was conducted in two parallel English classes at a university in Guangdong, China. An a priori power analysis in G*Power indicated that at least 25 students were required to achieve sufficient statistical power, based on conventional thresholds in the social sciences (α = 0.05, power = 0.8; Tomczak et al., 2014) and an expected effect size of 0.59 drawn from previous self-assessment research (Yan et al., 2022).
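The reported minimum of 25 is consistent with an a priori power analysis for a two-sided paired (or one-sample) t-test; the specific test family is our assumption, as the text does not state it. A minimal sketch of such a calculation, iterating over sample sizes and evaluating power via the noncentral t-distribution (the approach G*Power-style tools use):

```python
from scipy import stats

def min_sample_size(d, alpha=0.05, power=0.80):
    """Smallest n for which a two-sided paired/one-sample t-test
    reaches the target power at effect size d (Cohen's d)."""
    for n in range(2, 10_000):
        df = n - 1
        t_crit = stats.t.ppf(1 - alpha / 2, df)   # two-sided critical value
        ncp = d * n ** 0.5                        # noncentrality parameter
        achieved = (1 - stats.nct.cdf(t_crit, df, ncp)
                    + stats.nct.cdf(-t_crit, df, ncp))  # P(reject H0 | d)
        if achieved >= power:
            return n
    raise ValueError("no sample size found")

print(min_sample_size(0.59))  # → 25
```

Under these assumptions the calculation reproduces the reported minimum of 25; a two-sample design with the same inputs would require a larger n per group.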

Participants were recruited through convenience sampling. Based on their lecture time, 118 first-year undergraduates were assigned to the GenAI group or the Peer group. The GenAI group consisted of 56 students, including 31 female students (55.36%), with a mean age of 18.04 (SD = 0.33); the Peer group consisted of 62 students, including 37 female students (59.68%), with a mean age of 18.05 (SD = 0.61). All participants spoke Mandarin as their first language. The instructor assessed students’ baseline writing skills through a writing test as a standard procedure at the beginning of the semester, and the results indicated no significant differences between the two groups. All students were taught by the same English instructor, who had over 20 years of teaching experience.

3.2 Measurements

3.2.1 Student feedback literacy

Feedback literacy was measured with the feedback literacy behavior scale (Dawson et al., 2024). The original scale includes five subscales, but the subscale “provide feedback information” was excluded since students did not provide feedback on GenAI outputs. Consequently, the quantitative measure specifically targeted the receptive and processing dimensions of feedback literacy (seeking, making sense of, using, and managing affect around feedback information), rather than the dimension of feedback provision. The final scale included 19 items: five for seeking feedback information (e.g., “I reflect on the quality of my own work and use my reflection as a source of information to improve my work”), four for making sense of information (e.g., “I carefully consider comments about my work before deciding if I will use them or not”), five for using feedback information (e.g., “I check whether my work is better after I have acted on comments”), and five for managing affect (e.g., “I am open to reasonable criticism about my work”). Students self-rated the frequency of their feedback literacy behaviors on a six-point Likert scale (1 = never, 6 = always). To assess the reliability of the overall scale, Cronbach’s alpha was calculated for the sample in this study. The scale demonstrated excellent internal consistency (α = 0.904).
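Cronbach’s alpha for a k-item scale is α = k/(k−1) · (1 − Σσ²ᵢ / σ²ₜ), where σ²ᵢ are the item variances and σ²ₜ is the variance of the summed scores. A minimal numpy sketch; the ratings below are illustrative, not the study’s data:

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: respondents x items matrix of Likert ratings."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the sum scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative ratings: five respondents, three items on a 1-6 scale
ratings = [[4, 5, 4],
           [2, 2, 3],
           [5, 6, 5],
           [3, 3, 3],
           [6, 6, 5]]
print(round(cronbach_alpha(ratings), 3))  # → 0.957
```

With items that covary strongly, as here, alpha approaches 1; values above 0.9, like the study’s 0.904, are conventionally described as excellent internal consistency.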

3.2.2 Students’ self-regulated interaction with feedback

Students’ interactions with feedback were explored through individual semi-structured interviews. Malecka et al.’s (2022) “elicit-process-use” feedback cycle was used to develop the interview protocol, while Zhan et al.’s (2025) self-regulation feedback model was used to explore how students engage with and potentially self-regulate their feedback interactions. Students were also asked to provide examples of their experience when interacting with feedback (Appendix A).

3.3 Procedure

Over one semester of an English course, students participated in three cycles of self-assessment activities in which they had to interact with external feedback. Students attended a 25-minute training before the intervention, with the GenAI group being trained to use ChatGPT-4o and the peer group being instructed to carry out peer review. A self-reflection worksheet, a peer review worksheet, and GenAI prompt guidelines were also designed according to the features of each feedback source and distributed to students to scaffold the feedback interaction. More specifically, for the peer group, students completed a structured peer review worksheet that required them to address specific dimensions (writing strengths, weaknesses, and future strategies) to facilitate students’ participation and increase the quality of peer comments. For the GenAI group, the model was pre-configured, via preparatory prompts, with the specific writing topics and assessment rubrics to optimize its understanding and reduce students’ burden in prompt engineering. Students were also provided with prompt guidelines to facilitate effective interaction (e.g., “Which is the weakest part of my writing based on the rubrics?”). To ensure a valid comparison between the two groups, the prompt guidelines covered the same core feedback dimensions as the peer group. However, to maintain ecological validity, the design avoided artificially restricting the human-AI interaction. Students could extend their feedback seeking beyond the guidelines (e.g., requesting multiple sample essays) based on their own needs.

In each activity, students spent around 45 min interacting with external feedback, self-reflecting on their writing, and revising it. In the GenAI group, students interacted with GenAI; in the Peer group, they evaluated each other’s work with a high-quality exemplar as a reference. Through these designed activities, students had ample opportunities to engage with feedback. To enhance the internal validity of this quasi-experiment, the two groups of students received identical instruction from the same teacher, used the same curriculum, and were allocated equal instructional time. During the feedback activities, the instructor did not intervene or provide scaffolding, allowing students to engage freely with GenAI and their peers. This ensured that the observed learning outcomes could be primarily attributed to feedback sources rather than instructional variance.

Students completed the pre- and post-surveys on feedback literacy in 15 min before and after the intervention. Then, at the end of the semester, nine students from each group with varying English proficiency levels were invited to an interview (20–30 min). The interviews were conducted in the participants’ first language, Mandarin, and were recorded and transcribed.

3.4 Data analysis

Regarding the comparison of student feedback literacy development between the two treatment groups, we used analysis of covariance (ANCOVA) to compare post-survey scores after controlling for pre-survey scores.
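An ANCOVA with one covariate is equivalent to fitting the regression post = β₀ + β₁·pre + β₂·group and testing the group coefficient β₂, which is the adjusted between-group difference the study reports. A minimal numpy sketch on noise-free synthetic data (all numbers below are illustrative assumptions, not the study’s data; the built-in group effect of 0.17 merely mirrors the magnitude reported later):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
pre = rng.normal(4.0, 0.5, size=2 * n)       # pre-survey scores
group = np.r_[np.zeros(n), np.ones(n)]       # 0 = Peer, 1 = GenAI
post = 1.0 + 0.8 * pre + 0.17 * group        # noise-free, built-in effect 0.17

# ANCOVA as a linear model: post ~ 1 + pre + group
X = np.column_stack([np.ones_like(pre), pre, group])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)

print(round(beta[2], 2))  # adjusted group difference → 0.17
```

In practice the outcome carries residual noise and β₂ is tested with an F- or t-test (e.g., via statsmodels’ formula `ols("post ~ pre + C(group)")`), alongside checks of homogeneity of variance and of regression slopes.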

Adopting a thematic analysis approach (Braun and Clarke, 2012), we utilized Zhan et al.’s (2025) self-regulation feedback model, integrated with Malecka et al.’s (2022) feedback processes, to explore how students self-regulate their feedback behaviors. The students’ responses were coded according to three phases: feedback forethought, feedback control, and feedback retrospect. Then, for each phase, we analysed and compared the responses of the two groups to investigate their interactions with GenAI and peer feedback, identifying both similarities and differences.

4 Results

4.1 Effects on student feedback literacy

Descriptive statistics for the two groups’ feedback literacy before and after the intervention are presented in Table 1.


Table 1. Pretest-posttest comparisons on feedback literacy.

For the between-group difference, a one-way ANCOVA was conducted on post-test scores of student feedback literacy, with pre-test scores as a covariate to account for between-group differences before the intervention. Levene’s test indicated that the homogeneity of variance assumption was not violated (F = 0.57, p = 0.45), and the homogeneity of regression slopes was confirmed (F = 0.20, p = 0.65) (Keppel, 1991). As shown in Table 2, the main effect of group was significant (F = 3.96, p = 0.049), indicating a statistically significant, albeit small, difference between the post-survey scores of the two groups after controlling for pre-survey scores. Table 3 shows that the parameter estimate for group was 0.17 (p = 0.049), indicating that the post-test feedback literacy score of the GenAI group was 0.17 higher than that of the Peer group after controlling for pre-survey scores, with a small effect size (ηp2 = 0.03) (Cohen, 1988).


Table 2. One-way ANCOVA on student feedback literacy.


Table 3. Summary of parameters of one-way ANCOVA on student feedback literacy.

4.2 Students’ self-regulated interaction with different feedback

Using Zhan et al.’s (2025) self-regulation feedback model, we explored how students self-regulated their interactions with GenAI and peer feedback and how this process contributed to students’ feedback literacy development. The behaviors and perceptions of the two groups of students are presented under each self-regulation feedback stage.

Students’ pseudonyms starting with “A” indicate that they are from the GenAI group, while those starting with “P” are from the peer group.

4.2.1 Feedback forethought phase

4.2.1.1 GenAI feedback

Students were aware of the importance of prompts for eliciting intended feedback. Alice had a clear mindset to prompt sample work from GenAI, aligning with her learning habits:

In my past studies, I usually learned by imitating sample essays, so I elicited exemplars from GenAI. (Alice)

Students also prompted GenAI feedback when they identified writing weaknesses but had no idea about how to improve, for instance:

When I used some words, I could tell they didn’t fit, but I didn’t have enough knowledge to replace them. Then I asked GenAI to help me improve. (Amy)

If I want to polish my essay from certain aspects but don’t know how to do it, then I would ask for feedback. (Aurora)

Moreover, students elicited GenAI feedback tailored to their learning goals. To improve her score quickly, Aurora said that she first sought feedback on basic writing aspects (e.g., grammar and expressions) before considering additional feedback on content. These proactive planning behaviors demonstrate students’ appreciation of feedback as a goal-oriented tool, a cornerstone of feedback literacy.

Students also actively and iteratively adjusted prompts to get useful feedback, showing their awareness of trying to best interact with GenAI, for example:

At first, I found it impressive. Later, I got used to it and started asking questions to see if there were better ways to improve my writing. (Ashley)

Ashley further explained that, at first, she focused only on fixing small mistakes, such as spelling errors; later, she paid more attention to deeper aspects, including the logical flow between paragraphs, the connections between sentences, and whether her essay met the requirements of the rubric. This iterative behavior shows the development of evaluative judgment, as the student’s internal standards for “good writing” became more sophisticated.

4.2.1.2 Peer feedback

Students formed their own peer review groups. Most students elicited peer feedback from those who sat near them (e.g., Patrick), who shared a similar major (e.g., Pluto), or who were roommates (e.g., Paul).

Students first reviewed peers’ work before receiving feedback comments. Some students reviewed peers’ work based on needs; for instance, Paula noted that she often used simple sentences in her writing, which motivated her to study sentence structures while reviewing her peers’ work. Hence, as feedback givers, self-regulated students learned from others’ work aligned with their feedback goals.

As feedback receivers, however, only a small number of students were clear about what they wanted or not from peers, as Phillip noted that he hoped to improve his writing through aspects other than content:

Once I decide how to write, I don’t usually change it, so I seldom revise the content of my writing. (Phillip)

Most students in the peer group may not have had clear feedback goals regarding the specific aspects they wanted to receive feedback on. Instead, students were guided by the worksheet, which provided an overview of their strengths and weaknesses. As Pluto said, he appreciated that his peers could give him some feedback, as he sometimes did not know how to improve.

Though without specific feedback goals, students could receive diverse feedback on their work. When Peter talked about the feedback he received from his peers in three activities, he noted:

My peers focused on different writing aspects. For instance, one classmate did well in structure and transitions, so he focused more on these aspects when reviewing my work. But another peer didn’t comment much on these. (Peter)

Peter did not specify his preference for feedback content, but he appreciated the diverse perspectives from different peers, suggesting an increased awareness that he could deliberately elicit feedback from different people once he had identified his feedback goals in the next feedback cycle.

Students in the peer group also expressed some concerns about showing their work to their peers and disturbing others:

I don’t usually feel comfortable. I feel a bit bad about wasting others’ time by asking them to help me improve my writing. (Phoebe)

If I ask someone I don’t know well for a peer review, I worry about whether they’ll agree to review or have other concerns. (Pluto)

These concerns in eliciting peer feedback highlight the challenge of managing affect, a crucial component of feedback literacy, which was more prominent in peer interaction than in human-GenAI interaction.

4.2.2 Feedback control phase

4.2.2.1 GenAI feedback

During this stage, students actively monitored the feedback process to examine whether they got what they needed from GenAI. Ashley complained that sometimes GenAI would completely change the sentence when she only wanted to know where exactly she had gone wrong in a sentence.

Students assessed the GenAI feedback quality, such as grammar and vocabulary accuracy, for instance:

I checked online when GenAI gave me advice on grammar, such as “to do” or “doing,” “with” or “on.” I learned a lot in this process. (Amy)

I will check the phrase GenAI suggested, such as the context in which it should be used, and its meaning. (Alice)

These actions of verifying GenAI feedback clearly exemplify how students practiced their evaluative judgment, a key component of feedback literacy. Sometimes students identified and corrected inaccurate information in GenAI output, as April noted:

Once, it gave me a list of grammar errors in my writing, but one of them looked wrong. I challenged it, and it admitted its mistake. (April)

Apart from accuracy, students also evaluated whether GenAI gave feedback clearly, for instance:

It always said my essay lacked fluency, but didn’t explain how to improve. I only learned what fluency was by asking it to generate a sample essay, but I still didn’t understand why my writing was not fluent. (Alice)

April agreed that after GenAI gave feedback, she would consider the points she had not previously thought of and further ask why it made those comments and how to make improvements.

However, students reported difficulties in evaluating GenAI feedback because of their current English level, showing a metacognitive awareness of the limits of one’s own evaluative judgment, for instance:

Sometimes, AI translates directly, but since one Chinese word can have multiple meanings, just like English words, it can lead to inaccuracies. (Alice)

Students also carefully considered whether they could apply GenAI feedback to their learning. Amy hoped that GenAI was helping her better express her ideas:

I would consider if its output was aligned with my ideas and if I could write it myself. I may skip points I didn’t think of, but if it repeated ideas I already considered but didn’t express well, I rephrased them to clarify. (Amy)

April compared the revised version GenAI put forward with the original version to check whether the coherence of her essay improved. Angelia also carefully considered whether she used GenAI feedback in her writing:

Some words were beyond my current vocabulary level, but I felt that I should try to improve my writing within my existing vocabulary range. I also mainly used feedback about words, collocations, and expressions, as they were easier to improve and made my writing more engaging. (Angelia)

When GenAI feedback could not be used in the current writing task, students (e.g., Alice) noted it down and considered using it in subsequent writing. These behaviors illustrate that students strategically acted on feedback, selectively incorporating it or planning to use it at a later stage.

4.2.2.2 Peer feedback

Some students critically assessed peer feedback, for instance:

I didn’t check if I felt confident, but when I had doubts, I looked them up to verify. I used tools like Youdao Dictionary. (Pluto)

Similar to the GenAI group, students also reported difficulties in evaluating peer feedback. Peggy felt upset when her opinions differed from her peers'. Because both she and her peers struggled to understand the assessment standards, she did not know how to improve her essay. This difficulty demonstrates challenges in both managing affect and exercising evaluative judgment, which can directly impede students' effective use of peer feedback.

However, one student (Paula) explicitly stated that she did not judge her peers' comments because of their friendship. As most students in this study formed groups with their friends, this friendship effect may hinder the development of impartial evaluative judgment, impeding feedback literacy development. Patrick also said that he often trusted his peers because he felt they had a higher English level; he only checked feedback when he believed he had not made a mistake.

Students also considered whether they could use what they learned from peers’ work in their learning, for instance:

Since identifying my own problems is sometimes challenging, reviewing my peers’ essays helped me notice mistakes more easily and encouraged me to reflect on my own weaknesses. (Pluto)

4.2.3 Feedback retrospect phase

4.2.3.1 GenAI feedback

During this stage, students reflected on which specific writing aspects and learning strategies were more effectively addressed by GenAI feedback:

Enhancing vocabulary and sentence structures takes ongoing effort. I get little feedback on fluency, likely because it’s more abstract. For structure, the revised essays show a clear pattern that I can imitate in future writing. (Alice)

I used to prioritize advanced vocabulary, but after receiving feedback, I found that the overall coherence is more important. Once it only used a mid-level word, I felt the quality of my essay improved. (Amy)

Students also reflected on the effectiveness of different prompts. For instance, Alice preferred prompting for direct feedback on her writing rather than asking GenAI to generate sample work, because she wanted to achieve a high score following her own logic rather than adopting the structure of the sample. Similarly, Angelia favored direct feedback for its focus on her writing weaknesses, which she felt helped her learn more effectively.

Apart from the types of prompts used, students also emphasized the importance of language when interacting with GenAI. Ashley felt that using English instead of her native language could lead to errors in translation and interpretation.

In addition, GenAI facilitated students’ self-reflection by giving immediate responses to confirm whether the expected outcomes were achieved, for instance:

I kept improving my essay and found that it gave me a score from 8 to 10. If it still gave the same score after the revision, I’d wonder whether the problem was in other areas or aspects. So I’d try adding things in places I wouldn’t usually think of, even if it meant revising sentences I was originally satisfied with, just to see if I could get a higher score. (Amy)

Amy was motivated to take action upon GenAI feedback to identify areas for improvement, a characteristic of feedback-literate students.

However, students acknowledged the importance of taking an active role in the learning process; otherwise, the feedback was not effective:

I haven’t resolved all the problems it pointed out, probably because I didn’t figure them out myself. (Albert)

These acknowledgments revealed that students deeply appreciated feedback while recognizing the learner's central role in taking action on it.

Several students also demonstrated awareness of the cognitive costs associated with GenAI feedback, for instance:

Sometimes I realized I could write those good expressions if I relied on my own effort. But after using GenAI, I just let it do the thinking for me. I worry that if I keep relying on it, my own skills will become worse. (Amelia)

Amelia acknowledged GenAI's advantages in providing exemplary refinement strategies and improving her writing; however, she also recognized the risk of cognitive offloading, perceiving GenAI as an "external brain" that at times induced metacognitive laziness. Recognizing the need for self-regulation, she tried to step away from GenAI occasionally and practice on her own.

4.2.3.2 Peer feedback

Students felt that the peer review activity improved their writing proficiency; for example, Peter said that peer feedback served as a reminder:

Next time I work on my essays, I’ll pay closer attention to the feedback I received earlier from my peers. (Peter)

Pearl also felt that peer review helped her improve her writing: she used a sentence structure recommended by her peer in the final exam, a characteristic of feedback-literate students who take action by transferring learning from a feedback activity to a summative assessment.

Students also showed preferences when considering the effectiveness of different types of feedback information (i.e., peers’ comments, peer work, and high-quality sample work). Several students appreciated the development of evaluative judgment facilitated by reviewing others’ work, as Peter said:

I evaluated peers’ work by comparing their writing with my own, such as vocabulary and grammar, to see if they have any strengths that I haven’t thought of or that catch my eye. (Peter)

Pluto also expressed a preference for reviewing peers' work over receiving feedback from others. However, most students explicitly stated that they preferred the high-quality sample work provided by the researcher; as Paul mentioned, they did not need to evaluate it and could learn from it knowing it was of high quality. This preference may indicate a desire to bypass the cognitively demanding process of making judgments, highlighting a key area for development.

Students in the peer group also acknowledged the importance of agency in making peer feedback effective, as sometimes they knew the problem but did not seriously try to overcome it (e.g., Pluto). This highlights students' awareness of the critical role of taking action in the effective use of feedback.

5 Discussion

5.1 How students’ self-regulated interactions with GenAI and peer feedback contribute to feedback literacy development

This study showed that GenAI enhanced student feedback literacy compared to human peers, though the small effect size needs to be interpreted with caution. This result echoes previous studies that have revealed a positive effect of GenAI on student feedback literacy development (Gozali et al., 2024; Zhan and Yan, 2025). Appendix Table 1 in Appendix B summarizes the self-regulated learning behaviors observed across phases and groups. Below, we highlight the most salient contrasts that inform our interpretation.

During the feedback forethought phase, the two groups showed different behaviors. Most students in the GenAI group were clear about what feedback they wanted to obtain from GenAI, selectively using the provided prompts based on their learning needs and adjusting those that were ineffective. In the peer group, by contrast, students mainly sought feedback based on social convenience rather than learning goals. Although students valued peers' diverse perspectives in helping them raise audience awareness, only a few reviewed their peers' work based on their learning needs; most did not have specific feedback goals for their own work and appeared to receive feedback from their peers passively. The differences in feedback seeking between the two groups may reflect the different nature of the feedback sources and the instructional scaffolding. Since GenAI requires prompting to provide feedback (Lodge et al., 2023), students must take a more active role in the feedback process and strategically plan what they want from GenAI. Deliberate goal setting and planning can facilitate students' engagement with feedback (Winstone et al., 2017). Prompting feedback tailored to their learning needs helps students better incorporate feedback, since goal setting is highly related to the uptake of feedback (Chen et al., 2025). Moreover, this process encourages students to appreciate feedback as a tool and to consider how to approach it effectively, both essential components of feedback literacy. While the prompt guidelines encouraged the GenAI group to exercise agency based on their learning needs, the structured peer review worksheet, though important for facilitating peer involvement, may have constrained students' agency by limiting their attention to specific writing aspects.
Furthermore, since students often express concerns when seeking feedback from peers, GenAI may mitigate these interpersonal barriers by providing a psychologically safe learning environment, encouraging students to seek feedback with reduced anxiety about being judged by others (Zhan et al., 2025).

During the feedback control phase, both groups assessed feedback quality and considered whether to apply feedback to their learning. Students constantly judged the quality of GenAI feedback and monitored their feedback process, echoing the findings of Zhan and Yan's (2025) study. However, students' evaluation of peer feedback could be influenced by interpersonal factors such as friendship, as revealed in both the student interviews and a previous review (Panadero et al., 2023). Hence, by eliminating negative interpersonal effects, GenAI enables students to interpret and evaluate feedback more critically, a key component of feedback literacy. Although both groups reported some difficulties in evaluating feedback, the peer group seemed to experience more challenges. On the one hand, critically assessing peers' work and giving peer feedback can help students identify weaknesses in their own writing; on the other hand, students found it more difficult to evaluate peers' work explicitly, particularly when the students involved had similar levels of proficiency. This result differs from Xie et al.'s (2025) finding that students lack confidence in evaluating ChatGPT feedback compared to peer feedback, possibly because of differences in discipline and students' academic levels. When deciding whether to utilize feedback, the GenAI group engaged in more deliberation, such as whether the suggested expressions fell within Vygotsky's (1978) Zone of Proximal Development (ZPD) and which issues should be addressed first. The abundant feedback from GenAI, compared to that from human peers, required students to be more selective and thoughtful in deciding whether to incorporate feedback into their learning, thereby supporting the development of feedback literacy.

During the feedback retrospect phase, both the GenAI and peer groups considered the effectiveness of feedback and of their interactions with GenAI or human peers. Students also emphasized the importance of agency in making feedback effective, demonstrating their awareness of the need to take action on feedback, a characteristic of feedback-literate students. Regarding students' reflections, GenAI's immediate responses could help students determine whether their work had improved after applying feedback, helping them adjust their feedback goals in the next self-regulation cycle. However, reflection within the peer group could be constrained because students did not receive enough information to foster deep reflection. Students might only become aware of the effects of peer feedback during exams or future writing assignments, and this delayed awareness can impede their understanding and hinder their ability to use the feedback effectively. Considering the importance of external feedback in student self-reflection (Yan and Brown, 2017), GenAI's immediate and iterative nature facilitates students' reflection process and fosters the habit of taking immediate action on feedback. GenAI's instant feedback supports students' active, iterative engagement with feedback, encouraging metacognitive reflection in the learning process (Qi et al., 2025). However, as noted in the interviews (e.g., Amelia), the convenience of GenAI may lead to cognitive offloading, a concern documented in the extant literature (Ma et al., 2025). Therefore, metacognitive awareness is vital in this phase so that GenAI can serve as a scaffold for reflection rather than a substitute for critical thinking, which would otherwise impede feedback literacy development.

Notably, the comparative advantages of GenAI in scaffolding self-regulated learning and feedback literacy do not negate the unique benefits of peer interaction. While the Feedback Literacy Behavior Scale primarily measures students’ self-reported capacity to seek, process, and utilize feedback information to enhance their work (Dawson et al., 2024), the qualitative data highlight that the peer group still developed feedback literacy, especially in making evaluative judgments through giving feedback. For instance, one student explicitly stated that reviewing peers’ work facilitated self-reflection on their own writing weaknesses, resonating with the distinct value of providing feedback in peer assessment activities (Ion et al., 2019; Nicol et al., 2014; Sato, 2017), which could not be achieved through merely interacting with GenAI feedback. Therefore, while GenAI may be a more efficient tool for scaffolding the technical and self-regulatory aspects of feedback use, peer feedback remains essential for developing other learning skills.

5.2 Implications

The positive results of GenAI in student feedback literacy development suggest that integrating this novel technology into formative assessment activities can be a strategic approach to promoting lifelong, self-directed learning. Although the partial eta-squared value is small (ηp² = 0.03) by Cohen's (1988) benchmarks, the result is still meaningful in an educational context (Kraft, 2020). By providing immediate and personalized feedback, GenAI can effectively mitigate the logistical and interpersonal challenges often associated with traditional peer review activities. However, the small effect size also indicates that GenAI's advantage may not be transformative on its own. As shown by previous research (Lodge et al., 2023) and this study, GenAI can sometimes generate inaccurate information. While this uncertainty has the potential to develop students' evaluative judgment, a risk remains that some learners will accept the feedback without making a judgment. Similarly, while some students demonstrated awareness of the threat of cognitive offloading, not all students have this crucial metacognitive awareness. This variation in cognitive engagement and metacognitive awareness necessitates teacher scaffolding when students use GenAI tools. Instructors can provide scaffolding, such as training, prompt guidelines, and worksheets, to facilitate students' self-regulated interactions with GenAI feedback.

While GenAI was more effective in developing feedback literacy in this study, it cannot replace human peers, as previous researchers have suggested (Zhang et al., 2025). The unique strengths of peer interaction (e.g., raising audience awareness, identifying one's own weaknesses through giving feedback) highlight the value of human peers in feedback literacy development, and the two approaches are complementary. Hence, to better facilitate student self-regulation in the peer review process, the challenges students encounter in peer review should also be addressed. For instance, peer review activities could be redesigned to better support SRL through anonymous feedback (Rotsaert et al., 2018), thereby mitigating friendship bias, and through structured support for active goal setting, planning, and reflection. Moreover, rather than viewing GenAI and peer feedback as competing modalities, educators should design multi-stage learning activities that strategically combine their complementary strengths (Usher, 2025). For instance, GenAI can provide immediate and psychologically safe scaffolding for feedback processing and uptake, while peer review can require students to practice making evaluative judgments by acting as feedback providers.

5.3 Limitations and future studies

While this study provides valuable initial evidence, several limitations inform future research directions. The primary methodological challenge is the non-random assignment design, which, alongside the small effect size, could threaten internal validity due to potential unmeasured group differences. Furthermore, the generalizability of the findings is constrained by the specific context. Future research could replicate this design in other disciplines or with larger and more diverse samples to establish the generalizability of these findings. Future research could also move beyond comparative studies to explore optimal blended models that strategically integrate GenAI and peer feedback to support learners at different stages of learning. Another limitation is the short intervention period (i.e., three feedback cycles over one semester). The limited intervention time and the nascent integration of GenAI in education could introduce a novelty effect, a phenomenon also observed by Huang and Chen (2025). Future studies could extend the intervention period beyond a single semester to mitigate the novelty effect on student behaviors. Finally, the reliance on self-reported surveys for feedback literacy and retrospective interviews for self-regulated feedback behaviors suggests a need for more objective measurement. Future studies could employ methods such as stimulated recall tasks and learning analytics to objectively assess changes in students’ feedback literacy and self-regulated learning.

6 Conclusion

The current study found that interacting with GenAI offers distinct advantages for enhancing student feedback literacy, particularly by scaffolding students to seek, process, and utilize feedback. Student interviews further revealed that GenAI fosters a self-regulatory environment that enables learners to engage with feedback more actively than the peer interactions observed in this context. As GenAI tools become increasingly integrated into education, this research highlights that the value of these technologies lies not only in their ability to provide feedback but also in their potential to foster students' self-regulatory skills, empowering students to make full use of feedback information.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

The studies involving humans were approved by the Human Research Ethics Committee, The Education University of Hong Kong. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.

Author contributions

JG: Methodology, Conceptualization, Investigation, Writing – review & editing, Writing – original draft, Formal analysis. JC: Formal analysis, Writing – review & editing, Methodology, Conceptualization. ZY: Methodology, Supervision, Conceptualization, Writing – review & editing.

Funding

The author(s) declared that financial support was not received for this work and/or its publication.

Acknowledgments

We would like to thank the participating students and the reviewers for supporting this study.

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that generative AI was used in the creation of this manuscript. Generative AI was employed as an intervention tool in this mixed-methods study and to ensure consistency in the use of American English style during manuscript preparation.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Braun, V., and Clarke, V. (2012). Thematic Analysis. Washington, DC: American Psychological Association. doi: 10.1037/13620-004

Butler, D. L., and Winne, P. H. (1995). Feedback and self-regulated learning: a theoretical synthesis. Rev. Educ. Res. 65, 245–281. doi: 10.3102/00346543065003245

Carless, D. (2022). From teacher transmission of information to student feedback literacy: activating the learner role in feedback processes. Active Learn. High. Educ. 23, 143–153. doi: 10.1177/1469787420945845

Carless, D., and Boud, D. (2018). The development of student feedback literacy: enabling uptake of feedback. Assess. Eval. High. Educ. 43, 1315–1325. doi: 10.1080/02602938.2018.1463354

Carless, D., Salter, D., Yang, M., and Lam, J. (2011). Developing sustainable feedback practices. Stud. High. Educ. 36, 395–407. doi: 10.1080/03075071003642449

Chen, S., Zhang, L., and Li, A. W. (2025). Leveraging self-regulated learning to support university students’ feedback literacy in blended learning environments. Assess. Eval. High. Educ. 1–20. doi: 10.1080/02602938.2025.2546349

Chiu, T. K. (2024). A classification tool to foster self-regulated learning with generative artificial intelligence by applying self-determination theory: a case of ChatGPT. Educ. Technol. Res. Dev. 72, 2401–2416. doi: 10.1007/s11423-024-10366-w

Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences, 2nd Edn. Hillsdale, NJ: Erlbaum.

Corbin, T., Tai, J., and Flenady, G. (2025). Understanding the place and value of GenAI feedback: a recognition-based framework. Assess. Eval. High. Educ. 50, 1–14. doi: 10.1080/02602938.2025.2459641

Dawson, P., Yan, Z., Lipnevich, A., Tai, J., Boud, D., and Mahoney, P. (2024). Measuring what learners do in feedback: the feedback literacy behaviour scale. Assess. Eval. High. Educ. 49, 348–362. doi: 10.1080/02602938.2023.2240983

Escalante, J., Pack, A., and Barrett, A. (2023). AI-generated feedback on writing: insights into efficacy and ENL student preference. Int. J. Educ. Technol. High. Educ. 20:57. doi: 10.1186/s41239-023-00425-2

Fan, Y., Tang, L., Le, H., Shen, K., Tan, S., Zhao, Y., et al. (2025). Beware of metacognitive laziness: effects of generative artificial intelligence on learning motivation, processes, and performance. Br. J. Educ. Technol. 56, 489–530. doi: 10.1111/bjet.13544

Google. (2023). What Is Generative AI and What Are Its Applications? Google Cloud. Available online at: https://cloud.google.com/use-cases/generative-ai [Accessed February 25, 2024].

Gozali, I., Wijaya, A. R. T., Lie, A., Cahyono, B. Y., and Suryati, N. (2024). Leveraging the potential of ChatGPT as an automated writing evaluation (AWE) tool: students’ feedback literacy development and AWE tools integration framework. JALT CALL J. 20, 1–22. doi: 10.29140/jaltcall.v20n1.1200

Hoo, H. T., Deneen, C., and Boud, D. (2022). Developing student feedback literacy through self and peer assessment interventions. Assess. Eval. High. Educ. 47, 444–457. doi: 10.1080/02602938.2021.1925871

Huang, K., and Chen, C. H. (2025). Instructional video and GenAI-supported chatbot in digital game-based learning: influences on science learning, cognitive load and game behaviours. J. Comput. Assist. Learn. 41:e70094. doi: 10.1111/jcal.70094

Ion, G., Sánchez Martí, A., and Agud Morell, I. (2019). Giving or receiving feedback: which is more beneficial to students’ learning? Assess. Eval. High. Educ. 44, 124–138. doi: 10.1080/02602938.2018.1484881

Jin, F. J. Y., Nath, D., Guan, R., Li, T., Li, X., Mello, R. F., et al. (2025). Analytics of self-regulated learning in learning analytics feedback processes: associations with feedback literacy in secondary education. J. Comput. Assist. Learn. 41:e70076. doi: 10.1111/jcal.70076

Keppel, G. (1991). Design and Analysis: A Researcher's Handbook, 3rd Edn. Hoboken, NJ: Prentice-Hall.

Ketonen, L., Nieminen, P., and Hähkiöniemi, M. (2020). The development of secondary students’ feedback literacy: peer assessment as an intervention. J. Educ. Res. 113, 407–417. doi: 10.1080/00220671.2020.1835794

Khuder, B. (2025). Feedback-seeking behaviour as a self-regulation strategy in higher education: a pedagogical approach. Assess. Eval. High. Educ. 50, 861–875. doi: 10.1080/02602938.2025.2476621

Kraft, M. A. (2020). Interpreting effect sizes of education interventions. Educ. Res. 49, 241–253. doi: 10.3102/0013189X20912798

Lodge, J. M., Yang, S., Furze, L., and Dawson, P. (2023). It’s not like a calculator, so what is the relationship between learners and generative artificial intelligence? Learn. Res. Pract. 9, 117–124. doi: 10.1080/23735082.2023.2261106

Ma, Y., Zhang, Z., and Liu, C. (2025). The double-edged sword effect of GenAI assistance on university students’ academic performance: evidence from China. Educ. Inform. Technol. 1–26. doi: 10.1007/s10639-025-13850-9

Malecka, B., Boud, D., and Carless, D. (2022). Eliciting, processing and enacting feedback: mechanisms for embedding student feedback literacy within the curriculum. Teach. High. Educ. 27, 908–922. doi: 10.1080/13562517.2020.1754784

Molloy, E., Boud, D., and Henderson, M. (2020). Developing a learning-centred framework for feedback literacy. Assess. Eval. High. Educ. 45, 527–540. doi: 10.1080/02602938.2019.1667955

Nicol, D., Thomson, A., and Breslin, C. (2014). Rethinking feedback practices in higher education: a peer review perspective. Assess. Eval. High. Educ. 39, 102–122. doi: 10.1080/02602938.2013.795518

Oktarina, I. B., Saputri, M. E. E., Magdalena, B., Hastomo, T., and Maximilian, A. (2024). Leveraging ChatGPT to enhance students’ writing skills, engagement, and feedback literacy. Edelweiss Appl. Sci. Technol. 8, 2306–2319. doi: 10.55214/25768484.v8i4.1600

Panadero, E. (2017). A review of self-regulated learning: six models and four directions for research. Front. Psychol. 8:422. doi: 10.3389/fpsyg.2017.00422

Panadero, E., Alqassab, M., Fernández Ruiz, J., and Ocampo, J. C. (2023). A systematic review on peer assessment: intrapersonal and interpersonal factors. Assess. Eval. High. Educ. 48, 1053–1075. doi: 10.1080/02602938.2023.2164884

Panadero, E., and Broadbent, J. (2018). “Developing evaluative judgment: Self-regulated learning perspective,” in Developing Evaluative Judgement in Higher Education. Assessment for Knowing and Producing Quality Work, eds D. Boud, R. Ajjwi, P. Dawson, and J. Tai (London: Routledge), 81–89. doi: 10.4324/9781315109251-9.

Qi, X. I. A., Liu, Q., Tlili, A., and Thomas, K. F. (2025). A systematic literature review on designing self-regulated learning using generative artificial intelligence and its future research directions. Comput. Educ. 240:105465. doi: 10.1016/j.compedu.2025.105465

Rotsaert, T., Panadero, E., and Schellens, T. (2018). Anonymity as an instructional scaffold in peer assessment: its effects on peer feedback quality and evolution in students’ perceptions about peer assessment skills. Eur. J. Psychol. Educ. 33, 75–99. doi: 10.1007/s10212-017-0339-8

Sardi, J., Candra, O., Yuliana, D. F., Yanto, D. T. P., and Eliza, F. (2025). How generative AI influences students’ self-regulated learning and critical thinking skills? A systematic review. Int. J. Eng. Pedagogy 15, 94–108. doi: 10.3991/ijep.v15i1.53379

Sato, M. (2017). “Oral peer corrective feedback. Multiple theoretical perspectives,” in Corrective Feedback in Second Language Teaching and Learning: Research, Theory, Applications, Implications, eds H. Nassaji and E. Kartchava (New York, NY: Routledge), 19–34.

Sutton, P. (2012). Conceptualizing feedback literacy: knowing, being, and acting. Innovat. Educ. Teach. Int. 49, 31–40. doi: 10.1080/14703297.2012.647781

Tai, J., Ajjawi, R., Boud, D., Dawson, P., and Panadero, E. (2018). Developing evaluative judgement: enabling students to make decisions about the quality of work. High. Educ. 76, 467–481. doi: 10.1007/s10734-017-0220-3

Tai, J. H. M., Canny, B. J., Haines, T. P., and Molloy, E. K. (2016). The role of peer-assisted learning in building evaluative judgement: opportunities in clinical medical education. Adv. Health Sci. Educ. 21, 659–676. doi: 10.1007/s10459-015-9659-0

Tomczak, M., Tomczak, E., Kleka, P., and Lew, R. (2014). Using power analysis to estimate appropriate sample size. Trends Sport Sci. 4, 195–206.

Usher, M. (2025). Generative AI vs. instructor vs. peer assessments: a comparison of grading and feedback in higher education. Assess. Eval. High. Educ. 50, 912–927. doi: 10.1080/02602938.2025.2487495

Vygotsky, L. S. (1978). Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA: Harvard University Press.

Winstone, N. E., Nash, R. A., Parker, M., and Rowntree, J. (2017). Supporting learners’ agentic engagement with feedback: a systematic review and a taxonomy of recipience processes. Educ. Psychol. 52, 17–37. doi: 10.1080/00461520.2016.1207538

Wood, J. (2021). A dialogic technology-mediated model of feedback uptake and literacy. Assess. Eval. High. Educ. 46, 1173–1190. doi: 10.1080/02602938.2020.1852174

Xie, X., Zhang, L. J., and Wilson, A. J. (2025). Comparing ChatGPT feedback and peer feedback in shaping students’ evaluative judgement of statistical analysis: a case study. Behav. Sci. 15:884. doi: 10.3390/bs15070884

Yan, Z., and Brown, G. T. (2017). A cyclical self-assessment process: towards a model of how students engage in self-assessment. Assess. Eval. High. Educ. 42, 1247–1262. doi: 10.1080/02602938.2016.1260091

Yan, Z., and Carless, D. (2022). Self-assessment is about more than self: the enabling role of feedback literacy. Assess. Eval. High. Educ. 47, 1116–1128. doi: 10.1080/02602938.2021.2001431

Yan, Z., Lao, H., Panadero, E., Fernández-Castilla, B., Yang, L., and Yang, M. (2022). Effects of self-assessment and peer-assessment interventions on academic performance: a meta-analysis. Educ. Res. Rev. 37:100484. doi: 10.1016/j.edurev.2022.100484

Zhan, Y., Boud, D., Dawson, P., and Yan, Z. (2025). Generative artificial intelligence as an enabler of student feedback engagement: a framework. High. Educ. Res. Dev. 44, 1289–1304. doi: 10.1080/07294360.2025.2476513

Zhan, Y., and Yan, Z. (2025). Students’ engagement with ChatGPT feedback: implications for student feedback literacy in the context of generative artificial intelligence. Assess. Eval. High. Educ. 1–14. doi: 10.1080/02602938.2025.2471821

Zhang, L., Li, L., Jiang, J., and Zou, B. (2025). Exploring the impact of diverse feedback sources on learners’ performance, motivation, and preference in a translation course: tutor, peer, and GPT insight. Thinking Skills Creat. 59:102042. doi: 10.1016/j.tsc.2025.102042

Zimmerman, B. J. (2000). “Attaining self-regulation: a social cognitive perspective,” in Handbook of Self-Regulation, eds M. Boekaerts, P. R. Pintrich, and M. Zeidner (New York, NY: Academic Press), 13–39. doi: 10.1016/B978-012109890-2/50031-7

Appendices

Appendix A

Interview questions

1. How did you get feedback from GenAI/peers?

2. What types of feedback did you get from GenAI/peers? Which kind of feedback do you think is more useful or not useful?

3. How did you process feedback from GenAI/peers? How did you evaluate its quality?

4. How did the feedback from GenAI/peers help you reflect on your writing?

5. How did the feedback from GenAI/peers influence your future writing?

Appendix B

Appendix Table 1. Students’ self-regulated feedback behaviors in two groups. [Table available in the online version of the article.]

Keywords: educational technology, feedback literacy, generative artificial intelligence, mixed methods, self-regulated learning

Citation: Gu J, Chen J and Yan Z (2026) Fostering feedback literacy by scaffolding self-regulated feedback: a comparative study of GenAI and human peers. Front. Educ. 10:1736446. doi: 10.3389/feduc.2025.1736446

Received: 31 October 2025; Revised: 21 December 2025; Accepted: 22 December 2025;
Published: 12 January 2026.

Edited by:

Danny Glick, University of California, Irvine, United States

Reviewed by:

Jonathan Chee, Temasek Polytechnic, Singapore
George Gyamfi, Flinders University, Australia

Copyright © 2026 Gu, Chen and Yan. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Zi Yan, zyan@eduhk.hk

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.