
REVIEW article

Front. Educ., 08 April 2024
Sec. Educational Psychology
Volume 9 - 2024 | https://doi.org/10.3389/feduc.2024.1376505

Peer assessment to promote self-regulated learning with technology in higher education: systematic review for improving course design

  • 1Innovation and Educational Improvement Research Group, Rey Juan Carlos University, Madrid, Spain
  • 2Elkarrikertuz Research Group, University of the Basque Country, Donostia-San Sebastián, Spain

Peer assessment is one approach to developing self-regulation of learning. When students evaluate the work of peers, they employ metacognitive strategies of critical reflection, which improve their own learning, especially if they provide evaluative feedback and/or suggestions for modification. The aim of this systematic review is to examine how technology can facilitate self-regulation of learning through peer assessment activities, focusing on higher education. To achieve this objective, we searched WoS and Scopus, obtaining 15 publications that combine the four search terms: self-regulated learning, peer assessment, higher education, and technology. These four terms had to appear in the title, abstract, or keywords, ensuring that the topic under review is central to each publication. The results are analyzed using the tripartite model for systematic review, which has three phases: description, synthesis, and critique. Based on the findings, a proposal is made to improve the design of courses in virtual classrooms, focusing on Moodle, so that peer assessment enhances self-regulated learning. The proposal highlights the possibility of configuring a rubric in the virtual classroom to guide the evaluation, together with mandatory comments justifying the grades awarded; this helps the student reflect on what is wrong, why, and how to improve. It also highlights the ease of randomly assigning a specific number of submissions per reviewer, or reviewers per submission, and of making the whole process completely anonymous. Technology also allows short submission and review deadlines to be maintained for near-instant feedback, as deadlines can be configured with a single click. Finally, and related to this, Moodle can reopen the submission phase, so that an improved version based on feedback can be submitted, and the evaluation phase, so that reviewers can check that the proposed improvements have been made. This further supports the application of metacognitive strategies.

1 Introduction

Lifelong learning has been included as one of the Sustainable Development Goals (SDGs) (United Nations, 2023) to face the complex context in which we find ourselves in the 21st century. To this end, the European Commission proposed the Learning to Learn competence as a means to achieve lifelong learning (Hoskins and Fredriksson, 2008), and, as Lluch and Portillo (2018) state, self-regulated learning (SRL) is essential to developing this competence in higher education. SRL is a process composed of thoughts, emotions, and planned actions aimed at achieving a personal goal, that is, a set of strategies that students can activate when working toward their goals (Zimmerman, 2002). Thus, SRL enables students to manage their own learning process.

One way of working on SRL is through Peer Assessment (PA), which refers to the analysis and assessment of the quality of a peer's product or performance through a process of critical reflection (Roberts, 2006; Topping, 2009). The level of reflection depends on whether the peer assessment consists only of proposing a score on the quality of the work (summative PA) or also includes feedback derived from that reflection (formative PA). Feedback is no longer seen only as a process of transmitting information; it is now also approached with a focus on learning (Winstone et al., 2022). Black and Wiliam (2009) define feedback in formative assessment as the information that enables students to advance in their learning. Given that feedback is a key element of instruction because of its high effectiveness, different approaches have been proposed to study how to deal with feedback in the classroom.

Thus, adopting a formative approach to PA enables students to develop metacognitive skills, helping each other to identify strengths and weaknesses, and to plan and guide their learning (Topping, 2009). Metacognition includes metacognitive knowledge, related to process evaluation, and metacognitive skills, related to feedback mechanisms that facilitate action planning and performance evaluation (Veenman et al., 2006). Metacognition has been shown to be a fundamental component of self-regulated learning, including processes such as goal setting, planning, progress monitoring, and reflection (Azevedo and Gašević, 2019).

Peer feedback in these activities refers mainly to the performance of the task, but also to the process and even formal aspects of writing. This leads to improvements in the task and in future learning (Ion et al., 2016). Therefore, most studies tend to assume a formative PA (Alqassab et al., 2023). In this case, feedback becomes feedforward, which can be positive or negative, and, if negative, must be accompanied by proposals for improvement (Topping, 2018).

This results in students employing advanced-level metacognitive strategies to provide feedback during peer assessment, especially if they are asked to provide evaluative comments and/or suggestions for modification on the assessed work (Liu and Lin, 2007). Furthermore, Van Helden et al. (2023) conclude that, in many cases, PA promotes a better understanding of the assessment criteria, which in turn improves judgement and the quality of feedback comments. Thus, students can learn from the feedback provided by their peers, but also through the metacognitive reflection involved in having to justify what they have done (Liu and Carless, 2006).

For this reason, the application of metacognitive strategies during feedback facilitates SRL (Butler and Winne, 1995; Winne, 1996). Moreover, this reflection is enhanced if it is implemented together with a backward evaluation process, in which the evaluated student assesses the feedback received from their reviewer. This helps students reflect on their work and use the feedback to improve the assessed product (Misiejuk and Wasson, 2021).

At this point, the design of PA activities must take into account the results found so far in the literature. On the one hand, Van Zundert et al. (2010) showed that students' training and prior experience with PA influenced the quality of the activities, so some kind of training is necessary to succeed in this type of activity. To improve feedback processes, more effective processes based on teacher feedback literacy need to be developed. With teacher feedback literacy, an approach based on shared responsibility between teachers and learners can be achieved (Carless and Winstone, 2023). This is the only way to develop feedback literacy in students, so that they are able to deal adequately with task assessment.

Students' feedback literacy involves developing the ability to take advantage of feedback opportunities by actively participating in feedback processes (Malecka et al., 2022). To this end, these authors propose three mechanisms to be taken into account in the curriculum: eliciting, processing, and enacting feedback. For example, it has been shown to be important for students to manage their perceptions and attitudes, as well as to have greater confidence and agency in the feedback process (Little et al., 2024). One technique to achieve this is the co-assessment of examples that would help students develop feedback literacy (Carless and Boud, 2018).

On the other hand, Panadero and Alqassab (2019) have concluded that, according to the studies reviewed, anonymous PA improves students' perception of the value of the learning provided through PA. This is because feedback is more critical and tends to lead to higher achievement, especially in higher education. If, in addition, authors are paired with reviewers with similar performance, self-regulation will be more effective (Zhang and Schunn, 2023).

All this should have an impact on improving the activity itself, not only on students learning through the help they receive from their peers. The activity should also be used for metacognitive reflection: asking what they need to learn in order to complete it, what is important, how they should apply it, why it can be useful to them, and so on. At this point, it is important to consider the potential of PA to improve the self-regulation of one's own learning, so we will focus on reviewing the literature on PA as a resource for improving SRL.

Currently, technology can be a great ally for the use of PA, as different tools can be used, such as dedicated web-based PA systems, Learning Management Systems (LMSs), social media, or mobile applications (Zheng et al., 2019). LMSs, such as Moodle, are widely used in online university courses (Gamage et al., 2022), but can be applied to any modality that benefits from a virtual classroom. Although originally used as after-class tools, technology-facilitated PA activities are increasingly used within the classroom (Fu et al., 2019).

Therefore, to secure the benefits of PA and provide the necessary scaffolding, as Goh et al. (2019) argue, the Moodle workshop activity allows all these elements to be incorporated, for instance by introducing examples in the workshop itself. It also provides assessment guidelines, such as rubrics, which include the possibility of adding feedback comments with a formative approach. In addition, it facilitates the random and anonymous distribution of work among many students.

In addition, to facilitate SRL, technology has enabled the development of the Open Learner Model (OLM) (Hooshyar et al., 2020). This design facilitates the organization, monitoring, and regulation of learning in virtual environments, thanks to internal feedback through self-assessment of one's learning and, additionally, through external feedback from the teacher or peers (Chou and Zou, 2020). This is important because one of the weaknesses of higher education students in virtual environments is identifying the learning objectives and assessment criteria (Ortega-Ruipérez and Castellanos-Sánchez, 2023), which is necessary to provide good feedback in PA activities.

Good technology design can also help improve the quality of skills such as argumentative writing, according to Noroozi et al. (2023). This is because, if PA feedback is presented in an appropriate way, it can facilitate the reflection needed to improve the original work in situations involving a backward assessment process (Misiejuk and Wasson, 2021).

This systematic review aims to establish how technology can facilitate SRL, that is, the ability to reflect on tasks, through PA activities. The research questions that define the focus of the study therefore concern how technology can support PA to facilitate SRL in higher education. Firstly, a question is posed about the current state of research on the topic, as we intend to focus solely on virtual environments. Secondly, we aim to collect and provide guidelines for the design of PA activities that facilitate SRL in technological environments.

2 Materials and methods

This study is a systematic review because it follows a specific protocol, uses an explicit and reproducible method, and attempts to critically appraise and synthesize the subject matter. Specifically, this review includes a narrative synthesis, an approach to systematic review that attempts to synthesize the findings of multiple studies (Popay et al., 2006). As these authors confirm, a systematic review with a narrative synthesis usually contains a limited number of publications, unlike other approaches such as meta-analysis. Focusing on a smaller number of publications makes it possible to select only those that best address the topic, allowing a more adequate critical analysis. For this systematic review of the most relevant literature on the topic, a five-stage analysis protocol was followed, similar to that of other systematic reviews on educational innovation topics (Ramírez and Lugo, 2020; Gros and Cano, 2021).

In phase 1, the research questions were posed, concerning how technology can support PA to facilitate SRL in higher education. The first question addresses the current state of research on the topic; the second seeks appropriate guidelines for the design of PA activities that facilitate SRL.

RQ1: What is the state of the literature on how peer assessment facilitates self-regulated learning?

RQ2: What design guidelines for peer assessment activities can we follow for our students to enhance their self-regulated learning?

In phase 2, the search process was established. Using the Web of Science and Scopus databases, the search was limited to articles from the last 10 years (2014–2023). This time frame was considered appropriate because the last decade can be regarded as the period of adoption and popularization of virtual classrooms in higher education. Search strings (SS) were applied to "Article title, Abstract, Keywords," combining the selected words and adding a new filter in each iteration: Peer assessment/Peer feedback + Metacognition/Self-regulated learning/Self-regulation of learning + Higher education/Tertiary education/University + Technology/Moodle.

It is important to start the search by combining two keywords (Peer assessment + Self-regulated learning), as our study focuses on how the former can benefit the latter. In this way, SS1 has collected the following search criteria: (“peer assessment” OR “peer feedback”) AND (“self-regulated learning” OR “self-regulation of learning” OR metacognition).

Subsequently, the educational stage has been added because it is understood that the design of the activities may be different depending on the age of the students. Thus, SS2 includes the above search criteria plus the one relating to higher education: (“peer assessment” OR “peer feedback”) AND (“self-regulated learning” OR “self-regulation of learning” OR metacognition) AND (“higher education” OR “tertiary education” OR university).

Finally, the keyword on technology was included to obtain only those studies that incorporate it, so three search strings were used in total. The final search, SS3, therefore included all the search criteria needed to answer the research questions: ("peer assessment" OR "peer feedback") AND ("self-regulated learning" OR "self-regulation of learning" OR metacognition) AND ("higher education" OR "tertiary education" OR university) AND (technology OR Moodle).
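To make the iterative construction of the three strings explicit, the following minimal Python sketch assembles SS1–SS3 from the four term groups. The database field tags (e.g., TITLE-ABS-KEY(...) in Scopus, TS=(...) in Web of Science) are omitted here for brevity.

```python
# Minimal sketch: assembling the three search strings (SS1-SS3) from phase 2.
# Database field tags (e.g., TITLE-ABS-KEY(...) in Scopus, TS=(...) in WoS)
# are omitted for brevity.

TERM_GROUPS = [
    '("peer assessment" OR "peer feedback")',
    '("self-regulated learning" OR "self-regulation of learning" OR metacognition)',
    '("higher education" OR "tertiary education" OR university)',
    '(technology OR Moodle)',
]

def search_string(n_groups: int) -> str:
    """Concatenate the first n_groups term groups with AND (SS1 uses two groups)."""
    return " AND ".join(TERM_GROUPS[:n_groups])

for n, label in zip((2, 3, 4), ("SS1", "SS2", "SS3")):
    print(f"{label}: {search_string(n)}")
```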

The number of articles found with each search string is summarized in Table 1. The result was 27 articles. Before continuing with the next phase, we identified the articles that appeared in both databases, detecting a total of 9 duplicates, so the number of articles for the first review was finally 18.

Table 1. Summary of the number of selected papers.
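As an illustration of the deduplication step just described, the short sketch below keeps the first occurrence of each record across the two database exports. Matching on a normalized DOI is an assumption made for illustration; fuzzy title matching would be a natural fallback for records without a DOI.

```python
# Sketch of the deduplication step between the WoS and Scopus exports.
# Matching on a normalized DOI is an assumption made for illustration.

def deduplicate(records: list[dict]) -> list[dict]:
    """Keep the first occurrence of each DOI across both database exports."""
    seen: set[str] = set()
    unique = []
    for rec in records:
        key = rec["doi"].strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

merged = [
    {"doi": "10.1016/j.compedu.2016.03.015", "source": "Scopus"},
    {"doi": "10.1016/J.COMPEDU.2016.03.015", "source": "WoS"},  # same article
]
print(len(deduplicate(merged)))  # -> 1
```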

In phase 3, two additional criteria were defined for the inclusion or exclusion of articles after reading the abstracts. We proceeded to (1) exclude articles that were not relevant to the object of study because they pursued other objectives and self-regulation of learning through PA was not their central element; after this screening, 15 articles remained. And (2) we selected only those related to technological tools that allow the design of activities similar to those supported by an LMS such as Moodle, i.e., whose results can be applied to any virtual classroom. In this case, it was not necessary to eliminate any article, as a procedure or conclusion relevant to the review could be drawn from all of them. The evaluation of the selected studies has not always followed the same approach; however, all of them address the use of technology to facilitate peer assessment in higher education as their main theme, and all of them show positive aspects to be taken into account for an optimal use of technology. Therefore, the final number of articles analyzed was 15 (Figure 1).

Figure 1. Procedure for the final selection of papers.

It is important to note here that this systematic review attempts to address a very specific topic. We therefore preferred to have fewer articles, while ensuring that the articles reviewed allow us to fully answer the research questions. Thus, these 15 articles allow us to answer how a technological tool in higher education can simplify peer assessment to facilitate self-regulated learning. This is considered a sufficient number of articles as a starting point to move forward on this topic. In line with Popay et al. (2006), a systematic review with a narrative synthesis favors a small number of publications, which allows us to focus on those that best address the two research questions posed.

The selected articles are representative and of high quality, as the search was carried out only in the most reliable databases: Web of Science and Scopus. In addition, all these publications have passed a rigorous blind peer review process, in which experts in the field decided that they add value to the topic. Therefore, we did not want to discard any of them, regardless of whether they are journal articles, book chapters, or conference papers. In the case of conference papers, the selected publications do not merely contain an abstract of an experience; the experience is expanded through results that demonstrate its usefulness.

In phase 4, data selection and extraction were done in an Excel document (omitted for blind review, data in figshare), systematizing the information around questions important for the consideration and generalizability of the results of the reviewed articles: sample size, duration (<1 week, 2–5, 6–10, or more than 10 weeks), technology (web, LMS, or social media), assignment (system, professor, students), evaluation method (quantitative, qualitative, both), with or without scaffolding, organization (group, individual), number of evaluators per task, number of tasks per evaluator, and course modality (in-person, blended, online). No other variables were considered relevant given the narrative nature of this systematic review.
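To make the coding scheme concrete, the extraction record can be expressed as a small data structure, as in the sketch below. Field names mirror the variables listed above; the example values are purely illustrative and not taken from any specific reviewed study.

```python
# Sketch: the phase-4 coding scheme expressed as a dataclass.
# Example values are illustrative only.
from dataclasses import dataclass
from typing import Literal

@dataclass
class ReviewedStudy:
    sample_size: int
    duration_weeks: Literal["<1", "2-5", "6-10", ">10"]
    technology: Literal["web", "LMS", "social media"]
    assignment: Literal["system", "professor", "students"]
    evaluation_method: Literal["quantitative", "qualitative", "both"]
    scaffolding: bool
    organization: Literal["group", "individual"]
    evaluators_per_task: int
    tasks_per_evaluator: int
    course_modality: Literal["in-person", "blended", "online"]

example = ReviewedStudy(
    sample_size=52, duration_weeks="6-10", technology="LMS",
    assignment="system", evaluation_method="both", scaffolding=True,
    organization="individual", evaluators_per_task=3, tasks_per_evaluator=3,
    course_modality="online",
)
```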

Finally, in phase 5, the tripartite model for systematic review (Daniel and Harland, 2017) was applied. First, a description of the results of each of the 15 selected contributions was made; then a synthesis of the most important contributions was elaborated; and finally, a critique was developed to compile guidelines for the design of PA activities with technology, specifically oriented to an LMS such as Moodle. The critique is presented as the discussion of results, as the guidelines obtained relate to the results of previous research.

3 Results

3.1 Description

First, a summary table (Table 2) was prepared with the basic information of the 15 selected articles. Second, after a detailed reading of each publication, we describe the most important information each article provides for our purpose, i.e., how PA can facilitate SRL in students. In some cases, the procedure followed in the PA has also been included in the synthesis, as it was considered important to capture key aspects of the design of PA activities that have proved useful for SRL.

Table 2. Summary of contributions on our topic in each publication.

Janson et al. (2014) designed their PA to support interaction for awareness and reflection, and thus improve learning outcomes. Regarding the procedure, after preparing the material with a flipped classroom approach, students propose solutions in groups and must comment on the proposals of the other groups. After receiving feedback from the other groups, each student must reflect individually on the strengths and weaknesses of their proposals, in order to revise and improve them based on the feedback.

García-Jiménez (2015) makes a proposal, based on the literature, in which the teacher guides reflection on learning at the beginning, progressively giving students a greater leading role. After this first scaffolding step, peers guide and monitor the process of students' reflection on their learning and its outcomes, providing feedback on whether the reflection is sufficient and appropriate.

García-Jiménez et al. (2015) review and discuss how PA helps students understand what is required of them in the task, as it is necessary to analyze and discuss the elements of the task in order to assess it. If teachers allow students to participate in the design of assessment tasks, criteria, and benchmarks, their understanding improves so that they can assess with quality. In other words, teachers should not impose assessment criteria, but listen to how students interpret what they mean until they understand them. This is very important for the development of SRL: students should understand that building feedback for their peers is more important than receiving it. Constructing feedback that allows the peer to progress in their learning, feedforward, helps the learner identify strategies to improve their own learning. In addition, receiving good feedback encourages the learner to maintain or modify their effort on the task. At this point, technology can facilitate an appropriate interaction, a dialogue, between assessor and assessed, so that personalized feedback is achieved to promote SRL. Finally, they point out that technology can also provide feedback in formats other than written feedback, which can facilitate its reception.

Hsu and Huang (2015) found that, in PA, grading was quite similar to teacher grading and more realistic than self-assessment. PA was positively valued in two ways: when students receive peer evaluation, even when the feedback is negative, it helps them to be reasonable and appreciate the possibilities for improvement; and, in relation to SRL, when students evaluate, it helps them compare with their own work, know where not to make mistakes, and improve their work. Moreover, to improve SRL, PA feedback is better than giving a mark, but it should be guided so that students learn to reflect well on what is expected from the task. Written feedback can be misinterpreted, so it is recommended to accompany it with face-to-face feedback.

Marín and Pérez (2016) used PA in preservice teacher training, using the Moodle workshop tool to facilitate PA, with formative assessment strategies and feedback management. In the last phase of the Moodle workshop activity, they added an activity in which students had to self-assess and reflect on the feedback received in their e-portfolios. By reflecting on their work from the perspective of others, they were able to become aware of how to improve it, a phase in which they work from an SRL perspective. Furthermore, in this experience, the PA grade was given a weight in the course grade, as the average of the evaluations of 3 peers is quite close to the grade given by the teacher. One aspect that they consider necessary in future proposals is a new "Conferencing" phase (Reinholz, 2016) so that students can discuss with their peer evaluators. These authors do not consider technology in this new proposed phase, so it would be necessary to work out how to keep the first part of the PA anonymous and reveal the identity of the assessors only for the new phase.

Ng (2016) proposed PA for the evaluation of wikis created by working groups, which were presented in class to the rest of the groups. A representative from each assessing group was asked to give at least one positive observation and one suggestion for improvement via Moodle. Moreover, each student had to complete a rubric within 3 days of the presentation. Afterwards, each group reviewed all the feedback received to improve their work. In conclusion, although some students did not find Moodle a good setting for providing feedback, they did note that in direct interactions they were unwilling to be critical, so anonymous interaction through technology did help to produce more critical feedback.

Raposo-Rivas and Gallego-Arrufat (2016) highlighted, like other studies reviewed, a greater understanding of the evaluation process when carrying out PA. In this case, a tool is used to assess competences and knowledge of other group members, but it is not done anonymously, so the comments reveal an assessment based on cronyism rather than criticism.

Albano et al. (2017) consider that PA contributed to strengthening the development of students' explanation and argumentation processes. By grading, students are also assessing their own learning. On the other hand, by using a method that triangulates the grades of 3 peers beyond the arithmetic mean (giving more weight to the most similar grades), the quality of the grading is usually adequate. In some specific cases, the teacher must intervene to provide high-quality assessments, which is easily applicable in Moodle.
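As an illustration of such a triangulation, the sketch below weights each of the three grades by its closeness to the other two, so that an outlying grade counts less than the two most similar ones. This is our own simplified scheme under that assumption, not the exact method reported by Albano et al. (2017).

```python
# Hedged sketch of a similarity-weighted triangulation of three peer grades.
# Illustrative scheme only, not the exact method of Albano et al. (2017):
# each grade is weighted by the inverse of its mean distance to the other
# grades, so the two most similar grades dominate the result.

def triangulated_grade(grades: list[float], eps: float = 1e-6) -> float:
    weights = []
    for i, g in enumerate(grades):
        others = [h for j, h in enumerate(grades) if j != i]
        mean_dist = sum(abs(g - h) for h in others) / len(others)
        weights.append(1.0 / (mean_dist + eps))  # closer to peers -> larger weight
    return sum(w * g for w, g in zip(weights, grades)) / sum(weights)

# The outlying grade of 3.0 barely moves the result (arithmetic mean: 5.83).
print(round(triangulated_grade([7.0, 7.5, 3.0]), 2))  # ~6.32
```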

Blau and Shamir-Inbal (2017) proposed a PA procedure with three phases: first, critically reviewing an example in pairs; then, assessing their own task based on evaluation criteria; and finally, evaluating the results of their peers, posing questions and suggesting improvements. Metacognitive thinking thus occurred before PA, as a way of monitoring learning. During PA, SRL was also produced by applying critical thinking to the analysis of other tasks, which students used to learn. At first they had difficulty keeping up and did not learn, but later they learned to self-organize in order to learn independently and flexibly.

Fernández-Ferrer and Cano (2019) carried out PA activities in all the topics of the subject and observed an improvement in quality after each iteration. They conclude that both performing PA and receiving feedback from peers were useful for students' own learning, as they improved the relevance of what is requested in the assignment.

Roman et al. (2020) developed a PA tool that allows comments to be added next to the assessed content, making it easier to know what each comment refers to. In this tool, different assessors evaluate a task over several iterations. Receiving multiple perspectives on the work over several iterations helped to further challenge the content and the task, resulting in more useful feedback to apply to future tasks (feedforward). An improvement over an LMS is that students must incorporate feedback from all peers, one by one (in an LMS, they paid more attention to some comments than others).

Swartz (2020) proposed solving two ill-defined problems, with the PA consisting of metacognitively reflecting on the partner's proposed outcome and helping the partner to continue solving it. The lack of scaffolding meant that many students focused on figuring out how to use the tool, or how or when to provide feedback, and could not focus on reflecting and helping to solve the ill-defined problem. According to the feedback collected from participants, in order to improve the "assessment as learning" approach so that SRL could be fostered, a second round of feedback and a further extension of learning should be included.

Wang (2020) conducted the study in phases: in the first phase, each student was required to provide feedback to the groups presenting the project (in the middle and at the end of the project) by submitting a comment for discussion at the end of the presentation. In the second phase, feedback came from the learning journals anonymously. Students felt that the first phase was more useful for self-regulating their learning, as it was instantaneous. However, they also mentioned that the feedback in the second phase could be more critical and comprehensive because it was anonymous. The third phase, with instant and anonymous feedback, was the most highly rated, as it was the most helpful for self-reflection.

Zhu et al. (2023) sought to exploit the metacognitive advantages of PA carried out immediately after task submission, which provides critical and comprehensive feedback. It was especially useful for lower-achieving students, in whom a greater improvement was observed. In designing the programme, the first update was to keep students in the same working groups, so that iterations could be applied gradually until the final product was achieved.

Lluch and Cano (2023) decided to include different activity options in Moodle for their PA activity. In addition to the workshop, the activity par excellence for PA in Moodle, they included forums to discuss the assessment criteria and improve understanding of why and for what purpose PA is introduced; open questionnaires to encourage self-regulation (objectives, planning, etc.); forms to integrate changes into the activities; and questionnaires to explain actions in the following phases. They end the activity with a reflection phase that enhances SRL after PA, including a final task with the new version of the activity incorporating improvements based on the feedback. After their experience, they conclude that SRL, associated with the Learning to Learn competence, should be developed throughout higher education, and that it is necessary to plan self-assessment and PA experiences to develop it. They propose adapting these experiences to different levels of SRL across different courses, starting with scaffolded activities and moving toward performing these tasks autonomously.

3.2 Synthesis

Peer feedback can be highly relevant in improving students' learning. This improvement is especially evident in lower-achieving students, where a greater improvement is observed after reflecting on the feedback received (Zhu et al., 2023).

Firstly, receiving good peer feedback helps students to be reasonable about failures, appreciating that there is room for improvement (Hsu and Huang, 2015). Receiving good feedback, which allows one to reflect on the comments to improve for the future, is known as feedforward. If the feedback is constructive, the learner should be able to reflect individually on their strengths and weaknesses, so that they can revise and improve the task based on the feedback (Janson et al., 2014).

In addition, the use of PA activities facilitates the teacher's work in situations where there are many students and personalized feedback cannot be given to each one. The grades provided in PA are often quite similar to those of the teacher, more so than students' own self-assessment grades (Hsu and Huang, 2015). If, in addition, an average of 3 peers' grades is used, the grade is quite similar to that of the teacher (Marín and Pérez, 2016). Triangulation methods, which are very easy to apply with technology, can even be used, and then the quality of the grade is very high (Albano et al., 2017). As these authors point out, in some cases teacher intervention may be necessary to adjust the grades, which is easy to implement with technology such as Moodle.

Combining group and individual feedback can also help students know whether they are proceeding correctly, as group feedback helps them better understand how to do the task (Ng, 2016). This can be done in different ways: for example, in phases, with a group phase first and an individual phase afterwards to apply what they have learned (Janson et al., 2014), or according to the type of feedback, with group feedback being qualitative and individual feedback being quantitative with a rubric (Ng, 2016).

Secondly, and much more important for PA activities to benefit students' SRL, is the provision of feedback to their peers. Having to provide feedback helps students understand the learning and task objectives, as they must analyze the task elements for assessment (García-Jiménez et al., 2015) and the assessment process itself (Raposo-Rivas and Gallego-Arrufat, 2016).

In this sense, understanding how to provide feedback is more important for SRL than receiving it in order to implement improvements, since generating good feedforward helps students themselves to identify the strategies that will improve their learning (García-Jiménez et al., 2015). Providing feedback also improves the processes of explanation and argumentation that facilitate deep learning (Albano et al., 2017). By evaluating peers, students evaluate their own learning, as they must compare both tasks to know where they should not make mistakes (Hsu and Huang, 2015).

In addition, if students are involved in the design of the assessment tasks, defining the criteria and reference levels, their understanding of the objectives of the tasks improves, and thus they can assess with greater precision and quality (García-Jiménez et al., 2015).

And, if possible, it is very beneficial for learners to maintain a dialogue between assessor and assessed, i.e., to facilitate several iterations that help to personalize the feedback, so that it is properly understood and integrated, which especially promotes SRL (García-Jiménez et al., 2015).

Thirdly, there is scaffolding, which has been found to be essential for students to learn how to provide good feedback based on the proposed assessment criteria, as they must learn to reflect on what is expected from the task (Hsu and Huang, 2015). If scaffolding is not provided, it is very likely that students will not know how to provide good feedback, as in Swartz (2020), where students focused more on how to use the tool or perform the task than on reflecting on the content to help their peer improve the task.

The first and most common way to create this scaffolding is with the help of the teacher, who should guide the reflection at the beginning, and gradually give more of a leading role to the students (García-Jiménez, 2015). The second way to create scaffolding is with the support of peers. Peers can guide and monitor the reflection process to give feedback on whether the reflection is sufficient (García-Jiménez, 2015).

They can also conduct an analysis of examples in pairs/groups. For example, they start by analyzing an example in pairs and their own work developed individually, before conducting the PA (Blau and Shamir-Inbal, 2017). By following this procedure, they ensure that they employ metacognitive thinking by monitoring their own learning before the PA, but also during the PA because they apply critical thinking by analyzing other tasks, and being able to compare them with their own, which helps them to learn.

In this second case, it may be an extra effort for them to keep up, because they are not able to organize themselves and do not have a teacher as a reference point, but in the end they achieve independent learning (Blau and Shamir-Inbal, 2017).

It is also important to note that this procedure is learned and improved with practice, both by assessing peers and receiving their evaluations, which improves learning and understanding of the task and objectives, as the relevance of the requested content is improved (Fernández-Ferrer and Cano, 2019).

An important point to highlight is that the SRL enabled by PA is related to the Learning to Learn competence, which must be developed throughout the entire higher education stage, as it is an essential competence for lifelong learning (Lluch and Portillo, 2018). To this end, PA and self-assessment experiences should be planned progressively in the different higher education courses, starting with some kind of scaffolding until autonomous completion by students is achieved (Lluch and Cano, 2023).

Focusing on how technology can help in the design of PA activities to facilitate SRL, it is worth noting that some key elements of PA have clearly benefited from the use of technology.

Technology makes it easier for learners to have more than one iteration (García-Jiménez et al., 2015), as a single iteration may not be enough to self-regulate future learning (Swartz, 2020). Several iterations produce a great improvement over traditional feedforward, as feedback is better understood and becomes more useful, allowing deeper questioning of the content and improved learning (Roman et al., 2020). In fact, it is best to perform several iterations, in the same working groups (evaluator-evaluees), until the delivery of the final product, so that the feedback received is better applied (Zhu et al., 2023).

However, it is necessary to rethink how to add a discussion with the evaluators in an appropriate way through the technology itself, as doing so without technology will allow the identity of the evaluators to be known (Marín and Pérez, 2016), which can be counterproductive. Anonymous PA allows for more critical feedback (Ng, 2016), as non-anonymous PA is based on cronyism (Raposo-Rivas and Gallego-Arrufat, 2016). Thus, the introduction of forums or other activity formats could be considered to maintain anonymity.

In addition to anonymity, instant feedback is necessary to improve self-regulation (Wang, 2020). Therefore, if we want it to be instantaneous yet anonymous, technology plays a crucial role. In addition to facilitating the improvement of one's own learning by receiving it instantaneously, providing it right at the end of the task helps assessors to better reflect on the task, thus providing more critical and comprehensive feedback (Zhu et al., 2023).

As we have said, technology offers the possibility of using different formats, such as video, audio, etc., beyond a written format that can be misinterpreted (Hsu and Huang, 2015), without the need for a face-to-face format that removes anonymity, thus favoring the reception of feedback (García-Jiménez et al., 2015). LMSs, such as Moodle, have different activities in addition to the workshop. The workshop activity is designed for PA, but it may be insufficient on its own. It is worth highlighting the proposal by Lluch and Cano (2023), in which different activity types are added depending on the objective: firstly, forums to involve students in the design of assessment criteria and improve understanding of the task; secondly, open-ended questionnaires to improve the metacognitive phases of self-regulation of learning (goal identification, planning, monitoring, and self-assessment); thirdly, individual forms and tasks to hand in assignments with the improvements introduced thanks to feedforward.

Being able to combine different types of activities in the same technological tool facilitates self-regulation, as students can comfortably self-assess and reflect on the feedback received, for example, in an e-portfolio (Marín and Pérez, 2016).

Technology has the potential to be updated with improvements when necessary. For example, in certain tools, such as the one designed by Roman et al. (2020), it is proposed that students incorporate all comments to improve their work. A dynamic for introducing this into LMSs would need to be explored, as students often only take into account the comments that are easy for them to include and ignore the others.

4 Discussion: critique

This discussion section provides the third phase of the tripartite model for systematic review (Daniel and Harland, 2017). It sets out guidelines for designing courses in virtual classrooms or similar technologies. Following these guidelines, teachers can design peer assessment workshops that facilitate students' own self-regulated learning.

First, it is recommended that feedback is provided in different formats to improve its comprehensibility. In this way, learners do not rely solely on written feedback that can be misinterpreted (Hsu and Huang, 2015). For this purpose, written feedback can be used in addition to a rubric, which is easily configurable in the virtual classroom. A file in any format can also be added, e.g., a short one-minute video, with a reflection on what they have learned from the first submission to the last. Other types of tasks such as forums, open-ended questionnaires or individual forms and tasks can also be used (Lluch and Cano, 2023).

On the other hand, the grades proposed by the assessors can be used, although it is recommended that at least the average of 3 grades is obtained (Marín and Pérez, 2016). In this way, the quality of the assessment is very high, as suggested by Albano et al. (2017), and if there are cases where the assessments are very disparate, the teacher should review the assignment and provide their own feedback. The Moodle workshop averages the assessors' grades. In addition, the quality of each assessment is itself graded, depending on whether the mark awarded is similar to that of the other assessors; it is recommended to use this assessor grading to ensure that students assess their peers well. The activity also allows the teacher to modify the marks if the final average mark is not considered adequate.
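To illustrate these two mechanisms (a simplified sketch, not Moodle's actual comparison algorithm), the submission grade can be taken as the mean of the peer grades, and each assessor's "grade for assessing" can decrease with their deviation from that mean:

```python
# Simplified sketch of the two grades in a PA workshop: the submission grade
# (mean of peer grades) and a "grade for assessing" that rewards marks close
# to the consensus. Moodle's workshop uses its own comparison algorithm; this
# only illustrates the idea.

def submission_grade(peer_grades: list[float]) -> float:
    return sum(peer_grades) / len(peer_grades)

def grading_grade(own_grade: float, peer_grades: list[float],
                  max_grade: float = 10.0) -> float:
    """Full marks when the mark matches the peers' mean, less as it deviates."""
    deviation = abs(own_grade - submission_grade(peer_grades))
    return max(0.0, max_grade * (1 - deviation / max_grade))

grades = [7.0, 7.5, 8.0]
print(submission_grade(grades))              # 7.5
print(round(grading_grade(9.5, grades), 1))  # 8.0 -> penalized for deviating
```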

As we have seen, it is also important that the PA process is anonymous, in order to achieve more critical and comprehensive feedback (Ng, 2016). Otherwise, it will be based on cronyism, and the work of friends will not be critically questioned (Raposo-Rivas and Gallego-Arrufat, 2016). In the specific case of Moodle, the workshop can be configured so that submissions are assessed anonymously and the identity of the assessors remains hidden. It is also more useful for feedback to be instantaneous, both for the assessor, who can provide more complete feedback, and for the assessed, who can apply the comments on the spot, improving the regulation of learning (Wang, 2020). The use of technology makes it possible to set short timelines to ensure that feedback is near-instantaneous, as fixed deadlines can be set for each phase.

In addition, for peer assessment to truly facilitate self-regulated learning, students must be taught to provide feedback. Constructive feedback allows the student to reflect on the comments and apply them in a new and improved version of the task (Janson et al., 2014). As Malecka et al. (2022) argue, it is necessary to include the processing and application of the feedback received. In this way, students identify strategies to improve their own learning (García-Jiménez et al., 2015) and evaluate their learning (Hsu and Huang, 2015). Technology can be used to create questionnaires with good and bad examples of feedforward, and the results can be discussed with students in class to justify why each example is or is not constructive.

The use of feedforward will be useful if the evaluation consists of several iterations, allowing a dialogue between evaluator and evaluated (García-Jiménez et al., 2015) and helping to better understand the feedback and integrate it properly into the final product. Technology facilitates work with several iterations (García-Jiménez et al., 2015): it makes it possible to reopen phases that have already been completed in order to carry out a new submission and, subsequently, a new evaluation. In the specific case of Moodle, the tool makes it easy for the evaluators of a task to remain the same across the different iterations, as recommended by Zhu et al. (2023). Furthermore, it is recommended to include a final task after the whole PA process, in which the learner can apply the knowledge developed through reflection on the feedback (Janson et al., 2014). In this way, in addition to the PA activity, a task can be created in the virtual classroom for the student to hand in the final version of their work, which is assessed by the teacher. Another option is the re-evaluation of modified submissions by the assessors themselves, who can focus on the improvements included to revise the grading of the rubric.

Secondly, in addition to the characteristics that must be taken into account for good feedback, students must learn to evaluate their peers from a learning perspective. As Carless and Winstone (2023) propose, teacher feedback literacy is necessary for proper scaffolding. Therefore, scaffolding is required, either with the teacher as a guide or by practicing with examples in pairs or small groups. This scaffolding enables a focus on building useful and relevant feedback on the objectives (Hsu and Huang, 2015). In the Moodle workshop, the teacher can include already corrected examples with feedback comments. These comments can be used to teach students what is expected at each point (García-Jiménez, 2015). In addition, these examples can also be reviewed in pairs or groups of three to ensure that students understand how they should approach and create the feedback (Blau and Shamir-Inbal, 2017). As Carless and Boud (2018) explain, the use of examples is ideal for developing feedback literacy.

Therefore, it is advisable to combine group feedback with individual feedback. Students start with group feedback to discuss and reflect on what the feedback should look like (Ng, 2016). Applying this idea to the use of technology, the first PA activity can be done in pairs. In the workshop activity, by including part rubric and part open-ended feedback, students can discuss in pairs at which level of the rubric the assessed activities fall. Afterwards, they individually justify their decision in the comments. In addition, the pair can review the comments to discuss how to improve them.

It is also essential that the use of peer assessment and self-assessment is planned progressively, as students learn to assess little by little, eventually developing the Learning to Learn competence, which is necessary for lifelong learning (Lluch and Cano, 2023). Thus, it is recommended to carry out several PA activities, approximately one per month or up to four in a four-month period. It is also recommended to support these activities with self-assessment: in the Moodle workshop, the option can be enabled for students to assess their own work based on the assessment criteria, and forms or tasks with short audio or video files can be used for them to reflect on their progress.

Finally, it should be noted that in this scaffolding process, it is advisable to involve students in the development of the assessment criteria and reference levels. In this way, they can check their interpretation of the objectives, so that they can assess the task more accurately (García-Jiménez et al., 2015). In the days prior to the PA activity, a forum can be opened to discuss the assessment criteria, for students to review and propose modifications. They can also be asked to give an example of what it would mean to be assessed at one of the benchmark levels of a criterion in a rubric. To ensure their participation in the forum, they can be asked to participate in pairs or groups of three in class or with a video call tool, and then discuss the forum comments together to construct a final rubric.

Thus, we can see, as Little et al. (2024) state, how all these scaffolding aids will help learners to manage their perceptions and attitudes toward the feedback process. In this way, they will then be able to improve their confidence and feedback agency, developing their feedback literacy.

5 Conclusions

Peer assessment facilitates metacognitive reflection thanks to the use of formative feedback, which is used in most of the experiences reviewed (Alqassab et al., 2023). To achieve quality feedback, we start from the importance of students developing feedback literacy (Carless and Boud, 2018). This is because, in recent years, an approach to feedback that focuses on learning, rather than just transmission, has begun to be adopted (Winstone et al., 2022). This helps students plan and guide their own learning (Topping, 2009), as they need to understand the assessment criteria (Van Helden et al., 2023) to justify the feedback (Liu and Carless, 2006). Moreover, metacognition is especially applied when receiving feedback (Liu and Lin, 2007): as Ng (2016) appreciates, students reflect on it in order to apply it later in their work, in line with Misiejuk and Wasson (2021).

Fernández-Ferrer and Cano (2019) confirm that experience improves the application of PA (Van Zundert et al., 2010), so students should be trained, as proposed by Lluch and Cano (2023), or supported with scaffolding, as suggested by García-Jiménez (2015) and Blau and Shamir-Inbal (2017). On the other hand, Marín and Pérez (2016), Ng (2016), and Raposo-Rivas and Gallego-Arrufat (2016) mention the importance of anonymity, which, as Panadero and Alqassab (2019) note, is very easy to achieve with technology.

We can conclude that, if the design guidelines drawn in this review are followed, it is possible to develop SRL with PA activities. As we have seen in all the studies included in this article, peer assessment activities provide more than just a benefit from the feedback received. Students, as reviewers, can gain a greater understanding of the task and improve their knowledge of the topic they are to assess. The reflection required during the assessment is conducive for them to regulate their own learning.

As limitations, we can highlight the small number of articles that meet all the established search criteria. If the higher education and/or technology criteria were removed, the number of results would be much higher, and more design recommendations could have been obtained. It is therefore recommended that future literature reviews be conducted with a more open search.

Author contributions

BO-R: Data curation, Formal analysis, Methodology, Writing—original draft. JC-G: Conceptualization, Supervision, Validation, Writing—review & editing.

Funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

Acknowledgments

This research has been developed thanks to the research stay carried out at the University of the Basque Country (UPV/EHU), with the research group Elkarrikertuz, in relation to the research project Trayectorias de aprendizajes de jóvenes universitarios: concepciones, estrategias, tecnologías y contextos (Learning trajectories of young university students: conceptions, strategies, technologies and contexts). TRAY-AP. 2020/2023 (Ministry of Science and Innovation, PID2019-108696RB-I00. 2020-2022).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Albano, G., Capuano, N., and Pierri, A. (2017). Adaptive peer grading and formative assessment. J. e-Learn. Knowl. Soc. 13, 1. doi: 10.20368/1971-8829/159

Alqassab, M., Strijbos, J. W., Panadero, E., Ruiz, J. F., Warrens, M., and To, J. (2023). A systematic review of peer assessment design elements. Educ. Psychol. Rev. 35, 18. doi: 10.1007/s10648-023-09723-7

Azevedo, R., and Gašević, D. (2019). Analyzing multimodal multichannel data about self-regulated learning with advanced learning technologies: issues and challenges. Comput. Human Behav. 96, 207–210. doi: 10.1016/j.chb.2019.03.025

Black, P., and Wiliam, D. (2009). Developing the theory of formative assessment. Educ. Assessm. Evaluat. Accountab. 21, 5–31. doi: 10.1007/s11092-008-9068-5

Blau, I., and Shamir-Inbal, T. (2017). Re-designed flipped learning model in an academic course: the role of co-creation and co-regulation. Comput. Educ. 115, 69–81. doi: 10.1016/j.compedu.2017.07.014

Butler, D. L., and Winne, P. H. (1995). Feedback and self-regulated learning: a theoretical synthesis. Rev. Educ. Res. 65, 245–281. doi: 10.3102/00346543065003245

Carless, D., and Boud, D. (2018). The development of student feedback literacy: enabling uptake of feedback. Assessm. Evaluat. Higher Educ. 43, 1315–1325. doi: 10.1080/02602938.2018.1463354

Carless, D., and Winstone, N. (2023). Teacher feedback literacy and its interplay with student feedback literacy. Teach. Higher Educ. 28, 150–163. doi: 10.1080/13562517.2020.1782372

Chou, C. Y., and Zou, N. B. (2020). An analysis of internal and external feedback in self-regulated learning activities mediated by self-regulated learning tools and open learner models. Int. J. Educ. Technol. High. Educ. 17, 1–27. doi: 10.1186/s41239-020-00233-y

Daniel, B. K., and Harland, T. (2017). Higher Education Research Methodology: A Step-by-Step Guide to the Research Process. London: Routledge.

Fernández-Ferrer, M., and Cano, E. (2019). Feedback experiences to improve the continuous assessment: the use of Twitter as an emerging technology. Educar 55, 437–455. doi: 10.5565/rev/educar.872

Fu, Q. K., Lin, C. J., and Hwang, G. J. (2019). Research trends and applications of technology-supported peer assessment: a review of selected journal publications from 2007 to 2016. J. Comp. Educ. 6, 191–213. doi: 10.1007/s40692-019-00131-x

Gamage, S. H. P. W., Ayres, J. R., and Behrend, M. B. (2022). A systematic review on trends in using Moodle for teaching and learning. Int. J. STEM Educ. 9, 9. doi: 10.1186/s40594-021-00323-x

García-Jiménez, E. (2015). Assessment of learning: From feedback to self-regulation. The role of technologies. Elect. J. Educ. Res. Assessm. Evaluat. 21, 2. doi: 10.7203/relieve.21.2.7546

García-Jiménez, E., Gallego-Noche, B., and Gómez-Ruiz, M. Á. (2015). “Feedback and self-regulated learning: How feedback can contribute to increase students' autonomy as learners,” in Sustainable Learning in Higher Education: Developing Competencies for the Global Marketplace, eds. M. Peris-Ortiz and J. M. Merigó (Cham: Springer), 113–130.

Goh, C. F., Tan, O. K., Rasli, A., and Choi, S. L. (2019). Engagement in peer review, learner-content interaction and learning outcomes. Int. J. Inf. Learn. Technol. 36, 423–433. doi: 10.1108/ijilt-04-2018-0038

Gros, B., and Cano, E. (2021). Procesos de feedback para fomentar la autorregulación con soporte tecnológico en la educación superior: Revisión sistemática. RIED. Revista Iberoamericana de Educación a Distancia 24, 107–125. doi: 10.5944/ried.24.2.28886

Hooshyar, D., Pedaste, M., Saks, K., Leijen, Ä., Bardone, E., and Wang, M. (2020). Open learner models in supporting self-regulated learning in higher education: A systematic literature review. Comput. Educ. 154, 103878. doi: 10.1016/j.compedu.2020.103878

Hoskins, B., and Fredriksson, U. (2008). Learning to Learn: What is it and Can it Be Measured. New York: JRC Publications Repository. OPOCE.

Hsu, P. L., and Huang, K. H. (2015). “Evaluating online peer assessment as an educational tool for promoting self-regulated learning,” in Multidisciplinary Social Networks Research: Second International Conference. Proceedings 2 (Berlin: Springer Berlin Heidelberg), 161–173.

Ion, G., Barrera-Corominas, A., and Tomàs-Folch, M. (2016). Written peer-feedback to enhance students' current and future learning. Int. J. Educ. Technol. High. Educ. 13, 1–11. doi: 10.1186/s41239-016-0017-y

Janson, A., Ernst, S. J., Lehmann, K., and Leimeister, J. M. (2014). "Creating awareness and reflection in a large-scale IS lecture – the application of a peer assessment in a flipped classroom scenario," in 4th Workshop on Awareness and Reflection in Technology-Enhanced Learning (ARTEL 2014), held in the context of EC-TEL 2014 (Graz: ARTEL).

Little, T., Dawson, P., Boud, D., and Tai, J. (2024). Can students' feedback literacy be improved? A scoping review of interventions. Assessm. Eval. Higher Educ. 49, 39–52. doi: 10.1080/02602938.2023.2177613

Liu, E. Z. F., and Lin, S. S. J. (2007). Relationship between peer feedback, cognitive and metacognitive strategies, and achievement in networked peer assessment. Br. J. Educ. Technol. 38, 1122–1125. doi: 10.1111/j.1467-8535.2007.00702.x

Liu, N. F., and Carless, D. (2006). Peer feedback: the learning element of peer assessment. Teach. High. Educ. 11, 279–290. doi: 10.1080/13562510600680582

Lluch, L., and Cano, E. (2023). How to embed SRL in online learning settings? Design through learning analytics and personalized learning design in Moodle. J. New Approaches Educ. Res. 12, 120–138. doi: 10.7821/naer.2023.1.1127

Lluch, L., and Portillo, M. C. (2018). La competencia de aprender a aprender en el marco de la educación superior. Revista Iberoamericana de Educación 78, 59–76. doi: 10.35362/rie7823183

Malecka, B., Boud, D., and Carless, D. (2022). Eliciting, processing and enacting feedback: mechanisms for embedding student feedback literacy within the curriculum. Teach. Higher Educ. 27, 908–922. doi: 10.1080/13562517.2020.1754784

Marín, V. I., and Pérez, A. (2016). “Collaborative e-assessment as a strategy for scaffolding self-regulated learning in higher education,” in Formative Assessment, Learning Data Analytics and Gamification (Cambridge: Academic Press), 3–24.

Misiejuk, K., and Wasson, B. (2021). Backward evaluation in peer assessment: a scoping review. Comput. Educ. 175, 104319. doi: 10.1016/j.compedu.2021.104319

Ng, E. M. (2016). Fostering pre-service teachers' self-regulated learning through self- and peer assessment of wiki projects. Comput. Educ. 98, 180–191. doi: 10.1016/j.compedu.2016.03.015

Noroozi, O., Banihashem, S. K., Biemans, H. J., Smits, M., Vervoort, M. T., and Verbaan, C. L. (2023). Design, implementation, and evaluation of an online supported peer feedback module to enhance students' argumentative essay quality. Educ. Inf. Technol. 28, 12757–12784. doi: 10.1007/s10639-023-11683-y

Ortega-Ruipérez, B., and Castellanos-Sánchez, A. (2023). Guidelines for instructional design of courses for the development of self-regulated learning for teachers. S. Afr. J. Educ. 43, 1–13. doi: 10.15700/saje.v43n3a2202

Panadero, E., and Alqassab, M. (2019). An empirical review of anonymity effects in peer assessment, peer feedback, peer review, peer evaluation and peer grading. Assess. Eval. High Educ. 44, 1253–1278. doi: 10.1080/02602938.2019.1600186

Popay, J., Roberts, H., Sowden, A., Petticrew, M., Arai, L., Rodgers, M., et al. (2006). “Guidance on the conduct of narrative synthesis in systematic reviews,” in A product from the ESRC Methods Programme. Version 1, b92 (London: Institute for Health Research).

Ramírez, M. S., and Lugo, J. (2020). Revisión sistemática de métodos mixtos en el marco de la innovación educativa. Comunicar 65, 9–20. doi: 10.3916/C65-2020-01

Raposo-Rivas, M., and Gallego-Arrufat, M. J. (2016). University students' perceptions of electronic rubric-based assessment. Digit. Educ. Rev. 30, 220–233. doi: 10.1344/der.2016.30.220-233

Reinholz, D. (2016). The assessment cycle: a model for learning through peer assessment. Assess. Eval. High. Educ. 41, 301–315. doi: 10.1080/02602938.2015.1008982

Roberts, T. S. (2006). Self, Peer, and Group Assessment in E-Learning. Hershey, PA: Information Science Publishing.

Roman, T. A., Callison, M., Myers, R. D., and Berry, A. H. (2020). Facilitating authentic learning experiences in distance education: Embedding research-based practices into an online peer feedback tool. TechTrends 64, 591–605. doi: 10.1007/s11528-020-00496-2

Swartz, B. (2020). “‘Assessment as Learning' as a tool to prepare engineering students to manage ill-defined problems in industry,” in 2020 IFEES World Engineering Education Forum-Global Engineering Deans Council (WEEF-GEDC) (Cape Town: IEEE), 1–5.

Topping, K. J. (2009). Peer assessment. Theory Pract. 48, 20–27. doi: 10.1080/00405840802577569

Topping, K. J. (2018). Using Peer Assessment to Inspire Reflection and Learning. New York: Routledge.

United Nations (2023). “Goal 4,” in Ensure Inclusive and Equitable Quality Education and Promote Lifelong Learning Opportunities for All. SDGs. Available online at: https://sdgs.un.org/goals/goal4 (accessed January 24, 2024).

Van Helden, G., Van Der Werf, V., Saunders-Smits, G. N., and Specht, M. M. (2023). The use of digital peer assessment in higher education–an umbrella review of literature. IEEE Access 11, 22948–22960. doi: 10.1109/ACCESS.2023.3252914

Van Zundert, M., Sluijsmans, D., and Van Merriënboer, J. (2010). Effective peer assessment processes: research findings and future directions. Learn Instr. 20, 270–279. doi: 10.1016/j.learninstruc.2009.08.004

Veenman, M. V. J., Van Hout-Wolters, B. H. A. M., and Afflerbach, P. (2006). Metacognition and learning: conceptual and methodological considerations. Metacognit. Learn. 1, 3–14. doi: 10.1007/s11409-006-6893-0

Wang, Y. H. (2020). Design-based research on integrating learning technology tools into higher education classes to achieve active learning. Comput. Educ. 156, 103935. doi: 10.1016/j.compedu.2020.103935

Winne, P. H. (1996). A metacognitive view of individual differences in self-regulated learning. Learn. Individ. Differ. 8, 327–353. doi: 10.1016/S1041-6080(96)90022-9

Winstone, N., Boud, D., Dawson, P., and Heron, M. (2022). From feedback-as-information to feedback-as-process: a linguistic analysis of the feedback literature. Assessm. Evaluat. Higher Educ. 47, 213–230. doi: 10.1080/02602938.2021.1902467

Zhang, Y., and Schunn, C. D. (2023). Self-regulation of peer feedback quality aspects through different dimensions of experience within prior peer feedback assignments. Contemp. Educ. Psychol. 74, 102210. doi: 10.1016/j.cedpsych.2023.102210

Zheng, L., Chen, N. S., Cui, P., and Zhang, X. (2019). A systematic review of technology-supported peer assessment research: an activity theory approach. Int. Rev. Res. Open Dis. 20, 168–191. doi: 10.19173/irrodl.v20i5.4333

Zhu, H., Li, N., Rai, N. K., and Carroll, J. M. (2023). SmartGroup: a tool for small-group learning activities. Future Int. 15, 7. doi: 10.3390/fi15010007

Zimmerman, B. J. (2002). Becoming a self-regulated learner: an overview. Theory Pract. 41, 64–70. doi: 10.1207/s15430421tip4102_2

Keywords: peer assessment, self-regulated learning, higher education, Moodle, technology

Citation: Ortega-Ruipérez B and Correa-Gorospe JM (2024) Peer assessment to promote self-regulated learning with technology in higher education: systematic review for improving course design. Front. Educ. 9:1376505. doi: 10.3389/feduc.2024.1376505

Received: 25 January 2024; Accepted: 26 March 2024;
Published: 08 April 2024.

Edited by:

Barbara Otto, Fresenius University of Applied Sciences, Germany

Reviewed by:

Elena Mirela Samfira, University of Life Sciences “King Mihai I” from Timisoara, Romania
Rosalynn Argelia Campos Ortuño, University of Salamanca, Spain

Copyright © 2024 Ortega-Ruipérez and Correa-Gorospe. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Beatriz Ortega-Ruipérez, beatriz.ortega@urjc.es
