
ORIGINAL RESEARCH article

Front. Psychol., 25 March 2024
Sec. Educational Psychology
This article is part of the Research Topic: Fostering self-regulated learning

Developing a scale to explore self-regulatory approaches to assessment and feedback with academics in higher education

  • 1Cardiff University, Cardiff, United Kingdom
  • 2School of Biosciences, Cardiff University, Cardiff, United Kingdom
  • 3School of Biological Sciences, Faculty of Health and Life Sciences, University of Bristol, Bristol, United Kingdom
  • 4Faculty of Education, University of Zaragoza, Zaragoza, Aragon, Spain
  • 5European Association of Geographers (EUROGEO), Brussels, Belgium

Introduction: Students need to acquire high level self-regulatory skills if they are to be successful within higher education, and academics need support in facilitating this. In this article we explore how the current research gap between knowledge of self-regulatory assessment and feedback (SRAF) practices, and academics’ professional training in it can be bridged.

Methods: SRAF tools were used with academics to explore their understandings of, and training needs in, SRAF; central to this work was the development of a SRAF scale. We consider the value of such tools in supporting academics’ professional development needs in SRAF. The reliability and validity of the SRAF scale were tested using exploratory factor analyses (EFA).

Results: Iterative EFA resulted in a 17-item Support Required (SR) SRAF scale. Two underpinning factors were identified: Creating the Conditions for SRAF, and Supporting Students’ SRAF Skills Development. The reliability of the instrument supported its primary use as a tool to facilitate academics’ professional development in fostering students’ self-regulatory skills.

Discussion: Our findings highlight the importance of supporting academics in developing strategies to maximize students’ metacognitive skills and motivation in assessment and feedback, contingent on effective assessment design. Such professional development needs to be mindful of individual and contextual factors impacting academics’ access to, and confidence and competence in, using SRAF in practice. This research is important in highlighting potential disconnects between where academics focus their attention in assessment, and what is known to have most impact on student learning success. The SRAF tools have considerable potential in supporting translation of theory into practice as part of sustained professional development for academics in higher education.

1 Introduction

The importance of supporting students’ self-regulatory learning (SRL) skills development in impacting their achievements in higher education is well known (Hayat et al., 2020; Büchele, 2023; Hattie, 2023; Evans and Waring, 2023a). Supporting academics in providing such skills support to students is challenging given that training in self-regulatory practices is significantly underrepresented in professional development provision for academics in higher education (Ruiz and Panadero, 2023). Translation of knowledge on effective self-regulatory assessment and feedback (SRAF) into practice is limited by the lack of guidelines available to academics on how to do this well (Honig and Coburn, 2007; Jansen et al., 2019). Academics need to know how to support students’ SRL skills development, and as part of this, they need better understanding of the relationships between learner characteristics and personal goals, and cognitive, metacognitive and emotional regulatory processes, and how these impact learning.

As identified above, supporting students’ SRL skills development is essential given the relationship between self-regulatory capacity and student achievement (Schneider and Preckel, 2017; Lin et al., 2022). Students enter higher education with varying levels of self-regulation. Of significant importance is that students’ abilities to self-regulate can be developed (Vosniadou, 2020), while accepting that some students are more capable of self-regulatory flexibility than others (Kozhevnikov et al., 2014). To support students’ academic development, and to utilize resources most effectively, it makes sense for academics to focus on those SRL skillsets that are most implicated in student success (Dinsmore, 2017). As noted by Russell et al. (2022), academics’ own self-regulation plays an important part in how they support student SRL skills development. For academics to be able to do this effectively, they need to be aware of their own SRL skillsets, identify and focus on those high-level SRL skills students most need within a specific context, and model these skillsets confidently with their students.

However, the complexity of the self-regulation construct makes translation of it into assessment and feedback practice in higher education difficult, given that it is an umbrella concept (Panadero, 2017) comprising many different variables and approaches with different theoretical underpinnings. Evans et al. (2021, p. 10), in exploring the multi-faceted nature of self-regulated learning (SRL), define it as:

a learner’s ability to regulate his/her learning in different contexts… SRL can be viewed as a construct, a process and an ability that can be developed… SRL may comprise state (approaches developed in response to a specific context) and trait elements (established patterns of working that are consistent across contexts).

Zimmerman (1998, p. 329) argued that “students can be described as self-regulated to the degree that they are metacognitively, motivationally, and behaviorally active participants in their own learning process”, and that “students’ learning must involve the use of specified strategies to achieve academic goals on the basis of self-efficacy perceptions”. Self-efficacy in this context refers to students’ perception of their own abilities to manage the learning process effectively, and achieve their desired goals. In exploring the structure of self-regulation, Zimmerman and his contemporaries discuss the recursive stages involved in managing a task such as forethought (planning and goal-setting), the performance phase (selection of appropriate strategies to complete a task and ongoing monitoring and review to maintain motivation and adjust strategies as necessary), and a self-reflection phase (involving self-evaluation of effectiveness and reframing as necessary in pursuit of goals). In all these phases metacognitive (understanding of which strategies to use), cognitive (how individuals make sense of and process information) and affective strategies (management of emotional aspects of learning) are required (Evans et al., 2021).

In this work, we were particularly interested in the metacognitive skills students deploy in assessment and feedback, while acknowledging the interdependence of these with cognitive and affective strategies. The mediating nature of task requirements (e.g., nature of assessment), the context (e.g., extent to which the design of assessment requires and values students’ acquisition of high-level self-regulation skills), and individual characteristics (e.g., self-efficacy and motivation) make it difficult to ascertain how best to support students in choosing the most appropriate SRL strategies and using them well (Dresel et al., 2015). Jansen et al. (2019) concluded from a review of 142 studies that academics need to support students’ engagement in SRL activities as well as their achievement; they argue that the lack of significant moderators of the effects of SRL interventions makes it difficult to provide concrete design guidelines for such SRL interventions. In providing more specific guidance, Hattie et al. (1996) recommended that training should be in context, use tasks within the same domain as the target content, and promote a high degree of learner activity and metacognitive awareness. Evans and Waring (2023a) have gone further in articulating the key elements of a SRAF approach to support translation of research into practice, encouraging academics to articulate the high-level self-regulatory skills they want students to develop, and providing a route map of how to build participatory assessment designs that provide the conditions in which development of these skills can flourish.

In this article we describe a pilot exploratory project developed to support better understanding of SRAF in practice, conscious of the relative lack of research on supporting academics’ professional development in SRAF in higher education research. This research is important given that academics’ knowledge of self-regulatory approaches impacts the quality of assessment design (Dörrenbächer and Perels, 2016; Peeters et al., 2016), and the fact that professional development for academics in this area is in its infancy (Evans and Waring, 2023a; Dinsmore et al., 2024).

This research is situated within the context of developing tools to support academics’ translation of SRAF into practice as part of an international Erasmus+ funded research project. This project used an established, research-informed assessment framework: the Equity, Agency and Transparency (EAT) framework. EAT was chosen to explore how best to support academics’ access to, and effective use of, relevant SRAF approaches given its underpinning theoretical framing around agentic, inclusive, and self-regulatory approaches to assessment and feedback. This theoretical and conceptual framework (Evans, 2016, 2022) synthesizes what is known about effective assessment and feedback (Evans, 2013) and integrates this with an understanding of self-regulatory learning approaches and individual differences in learning. EAT was developed from extensive systematic review of the literature and evolution of the framework with staff and students across disciplines and higher education institutions. Its visual form is that of a wheel which guides academics to consider 12 core entry-level questions about how they support students’ SRAF, and asks students how they engage with SRAF. There is a vast body of resources to support learners (academics and students) in considering how best to develop and evaluate the effectiveness of their approaches to SRAF using a research-informed approach. A key strength of the approach is that it can be adapted to any context and any level of analysis (individual, course, discipline, faculty, institution) (Evans et al., 2022a).

Conceptually, EAT (Evans, 2022) highlights the central role of academics in designing assessment environments that support students’ SRL skills development in partnership with them. Partnership involves actively engaging students in decision-making processes about assessment and feedback to support co-ownership of assessment; it depends on supporting students’ skills development and confidence to take a more central role in the assessment and feedback process, and includes defining the limits of their engagement. Through partnership in assessment, it is argued that student agency is increased, creating opportunities for students to impact the quality of assessment, which in turn enhances the conditions to support SRL skills development (Evans and Waring, 2021).

Emphasis on how conditions are created to promote student ownership and agency in assessment is a central element of our SRAF pedagogical framing and aligns with Bandura’s (1986, 2001) idea of agency in how individuals deliberately guide their behavior (the actions they choose and how they execute them in pursuit of goals), and Reeve’s (2013) notion of agentic engagement in how individuals are empowered by their environment so they are able to leverage change within it. It also requires academics to be discerning in selecting what high level self-regulatory skills they wish to focus on related to their specific context.

In this article we: (i) consider what a SRAF approach is, and the key dimensions of it implicated in student learning in higher education, (ii) explore what SRAF support academics want to enhance their assessment practice, and how this relates to their perceived use of SRAF, and (iii) consider the implications for the development of SRAF professional development in higher education from using SRAF tools with colleagues. To address these questions, we first explore the context of SRAF in higher education. Second, we describe pedagogical tools developed to support understanding of SRAF, including the development of a scale to explore academics’ perceived frequency of use of SRAF practices and associated professional development needs, drawing on the model developed by Dinsmore et al. (2024). Third, we model the outcomes from our work with academics on using SRAF tools, and explore the implications of these findings for enhancing SRAF professional development in higher education.

2 Developing a SRAF pedagogy

Evans and Waring (2023a,b) coined the term SRAF pedagogies to refer to assessment and feedback practices that focus on the systematic development of students’ SRL skillsets, and critical evaluation of them in practice. In their approach, SRL is embedded within all aspects of assessment design, and emphasis is placed on supporting students’ knowledge, skills and confidence in their ability to choose the most appropriate learning strategies and to use them effectively within a specific context (i.e., attuned to disciplinary and professional needs). According to Evans and Waring (2023a, pp. 11–12):

SRAF considers learner characteristics and personal goals, and how cognitive, metacognitive, and emotional regulatory processes come together to support learning. Of critical importance is the degree of alignment between academics’ and students’ perceptions of quality in impacting improvements in learning … A key emphasis in the design of self-regulatory assessment has to be on how we maximize the opportunities for students to gain an understanding of quality for themselves.

There are many potential permutations of SRAF pedagogies which may have different emphases depending on different theoretical perspectives on SRL, and in relation to how academics perceive the role of students in the process. Evans and Waring (2023a) highlight the importance of effective assessment design in creating the necessary conditions for SRAF to support students’ agentic engagement with assessment and feedback. Agentic engagement involves students’ abilities to evolve their learning context to address their assessment needs. In providing explicit guidelines on SRAF, they argue that academics need to start by articulating what the core SRL skills they want students to acquire within their discipline are.

In drawing together research on SRAF, Evans and Waring (2023a) argue that emphasis should be on ensuring the assessment context supports SRL development, and provides focused skills training by attending to the following:

• Embedding SRL skills development within discipline-specific contexts (Hattie et al., 1996).

• Ensuring SRL skills development is integrated into all aspects of assessment and feedback design (Evans, 2016, 2022).

• Addressing academics’ and students’ conceptions of their roles in assessment and feedback to support student agency and autonomy (Vermunt and Donche, 2017).

• Focusing activities to support alignment of academics’ and students’ conceptions of quality.

• Working with students to support their engagement as co-constructors of assessment and feedback practices to support internalization of standards (Hattie et al., 1996; Simper, 2020; Nicol, 2022).

• Making explicit what the core high level SRL skills are that students need to be successful within a course (Evans, 2016, 2022; van Merrienboer and de Bruin, 2019).

• Focusing on the development of high-level SRL skills that have the most impact on learning outcomes (e.g., motivational and metacognitive) (Dinsmore, 2017; Panadero, 2017; van Merrienboer and de Bruin, 2019; Dekker et al., 2023).

• Providing repeated opportunities for students to observe, emulate, apply and evolve self-regulation strategies that are most relevant to the contexts they are working in (Zimmerman and Kitsantas, 2005; Dunlosky et al., 2013).

• Using data and technologies with academics and students to support their understanding of their learning, and the implications of different teaching and learning approaches on outcomes (Tempelaar et al., 2021; Hattie, 2023).

• Acknowledging and addressing the increasing role of digital literacy, including artificial intelligence (AI) literacy, in self-regulatory skills development (Istifci and Goksel, 2022; Krempkow and Petri, 2022).

• Placing emphasis on high quality professional development in SRAF supported by high quality research design including evaluation processes (Panadero, 2023).

3 Theoretical framing

3.1 The EAT framework

In developing SRAF professional development frameworks and tools, as previously identified, we drew on the EAT assessment and feedback framework (Evans, 2016, 2022) given its strong integrated theoretical frame. EAT brings together constructivist, socio-cultural and socio-critical theories in supporting effective self-regulatory assessment and feedback (Evans, 2013) with understanding of student approaches to learning (SAL) (Vermunt and Verloop, 1999), and agentic engagement (Reeve, 2013).

EAT aligns with socio-cognitive (Bandura, 1986, 1991, 2001; Pintrich, 1989, 2004; Zimmerman, 2001; Zimmerman and Campillo, 2003) and information processing self-regulation models (Winne and Hadwin, 1998; Winne, 2001). Socio-cognitive models emphasize the role of interaction with others in impacting learning behaviors, and information processing models focus on how individuals make sense of information, and the cognitive, metacognitive, and affective processes inherent in this.

In Figure 1, EAT portrays effective assessment and feedback practices (Evans, 2013) as 12 interconnected sub-dimensions of assessment literacy (AL), assessment feedback (AF), and assessment design (AD). The EAT sub-dimensions are all highly integrated, in that actions taken in one aspect of assessment and feedback practice have an impact on others. Academics are asked to consider how they engage students in supporting their self-regulatory development in each of these sub-dimensions of practice as integral to the focus of the model on ensuring student access to assessment and feedback and their agentic engagement with it. The quality of assessment design and a supportive institutional context are important in providing the conditions to support SRAF development for students and academics, respectively. The relational dimension of SRL involves being able to utilize one’s own skills effectively, and gain support from others in the realization of one’s learning goals. Agency and engagement are identified as essential in supporting SRL skill development and achievement (Boud and Molloy, 2013; Evans, 2013). Our approach recognizes the combined influence of individual dispositions, metacognitive, cognitive, and affective strategies, and contextual affordances and barriers in impacting learners’ management of assessment and feedback (Vermunt and Verloop, 1999).

Figure 1. The dimensions of the EAT Framework (Evans, 2022).

3.2 SRAF skills development

Our SRAF approach considers how learners acquire competencies, the importance of individuals’ and teams’ conceptions and beliefs in this process (Bembenutty et al., 2015), and awareness of the different ways in which learners process information (Waring and Evans, 2015). The importance of explicit teaching of SRL skills is intrinsic to this approach, while also acknowledging that some individuals are capable of higher level SRL skills development than others, especially in relation to metacognitive flexibility (Kozhevnikov et al., 2014). Emphasis is also placed on quality and conditional use of strategies: using strategies effectively and selecting the most appropriate ones for a given task (Dinsmore, 2017).

From a self-regulatory process perspective, key metacognitive skillsets required in managing assessment tasks include accuracy in interpreting the requirements of a task, and meta-memory in ascertaining what one knows and how this knowledge can be used to support task completion. In planning an appropriate approach to manage assessment, the quality and nature of goals (Dent and Koenka, 2016) and contextual regulation (being able to read the context well in knowing where and from whom to get support, and how to use such support well) are important. Monitoring accuracy is dependent on effective use of cues coming from the task itself, the task context, cognitive processing fluency, and a learner’s affective states and self-concept (van Merrienboer and de Bruin, 2019). Metacognitive skills are required in accurate monitoring of progress, in adapting strategies where necessary to support maintenance of effort (Panadero, 2017), and in alignment of strategies to achieve goals (adaptive control). The ability to synthesize internal information and that from others in assessing one’s own work accurately is emphasized in self-evaluative capacity, which also includes reflexivity in being able to effectively ‘step outside of oneself’ to objectively review lessons learnt and to make adaptations in one’s approach for the future.

In supporting SRAF development with academics we focused on high-level metacognitive skillsets given that these skillsets are known to have the most impact on student learning outcomes (Dinsmore, 2017; Schneider and Preckel, 2017). This included, firstly, a focus on students’ self-efficacy and goal-setting, given the tendency for higher education students to achieve better results when interventions are aimed at motivational and emotional aspects of learning (Panadero, 2017; Van der Zanden et al., 2019). Efficacy beliefs are positively related to effective self-regulated learning (SRL) processes (Pintrich and Zusho, 2007) and, according to Pintrich (2004), are a much better predictor of performance than task value. Addressing goals and self-efficacy is thought to be especially impactful given the strong connections between goal orientation, control (academic self-efficacy) and affect, as explained in Pekrun’s control-value theory of achievement emotion (Pekrun et al., 2002, 2006).

Secondly, we looked at metacognitive strategy instruction in assessment and feedback, which includes supporting students’ (i) understanding of strengths and weaknesses in relation to the demands of a task; (ii) strategy choice and effective use of strategy; (iii) internalization of standards in recognizing what good work is, in supporting accuracy of monitoring and evaluation of work; (iv) recognition of feedback opportunities (cues) and development of effective feedback strategy use (processing and application skills); and (v) evaluation of the quality of approaches used, in relation to accurate reading of context and task. This emphasis on SRL approaches combining metacognitive and motivational strategies is warranted given that they have the highest effects on student learning outcomes (Dignath et al., 2008). The importance of strategy instruction for student learning outcomes is established (Hattie et al., 1996; Schneider and Preckel, 2017). Metacognitive monitoring is essential in impacting outcomes (DiFrancesca et al., 2016). Donker et al. (2014) also found from meta-analytical research on 95 interventions that the effectiveness of strategy instruction on performance was enhanced when interventions included general metacognitive knowledge about when, why, how, and which strategy to use, taught students how to plan, and addressed task value.

In attending to motivational aspects of learning and the acquisition of high-level SRL skills, and taking account of information processing and socio-cognitive aspects of learning, we drew on EAT to consider key features of assessment design and the environment that could support the development of students’ high-level self-regulatory assessment and feedback skills. SRL skills development takes place within specific contexts, and the extent to which the context enhances or reduces the potential impact of SRL strategy development on student performance is central to the EAT framework.

3.3 The role of assessment design in supporting SRAF

The quality of assessment design impacts the efficacy of academics’ and students’ SRL skills development (Hawe and Dixon, 2017; Evans et al., 2021). It is important to address academics’ and students’ starting points, and their beliefs and conceptions about assessment, in supporting SRAF (Evans and Waring, 2021). Essential elements of assessment design that support SRAF, drawing on EAT (Evans, 2016, 2022), include: (i) engagement of students in working with academics to develop shared understandings of SRAF; (ii) embedding SRAF in all aspects of assessment design to support students’ progressive development of core knowledge, understanding, and skills; (iii) ensuring the balance and distribution of assessment activities is conducive to deep approaches to learning (e.g., positioning feedback so that it can be used to improve work); (iv) training students in what constitutes quality so they can gain an appreciation of quality for themselves; (v) supporting students as active agents of assessment and feedback change, with clear roles and responsibilities, and opportunities to engage fully in all aspects of the assessment process as part of team ownership; (vi) understanding how a course as a whole is engineered, and how different assessment elements fit together (Bass, 2012); and (vii) ensuring learning outcomes are focused on student attainment of high-level SRL skills (Brown et al., 2016). To create the conditions to support SRAF, cognitivist information processing and socio-cognitivist perspectives on SRL were considered (see Table 1).

Table 1. Developing students’ self-regulatory skills within assessment design.

Supporting students’ accurate interpretation of tasks requires an emphasis on making the requirements of the task explicit, supported by clear signposting of information to reduce cognitive load (i.e., the amount of resource that a learner can devote to dealing with one task given the limits of working memory capacity) (Sweller et al., 2011), and providing early opportunities to address learners’ assessment conceptions and poor use of strategies (DiFrancesca et al., 2016). To support deep understanding of assessment requirements, students need frequent opportunities to discuss and interrogate the meaning of assessment tasks in order to come to a consensus as to what counts as quality. Understanding students’ starting points and their previous experiences of success is important in tailoring SRL skills development (Douglas et al., 2016; Kim and Shakory, 2017).

In assisting students’ planning and goal-setting, emphasis should be placed on making the requirements of the task explicit, explaining the rationale of the task to support buy-in and shared goals, and exploring academics’ and students’ beliefs and conceptions about their roles in assessment. Academics working with students to agree goals that support students’ perceptions that a task is manageable and doable is important in supporting student self-efficacy and agency in the assessment process. Autonomy-supportive approaches, where students are encouraged to question their understandings, where the “rules of the game are laid bare”, and where students are given a degree of ownership of the assessment process with academics, are impactful (Panadero and Alonso-Tapia, 2013; Kumar et al., 2018; Evans, 2022).

In supporting students’ operationalization and effective monitoring and completion of assessment and feedback tasks, early opportunities to test their understanding, and explicit demonstration and modeling of effective strategies with them, are important. Students need opportunities to practice, implement and evolve their metacognitive strategy use within relevant contexts (Zimmerman, 2000). Ensuring feedback is placed effectively to allow students time to internalize and apply it, and ensuring feedback is focused on how students can enhance their skills development, with examples of how to do so, are important. Similarly, facilitating students’ self-evaluation skills requires opportunities for students to test their understanding throughout their courses by being actively involved in activities which require them to exemplify their understandings (e.g., writing of practice and final tests; marking and moderation of work; constant comparison of work) to establish the merits and limitations of different approaches (Nicol, 2022). Eva and Regehr (2011, p. 327) argued for the importance of creating situations for learners [and academics] to experience the limits of their competence in the presence of feedback, with improvement strategies tailored to those experiences, rather than relying on self-assessment alone. Emphasis is therefore placed on supporting learners to assess their own strengths and weaknesses and to adapt their strategies according to task needs.

4 Aims

In working with academics, a key aim of our research was to support the translation of SRAF into practice in higher education through the following objectives:

• Objective 1: To undertake a pilot study to clarify the factor structure of the SRAF scale.

• Objective 2: To ascertain academics’ perceived use of SRAF practice and professional development needs, and the relationship between use and needs.

• Objective 3: To explore the relevance of our findings for professional development of SRAF in higher education.

4.1 Development of the SRAF scale items

Research was undertaken with colleagues at four higher education institutions in Spain, Portugal, and the UK (two UK universities) to develop and implement a SRAF approach using the EAT framework (Evans, 2016, 2022). A multi-step methodological approach comprising the following elements was implemented:

• Identification of SRL variables that demonstrated maximum impact on student learning. An extensive narrative review of the literature on SRL was undertaken to explore the relative effectiveness of self-regulation variables on student learning outcomes (Evans et al., 2021). Emphasis was placed on high-level metacognitive self-regulatory skills, drawing on Dinsmore’s (2017) notions of conditional use (selection of appropriate strategies) and quality (using strategies well), aligned with Schneider and Preckel’s (2017) analysis, which identified that the most successful students were those who were discerning in what they attended to in learning (Evans and Waring, 2021, 2023a). The interrelationships between metacognitive, cognitive, and affective dimensions of self-regulation in assessment and feedback were acknowledged (Vermunt and Verloop, 1999; Dinsmore, 2017).

• Use of frameworks and tools to support understanding of SRAF. EAT was used with academics to explore the self-regulatory skills needed to be successful in managing the requirements of assessment and feedback in all 12 sub-dimensions of EAT (Evans, 2022; Evans et al., 2022a). A SRL skills framework evolved from EAT was used to support academics in thinking about the metacognitive skills required at each stage of a typical self-regulatory process (forethought, planning and goal-setting, performing a task and monitoring progress in relation to goals, and evaluating the extent to which goals had been met, and future actions) (Dinsmore, 2017; Seufert, 2018). Table 1 provides a summary of the SRL skills framework aligned with the sub-dimensions of EAT.

• Project leads and their teams in four institutions, as part of the wider project work on supporting SRAF skills development, engaged in two initial core SRAF training sessions each (eight in total) to explore approaches to using SRAF, with follow-up work with project teams which provided important information on contextual affordances and barriers.

• Development of reward and recognition frameworks and online resources to support and recognize academics’ achievements (Evans et al., 2022b).

4.2 Participants

The online SRAF survey was distributed via project leads in the UK, Spain, and Portugal to academic colleagues in their institutions and their wider networks to ascertain the SRAF support academics wanted (support required), and their perceived frequency of use of SRAF activities (practice frequency). This work is important given the lack of research exploring the gaps between academics’ knowledge of SRAF and their implementation of it, and the need for robust measures to assist understanding of academics’ experiences of learning about and applying SRAF in practice.

Our initial sample size for analysis was n = 207. We removed observations from four participants whom we considered to have submitted erroneous responses. Academics from 25 countries and 115 higher education institutions contributed to this research. Most responses were from Portugal (n = 49, 24%), the UK (n = 36, 18%), and Spain (n = 28, 13.7%), where lead partners were based. There were 103 males (50.5%), 95 females (46.6%), and 6 academics (2.94%) not reporting their gender. Other key countries represented in the data included Greece (n = 29, 14.2%) and Brazil (n = 20, 9.8%), with the remaining 20% of respondents coming from individual associates of core partners from 20 countries. There was a broad distribution of respondents across disciplines, with 72 (35.3%) from STEM, 50 (25.5%) from medicine and related disciplines, 42 (20.6%) from social sciences, 35 (17.2%) from arts and humanities, and two colleagues whose roles were across disciplines. One hundred and eighteen academics (57.8%) identified teaching as their primary role, and 86 (42.16%) identified research as their main role. In relation to years of experience in higher education, the profile of respondents was skewed toward those with more experience: 133 (65%) of respondents had 16 or more years’ experience, 27 (13.3%) had 11–15 years, 23 (11.3%) had 6–10 years, 15 (7.4%) had 2–5 years, and 6 (3%) had less than two years’ experience.

Ethical approval for the collection and use of data was obtained from the School Research Ethics Committee of the School of Biosciences, Cardiff University, UK, in accordance with institutional ethics policy, partner institutions’ ethical clearance arrangements, and the General Data Protection Regulation (GDPR). The purposes of the data collection were made clear to all potential respondents in line with ethical consent procedures, and all participants had the right to have their data withdrawn at any time.

4.3 The self-regulatory assessment and feedback scale

We were keen to identify participants’ perceptions of the support they required in developing SRAF, set against a marker of which SRAF practices they felt they currently used frequently.

All participants were asked to complete two versions of the questionnaire scale, one asking academics what support they required in SRAF (SR), and the other asking them about their perceived practice frequency of SRAF (PF). Participants were asked to score items on a five-point Likert scale: for SR, personal needs for training (0 = not needed, 1 = very low to 5 = most needed), and for PF, frequency of use of SRAF approaches (0 = not used, 1 = used very rarely to 5 = used very often).

The SRAF scale comprised 21 items generated from the EAT Framework and research on high-level self-regulatory skills (Dinsmore, 2017; Evans and Waring, 2021). The questions highlight the importance of addressing cognitive, affective, and metacognitive aspects of self-regulation. For example: (i) clarifying how assessment elements fit together and facilitating student access to concepts by making core concepts explicit, thereby reducing cognitive load (cognitive); (ii) explaining the rationale underpinning assessment design, and the role of the student in assessment and feedback (affective); and (iii) ensuring opportunities for students to test their understanding through repeated opportunities to engage actively in assessment processes so as to support internalization of learning processes (Sadler, 2009; Nicol, 2022) (metacognitive).

In developing the SRAF scale items, consideration was also given to the metacognitive skills needed at each stage of the self-regulatory process (Pintrich, 2004): planning and goal-setting, including activating perceptions of a task and one’s role in it; utilizing strategies to complete the task, including ongoing monitoring of progress; and evaluating the extent to which goals have been met. Self-regulatory assessment practices targeted included academics’ support of students’ (i) planning and goal setting, (ii) self-efficacy, (iii) internalization of standards, (iv) dispositions in encouraging a mastery approach to learning, (v) ability to adapt and transfer learning to new contexts, (vi) management of feedback, (vii) metacognitive skills regarding self-awareness of their strengths and weaknesses, and (viii) ability to accurately judge the quality of their own work.

In working with academics, we explored the high-level self-regulatory skills required to support effective assessment and feedback, and how these could be applied in different cultural contexts (Table 1), drawing on the 12 sub-dimensions of EAT (Evans, 2022). Importantly, we intentionally focused on participatory approaches in how academics work with students in partnership to support SRL development. Participants’ responses to the 21 items comprising the scale in relation to support required are depicted in Table 2.

Table 2. Descriptive statistics for the 21 original SRAF (SR) scale items.

5 Data analysis

5.1 Establishing the factor structure of the SRAF questionnaire

All data analyses were conducted using R (version 4.3.1; R Core Team, 2023). The SRAF scale comprised 21 items.

The SRAF scale survey was distributed online to academics, who were asked which of the 21 items they most wanted professional development support in, and which of the 21 items they perceived they used most frequently in their teaching.

We anticipated that factors arising from the underlying concepts would be correlated, hence we performed iterative exploratory factor analysis (EFA) with oblique (Promax) rotation. The EFA was undertaken to evaluate the dimensional structure and internal consistency of the SRAF scale, and reliability analysis was undertaken for academics’ perceptions of support required (SR) and practice frequency (PF). We used the mean absolute difference (MAD) to compare academics’ responses to the support required (SR) and practice frequency (PF) items.

First, we undertook initial data screening of our 21 items. Following Evans and Zhu (2023), we considered items for elimination if (i) absolute skew values were > 2.0 and absolute kurtosis values were > 7.0 (Kim, 2013), (ii) items had a low average inter-item correlation (< 0.3), (iii) items had a very high average inter-item correlation suggesting multicollinearity (> 0.9), or (iv) items had a low item-total correlation (< 0.3).

Descriptive statistics for each item were calculated using the describe function from the psych package (Revelle, 2023). The MAD was calculated by taking the mean of the absolute difference between PF and SR for each respondent. Inter-item correlations were calculated using the corrr package (Kuhn et al., 2022), and item-total correlations were calculated using the performance package (Lüdecke et al., 2021).
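To make the screening steps above concrete, the following is a minimal sketch in R; the data frame sraf and its column names sr01–sr21 are hypothetical, and base-R correlation computations are used here in place of the corrr and performance helpers.

```r
library(psych)

# Hypothetical data frame `sraf` holding the 21 SR items as columns sr01..sr21
items <- sraf[, paste0("sr", sprintf("%02d", 1:21))]

# (i) Distributional screening (Kim, 2013): |skew| > 2.0 and |kurtosis| > 7.0
desc <- psych::describe(items)
flag_dist <- abs(desc$skew) > 2 & abs(desc$kurtosis) > 7

# (ii)/(iii) Average inter-item correlation per item: flag < 0.3 or > 0.9
r <- cor(items, use = "pairwise.complete.obs")
avg_r <- (rowSums(r) - 1) / (ncol(items) - 1)  # mean correlation, excluding self

# (iv) Corrected item-total correlation: flag < 0.3
total <- rowSums(items, na.rm = TRUE)
item_total <- sapply(seq_len(ncol(items)), function(j)
  cor(items[[j]], total - items[[j]], use = "pairwise.complete.obs"))

data.frame(item = colnames(items),
           flag = flag_dist | avg_r < 0.3 | avg_r > 0.9 | item_total < 0.3)
```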

We deemed items suitable for EFA if (i) the Kaiser-Meyer-Olkin (KMO) factor adequacy score was > 0.7 (Kaiser, 1974), (ii) Bartlett’s test of sphericity returned a significant result (p < 0.05), and (iii) the determinant of the correlation matrix was > 0.00001 (Yong and Pearce, 2013). To determine the maximum number of factors to explore in each EFA, we considered multiple sources of evidence: (i) the number of eigenvalues exceeding Kaiser’s criterion (> 1) (Kaiser, 1974), (ii) visual inspection of scree plots (Ledesma et al., 2015), (iii) parallel analysis using both principal components and common factor analysis extraction methods (Hayton et al., 2004), (iv) Minimum Average Partial (MAP) tests (Velicer, 1976), (v) sequential chi-square model tests (Auerswald and Moshagen, 2019), and (vi) empirical Kaiser criterion scores (Braeken and van Assen, 2017).
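As an illustration, a minimal sketch of these factorability and retention checks using the psych package is given below, assuming items holds the screened item responses; the sequential chi-square model tests and empirical Kaiser criterion are implemented in other packages and are omitted here.

```r
library(psych)

r <- cor(items, use = "pairwise.complete.obs")

# Factorability: KMO MSA > 0.7, significant Bartlett's test,
# and determinant of the correlation matrix > 0.00001
psych::KMO(r)
psych::cortest.bartlett(r, n = nrow(items))
det(r)

# Number-of-factors evidence: Kaiser criterion (eigenvalues > 1),
# scree plot with parallel analysis (PCA and common factor methods),
# and Velicer's Minimum Average Partial (MAP) test via VSS
eigen(r)$values
psych::fa.parallel(items, fa = "both")
psych::VSS(items, n = 4)
```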

We examined EFA results in the context of the percentage cumulative variance explained (Costello and Osborne, 2005), and according to factor loadings. Specifically, we eliminated items with factor loadings < 0.45 (Hair et al., 2010) or with cross-loadings on two or more factors without a difference of > 0.3. Following EFA, reliability analysis was conducted to determine the internal consistency of items loading onto their associated factors. This was done using Cronbach’s alpha (α), with items deemed reliable if α > 0.7 (Taber, 2018).
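A sketch of one EFA pass and the reliability step under these rules follows; the factoring method (fm = "ml") and the example item-to-factor assignment are illustrative assumptions rather than the study’s actual choices.

```r
library(psych)

# One EFA iteration: two factors with oblique (Promax) rotation
efa <- psych::fa(items, nfactors = 2, rotate = "Promax", fm = "ml")
print(efa$loadings, cutoff = 0.45)  # inspect; drop items < 0.45 or cross-loading
                                    # on two factors without a > 0.3 difference

# Cronbach's alpha for the items retained on one factor (assignment hypothetical)
f1 <- items[, c("sr05", "sr12", "sr13")]
psych::alpha(f1)$total$raw_alpha    # deemed reliable if > 0.7
```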

6 Results

6.1 Confirming the factors underpinning the SRAF scale

Descriptive statistics for individual items of the SRAF survey for Support Required (SR) are presented in Table 2. The 21 SRAF scale items entered the first EFA using data from 173 participants; there were 30 missing cases. EFA resulted in the elimination of four items (items 1, 10, 15, 16) due to low loadings (< 0.45); two items (5 and 14) were borderline, and a decision was made to retain these.

The final scale with the remaining 17 items yielded a Kaiser MSA value of 0.93 and a significant Bartlett’s test of sphericity (χ2 (136) = 2102.00, p < 0.001). These results verified that the SRAF (SR) sample was suitable for factor analysis. Two factors had eigenvalues over 1, and inspection of the scree plot further supported a two-factor solution.

The final two-factor solution (shown in Table 3) accounted for 57% of the overall variance (factor 1 containing nine items, 31%; factor 2 containing eight items, 26%), with an overall scale reliability of 0.95. Internal reliabilities for the two factors suggest each subscale is a reliable measure (α = 0.94 for factor 1, with item-total correlations ranging from 0.53 to 0.65; and α = 0.90 for factor 2, with item-total correlations ranging from 0.49 to 0.59). From review of the loadings on the two factors, factor 1 represented Supporting Students’ SRAF Skills Development, and factor 2 represented Creating the Conditions for SRAF. According to the factor correlation matrix there was a strong positive correlation between the two factors (r = 0.75), which was expected given the highly interconnected nature of the constructs we were exploring. The same two-factor solution was verified in running iterative EFA on academics’ responses to the practice frequency items, with two factors having eigenvalues over 1. However, four questionnaire items (questions 2, 9, 11, 18) were removed as their loadings were below 0.45 for the practice frequency questions.

Table 3. SRAF (SR) summary of exploratory factor analysis with Cronbach’s alpha (n = 173).

6.2 Academics’ perceptions of support required in developing SRAF approaches

The means and standard deviations for the 17 items comprising the SRAF Support Required (SR) scale are provided in Table 2. Overall, academics wanted most support with students’ metacognitive strategy development, and least support with assessment literacy and supporting students’ cognitive skills (e.g., managing cognitive load). Academics most wanted assistance with supporting students’ monitoring and evaluation skills, and their self-awareness of strengths and weaknesses in relation to course demands. Within Creating the Conditions for SRAF, academics most wanted support with how to embed self-assessment within assessment design.

6.3 The relationship between support required and reported frequency of use of SRAF

Following the approach used by Dinsmore et al. (2024), we calculated the mean absolute difference (MAD) to explore the gap between the SRAF approaches academics reported wanting most support with, and those they reported using most (Table 4). A larger MAD represents a bigger discrepancy between the scores for practice frequency and support required. MAD was calculated for the 13 items that loaded above 0.45 on both sets of SRAF questions (practice frequency (PF) and support required (SR)).
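As a minimal sketch, assuming matched respondent-by-item score matrices pf and sr for the 13 shared items (object names hypothetical), the MAD values reported in Table 4 could be computed as follows.

```r
# Per-item MAD (as in Table 4): mean over respondents of |PF - SR|
mad_by_item <- colMeans(abs(pf - sr), na.rm = TRUE)
sort(mad_by_item, decreasing = TRUE)  # larger MAD = bigger use/need discrepancy

# Per-respondent MAD, as described in the data analysis section
mad_by_person <- rowMeans(abs(pf - sr), na.rm = TRUE)
```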

Table 4. Mean absolute difference between responses to frequency of practice and support required.

With the exception of item 10 (embedding self-assessment within assessment design), academics reported giving most attention to items comprising Creating the Conditions for SRAF, with emphasis on supporting students’ assessment literacy, reflecting the emphasis placed on this in research over the last ten years (Zhu and Evans, 2022). Academics reported the greatest focus on supporting students’ cognitive skills (e.g., access to assessment and feedback by reducing cognitive load, signposting key skills, and explaining the rationale underpinning assessment). From an assessment design perspective, academics focused efforts on the placement of feedback to maximize support for student learning and on rewarding collaborative practices. An emphasis on Supporting Students’ SRAF Skills Development was less evident, with academics reporting least attention being placed on engaging students in developing assessment criteria, supporting monitoring and evaluation skills, and agreeing goals for learning.

Figure 2 illustrates the relationship between PF and SR for each item of SRAF that could be compared across the two SRAF surveys (13 items in total). Three clusters were identified in the data from the four possible combinations (Evans and Waring, 2023a):

• Cluster 1: high use (PF) and high interest (SR) (items 8, 17)

• Cluster 2: low use (PF) and high interest (SR) (items 5, 12, 13, 14, 19, 20, 21)

• Cluster 3: low use (PF) and low interest (SR) (no items)

• Cluster 4: high use (PF) and low interest (SR) (items 3, 4, 6, 7)

Figure 2. The relationship between practice frequency and support required for SRAF items.

Perceived high usage and high interest items (Cluster 1) relate to supporting students’ feedback skills, and their abilities to work collaboratively in developing co- and shared regulatory practices with peers. High usage and low interest items (Cluster 4) relate to areas that academics feel are embedded in their practice and do not need more support with; these include, for example, supporting students’ cognitive access to assessment and aspects of assessment design that ensure feedback feeds forward. Low usage and high interest items (Cluster 2) form the largest group in our findings and are largely focused on supporting students’ metacognitive skills development and embedding this within assessment design. These items include supporting students’ goal-setting, engagement in co-creation of assessment criteria, use of data with students, and monitoring and evaluation skills: areas reported as less frequently used in practice by academics.

In reviewing missing items (those that did not load at sufficient levels on either factor), Item 1, ‘Awareness of students’ starting points, and ongoing review of progress’, is an area known to be very important in self-regulated learning in impacting the effectiveness of instructional techniques (Fyfe and Rittle-Johnson, 2016); however, academics reported relatively low usage of, and low interest in receiving more support in, this area. Evans and Waring (2023a) highlight the importance of using SRAF tools with colleagues to surface the relevance of key constructs and to demonstrate how to integrate SRAF into practice in a manageable way that is relevant to context. Academics in training sessions integral to this research identified that the main reason for not using certain SRAF approaches was not knowing about them in the first place, providing face validity for Evans and Waring’s Cluster 3 category.

7 Discussion

7.1 Confirming reliability, validity, and underpinning SRAF constructs

Construct validity was established through exploratory factor analysis (EFA). Our end product was a two-factor scale. Content validity and internal consistency reliability provided evidence of construct validity, which was further supported by the theoretical underpinnings of the SRAF scales.

Two constructs, Supporting Students’ SRAF Skills Development and Creating the Conditions for SRAF, were established. Academics placed most emphasis on supporting students’ metacognitive skills development, and less attention on motivational aspects such as goal-setting and the planning aspects of self-regulation. This finding was congruent with academics’ reported focus on Creating the Conditions for SRAF, with emphasis on cognitive skills development. Our findings, in many respects, are similar to those of Dinsmore et al. (2024) in that academics reported greater use of cognitive strategies compared to metacognitive and motivational dimensions of learning. Identifying that these themes are common across very different samples suggests the potential generalizability of these findings, which would need further verification across wider contexts. These results are not surprising given that research suggests less emphasis is being placed on developing students’ goal-setting strategies compared to other aspects of self-regulation (e.g., feedback-using skills) in higher education (Evans and Waring, 2021, 2023a,b). This finding is congruent with evidence suggesting that much emphasis has been placed on reflection on feedback on task performance at the expense of time spent on supporting students’ planning and goal-setting in assessment and feedback (Farrell et al., 2017). This matters because of the importance of planning skills and goal development in impacting student outcomes, and suggests an important gap between practice and research that needs to be addressed (Panadero, 2017).

Similarly, academics who reported high usage of items loading on Creating the Conditions for SRAF were likely to report lower usage of items loading on Supporting Students’ SRAF Skills Development. To be most effective, SRAF requires both aspects to work in unison (Evans, 2013). The findings may reflect the stage of professional development that academics are at, in that staff identified the need for support in facilitating SRAF skills development with students.

7.2 Strengths and areas for development

7.2.1 Scale considerations

One of the greatest strengths of the SRAF scale was its practical use as a powerful heuristic to guide discussions on effective SRAF with academics. A common criticism of self-rating scales is the degree of discrepancy between actual and perceived behaviors (Chrystal et al., 2019; Uher, 2023). The SRAF scale focuses on perceived training needs and estimates of frequency of use of SRAF. The scale was useful in supporting academics to identify high-level SRL skills and in showing them how to implement SRAF using EAT. It provided a valuable mechanism to support discussions about what effective SRAF practice looks like in different contexts. A core aspect of building SRAF competency is unpacking conceptions and beliefs about what constitutes good practice and why.

In scrutinizing the properties of the SRAF scale, it demonstrated strong internal reliability. The sample size was adequate for preliminary EFA, but a larger sample would be needed to perform second-stage confirmatory factor analysis. This initial pilot was valuable in identifying the strengths of the scale, but also indicated areas where it could be further refined.

While the same two underpinning factors were identified in questions focusing on the support academics wanted and their perception of practice frequency of SRAF, the discrepancy between the number of items that loaded on questions about support and those that loaded on frequency of use of SRAF was a concern. Some of the pilot SRAF items that would have been expected to load on the two identified dimensions did not have loadings above 0.45, our cut-off point for further EFA, suggesting the need for further refinement of the items comprising the scale. In looking at academics’ scores on some of these excluded items, it is interesting to note that Item 1, about reviewing data on students’ starting points and ongoing checking of progress, is integral to developing an inclusive culture to support SRAF. Effective use of data to enhance assessment design is a significant issue for higher education (Evans et al., 2019). Item 15, on working with students to support their understanding of assessment criteria, is fundamental to students being clear about the expectations of assessment. Item 16, on supporting students’ development of a deep approach to learning, should be central to assessment design, but it is a complex construct. Traditionally a deep approach is associated with the intention to understand, but it also requires understanding of the process of learning within specific contexts (McCune and Entwistle, 2011), and discernment in knowing what the most appropriate strategies are to master a task (Evans and Vermunt, 2013). This construct needs further unpacking as it has many constituent parts. The complexity inherent in individual self-regulatory constructs is a key challenge in SRL skills research (Carless and Boud, 2019). For example, in looking at evaluative judgment, Luo et al. (2023) identified five constructs involved, including understanding of the context (i.e., assessment literacy) and the interplay of metacognitive, affective, and cognitive components, which aligns with EAT (Evans, 2016, 2022). The key challenge is distilling the essential items that can best support academics’ understanding of the key factors at play within SRAF, and across contexts.

Given the complexities of self-regulatory constructs and the need to develop clear understandings of them within discipline contexts (Evans and Waring, 2023a), there is a need to refine these items to explore different facets of them. The SRAF (SR) scale inter-item correlations suggest, especially for factor 1, Supporting Students’ SRAF Skills Development, that the scale could be enhanced to capture a broader bandwidth of the construct (Piedmont, 2014).

Initial findings from this preliminary study are positive given alignment with comparable studies and testing of ideas with colleagues from very different cultural contexts (institution, country, discipline). Further work is needed to refine and test items with a larger sample that will permit further testing of the SRAF scale’s properties through exploratory and confirmatory factor analyses (CFA) to establish its suitability for use with different samples. Our results to date are promising in this respect, given the similar findings in Dinsmore et al. (2024) when focusing on skills development (factor 1). In working collaboratively with academics and students it is possible to verify individual and team perceptions of strengths in areas of SRAF practice through peer feedback and open dialog around understanding of concepts, and evidence of the effectiveness of SRAF approaches.

Subject to satisfactory CFA results, convergent validity can be explored through utilization of aligned frameworks and tools:

i. A relationship between Dinsmore et al.’s (2024) self-regulatory assessment scale and dimension 1 of SRAF, Supporting Students’ SRAF Skills Development, would be expected as they are both measuring self-regulatory skills use. Key differences are that Dinsmore et al.’s scale places greater emphasis on task value and broader metacognitive skills, whereas the SRAF scale, drawing on the EAT framework, places greater emphasis on partnership with students in supporting different phases of the self-regulatory cycle, aligned to very specific SRAF practices.

ii. A relationship would be expected between the assessment engagement scale (AES) of Evans and Zhu (2023), and Creating the Conditions to Support SRAF (factor 2) given that the AES is focused on the extent to which assessment design supports SRAF, suggesting there should be strong alignment.

iii. Predictive validity can be explored through academics’ perceived engagement in SRAF and perceived self-efficacy in their ability to implement SRAF, in impacting the quality of assessment design, and the extent to which students perceive that assessment design enables them to engage in SRAF (using the Assessment Engagement Scale (student version); Evans and Zhu, 2023).

7.2.2 Wider methodological strengths and limitations

In developing the SRAF scale, a key strength of our sample was its representativeness of the higher education academic community: it comprised international academics from a wide range of disciplines and of research and teaching roles, and was well balanced with respect to gender. However, the breadth of the sample limited certain types of analyses at the individual institution level.

The testing of SRAF concepts with colleagues across different cultural contexts was effective in maximizing the utility and relevance of SRAF tools for an international audience, supporting translation of ideas into practice.

Focusing attention on academics' perceptions of their use of SRAF and the professional development they wanted in SRAF was powerful in supporting the reframing of professional development activities to focus on key SRAF knowledge and skills gaps in specific contexts. Our research draws attention to the importance of exploring how academics assess the quality of their SRAF practice, and what evidence informs this process. A key question arising from this research was how those leading SRAF training are supported in bridging SRAF knowledge and practice gaps. In this article, our focus was purely on academics' perceptions of this process. Further work is recommended on how professional development staff perceive the affordances and barriers in supporting the quality of SRAF professional development training aligned with the SRAF skillsets required within specific disciplines.

7.3 Implications of academics' reported use of, and interest in, SRAF for the design of SRAF professional development

Academics reported greater use of cognitive strategies than metacognitive ones, as also identified by Dinsmore et al. (2024) in a very different context. In contrast to Dinsmore et al., our sample of academics demonstrated high interest in learning more about how to support students' goal-setting and academic self-efficacy. This finding may reflect the purposive nature of our sample: these academics were engaged in networks in which we had been promoting the importance of attending to affective and motivational dimensions of self-regulation, including self-efficacy, goal-setting, and planning.

SRAF requires discernment in knowing which strategies to use in any given context, and how to use them well (Dinsmore, 2017); this is especially pertinent to SRAF professional development in higher education (Evans and Waring, 2023a). Dinsmore et al. (2024) argue that the nature of strategies used to support academics' professional development in SRAF depends on academics' use of, and interest in, SRAF. Expectancy-value theory (Wigfield and Eccles, 2000) is relevant to Dinsmore et al.'s argument: to invest in SRAF training, academics need a reasonable expectation that such training will benefit them and their students.

Evans and Waring (2023a) argue that valuing a task is insufficient in itself to secure academics' engagement in SRAF, drawing on the control-value theory of achievement emotions (Pekrun et al., 2006). In supporting academics' engagement in SRAF they highlight the importance of academics' perceptions of competency (e.g., expectancy of successful outcomes, productive relationships with students) and of support from others (colleagues, department, institution). The interaction of these variables shapes academics' choice of metacognitive, cognitive, and affective strategies (e.g., help-seeking and managing one's environment), with consequences for performance, satisfaction, and motivation, which in turn affect emotions and perceptions of competency, task value, and goal orientation (Evans and Waring, 2021, p. 462).

Figure 3, adapted from Evans and Waring (2023a) and drawing on Dinsmore et al. (2024), highlights that professional development strategies should take account of academics' perceived use of, and interest in learning more about, SRAF. There is a range of challenges in managing SRAF professional development, depending, for example, on academics' dispositions, interests, and the contexts in which they work. Colleagues may report high usage of a particular strategy, but ensuring shared understanding of what constitutes quality is difficult without an ongoing, co-ordinated, and high quality SRAF professional development offer, an area in great need of attention in higher education (Ruiz and Panadero, 2023). Alternatively, academics may report high use of SRAF practices and little need for further development in them; challenging such ingrained positions on practice is difficult and requires a strong evidence-based approach to convince and empower individuals to change established ways of working. Where low usage of SRAF is reported, barriers to access need to be addressed, and brokers within disciplines are imperative to support change. Academics involved in SRAF networking activities highlighted that low use of SRAF was often related to lack of awareness of it and of the strategies to support its implementation (Evans et al., 2019, 2021). In this respect we argue for a coherent, institution-wide communications strategy that supports networking and the sustainability of SRAF through dissemination of effective research strategies to evaluate the relative value of SRAF approaches, building a sustainable SRAF research and practice community.

Figure 3. The perceived use of, and interest in developing, self-regulatory assessment and feedback skillsets, and associated challenges. Adapted from Evans and Waring (2023a).

Through working in practice with academics and using the exploratory tools described in this article, it was possible to identify key challenges impacting use of SRAF; these align closely with those found in previous studies (Evans et al., 2019, 2021). In Figure 4, these factors are grouped into individual and organizational factors that work in unison to impact knowledge of, engagement in, and successful application of SRAF. The individual factors closely align with Winne's (1996) identification of five key factors implicated in self-regulated learning (i.e., global dispositions, domain knowledge, knowledge of tactics and strategies, performance of tactics and strategies, and regulation of tactics and strategies).

Figure 4. Building SRAF capacity (Evans and Waring, 2023b).

An emphasis is needed on academics' self-regulation of assessment and feedback if they are to be best supported in developing these practices with their students (Russell et al., 2022). Using SRAF requires both a cognitive shift in academics' understanding of how best to support students' acquisition of SRL skills within a specific domain (Simper, 2020) and a seismic cultural shift in reconceptualizing assessment as a participatory process with a different role for students (Faherty, 2015). Simper (2020) aligns such changes in academics' thinking about assessment with the notion of threshold concepts (Meyer and Land, 2003). Changing one's thinking about assessment is a challenging process which often involves changes in one's ontological positioning (Land et al., 2005), and which may conflict with other colleagues' views on assessment and the institution's view of assessment. SRAF requires academics and students to reposition themselves and their roles in relation to each other (Chan and Chen, 2023), where assessment is no longer "done unto students", and where students, while not ultimate authorities in assessment (Cook-Sather, 2014), have valid input into it (Evans, 2013, 2016). The challenge for academics is in supporting students to take on a number of different roles in assessment (input into assessment design, feedback, and marking), which requires focused training for students in how to take on these roles and in understanding the specific requirements associated with each role.

The challenges impacting academics' development and use of SRAF in practice, drawing on information-processing and socio-cultural perspectives, are highlighted in Figure 4. At the individual level, our work with academics has identified three core areas in supporting SRAF: access to concepts, openness to new approaches, and perceived political capital in leveraging SRAF with peers and students (Evans et al., 2019). We argue that SRAF training needs to attend to these different areas and to explore the relationships between them. While understanding of individual differences in learning could be encapsulated within pedagogical expertise, we have created a separate category for it given the need to focus training on this area.

Myyry et al. (2022) highlight the importance of addressing teacher self-efficacy. Academics' perceptions of their agency and advocacy in leveraging change emerged as key factors in discussions around the challenges of implementing SRAF. Academics mentioned difficulties in accessing SRAF concepts that were, in many cases, totally new to them; they needed access to the language and theoretical framing underpinning the concepts, and help in seeing how these ideas could be applied within their discipline. The EAT framework was useful in providing a concrete route map of how to apply SRAF to practice, and in explicitly labeling the key self-regulatory processes implicated in assessment (Table 1).

Figure 4 highlights the importance of institutional alignment in supporting academics' implementation of SRAF through policy and strategy emphasizing students' meaningful engagement in assessment, underpinned by evidence-informed practice. A coherent, integrated and sustainable assessment strategy must take account of the roles of academics, professional services, technical support teams, wider stakeholders, and students in assessment and feedback activities. The importance of effective infrastructure that takes away the "heavy lifting" of assessment (e.g., through automation of basic functions, agile policy to enable dynamic change in assessment, and efficient marking and moderation systems) is emphasized. Prioritizing time for academics to work on SRAF-embedded assessment designs and resources is essential to ensure aligned assessment processes focused on supporting the progressive development of students' self-regulatory skills. Empowering academics by recognizing and rewarding SRAF, and by supporting the building of collaborative communities that enable the sustained development of effective SRAF, is also important (Evans et al., 2022a,b). Central to Figure 4 is attending to students' engagement in SRAF, which parallels key constructs identified as central for academics (e.g., domain knowledge, processing styles, conceptions of assessment, and confidence and willingness to engage). Greater understanding of the attributes that students bring into higher education is essential to complete the SRAF learning cycle.

8 Conclusions: implications for evolving SRAF research and practice

This article makes an important contribution to advancing assessment and feedback practice in higher education by highlighting the importance of supporting academics’ SRAF development if they are to effectively facilitate their students’ SRL skills development. This focus on SRAF is essential in supporting students’ learning in higher education. In bridging the research-practice divide, this article outlines conceptual and practical frameworks and tools to support the translation of SRAF concepts into practice. A considered and research-informed approach to academics’ professional development in SRAF is advocated to support academics in evaluating their practice, and in enabling focused attention on what matters in assessment and feedback as part of a self-regulatory approach.

In advancing understanding of SRAF, we identified two factors underpinning the SRAF scale: Supporting Students' SRAF Skills Development and Creating the Conditions for SRAF. The strong internal reliability of the scale supported its use with academics, although further work is needed to fully capture the high-level SRL skills we were focusing on, given the complexity of the SRL construct and the need to test the scale items on larger samples. In this pilot study we focused on those self-regulatory behaviors known to have the greatest impact on learning. Data captured from academics through this initial data-gathering stage will be used to refine the scale items to capture greater breadth of SRAF.
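
For readers wishing to run this kind of reliability check on their own data, a minimal sketch with the psych package (Revelle, 2023) follows; the per-factor data frames are hypothetical stand-ins rather than our analysis scripts.

    library(psych)

    # Cronbach's alpha per factor; 'factor1_items' and 'factor2_items' are
    # hypothetical data frames of responses to the items loading on each factor
    alpha(factor1_items)$total$raw_alpha  # Supporting Students' SRAF Skills Development
    alpha(factor2_items)$total$raw_alpha  # Creating the Conditions for SRAF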

One of the greatest benefits of the SRAF scale was its use as a practical learning tool: a heuristic to guide academics in exploring high-level SRL skills with their students. The SRAF scale was a valuable measure for exploring academics' views on the areas in which they perceived they needed most help in developing SRAF, and for comparing these with their reported frequency of SRAF use. The conceptual and practical tools developed to support SRAF implementation were powerful in raising awareness of the importance of developing students' metacognitive skills, as evidenced in academics' preference for training in this area, and in promoting a shift to a more evidence-informed approach to supporting students' SRAF. Our findings highlight the importance of effective dissemination of information about core SRAF practices and how to implement them.

Further work is needed to better understand the processes involved in supporting academics’ understanding and use of SRAF. Figure 4 highlights a range of factors impacting academics’ use of SRAF which need consideration in the design of professional development to support academics’ understanding of SRAF.

A key challenge in supporting SRAF research and practice in higher education is the complex interplay between the numerous self-regulatory concepts and processes involved, in interaction with individuals and the contexts in which they are working. Cassidy (2011) argued that it is the aggregated effects of many components that determine the efficacy of the self-regulation process, and Jansen et al. (2019) argued that there may be many different, equally valid models of how to support self-regulation. A key priority in supporting SRAF with academics is well-designed methodologies that enable exploration of how the different elements of self-regulation come together to impact outcomes for academics and their students in specific contexts.

Individual differences are implicated in the effectiveness of SRAF (Dörrenbächer and Perels, 2016), and academics need better understanding of their role in supporting effective self-regulation (Panadero, 2023). Further research is required, as integral to SRAF professional development, on how students' self-regulatory profiles impact their engagement in SRAF and the strategies they use, and on how best to support them. Greater focus is also needed on collaborative self-regulatory approaches. In reality, regulating oneself, being supported by others (co-regulation), and regulating together (shared regulation) are all present in many aspects of SRAF, and need consideration in training to support the most appropriate use of different regulation strategies in relation to the nature of the task.

Investing in coherent and sustained programs of SRAF professional development is important in supporting high quality and efficient assessment design that benefits academics' and students' mastery of assessment and feedback. In this article we have highlighted how the EAT framework provides a useful structure to facilitate conversations about how to actualize SRAF, but it needs brokers on the ground who can translate the work to a specific disciplinary context, determining which SRL skills are prioritized for development, and for whom. Extensive opportunities are needed for dialog to support shared understandings and effective use of SRAF pedagogies underpinned by high quality research. To implement effective SRAF, academics need support in understanding their own self-regulatory behaviors if they are to be best placed to support their students' acquisition of such skills.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

The studies involving humans were approved by the School Research Ethics Committee of the School of Biosciences, Cardiff University, UK. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.

Author contributions

CE core work included Conceptualization, Writing, Formal analysis, Methodology, Resources, and Visualisation. WK core work included Data Curation, Software, Formal analysis, Visualisation, Writing. SR, SA-D, RdMG, and KD core work included the Research and Investigation process.

Funding

The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This work was supported by funding from Erasmus+ (British Council, in partnership with Ecorys UK) through the ‘Enhancing equity, agency, and transparency in assessment practices in higher education’ project award (Grant Number: KA203-12DAFAAA7-EN).

Acknowledgments

The authors are grateful to all academics who contributed to this research, and especially the contributions of Manuel João Costa, and Flávia Vieira from the University of Minho, Portugal.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Auerswald, M., and Moshagen, M. (2019). How to determine the number of factors to retain in exploratory factor analysis: a comparison of extraction methods under realistic conditions. Psychol. Methods 24, 468–491. doi: 10.1037/met0000200

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice Hall.

Bandura, A. (1991). Social cognitive theory of self-regulation. Organ. Behav. Hum. Decis. Process. 50, 248–287. doi: 10.1016/0749-5978(91)90022-L

Bandura, A. (2001). Social cognitive theory: an agentic perspective. Annu. Rev. Psychol. 52, 1–26. doi: 10.1146/annurev.psych.52.1.1

Bass, R. (2012). Disrupting ourselves: The problem of learning in higher education. EDUCAUSE Review 47, 23–33.

Bembenutty, H., White, M. C., and Vélez, M. R. (2015). “Self-regulated learning and development in teacher preparation training” in Developing self-regulation of learning and teaching skills among teacher candidates (Dordrecht, Netherlands: Springer Briefs in Education).

Boud, D., and Molloy, E. (2013). Rethinking models of feedback for learning: the challenge of design. Assess. Eval. High. Educ. 38, 698–712. doi: 10.1080/02602938.2012.691462

Braeken, J., and van Assen, M. A. (2017). An empirical Kaiser criterion. Psychol. Methods 22, 450–466. doi: 10.1037/met0000074

Brown, G. T. L., Peterson, E. R., and Yao, E. S. (2016). Student conceptions of feedback: impact on self-regulation, self-efficacy, and academic achievement. Br. J. Educ. Psychol. 86, 606–629. doi: 10.1111/bjep.12126

Büchele, S. (2023). Navigating success in higher education: engagement as a mediator between learning strategies and performance in mathematics. Assess. Eval. High. Educ. 48, 1356–1370. doi: 10.1080/02602938.2023.2230387

Carless, D., and Boud, D. (2019). The development of student feedback literacy: enabling uptake of feedback. Assess. Eval. High. Educ. 43, 1315–1325. doi: 10.1080/02602938.2018.1463354

Cassidy, S. (2011). Self-regulated learning in higher education: identifying key component processes. Stud. High. Educ. 36, 989–1000. doi: 10.1080/03075079.2010.503269

Chan, C. K. Y., and Chen, S. W. (2023). Student partnership in assessment in higher education: a systematic review. Assess. Eval. High. Educ. 48, 1402–1414. doi: 10.1080/02602938.2023.2224948

Chrystal, M., Karl, J. A., and Fischer, R. (2019). The complexities of "minding the gap": perceived discrepancies between values and behavior affect well-being. Front. Psychol. 10:736. doi: 10.3389/fpsyg.2019.00736

Cook-Sather, A. (2014). Student-faculty partnership in explorations of pedagogical practice: a threshold concept in academic development. Int. J. Acad. Dev. 19, 186–198. doi: 10.1080/1360144X.2013.805694

Costello, A. B., and Osborne, J. (2005). Best practices in exploratory factor analysis: four recommendations for getting the most from your analysis. Pract. Assess. Res. Eval. 10:7. doi: 10.7275/jyj1-4868

Dekker, I., Schippers, M., and Van Schooten, E. (2023). Reflective goal-setting improves academic performance in teacher and business education: a large-scale field experiment. J. Res. Educ. Effect., 1–29. doi: 10.1080/19345747.2023.2231440

Dent, A. L., and Koenka, A. C. (2016). The relation between self-regulated learning and academic achievement across childhood and adolescence: a meta-analysis. Educ. Psychol. Rev. 28, 425–474. doi: 10.1007/s10648-015-9320-8

DiFrancesca, D., Nietfeld, J. L., and Cao, L. (2016). A comparison of high and low achieving students on self-regulated learning variables. Learn. Individ. Differ. 45, 228–236. doi: 10.1016/j.lindif.2015.11.010

Dignath, C., Buttner, G., and Langfeld, H. P. (2008). How can primary school students learn self-regulated learning strategies most effectively? A meta-analysis on self-regulation training programmes. Educ. Res. Rev. 3, 101–129. doi: 10.1016/j.edurev.2008.02.003

Dinsmore, D. (2017). Examining the ontological and epistemic assumptions of research on metacognition, self-regulation and self-regulated learning. Educ. Psychol. 37, 1125–1153. doi: 10.1080/01443410.2017.1333575

Dinsmore, D., Rakita, G., and Kulp, A. (2024). “Promises and challenges in developing self-regulatory assessment and feedback practices in diverse higher education contexts,” in Research handbook on innovations in assessment and feedback in higher education: Implications for learning and teaching. eds. C. Evans and M. Waring (Cheltenham, Gloucestershire, UK: ELGAR Publishing).

Donker, A. S., de Boer, H., Kostons, D., Dignath van Ewijk, C. C., and van der Werf, M. P. C. (2014). Effectiveness of learning strategy instruction on academic performance: a meta-analysis. Educ. Res. Rev. 11, 1–26. doi: 10.1016/j.edurev.2013.11.002

Dörrenbächer, L., and Perels, F. (2016). Self-regulated learning profiles in college students: their relationship to achievement, personality, and the effectiveness of an intervention to foster self-regulated learning. Learn. Individ. Differ. 51, 229–241. doi: 10.1016/j.lindif.2016.09.015

Douglas, K., Barnett, T., Poletti, A., Seaboyer, J., and Kennedy, R. (2016). Building reading resilience: re-thinking reading for the literary studies classroom. High. Educ. Res. Dev. 35, 254–266. doi: 10.1080/07294360.2015.1087475

Dresel, M., Schmitz, B., Schober, B., Spiel, C., Ziegler, A., Engelschalk, T., et al. (2015). Competencies for successful self-regulated learning in higher education: structural model and indications drawn from expert interviews. Stud. High. Educ. 40, 454–470. doi: 10.1080/03075079.2015.1004236

Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., and Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: promising directions from cognitive and educational psychology. Psychol. Sci. Public Interest 14, 4–58. doi: 10.1177/1529100612453266

Eva, K. W., and Regehr, G. (2011). Exploring the divergence between self-assessment and self-monitoring. Adv. Health Sci. Educ. 16, 311–329. doi: 10.1007/s10459-010-9263-2

Evans, C. (2013). Making sense of assessment feedback in higher education. Rev. Educ. Res. 83, 70–120. doi: 10.3102/0034654312474350

Evans, C. (2016). Enhancing assessment feedback practice in higher education: The EAT framework. Southampton, UK: University of Southampton. Available at: https://eatframework.com

Evans, C. (2022). The EAT framework. Enhancing assessment feedback practice in higher education. Cardiff, Wales: Inclusivehe.org with Cardiff University. Available at: https://inclusiveHEheorg.files.wordpress.com/2022/09/2022_eat-framework_220922.pdf

Evans, C., Amici-Dargan, S., Rutherford, S., Vieira, F., and Erasmus+ Team (2022a). A Guide to Using the EAT Assessment Framework. A Resource for Developing Assessment Practice in Higher Education. An Erasmus+ production. Cardiff, Wales: Inclusivehe.org with Cardiff University. Available at: https://inclusiveheorg.files.wordpress.com/2022/12/using_eat_guide-2022_12_2022.pdf

Evans, C., Amici-Dargan, S., and ERASMUS+ Team (2022b). Assessment Accreditation Guidance. Acknowledging and Rewarding Excellence in Assessment Practices in Higher Education. An Erasmus+ production. Cardiff, Wales: Inclusivehe.org with Cardiff University. Available at: https://inclusiveheorg.files.wordpress.com/2023/04/assessment_standards_accreditation_guidance_2023.pdf

Evans, C., Howson, C., Forsythe, A., and Edwards, C. (2021). What constitutes high quality higher education pedagogical research? Assess. Eval. High. Educ. 46, 525–546. doi: 10.1080/02602938.2020.1790500

Evans, C., Rutherford, S., Vieira, F., and Erasmus+ Team (2021). A Self-Regulatory Approach to Assessment. An Erasmus+ production. Cardiff, Wales: Inclusivehe.org with Cardiff University. Available at: https://inclusiveheorg.files.wordpress.com/2022/08/self-regulation_in_assessment_report-2021.pdf

Evans, C., and Vermunt, J. (2013). Styles, approaches and patterns in student learning. Br. J. Educ. Psychol. 83, 185–195. doi: 10.1111/bjep.12017

Evans, C., and Waring, M. (2021). “Enhancing students’ assessment feedback skills within higher education” in Oxford research encyclopedia of education. ed. L-f. Zhang (New York, NY: Oxford University Press).

Evans, C., and Waring, M. (2023a). “Prioritising a self-regulatory assessment and feedback approach in higher education” in Research handbook on innovations in assessment and feedback in higher education: Implications for learning and teaching. eds. C. Evans and M. Waring (Cheltenham, Gloucestershire, UK: ELGAR Publishing).

Evans, C., and Waring, M. (2023b). Self-regulatory assessment and feedback approaches that make sense in higher education. INRAP Webinar. Available at: https://inclusiveheorg.files.wordpress.com/2023/12/evans-and-waring-sraf-december-inrap2.pdf

Evans, C., and Zhu, X. (2023). The development and validation of the assessment engagement scale. Front. Psychol. 14:1136878. doi: 10.3389/fpsyg.2023.1136878

Evans, C., Zhu, X., Winstone, N., Balloo, K., Hughes, A., and Bright, C. (2019). Maximising student success through the development of self-regulation. Office for Students ABSS Report (L16). Southampton, University of Southampton.

Faherty, A. (2015). Developing enterprise skills through peer-assessed pitch presentations. Educ. Train. 57, 290–305. doi: 10.1108/ET-02-2014-0013

Farrell, L., Bourgeois-Law, G., Ajjawi, R., and Regehr, G. (2017). An autoethnographic exploration of the use of goal oriented feedback to enhance brief clinical teaching encounters. Adv. Health Sci. Educ. 22, 91–104. doi: 10.1007/s10459-016-9686-5

Fyfe, E. R., and Rittle-Johnson, B. (2016). Feedback both helps and hinders learning: The causal role of prior knowledge. J. Educ. Psychol. 108, 82–97. doi: 10.1037/edu0000053

Hair, J., Black, W. C., Babin, B. J., and Anderson, R. E. (2010). Multivariate data analysis. 7th Edn. New York, NY: Pearson Educational International.

Hattie, J. (2023). Visible learning: The sequel. A synthesis of over 2,100 meta-analyses relating to achievement. Abingdon, Oxfordshire, UK: Routledge.

Hattie, J., Biggs, J., and Purdie, N. (1996). Effects of learning skills interventions on student learning: a meta-analysis. Rev. Educ. Res. 66, 99–136. doi: 10.3102/00346543066002099

Hawe, E., and Dixon, H. (2017). Assessment for learning: a catalyst for student self-regulation. Assess. Eval. High. Educ. 42, 1181–1192. doi: 10.1080/02602938.2016.1236360

Hayat, A. A., Shateri, K., Amini, M., and Shokrpour, N. (2020). Relationships between academic self-efficacy, learning-related emotions, and metacognitive learning strategies with academic performance in medical students: a structural equation model. BMC Med. Educ. 20:76. doi: 10.1186/s12909-020-01995-9

Hayton, J. C., Allen, D. G., and Scarpello, V. (2004). Factor retention decisions in exploratory factor analysis: a tutorial on parallel analysis. Organ. Res. Methods 7, 191–205. doi: 10.1177/1094428104263675

Honig, M. I., and Coburn, C. (2007). Evidence-based decision making in school district central offices: toward a policy and research agenda. Educ. Policy 22, 578–608. doi: 10.1177/0895904807307067

Istifci, I., and Goksel, N. (2022). The relationship between digital literacy skills and self-regulated learning skills of open education faculty students. English as a Foreign Language International Journal 2, 59–81. doi: 10.56498/164212022

Jansen, R. S., van Leeuwen, A., Janssen, J., Jak, S., and Kester, L. (2019). Self-regulated learning partially mediates the effect of self-regulated learning interventions on achievement in higher education: a meta-analysis. Educ. Res. Rev. 28:100292. doi: 10.1016/j.edurev.2019.100292

Kaiser, H. F. (1974). An index of factorial simplicity. Psychometrika 39, 31–36. doi: 10.1007/BF02291575

Kim, H. Y. (2013). Statistical notes for clinical researchers: assessing normal distribution (2) using skewness and kurtosis. Restorat. Dentist. Endodon. 38, 52–54. doi: 10.5395/rde.2013.38.1.52

Kim, A. S., and Shakory, S. (2017). Early, but not intermediate, evaluative feedback predicts cumulative exam scores in large lecture-style post-secondary education classrooms. Scholarsh. Teach. Learn. Psychol. 3, 141–150. doi: 10.1037/stl0000086

Kozhevnikov, M., Evans, C., and Kosslyn, S. (2014). Cognitive style as environmentally sensitive individual differences in cognition: a modern synthesis and applications in education, business and management. Psychol. Sci. Public Interest 15, 3–33. doi: 10.1177/1529100614525555

Krempkow, R., and Petri, P. S. (2022). “Digital competencies of students: how they are captured and what they can contribute to student success” in Transformation fast and slow: Quality, trust and digitalisation in higher education. eds. B. Broucker, R. Pritchard, C. Milsom, and R. Krempkow. (Leiden, Netherlands: BRILL).

Kuhn, M., Jackson, S., and Cimentada, J. (2022). Corrr: correlations in R. R package version 0.4.4, Available at: https://CRAN.R-project.org/package=corrr.

Kumar, R., Zusho, A., and Bondie, R. (2018). Weaving cultural relevance and achievement motivation into inclusive classroom cultures. Educ. Psychol. 53, 78–96. doi: 10.1080/00461520.2018.1432361

Land, R., Cousin, G., Meyer, J. H. F., and Davies, P. (2005). “Threshold concepts and troublesome knowledge (3): implications for course design and evaluation” in Improving student learning. ed. C. Rust (Oxford: Oxford Centre for Staff and Learning Development)

Ledesma, R. D., Valero-Mora, P., and Macbeth, G. (2015). The scree test and the number of factors: a dynamic graphics approach. Span. J. Psychol. 18:E11. doi: 10.1017/sjp.2015.13

Lin, S., Mastrokoukou, S., Longobardi, C., Bozzato, P., Giovanna, F., Gastaldi, M., et al. (2022). Students' transition into higher education: the role of self-efficacy, regulation strategies, and academic achievements. High. Educ. Q. 77, 121–137. doi: 10.1111/hequ.12374

Lüdecke, D., Ben-Shachar, M. S., Patil, I., Waggoner, P., and Makowski, D. (2021). Performance: an R package for assessment, comparison and testing of statistical models. J. Open Sour. Softw. 6:3139. doi: 10.21105/joss.03139

Luo, J., Chan, C. K. Y., and Zhao, Y. (2023). The development and validation of an instrument to measure evaluative judgement: a focus on engineering students’ judgement of intercultural competence. Assess. Eval. High. Educ. 48, 1178–1194. doi: 10.1080/02602938.2023.2222937

McCune, V., and Entwistle, N. (2011). Cultivating the disposition to understand in 21st century university education. Learn. Individ. Differ. 21, 303–310. doi: 10.1016/j.lindif.2010.11.017

Meyer, J. H. F., and Land, R. (2003). “Threshold concepts and troublesome knowledge 1 – linkages to ways of thinking and practising” in Improving student learning – Ten years on. ed. C. Rust (Oxford: OCSLD)

Myyry, L., Karaharju-Suvanto, T., Virtala, A.-M., Raekallio, M., Salminen, O., Vesalainen, M., et al. (2022). How self-efficacy beliefs are related to assessment practices: a study of experienced university teachers. Assess. Eval. High. Educ. 47, 155–168. doi: 10.1080/02602938.2021.1887812

Nicol, D. (2022). Turning active learning into active feedback: Introductory guide from active feedback toolkit, Glasgow: Adam Smith Business School, University of Glasgow.

Panadero, E. (2017). A review of self-regulated learning: six models and four directions for research. Front. Psychol. 8:422. doi: 10.3389/fpsyg.2017.00422

Panadero, E. (2023). Toward a paradigm shift in feedback research: five further steps influenced by self-regulated learning theory. Educ. Psychol. 58, 193–204. doi: 10.1080/00461520.2023.2223642

Panadero, E., and Alonso-Tapia, J. (2013). Self-assessment: theoretical and practical connotations. When it happens, how is it acquired and what to do to develop it in our students. Electron. J. Res. Educ. Psychol. 11, 551–576. doi: 10.14204/ejrep.30.12200

Peeters, J., De Backer, F., Kindekens, A., Triquet, K., and Lombaerts, K. (2016). Teacher differences in promoting students' self-regulated learning: exploring the role of student characteristics. Learn. Individ. Differ. 52, 88–96. doi: 10.1016/j.lindif.2016.10.014

Pekrun, R., Elliot, A., and Maier, M. A. (2006). Achievement goals and discrete achievement emotions: a theoretical model and prospective test. J. Educ. Psychol. 98, 583–597. doi: 10.1037/0022-0663.98.3.583

Pekrun, R., Goetz, T., Wolfram, T., and Perry, R. P. (2002). Academic emotions in students' self-regulated learning and achievement: a program of qualitative and quantitative research. Educ. Psychol. 37, 91–105. doi: 10.1207/S15326985EP3702_4

Piedmont, R. L. (2014). “Inter-item correlations” in Encyclopedia of quality of life and well-being research. ed. A. C. Michalos . (Dordrecht, Netherlands: Springer).

Pintrich, P. R. (1989). “The dynamic interplay of student motivation and cognition in the college classroom” in Advances in motivation and achievement Vol. 6: Motivation enhancing environments. eds. C. Ames and M. Maehr (Greenwich, CT: JAI Press).

Pintrich, P. R. (2004). A conceptual framework for assessing motivation and self-regulated learning in college students. Educ. Psychol. Rev. 16, 385–407. doi: 10.1007/s10648-004-0006-x

Pintrich, P., and Zusho, A. (2007). Student motivation and self-regulated learning in the college classroom. Scholarsh. Teach. Learn. High. Educ. Evid. Perspect. 3, 731–810. doi: 10.1007/1-4020-5742-3_16

R Core Team (2023). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing.

Reeve, J. (2013). How students create motivationally supportive learning environments for themselves: the concept of agentic engagement. J. Educ. Psychol. 105, 579–595. doi: 10.1037/a0032690

Revelle, W. (2023). Psych: Procedures for psychological, psychometric, and personality research. Evanston, Illinois: Northwestern University.

Ruiz, J. F., and Panadero, E. (2023). Assessment professional development courses for university teachers: a nationwide analysis exploring length, evaluation and content. Assess. Eval. High. Educ. 48, 485–501. doi: 10.1080/02602938.2022.2099811

Russell, J. M., Baik, C., Ryan, A. T., and Molloy, E. (2022). Fostering self-regulated learning in higher education: making self-regulation visible. Act. Learn. High. Educ. 23, 97–113. doi: 10.1177/1469787420982378

Sadler, D. R. (2009). “Transforming holistic assessment and grading into a vehicle for complex learning” in Assessment, learning and judgement in higher education. ed. G. Joughin . (Dordrecht, Netherlands: Springer), 45–63.

Schneider, M., and Preckel, F. (2017). Variables associated with achievement in higher education. A systematic review of meta-analyses. Psychol. Bull. 143, 565–600. doi: 10.1037/bul0000098

Seufert, T. (2018). The interplay between self-regulation in learning and cognitive load. Educ. Res. Rev. 24, 116–129. doi: 10.1016/j.edurev.2018.03.004

Simper, N. (2020). Assessment thresholds for academic staff: constructive alignment and differentiation of standards. Assess. Eval. High. Educ. 45, 1016–1030. doi: 10.1080/02602938.2020.1718600

Sweller, J., Ayres, P., and Kalyuga, S. (2011). Cognitive load theory. New York, NY: Springer.

Taber, K. S. (2018). The use of Cronbach’s alpha when developing and reporting research instruments in science education. Res. Sci. Educ. 48, 1273–1296. doi: 10.1007/s11165-016-9602-2

Tempelaar, D. T., Rienties, B., and Nguyen, Q. (2021). Dispositional learning analytics for supporting individualized learning feedback. Front. Educ. 6:703773. doi: 10.3389/feduc.2021.703773

Uher, J. (2023). What’s wrong with rating scales? Psychology’s replication and confidence crisis cannot be resolved without transparency in data generation. Soc. Pers. psychol. Comp. 17, 1–27. doi: 10.1111/spc3.12740

van der Zanden, P. J. A. C., Denessen, E., Cillessen, A. H. N., and Meijer, P. C. (2019). Patterns of success: first-year student success in multiple domains. Stud. High. Educ. 44, 2081–2095. doi: 10.1080/03075079.2018.1493097

van Merrienboer, J. J. G., and de Bruin, A. (2019). Cue-based facilitation of self-regulated learning: a discussion of multidisciplinary innovations and technologies. Comput. Hum. Behav. 100, 384–391. doi: 10.1016/j.chb.2019.07.021

Velicer, W. F. (1976). Determining the number of components from the matrix of partial correlations. Psychometrika 41, 321–327. doi: 10.1007/BF02293557

Vermunt, J. D., and Donche, V. (2017). A learning patterns perspective on student learning in higher education: state of the art and moving forward. Educ. Psychol. Rev. 29, 269–299. doi: 10.1007/s10648-017-9414-6

Vermunt, J. D., and Verloop, N. (1999). Congruence and friction between learning and teaching. Learn. Instr. 9, 257–280. doi: 10.1016/S0959-4752(98)00028-0

Vosniadou, S. (2020). Bridging secondary and higher education – the importance of self-regulated learning. Euro. Rev. 28, S94–S103. doi: 10.1017/S1062798720000939

Waring, M., and Evans, C. (2015). “Making sense of styles (5), and application of styles (6)” in Understanding pedagogy, developing a critical approach to teaching and learning. eds. M. Waring and C. Evans. (Abingdon, Oxford, UK: Routledge)

Wigfield, A., and Eccles, J. S. (2000). Expectancy–value theory of achievement motivation. Contemp. Educ. Psychol. 25, 68–81. doi: 10.1006/ceps.1999.1015

Winne, P. H. (1996). A metacognitive view of individual differences in self-regulated learning. Learn. Individ. Differ. 8, 327–353. doi: 10.1016/S1041-6080(96)90022-9

Winne, P. H. (2001). “Self-regulated learning viewed from models of information processing” in Self-regulated learning and academic achievement: Theoretical perspectives. eds. B. J. Zimmerman and D. H. Schunk (Mahwah, NJ: Lawrence Erlbaum Associates).

Winne, P. H., and Hadwin, A. F. (1998). “Studying as self-regulated learning” in Metacognition in educational theory and practice. eds. D. J. Hacker, J. Dunlosky, and A. C. Graesser. (Mahwah, NJ: Lawrence Erlbaum Associates Publishers).

Yong, A. G., and Pearce, S. (2013). A beginner’s guide to factor analysis: focusing on exploratory factor analysis. Tutor. Quant. Methods Psychol. 9, 79–94. doi: 10.20982/tqmp.09.2.p079

Zhu, X., and Evans, C. (2022). Enhancing the development and understanding of assessment literacy in higher education. Euro. J. High. Educ. 14, 80–100. doi: 10.1080/21568235.2022.2118149

Zimmerman, B. J. (1989). A social cognitive view of self-regulated academic learning. J. Educ. Psychol. 81, 329–339. doi: 10.1037/0022-0663.81.3.329

Zimmerman, B. J. (2000). “Attainment of self-regulation: a social cognitive perspective” in Handbook of self-regulation. eds. M. Boekaerts, P. R. Pintrich, and M. Zeidner (San Diego, CA: Academic Press).

Zimmerman, B. J. (2001). “Theories of self-regulated learning and academic achievement: an overview and analysis” in Self-regulated learning and academic achievement: Theoretical perspectives. eds. B. J. Zimmerman and D. H. Schunk. (Mahwah, NJ: Lawrence Erlbaum Associates Publishers).

Zimmerman, B. J., and Campillo, M. (2003). “Motivating self-regulated problem solvers” in The nature of problem solving. eds. J. E. Davidson and R. J. Sternberg (Cambridge: Cambridge University Press).

Zimmerman, B. J., and Kitsantas, A. (2005). “The hidden dimension of personal competence: self-regulated learning and practice” in Handbook of competence and motivation. eds. A. J. Elliot and C. S. Dweck. (New York, NY: Guilford).

Keywords: self-regulatory assessment and feedback practices, higher education, academics, professional development, scale reliability

Citation: Evans C, Kay W, Amici-Dargan S, González RDM, Donert K and Rutherford S (2024) Developing a scale to explore self-regulatory approaches to assessment and feedback with academics in higher education. Front. Psychol. 15:1357939. doi: 10.3389/fpsyg.2024.1357939

Received: 19 December 2023; Accepted: 16 February 2024;
Published: 25 March 2024.

Edited by:

Barbara Otto, Fresenius University of Applied Sciences, Germany

Reviewed by:

Lindsey Basileo, Instructional Empowerment, United States
Julian Duhm, Fresenius University of Applied Sciences, Germany

Copyright © 2024 Evans, Kay, Amici-Dargan, González, Donert and Rutherford. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Carol Evans, evansc101@cardiff.ac.uk
