- 1 Faculty for Education, Arts and Culture, Nord University, Bodø, Norway
- 2 Department of Mathematics, Natural and Social Sciences, Queen Maud University College, Trondheim, Norway
- 3 Department of Pedagogy, Queen Maud University College, Trondheim, Norway
Introduction: High-quality early childhood education and care (ECEC) positively influences children’s wellbeing and development. This study explores 18 teachers’ perspectives on the Early Childhood Quality (EQUAL) External tool—co-created by Norwegian ECEC teachers and researchers to assess process quality.
Methods: Semi-structured interviews, conducted in pairs, were analyzed using thematic analysis.
Results: Findings suggest that, although some teachers perceived the use of the tool as overwhelming or demanding, its application may offer beneficial outcomes. External observations fostered awareness of teachers’ own and others’ practices, positioning the tool as valuable for reflection on, assessment of, and improvement of process quality. However, certain content areas were perceived as challenging to observe and assess, highlighting the need for professional support and reflective dialog during implementation.
Discussion: The findings are discussed using theory and literature focusing on quality in ECEC, children’s development and learning, as well as on practice development. A possible implication for practice is the development of a quality improvement framework that seeks to integrate both practitioners’ and researchers’ perspectives. While the findings provide valuable insights, it is acknowledged that further research is necessary to validate and extend the findings across broader and more diverse contexts. Further research could also focus on how teachers apply professional judgment during observations and rating processes, or on how EQUAL External influences professional learning and practice.
Introduction
The strong relationship between process quality in early childhood education and care (ECEC) and children’s wellbeing, learning, and development is widely known and accepted (Belsky et al., 2007; Howard et al., 2018; Siraj et al., 2017; Taggart et al., 2015; Vandell et al., 2010). A national scoping review shows a significant increase in studies addressing process quality over the last decade (Furenes et al., 2023). This trend aligns with political priorities that stress early intervention to support children’s development and reduce inequalities from an early age. However, despite this increased attention, findings reveal that many children attend ECEC institutions with low or moderate levels of process quality, not only in Norway (Bjørnestad and Os, 2018; Bjørnestad et al., 2020), but also internationally (Vermeer et al., 2016). This discrepancy underscores a critical issue: while structural quality (e.g., staff-child ratios and practitioners’ education) is important, it is process quality (e.g., children’s everyday experiences and the relationships and interactions between practitioners and children) that most directly influences children’s wellbeing, learning, and development (Cassidy et al., 2005; Slot, 2018). Therefore, addressing process quality, including finding ways to assess and improve it, is not just a research trend; it is a professional necessity. Improving the quality of children’s everyday experiences in ECEC is vital for supporting their holistic development.
The study took place in Norway and is part of a larger study aimed at developing and validating a quality improvement framework designed for use in the ECEC context. Based on an initiative from practice, two existing quality tools,1 already used by practitioners in the field, were revised and developed into a quality improvement framework called Early Childhood Quality (EQUAL). This quality framework consists of three parts: EQUAL External, EQUAL Internal, and EQUAL Child. In this study, we focused on EQUAL External, which relies on systematic observation. In contrast, EQUAL Internal, a self-assessment tool, and EQUAL Child, which aims to capture children’s perspectives through interviews, were not part of the study. ECEC teachers were trained and asked to pilot EQUAL External. The research question guiding this work is as follows: What are the teachers’ perceptions of the feasibility of the EQUAL External tool? Capturing practitioners’ perceptions of the tool is essential for several reasons: first, it helps amplify their voices within the development process; second, if a tool aimed at assessing process quality and supporting improvement processes is not perceived as feasible by the teachers, there is a risk that it will not be used in practice.
The Norwegian context and international relevance
Norwegian policymakers place a high premium on quality in ECEC (cf. White Paper no. 41 (2008–2009), 2009; Ministry of Education and Research, 2017, 2022, 2023). Norwegian ECEC institutions are considered the first step in the national educational system, although attendance is not mandatory. In 2024, 97.4% of children aged 3–5 years (and 89.4% of 1- to 2-year-olds) attended ECEC before starting school (Statistics Norway, 2025). Most of these children, including one-year-olds, attend full time, which means 41 h or more each week, from Monday to Friday. Norwegian ECEC institutions follow the Kindergarten Act (Ministry of Education and Research, 2005) and the National Framework Plan (Norwegian Directorate for Education and Training, 2017), which describe the activities of the sector. The Framework Plan is a central document that governs quality in the ECEC sector (Gulbrandsen and Sundnes, 2004). It emphasizes a holistic view of children’s play and learning, which has also been highly recognized internationally (Organisation for Economic Co-operation and Development, 2015). Thus, we consider the present study relevant to a broad audience, as it emphasizes universal relational values concerning children’s wellbeing, participation, learning, and development. Care, play, learning, and formation are all viewed as core activities related to children’s holistic development, and the Framework Plan states that practitioners shall meet every child’s need for care, security, sense of belonging, respect, and play and facilitate meaningful interactions. Teachers are responsible for planning and implementing pedagogical processes, exercising sound professional judgment, and reflecting on their work to seek beneficial improvements for the children (Norwegian Directorate for Education and Training, 2017). Practitioners in Norwegian ECEC consist of teachers with bachelor’s degrees in ECEC,2 childcare and youth workers,3 and assistants.4 ECEC teachers still represent a minority. While preservice education is often considered essential for ensuring quality in ECEC, the shortage of highly qualified practitioners in Norwegian ECEC has been identified as a serious challenge, posing a risk to process quality (Organisation for Economic Co-operation and Development, 2015). Although teachers in Norwegian ECEC have access to various tools (e.g., self-assessment instruments and parent surveys) offered by the Directorate for Education and Training that address the activities and tasks prescribed by the National Framework Plan, including those concerning process quality, these tools have, to our knowledge, not yet undergone formal validation.
Conceptual and theoretical framework
This study is theoretically based on literature addressing quality in ECEC, quality assessment, and the development of pedagogical practice. The concept of quality in ECEC is both complex and value-laden and remains a subject of debate (see, e.g., Alvestad et al., 2017; Fenech, 2011; Moss and Pence, 1994). Despite its multifaceted nature, it is common practice to distinguish between structural and process quality when discussing quality in ECEC (Slot, 2018). However, what characterizes quality in one culture might not be considered the same in another culture (Bjørnestad et al., 2019; Ishimine and Tayler, 2014). Furthermore, perspectives on quality may also vary depending on stakeholders’ roles and responsibilities within the field. For instance, the views of insiders (e.g., teachers) may differ from those of outsiders (e.g., researchers and policymakers) (Katz, 1992). Capturing insiders’ (teachers’) perceptions of the feasibility of EQUAL External provides valuable information that is beneficial for the further development of the tool and the field of practice, thereby promoting children’s wellbeing and development. As previously noted, identifying methods to assess and improve process quality in ECEC is a professional necessity. If EQUAL External turns out to be a valuable tool for this purpose, it could also be considered for use beyond Norway, although some adjustments may be necessary.
The concept of process quality often refers to children’s everyday experiences and the relationships, interactions, and communication between practitioners and children (Cassidy et al., 2005; Slot, 2018). It is considered the heart of quality in ECEC (Slot, 2018) and can be related to one of the most widely cited theories in human development and educational psychology—Bronfenbrenner’s bioecological model of human development (Weisner, 2008). According to the revised bioecological model of human development, or the person–process–context–time model, a child’s daily experiences, including the relationships and interactions between the child and their primary caregivers in the immediate environment, are the most important factors for a child’s development (Bronfenbrenner and Morris, 2006). In the context of this study, practitioners can be viewed as primary caregivers, whereas ECEC child groups are considered the immediate environment. An immediate environment refers to a place or surrounding where children spend time on a regular basis. Almost all children in Norway attend an ECEC institution on a regular basis (5 days each week), and ECEC child groups can also be viewed as learning environments, since ECEC institutions are considered pedagogical institutions. Both immediate environments and learning environments align well with Bronfenbrenner’s microsystem (Bronfenbrenner and Morris, 2006). Discussions of process quality are also often guided by Vygotsky’s sociocultural learning theory (cf. Burchinal, 2018), which underscores the centrality of interaction, dialog, and scaffolding in children’s learning processes, where learning occurs through support from more knowledgeable others (e.g., practitioners or peers) (Vygotsky, 1978). The way in which practitioners engage with children and intentionally shape the learning environment plays a central role in fostering children’s development and wellbeing.
While the terms “assessment” and “evaluation” are often used interchangeably in the literature, we use the former in this article. Assessment can be related to the individual, the group, and the activity level, which are the areas of focus of the current study, whereas evaluation can be related to the institutional, system, or program level (Vallberg-Roth, 2012). Assessment can further be understood as a pedagogical approach that can include questions such as what, how, and why (Vallberg-Roth, 2012; see also Bjørnestad et al., 2019). The present study focuses on teachers’ perceptions of the feasibility of EQUAL External, more precisely the clarity of the tool (linked to the what of assessment), the usability of the tool (linked to the how of assessment), and the underlying justification for its use (linked to the why of assessment) (Vallberg-Roth, 2012).
Decades of research within the ECEC field highlight the importance of focusing on process quality and underscore the need to develop pedagogical practice (cf. Vermeer et al., 2016). Since Schön (1983, 1987) introduced the concept of the reflective practitioner, as well as reflection-in-action and reflection-on-action, reflection has been widely used to develop pedagogical practice in both school and ECEC contexts. Schön’s theory refers to an epistemology of practice and builds on Dewey’s theory of inquiry (Schön, 1992). The primary purpose of inquiry is to create knowledge in the interest of change and improvement (Kaushik and Walsh, 2019, p. 5). While reflection-in-action refers to individual reflections during a situation, reflection-on-action refers to individual and collective reflections after a situation. Reflection-on-action is often centered on experiences of surprise or new ways of seeing things. In the context of this study, the quality indicators provided by EQUAL External may serve as catalysts for such reflective processes, in which teachers reframe their understanding and perceive pedagogical situations in new ways.
Relevant literature
As highlighted in the introduction, literature demonstrates a strong connection between process quality and children’s wellbeing, learning outcomes, and overall development. For instance, Howes et al. (2008) identified significant positive associations between process quality and children’s development in areas such as language, literacy, mathematics, and social skills in pre-K classrooms (ages three to four) in the United States. Similarly, studies by Sylva et al. (2011) and Vandell et al. (2010) demonstrated that attending preschool in the UK and early childcare in the US can benefit all children, provided the quality of care is high. Sylva et al. (2011) further emphasized that low-quality provision may, in some cases, hinder children’s development. Recent studies from Norway have also identified links between process quality and various aspects of children’s development. Løkken et al. (2018) found that process quality at age three was linked to empathy at the same age, and to self-regulation by age five. Similarly, Hansen and Broekhuizen (2019) reported a connection between process quality at age three and children’s verbal communication skills at age five.
Qualitative studies that focus on teachers’ reflections on using systematic observation to assess quality are scarce, both internationally and nationally (see Baustad and Bjørnestad, 2023; Evertsen et al., 2022). In Norway, Evertsen et al. (2022) explored 22 ECEC professionals’ perceptions of the Classroom Assessment Scoring System (CLASS), developed in the US, as a basis for in-service professional development [see, e.g., CLASS Toddler in La Paro et al. (2012)]. The authors found that the participants were highly satisfied with using CLASS as a framework for their observations. The professionals expressed that CLASS functioned well for both individual and collective learning. Similarly, Baustad and Bjørnestad (2023) explored 19 ECEC practitioners’ experiences of a professional development intervention structured by the Caregiver Interaction Profile (CIP) scales (observational tool), which was developed in the Netherlands (cf. Helmerhorst et al., 2014). The authors found that all practitioners were highly satisfied with using the tool as a basis for observation, reflection and improvement work, a finding similar to that obtained by Evertsen et al. (2022).
Many schools in Norway are familiar with external assessment (Stiberg-Jamt et al., 2015; Official Norwegian Reports, 2023), and ECEC institutions have also become increasingly aware of it (Bråten and Lunde, 2016). External assessment in schools and ECEC usually requires substantial resources, as it often includes interviews with leaders, practitioners, employees, children, and parents, as well as observation of pedagogical practices (Bråten and Lunde, 2016; Moen and Mørreaunet, 2019). It also requires thorough preparatory work from the receiving institutions, as they need to identify their core areas for improvement prior to the external assessment. Research focusing on ECEC teachers’ perceptions of such processes is limited. In Australia, Phillips and Fenech (2023) explored 15 educators’ perceptions of the National Quality Framework (NQF), a system of regulation, quality assurance, and quality improvement in ECEC services. They targeted educators’ perceptions of the NQF assessment and rating process (external assessment). The authors found that educators perceived the process to be positive and ‘necessary’ for quality ECEC and considered it to be an excellent reflection tool. The process forced them to be reflective about all aspects of the service (Phillips and Fenech, 2023, p. 993). On the other hand, they also found that the process could be stressful. Some participants felt as if they needed to defend their own practice. Participants were also less satisfied with being rated by only one authorized officer, especially since the rating is based on what that officer sees during the observation and the officer’s interpretation of it, or their subjective judgment. As suggested by Phillips and Fenech (2023), having more than one assessor could address the issue of subjective judgment, and using multiple perspectives, including those of the teachers, might allow for dialog in which the richness and complexities of quality contributors can be recognized (p. 997).
Recent examples from school research show that teachers can improve quality through less resource-demanding approaches such as lesson studies, planning classes in teams, and carrying out and assessing one another’s practice (Aas et al., 2023). The tool explored in this study includes an element of local yet external assessment in which colleagues visit one another’s institutions, observe practitioners’ practice, and provide both quantitative and qualitative feedback.
Materials and methods
This study investigates, for the first time, the feasibility of EQUAL External, an ECEC quality assessment tool developed collaboratively by practitioners and researchers in Norway.
Participants
The participants, who were teachers recruited from ECEC institutions in two regions in Norway, also acted as co-researchers in the larger study aimed at developing and validating a research-based quality improvement framework (see Introduction). In addition to testing the EQUAL External tool, which is one part of the quality improvement framework, they collected other data, such as children’s perceptions, which are not the focus of the present study. Half of the participants worked in private ECEC institutions, whereas the other half worked in public institutions. The average age of the participants was 37.5 years, and their average work experience was 14.3 years. All were educated ECEC teachers who had worked in ECEC institutions following the educational philosophy typical of the Nordic ECEC tradition, in which care, play, and children’s participation are emphasized over children’s academic learning (cf. Johansson, 2020; Wagner and Einarsdottir, 2006). For an overview of participants’ characteristics, see Table 1.
EQUAL External
The EQUAL External tool is designed to support quality assessment and improvement in Norwegian ECEC institutions by translating objectives and contents presented in the National Framework Plan into observable and actionable categories. These categories are: (1) care, (2) play, (3) formative development, (4) learning, (5) friendship and community, (6) communication and language, and (7) children’s participation. Each category is further divided into five indicators, resulting in a total of 35 indicators that provide a structured framework for observation.
The tool contains supplementary descriptions of the indicators, designed to support external observers in making informed observations and facilitating accurate scoring. One example of an indicator in the category of play is as follows: “The physical environment inspires the children to engage in different types of play.” The supplementary description states the following:
Physical environments that inspire many different types of play, both outside and inside, require conscious facilitation by the staff. In a good play environment, both boys and girls have many play ideas and a strong desire to play. Good physical environments have clear play areas and a varied and well-thought-out selection of toys and equipment. These toys are accessible so that the children can pick up what they need themselves.
The observers rated their observations on a six-point Likert scale, with 1 representing “strongly disagree,” 2 representing “disagree,” 3 representing “somewhat disagree,” 4 representing “somewhat agree,” 5 representing “agree,” and 6 representing “strongly agree.”
An observation protocol including ratings and notes (with elaborating examples of observed practice) is created and presented to one teacher from the observed child group in a scheduled feedback meeting. Each feedback meeting (one per group) lasts approximately 45 min, and the ECEC manager is present at all meetings.
For an overview of the categories and indicators in EQUAL External, see the drafted version of EQUAL External (Supplementary materials).
Training
Although the seven categories, derived from the National Framework Plan, were assumed to be familiar to the participants testing the EQUAL External tool, they nonetheless received guidance on its application. The training spanned 2 days, with the first day focused on EQUAL Internal and EQUAL External, covering all seven categories and 35 indicators in total. The categories and indicators are identical for EQUAL Internal and EQUAL External; however, while Internal involves practitioners evaluating practice within their own child group, External entails ECEC teachers observing practice in a neighboring ECEC institution. The participants worked in groups on cases to practice using the six-point Likert scale, and they shared reflections about the observer’s role and the concept of assessment. They were further divided into pairs from different ECEC institutions and received written procedures describing how the external assessment should be carried out. In accordance with the procedures in BLIKK, participants were instructed to visit ECEC settings in pairs and to observe a maximum of two child groups/classrooms in a single day, spending 2.5 h in each group. Participants from two different ECEC institutions observed a third institution in pairs; everyone was paired with someone they did not normally work with, and the observations took place in ECEC institutions unfamiliar to both. They were further instructed to observe a range of everyday situations, such as free play, mealtimes, and structured activities, without following a fixed observation cycle. They were also instructed to refrain from observing identical situations at the same time, to ensure a broader range of observational insights. Based on their individual observations and scores, the participants were required to reach a consensus on a final score for each of the categories. The procedures also provided instructions on how to fill out the observation protocol/the profile of the child group, and the observation pair was to spend 20–30 min presenting the protocol in a feedback meeting. Only one teacher from the observed group was to be present during the meeting, and she/he was given 10 min to provide comments afterwards. The manager was responsible for introducing and concluding the meeting. The training was provided in September, enabling the participants to initiate the observations by October. The observation period extended through January, followed by group interviews conducted in February and March.
Data collection
In line with the focus of the study, a qualitative approach was applied. Eighteen teachers were interviewed in groups of two, and the data consisted of nine interviews. Before the interviews, all participants had carried out observations in two ECEC institutions, each comprising two or three child groups (children aged three to five). The interviews took place face-to-face, with two teachers and two researchers in a separate room, lasted from 1 to 1 ½ hours each, and were audiotaped and transcribed directly afterward. Examples of questions asked during the interviews include: “Do you find the indicators [in EQUAL External] to be clear and easy to understand?” “Please elaborate.” “What are your thoughts on the number of indicators?” “Was there anything you missed in the indicators?” “How did you experience conducting the observations in pairs?” “Were any of the categories or indicators particularly challenging to assess?” “How did you experience the feedback meeting–giving feedback to colleagues?” The participants were informed in advance of the focus of the interview. All participants provided their written consent for the interview, and all data were treated in accordance with the ethical guidelines of the Norwegian Agency for Shared Services in Education and Research. The data were anonymized.
Data analysis
Qualitative thematic analysis was applied to the transcribed interviews (Braun and Clarke, 2006; Braun and Clarke, 2012; Edwards, 2010), following Braun and Clarke’s (2006) step-by-step guide. Throughout the process, we searched for thematic responses/patterns relevant to answering the research question (Braun and Clarke, 2012; Edwards, 2010). We read through all transcripts to become familiar with the material and made notes. We then searched for words and small units of text related to the research question and the concept of feasibility (i.e., clarity, usability, and effectiveness) and applied color coding to each, based on the semantic components conveyed in the respective statements. This approach was intended to support the categorization and interpretation of meaning elements across the dataset. All listed authors searched for themes and meaningful patterns in the text. One of the authors systematically reviewed all transcripts, identifying and coding meaningful segments of text. By meaningful segments of text, we refer to instances where, within a single sentence, it was possible to identify specific words or expressions that conveyed recurring pedagogical meanings or underlying assumptions, for example, words such as “clear,” “recognizable,” and “easy to understand based on the descriptions.” These meaning-bearing words were color-coded to examine whether they could serve as the basis for a “category.” This method allowed us to trace patterns and supported the development of categories grounded in participants’ own language and conceptual framing. Through a combination of inductive and deductive analysis, the categories were then organized into three overarching themes, aligned with Vallberg-Roth’s (2012) conceptualization of assessment as a pedagogical approach encompassing the questions of what, how, and why. The coding was then reviewed and discussed among all listed authors. Two authors read through the transcripts and reviewed the codes and categories, and the coding and categorization were discussed among all researchers again until consensus was reached. See Table 2 for a schematic overview of the development of themes.
Results
In this part of the paper, the findings are presented under the following headings: Teachers’ views on the quality criteria in EQUAL External (linked to the what of assessment), Teachers’ views on the usability of EQUAL External (linked to the how of assessment), and Teachers’ reflections on the purpose of using EQUAL External (linked to the why of assessment). In the presentation of the analysis, we have chosen to provide descriptive accounts of what was identified and to strengthen the description by including illustrative examples from the data. The noun ‘participant’ is used when presenting the results.
Teachers’ views on the quality criteria in EQUAL External
As described earlier, EQUAL External consists of seven categories and 35 indicators. In this section, we focus mostly on the participants’ views of the tool itself—whether they found the quality criteria relevant to their everyday practice and how they viewed the clarity of the criteria (i.e., are they easy to understand and use?). Two main findings representing the teachers’ views were obtained in the analysis, namely, that EQUAL External is generally a comprehensive tool and that the indicators align with the Framework Plan.
Comprehensive and feasible categories
An important aspect of quality observational tools is that they cover relevant content, with no important areas left out and no areas that overlap or are unnecessary for describing the practice. As EQUAL External consists of seven categories and 35 indicators, some of the participants commented that these were numerous when they were introduced to the tool during training. Participants wondered whether and how they could cover all the indicators highlighted in the tool, as is evident in the following remark of a participant: “At first, we thought, ‘My God, how are we going to finish this observation?’” Some similarly stated that they were overwhelmed by the number of indicators to focus on during the observation: “There are many indicators, but many are important ones” and “On the first day of observation, I thought, ‘Oh dear, there are far too many indicators to consider.’”
Upon further reflection, participants wanted to include all indicators in the tool to ensure that all areas in the Framework Plan could be assessed. All indicators were considered important, and no quality criteria were perceived as missing. One of the participants explained this as follows: “… because, even if it [the quality indicators] is a lot and time consuming, to get an overall impression, you really need that number. You cannot cut it in half because you will not get the overall picture.” Another participant stated the following:
I think all the indicators are important. It’s because they’re closely associated with the Framework Plan. After all, I think that all topics are equally important in their own way. They’re very descriptive of what you want an ECEC institution to be.
Although the participants were able to cover all indicators, some of them mentioned other important aspects that could have been considered in the indicators. One of them reacted to the supplementary descriptions under the category of play, specifically the following indicator: “The physical environment inspires the children to engage in different types of play.” The participant noted that undefined materials were not explicitly included: “It’s [undefined materials] not mentioned in the criterion, but it’s a very significant part of the physical environment for me. Thus, I had to tread carefully. We discussed it, and we had to look closely at what the criterion actually stated.”
Indicators aligned with the Framework Plan
The Norwegian Framework Plan is comprehensive and regulates the content and tasks of Norwegian ECEC. The analysis indicates that the participants could relate the categories and indicators to the Framework Plan. They believed that there was a close link between the indicators and the content and tasks described in the plan. Both the categories and the indicators were familiar to the participants and aligned with their understanding of the social mission outlined in the Framework Plan, as demonstrated by the following remarks: “I think that the indicators are good because they ‘are’ in the Framework Plan,” and “I immediately recognized that they are related to the Framework Plan. The first thought I had was, ‘Oh, we have here the Framework Plan, sort of.’”
Furthermore, the participants perceived that the indicators clarified the Framework Plan and made its content easier to understand and recognize in practice. “There’s nothing new except that the questions are different; they are more precise. It’s easier to look for things,” one of them remarked. Another participant reflected as follows:
I think the categories are good because they are, after all, based on the Framework Plan. I think they’re very comprehensive. After all, the Framework Plan is my working tool and is for the entirety of kindergarten, so it’s nice to be able to use it in this way.
By reading the categories and indicators, participants understood what to look for during the observations. They especially found the supplementary descriptions to be clear and helpful. Two of the participants reflected, “The categories were easy to understand. I really liked that there was supplementary text” and “This is what we are looking for when we examine this practice. It’s great that it’s detailed and descriptive.”
However, at the start of the observation, some of the participants took time to discuss and reach a consensus on the interpretation of certain aspects of the indicators, such as how to understand and rate diversity and nap time on the basis of the descriptions given in the tool. They pondered, “What does it mean when it says diversity is visible, and what should be visible and for whom?” and “[Should] diversity always be visible? What if someone [children or parents] does not want it to be visible?” Participants also discussed how nap time should be defined: “Does it mean that children should be able to lie down for a while, or can they just have silent play?” Their discussions were often linked to personal experiences and practices.
The feedback meetings aligned well with some of the teachers’ tasks highlighted in the Framework Plan. For example, teachers are expected to implement pedagogical processes in line with the Framework, provide guidance, and ensure that the plan is being carried out in practice. Regarding the feedback meetings, the participants appreciated that the observation protocol allowed them to write down examples from their observations that could explain or justify their ratings for different indicators. It was important for them that their feedback was fair. To give a fair score, the participants needed to be sure that the score was as accurate as possible and that they could explain in detail what they had observed. One participant said, “I would not be comfortable just submitting a rating without explaining what I had observed, not being able to tell them myself.” Clear and useful feedback also seems to have an important function in their continuing pedagogical processes, as the following line expresses: “If one were to succeed, it’s necessary to receive substantial feedback, such as what one should be working on. If not, it will only be put in the drawer.”
Teachers’ views on the usability of EQUAL External
In this section, we focus mainly on participants’ experiences of using EQUAL External in practice. Three main findings emerged from the analyses and are presented below.
Indicators that were more challenging to observe than others
As mentioned previously, the participants visited ECEC institutions in pairs and conducted individual observations in two child groups per day. Several of the participants found this task demanding, especially at the start, as exemplified by the following responses: “We had two groups on the first (day). It was a bit much” and “It was a little difficult at first. You did not really know what was going to happen. It was a bit like, ‘Wow, this is a bit much.’” Although many of them found the tool somewhat overwhelming because it was comprehensive, they said that their use of the tool went well.
Some of the indicators were also more challenging to observe than others. The indicators that most participants viewed as most challenging were under the category of formative development: (1) the practitioners work to promote children’s experience of belonging in the local community, (2) the practitioners give the children regular experience in nature, and (3) the ECEC institution makes it visible that it is actively working with diversity. One indicator under the category of friendship and community was also considered difficult to observe: the practitioners help children who are bullied. A few practitioners also mentioned an indicator under the category of care that was difficult to observe: the children are given the opportunity to rest and relax during the day. However, as it was possible to ask the pedagogical leaders in the child groups questions, the observers could still gain an impression of how these issues were worked with. One of the participants concluded, “There were only a few indicators that we could not immediately gauge, but it was possible to ask follow-up questions afterward... once we have familiarized ourselves with the indicators, I do not think there was really a problem.”
In Norway, children commonly play outside every day, even when it is windy, raining, or snowing. One participant mentioned that it was more challenging to carry out observations during outdoor play than during indoor play. The most challenging task was observing the interaction and communication between the practitioners and the children, partly because rain and wind could make it difficult to hear their conversations. Nevertheless, the overall impression was that many of the challenges were linked to the participants doing a task that was new to them. They also said that observing eventually became easier, as the following line shows: “It went very well once you got started, especially the second time.”
Rating practice—an uncharted territory
The observations were transferred to a six-point Likert scale. Each pair had to compare and deliberate on their individual observations to arrive at a common score per child group. The collective score constituted the final observation protocol/the profile of the child group. Many of the participants were familiar with conducting observations, both through their education and daily work, but incorporating these observations into a rating system was new to them. Several described this as difficult and said that they felt a little insecure doing this task. One of the participants said, “It’s hard, really hard.” During the analysis process, we found that much of what they described as difficult was connected with the indicators they considered challenging to observe. The participants could ask the pedagogical leader in the child groups about these indicators, but then questions were raised concerning how these answers would translate into the rating system:
There were certain indicators that we couldn’t observe in the ECEC child groups, such as when they were on a trip. Diversity was also not easy to see. Some were given a rating after a conversation with the pedagogical leader. Then there was a lot of how... the pedagogical leader spoke for himself then. Some people were very good at speaking for themselves. Others get nervous and become uptight, and they may not explain everything. They get a worse score for that because we can’t observe what we need to.
Although participants described some indicators as challenging to rate, they provided input on how they managed: “What we did was that we took a holistic view of how the practitioners interacted with the children usually. So, they were assessed based on that.” Despite finding the rating procedure difficult, they often concluded that they managed to agree on the ratings. One of them described this as follows: “I felt that we mostly agreed, and we did not have major differences [in the ratings].” Another referred to the training, in which the scale was interpreted in colors:
I thought it was quite easy to give a rating. It was a neat scale. Scores of 1 and 2 are not good, 3 and 4… When I was giving the scores, I thought of colors, where green is 5 and 6, and red is 1 and 2. It was clear—not just numbers.
Many participants also wanted a “not seen” option that they could choose, especially in those situations in which they were not able to observe specific issues or indicators.
The ratings were included in an observation protocol provided to each child group after the feedback meeting. In addition to the scores, there was an opportunity to include comments and examples from the observations, and the participants were encouraged to add these.
The feedback meeting—a challenging but positive experience
The feedback meeting can be viewed as an important part of the tool, helping those who scored the observations articulate the scores (a kind of quality check). It can also help the recipients understand their scores better, rather than just receiving a number. One participant said, “It’s wise to carry out the feedback meetings as soon as possible after the observations because you have written notes, but do you really remember what they mean?” Several of the participants mentioned that the feedback meetings contributed to elaborating the reports.
At first, most of the participants found the feedback meetings challenging. It was exciting and somewhat frightening to present their EQUAL External ratings to pedagogical leaders at an external ECEC institution. This discomfort was connected with participants’ inexperience with giving oral feedback, especially when presenting low scores. However, all participants mentioned that performing this task soon became easier with practice, both because they became more familiar with the feedback meetings and because, as one of the participants said, “Those sitting at the other side of the table were very receptive of feedback.” One of the other participants described this further as follows:
The atmosphere was very good. I didn’t encounter anyone with their defenses up. Everybody was ready to receive feedback, and I experienced the same when they [the external observation team] came to us. They were not after us. We were a team. We were collaborators in this improvement work.
Teachers’ reflections on the purpose of using EQUAL External
In addition to what the participants assessed and their experiences with how they carried out their tasks, as described above, it is essential to consider the reasons for using EQUAL External or the why of the assessment process. From the analysis, two main findings emerged.
Enhanced awareness of one’s own practice through observations and reflections
The participants reported that they were able to enhance their observational and assessment skills. Many participants expressed that they eventually found the task of observing and assessing practice easy and that the systematic observations enhanced their reflections on their own practice and their sense of self-efficacy. One of the participants said, “You closely engage with the indicators; they become more effective in a way” … and “You learn a lot from watching others work and discussing afterward.”
Using the observation tool contributed to a shared understanding of practice among the participants. It was well suited to facilitating discussions and reflections among them. External observations made them aware of other good practices that were new to them, and participants started to assess and reflect on their own practice based on the external observations in other ECEC child groups. The participants expressed that they obtained new ideas by being out in ECEC institutions. One of the participants said the following to her fellow observer during the interview:
I learned a lot from discussing and talking with you. We incorporated some of the ideas into our own practice. You also become more aware of your own practice, such as when there’s a good idea, and it works so well here, so why don’t we do that as well?
One participant expressed that “to get out and see something new, to use a slightly different set of eyes, contributes to the motivation to change one’s own practice.” They also believed that EQUAL External, with its predefined indicators, can function efficiently as a basis for reflection, for common understanding, and for improving practice, as is evident in the following remark: “The observation tool is well suited for fostering discussion and reflection within the group. It’s nice to use it together with the rest of the practitioners.” Another reflected as follows:
There wouldn’t have been such reflections if we were only in an ECEC institution and observing and visiting. They [the reflections] wouldn’t have happened without a focus on the categories or indicators. You also sit and observe your own practice and how to do things yourself. It’s improvement for us, too.
Helpful external feedback
The feedback meetings themselves contributed to new reflections, as is evident in the following:
In the ECEC institution in which we observed, the adults were quiet, very quiet. It made me think a lot. We talked a lot about this child group, in which one person hardly spoke at all. We did make some solid reflections during the feedback meeting, such as the following: Is it wrong to be quiet? However, there’s a difference between being quiet and being inactive. Do you function, or don’t you?
The participants emphasized the importance of gaining something from the external assessments and feedback meetings that they could use in their own practice. They also highlighted that feedback from external observers can be helpful for the pedagogical leaders who are responsible for the children’s experiences in the child groups and for reflecting on and improving ongoing pedagogical processes in the groups. As one of the participants reflected,
The feedback may be useful for us to use as pedagogues. Even if we’re already working on a topic, we can take it in further, that now we’ve received feedback on this from external observers, we heard it mentioned, so we must remember or improve further. I feel it’s given somehow a bit more weight if you have some theoretical knowledge or if someone else has been observing your practice. If it’s something you think is difficult to work on, it gets more weight. What I see as pedagogue, someone else sees, too.
Discussion
The research question of this study explores how ECEC teachers perceive the feasibility of EQUAL External, a tool designed to support quality improvement in ECEC institutions. The concept of feasibility is here understood as the clarity, usability, and effectiveness of the tool, linked to the what, how, and why of assessment (see the Data analysis section).
The clarity of the tool, especially its content and indicators, can be linked to the National Framework Plan (Norwegian Directorate for Education and Training, 2017). The Framework Plan is the most vital document concerning quality in Norwegian ECEC (Gulbrandsen and Sundnes, 2004), and we can expect all ECEC teachers5 to be well acquainted with it because of their responsibilities. The indicators in EQUAL External, with their supplementary descriptions, make it even clearer for teachers how to address process quality in their own settings. Despite having many indicators to observe, the teachers considered all of them important. While the Framework Plan highlights all practitioners’ responsibilities to meet every child’s need for care, security, sense of belonging, respect, and play, to facilitate meaningful interactions, and further describes seven subject areas with which to work (Norwegian Directorate for Education and Training, 2017), EQUAL External helps practitioners capture these aspects through observations and may also lead to improvements in process quality. As mentioned by the participants, systematic observations differ from open observations, which have no predefined indicators, and can enhance their reflections. This finding aligns with previous research by Baustad and Bjørnestad (2023), Evertsen et al. (2022), and Phillips and Fenech (2023), which also highlights the role of systematic observation in enhancing reflection. On the other hand, certain areas of the tool were perceived as difficult to assess, highlighting the need for professional support during implementation. It may be questioned whether a two-day training period provides sufficient depth and continuity to ensure effective implementation.
The participants found the tool comprehensive and content rich, offering valuable and useful information. This finding is consistent with those of Baustad and Bjørnestad (2023) and Evertsen et al. (2022). Both CLASS and CIP, which were used in the aforementioned studies, are considered reliable and valid tools for observing and assessing interaction quality in ECEC. However, they were developed in other countries. Using observational tools, especially those developed in other countries, as a basis for the assessment of practice has been criticized by researchers in the field (cf. Dahlberg et al., 1999; Fenech, 2011; Ishimine and Tayler, 2014). Different countries may have varying pedagogical orientations and/or views of children, so transferring a tool from one country to another may be difficult (cf. Bjørnestad et al., 2019). Although ECEC professionals were satisfied with using CLASS and CIP in the studies by Evertsen et al. (2022) and Baustad and Bjørnestad (2023), it can be argued that EQUAL more accurately captures what is identified as quality in the Norwegian ECEC context, especially as defined in the Framework Plan. On the other hand, the objectives and contents encompassed by EQUAL, namely (1) care, (2) play, (3) formative development, (4) learning, (5) friendship and community, (6) communication and language, and (7) children’s participation, may be considered universally relevant and foundational to the development and wellbeing of children. Central to this is the emphasis on the learning environment, conceptually grounded in Bronfenbrenner’s bioecological theory (Bronfenbrenner and Morris, 2006) and Vygotsky’s sociocultural theory (Vygotsky, 1978). These theoretical perspectives provide a framework for understanding how children’s development is shaped by their immediate surroundings and social interactions, and they inform expectations regarding the role of practitioners in facilitating supportive and enriching learning environments. Comparable objectives and contents, as well as theoretical perspectives, are found in quality observation frameworks such as CLASS (La Paro et al., 2012) and CIP (Helmerhorst et al., 2014), indicating a shared emphasis on key aspects of ECEC. Accordingly, the EQUAL tool holds potential value for a wider audience beyond its immediate context.
Regarding the usability of the tool, the reliability of the results obtained with EQUAL External may be questioned, particularly in terms of capturing practices related to indicators that are difficult to observe. The participants’ reliance on clarifications from the pedagogical leaders regarding invisible practices (i.e., indicators that cannot be observed) is significant, especially because self-assessments are often viewed as less reliable than external observations (Egert et al., 2018). Furthermore, one may ask whether the difficulties in observing all indicators can be related to the specific days (Monday to Friday) on which the observations were carried out. For example, it is common to have special days for trips in Norwegian child groups, and cultural diversity may be more emphasized during the yearly celebration of the international week (normally in October in Norway) than during other periods. Nevertheless, capturing children’s daily experiences is important, as these might highly influence their wellbeing and development, both now and in the future (Belsky et al., 2007; Hansen and Broekhuizen, 2019; Howes et al., 2008; Howard et al., 2018; Løkken et al., 2018; Siraj et al., 2017; Sylva et al., 2011; Vandell et al., 2010). Some of the challenges related to observing specific indicators may also be attributed to the fact that participants were uncertain about how to conduct the observations. For most of them, performing systematic external observations and providing feedback to colleagues based on these was a new experience. In some cases, they had to engage in discussions to clarify what to observe for particular indicators. A more detailed description of how the observations could be carried out might have supported the participants and enhanced the usability of the tool. Given the challenges reported in observing certain indicators, it could be worth considering whether these should be revised or excluded to improve the usability of the tool in practice. A tool that facilitates the observation of all indicators is arguably more reliable than a tool in which observing all aspects of practice is not possible. Furthermore, the reliability of the results may also be questioned due to the absence of data on inter-rater reliability between the observers. Nonetheless, the participants wanted all the indicators included in the tool, and whether some of the indicators should be removed has not yet been decided.
Although there may be limitations to the tool, EQUAL External turned out to be effective in stimulating the teachers’ reflections on their own practice (cf. Schön, 1983, 1987, 1992). This finding is as expected. Although the content of the tool builds on the Framework Plan and was familiar to all participating teachers, the use of indicators seemed to act as a catalyst for inquiry or reflective processes, enabling teachers to view pedagogical situations from new perspectives (Kaushik and Walsh, 2019; Schön, 1992). This finding also corresponds with the results reported by Phillips and Fenech (2023), who found that the use of specific indicators for assessment and improvement heightened educators’ awareness of their practice. The indicators were perceived as essential for fostering reflection. Whether all the indicators should be included can be related to the goal of the tool. Observational tools that are grounded in theories, research, and practice can help practitioners reflect on the right things to do, especially if such tools are used with sound professional judgment (Peeters and Vandenbroeck, 2011). Reflecting on the right things to do is different from reflecting on doing things right (Peeters and Vandenbroeck, 2011). By reflecting on their own practice, practitioners can improve it further (Schön, 1983, 1987). By using EQUAL External as a basis for their reflections, teachers take part in a systematic inquiry process aiming to create knowledge about their own practice, which they can use to change or improve it (Kaushik and Walsh, 2019). Understanding a situation differently is a necessary first step, but it is insufficient on its own; reflection must be translated into action to bring about change and improvement (Schön, 1983). However, this study does not address the extent to which the tool leads to change in practice.
Observational tools such as EQUAL External, which rely on predefined indicators, offer only one perspective on quality and provide only a snapshot of the observed practice (see also Buøen et al., 2021). One concern with such tools is their potential to influence how practitioners conceptualize quality, thereby shaping their practice. In the Norwegian context, this may be only a minor challenge, since the tool is grounded in the Framework Plan, which already guides pedagogical practices and expectations. Even though the objectives and contents may be considered universally relevant, other countries considering its use should reflect on how well they align with their own values and pedagogical ideas (cf. Bjørnestad et al., 2019; Ishimine and Tayler, 2014). Furthermore, the experience of being observed may generate discomfort among practitioners, potentially leading to a sense of vulnerability or a perceived pressure to justify their practice, as noted in the findings of Phillips and Fenech (2023). Surprisingly, few participants in the current study focused on these aspects in their responses. Even though few of the participants explicitly reflected on how EQUAL External might support the promotion or enhancement of children’s development and learning, the indicators embedded in the tool can suggest potential areas for improvement that could benefit children in ECEC institutions.
Conclusion
The research question addressed in this study was as follows: What are the teachers’ perceptions of the feasibility of the EQUAL External tool? The focus has been on the clarity of the tool as well as its usability and effectiveness, in terms of the what, how, and why of assessment. All teachers had positive perceptions of EQUAL External. Regarding the clarity of the tool, the teachers found that the tool’s categories and indicators aligned well with the tasks described in the Framework Plan, and the feedback meeting was also clearly described. Although the tool was considered overwhelming at first because it was comprehensive and had numerous indicators, the teachers did not want to leave out any of the indicators. On the other hand, the study identified certain challenges related to the clarity and usability of the tool, especially linked to the assessment of specific indicators. At the same time, using the tool fostered awareness of teachers’ own and others’ practices, suggesting its potential value in supporting quality improvement processes. However, these initial findings indicate areas for further investigation.
Limitations
While the study offers valuable insights, its findings should be interpreted with caution. One limitation concerns the dual role of the interviewers, who were also involved in the development of the tool. This overlap may have influenced participants’ responses, potentially prompting them to align their feedback with perceived expectations or to offer predominantly positive evaluations in order to appear competent. Some may also have felt reluctant to express critical views in this context. However, the development of EQUAL, as well as the research design, was rooted in a close collaborative partnership between ECEC practitioners and researchers. Furthermore, shared experiences, for example from the training sessions, may have contributed to a sense of trust and psychological safety, thereby encouraging more open and honest reflections. Notably, predominantly positive feedback regarding the use of such tools has also been reported in other Norwegian studies (Baustad and Bjørnestad, 2023; Evertsen et al., 2022). A second limitation is that the interviews were conducted in pairs. While group interviews and focus groups can be good ways to explore participants’ experiences, they may not capture individual views and concerns (Halkier, 2016). Participants could be influenced by one another, and some might dominate the discussion. In this study, the participants were prepared for the interviews and were asked to bring their own notes from the observations. They also had access to the EQUAL manual during the interviews. However, as with all qualitative interview studies, the participants, the interviewers, and the questions themselves influence the findings. It cannot be ruled out that teachers from other parts of the country, other interviewers, or other questions would have produced different findings. A third potential limitation concerns the training procedures and the absence of data on inter-rater reliability, which may affect the reliability and consistency of participants’ observations and assessments. Fourth, given the limited number of participants in this study, statistical generalization from the results was not possible.
Implications for practice and further research
Despite its limitations, this study has value. It is the first study to explore teachers’ experiences of using EQUAL External, part of a newly developed quality improvement framework that aims to support ECEC practitioners in assessing and improving their work. It is well known that changing and improving practice requires reflection. Reflection-on-action is often associated with unexpected insights or alternative ways of seeing things. This study suggests that the quality indicators provided by EQUAL External can stimulate such reflection, enabling teachers to reconsider and deepen their understanding of pedagogical practice. EQUAL External may also contribute to greater awareness among practitioners of the impact of their pedagogical practice on children’s development and learning. As such, the study offers valuable implications for children’s experiences in ECEC settings.
We recognize, however, that further research is needed to validate and expand upon these findings in broader and more diverse contexts. Future studies could, for example, focus on what teachers look for when observing with the indicators, as their ratings also seem to be linked to their personal experiences and views. How do they, for example, use their professional judgment during observations and ratings? Studies could also explore in depth how EQUAL External influences teachers’ professional learning. Research further needs to address EQUAL Internal and EQUAL Child, as well as the connections between them. As the larger study includes both qualitative and quantitative data, these can be combined, for example the quantitative results from EQUAL External observations with qualitative data from the interviews conducted after the 10-month implementation period, to examine the influence of the EQUAL framework. A documentary analysis of how EQUAL’s content relates to other policy texts could also offer valuable insights into the usability of the tool, particularly in international contexts.
Data availability statement
The datasets presented in this article are not readily available because of licensing and restriction agreements. The data can only be shared with the key personnel involved in the grant given by the Research Council of Norway. Requests to access the datasets should be directed to Børge Moe, borge.moe@dmmh.no.
Ethics statement
The studies involving humans were approved by the Norwegian Centre for Research Data (now the Norwegian Agency for Shared Services in Education and Research). The studies were conducted in accordance with local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.
Author contributions
AB: Writing – original draft, Writing – review & editing, Investigation. MT: Writing – original draft, Writing – review & editing, Investigation. KS: Writing – original draft, Writing – review & editing. LF: Writing – original draft, Writing – review & editing, Investigation.
Funding
The author(s) declare that financial support was received for the research and/or publication of this article. The study was supported by the Research Council of Norway under grant number 326633. The open access fee was funded by Nord University.
Acknowledgments
This work is a part of the KUMBA project with the following key researchers: Børge Moe, Kristine Warhuus Smeby, Ellen Beate Hansen Sandseter, Laila Skjei Flormælen, Birgitte Ljunggren, Monica Seland (Queen Maud University College of Early Childhood Education), Vera Skalicka (Norwegian University of Science and Technology), May Liss Olsen Tobiassen, and Anne Grethe Baustad (Nord University). Our special thanks go to Espira (a private ECEC organization in Norway) for taking the initiative for the project, to the owners of the ECEC institutions that participated in the study, and to the ECEC teachers who are partners in the project.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The authors declare that Gen AI was used in the creation of this manuscript. Generative AI was used for language editing of selected sentences in the revised versions of the manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Supplementary material
The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2025.1542263/full#supplementary-material
Footnotes
^These are BLIKK External, a Norwegian quality tool developed by Espira, and the Norwegian ECEC Well-Being Monitor (WBM), a children’s conversation tool developed by Sandseter and Seland.
^With bachelor’s degrees in ECEC, they focus on pedagogical work with children aged 1–5 years in ECEC.
^With vocational certificates, they focus on pedagogical work with children and youths aged 0–18 years.
^They have no formal education on pedagogical work.
^Even though the noun ‘participant’ was used when the results were presented, ‘teacher’ is used in the discussion section. The participants in the study were ECEC teachers.
References
Aas, H. K., Uthus, M., and Løhre, A. (2023). Inclusive education for students with challenging behaviour: development of teachers’ beliefs and ideas for adaptions through lesson study. Eur. J. Spec. Needs Educ. 39, 64–78. doi: 10.1080/08856257.2023.2191107
Alvestad, M., Tuastad, S. E., and Bjørnestad, E. (2017). “Barnehagen i ei brytningstid. Spenninga mellom samfunnsnytte og barndommen sin eigenverdi [Kindergarten in a time of tension between societal utility and childhood’s intrinsic value],” in Barneomsorg på norsk: Samspill og spenning mellom hjem og stat. eds. S. A. Tuastad and I. Studsrød (Oslo: Universitetsforlaget), 154–172.
Baustad, A. G., and Bjørnestad, E. (2022). Everyday interactions between staff and children aged 1–5 in Norwegian ECEC. Early Years 42, 557–571. doi: 10.1080/09575146.2020.1819207
Baustad, A. G., and Bjørnestad, E. (2023). In-service professional development to enhance interaction–staffs’ reflections, experiences and skills. Eur. Early. Child. Educ. Res. J. 31, 1001–1015. doi: 10.1080/1350293X.2023.2217694
Belsky, J., Vandell, D. L., Burchinal, M., Clarke-Stewart, K. A., McCartney, K., Owen, M. T., et al. (2007). Are there long-term effects of early child care? Child Dev. 78, 681–701. doi: 10.1111/j.1467-8624.2007.01021.x
Bjørnestad, E., and Os, E. (2018). Quality in Norwegian childcare for toddlers using ITERS-R. Eur. Early Child. Educ. Res. J. 26, 111–127. doi: 10.1080/1350293X.2018.1412051
Bjørnestad, E., Baustad, A. G., and Alvestad, M. (2019). “To what extent does the ITERS-R address pedagogical quality as described in the Norwegian framework plan?” in Teachers’ and families’ perspectives in Early childhood education and care. eds. S. Phillipson and S. Garvis (Oxon, OX: Routledge), 156–171.
Bjørnestad, E., Broekhuizen, M., Os, E., and Baustad, A. G. (2020). Interaction quality in Norwegian ECEC for toddlers measured with the caregiver interaction profile (CIP) scales. Scand. J. Educ. Res. 64, 901–920. doi: 10.1080/00313831.2019.1639813
Bråten, B., and Lunde, H. (2016). Ekstern barnehagevurdering. Hvorfor, hvordan og hva er erfaringene? [External ECEC assessment. Why, how and what are the experiences?]. Fafo-rapport 2016:17. Available online at: https://www.fafo.no/images/pub/2016/20579.pdf (Accessed November 11, 2025).
Braun, V., and Clarke, V. (2006). Using thematic analysis in psychology. Qual. Res. Psychol. 3, 77–101. doi: 10.1191/1478088706qp063oa
Braun, V., and Clarke, V. (2012). “Thematic analysis” in Handbook of research methods in psychology: Vol. 2. Research designs. ed. H. Cooper (Washington, DC: American Psychological Association), 57–71.
Bronfenbrenner, U., and Morris, P. A. (2006). “The bioecological model of human development” in Handbook of child psychology: Theoretical models of human development. eds. R. M. Lerner and W. Damon (New York, NY: John Wiley & Sons Inc.), 793–828.
Buøen, E. S., Lekhal, R., Lydersen, S., Berg-Nielsen, T. S., and Drugli, M. B. (2021). Promoting the quality of teacher–toddler interactions: a randomized controlled trial of “thrive by three” in-service professional development in 187 Norwegian toddler classrooms. Front. Psychol. 12:778777. doi: 10.3389/fpsyg.2021.778777
Burchinal, M. (2018). Measuring early care and education quality. Child. Dev. Perspect. 12, 3–9. doi: 10.1111/cdep.12260
Cassidy, D. J., Hestenes, L. I., Hansen, J. K., Hedge, A., Shim, J., and Hestenes, S. (2005). Revisiting the two faces of child care quality: structure and process. Early Child Res. Q. 20, 345–360. doi: 10.1207/s15566935eed1604_10
Dahlberg, G., Moss, P., and Pence, A. (1999). Beyond quality in early childhood education and care: postmodern perspectives. Oxon, OX: Routledge.
Edwards, A. (2010). “Qualitative design and analysis” in Doing early childhood research. International perspectives on theory and practice. eds. G. M. Naughton, S. A. Rolfe, and I. Siraj-Blatchford (Berkshire, UK: Open University Press), 155–175.
Egert, F., Fukkink, R. G., and Eckhardt, A. G. (2018). Impact of in-service professional development programs for early childhood teachers on quality ratings and child outcomes: a meta-analysis. Rev. Educ. Res. 88, 401–433. doi: 10.3102/0034654317751918
Evertsen, C., Størksen, I., and Kucirkova, N. (2022). Professionalsʼ perceptions of the classroom assessment scoring system as a structure for professional community and development. Eur. Early Child. Educ. Res. J. 30, 701–714. doi: 10.1080/1350293X.2022.2031245
Fenech, M. (2011). An analysis of the conceptualisation of ‘quality’ in early childhood education and care empirical research: promoting ‘blind spots’ as foci for future research. Contemp. Issues Early Childh. 12, 102–117. doi: 10.2304/ciec.2011.12.2.102
Furenes, M. I., Andresen, A. K., Løkken, I. M., Moser, T., Nilsen, T. R., and Dahl, A.-L. S. (2023). Norwegian research on ECEC quality from 2010 to 2021—a systematic scoping review. Educ. Sci. 13:600. doi: 10.3390/educsci13060600
Gulbrandsen, L., and Sundnes, A. (2004). Fra best til bedre? Kvalitetssatsing i norske barnehager. Statusrapport ved kvalitetssatsingsperiodens slutt. [from best to better? Quality Investment in Norwegian Kindergartens. Status report at the end of the quality investment period]. Oslo: Norsk institutt for forskning om oppvekst, velferd og aldring (NOVA). Available online at: https://oda.oslomet.no/oda-xmlui/handle/20.500.12199/4880 (Accessed November 11, 2025).
Hansen, E. J., and Broekhuizen, M. (2019). Quality of the language-learning environment and vocabulary development in early childhood. Scand. J. Educ. Res. 65, 302–317. doi: 10.1080/00313831.2019.1705894
Helmerhorst, K. O. W., Riksen-Walraven, J. M. A., Vermeer, H. J., Fukkink, R. G., and Tavecchio, L.-W. C. (2014). Measuring the interactive skills of caregivers in child care centers: development and validation of the caregiver interaction profile scales. Early Educ. Dev. 25, 770–790. doi: 10.1080/10409289.2014.840482
Howard, S. J., Siraj, I., Melhuish, E., Kingston, D., Neilsen-Hewett, C., de Rosnay, M., et al. (2018). Measuring interactional quality in pre-school settings: introduction and validation of the sustained shared thinking and emotional wellbeing (SSTEW) scale. Early Child Dev. Care 190, 1017–1030. doi: 10.1080/03004430.2018.1511549
Howes, C., Burchinal, M., Pianta, R., Bryant, D., Early, D., Clifford, R., et al. (2008). Ready to learn? Children’s pre-academic achievement in pre-kindergarten programs. Early Child Res. Q. 23, 27–50. doi: 10.1016/j.ecresq.2007.05.002
Ishimine, K., and Tayler, C. (2014). Assessing quality in early childhood education and care. Eur. J. Educ. 49, 272–290. doi: 10.1111/ejed.12043
Johansson, J.-E. (2020). Barnehagens opprinnelse, styring og praksis. En introduksjon til barnehagepedagogikkens kraftfelt [The origin, management, and practice of the kindergarten. An introduction to the force field of kindergarten pedagogy]. Bergen: Fagbokforlaget.
Katz, L. G. (1992). Early childhood programs: multiple perspectives on quality. Child. Educ. 69, 66–71. doi: 10.1080/00094056.1992.10520891
Kaushik, V., and Walsh, C. A. (2019). Pragmatism as a research paradigm and its implications for social work research. Soc. Sci. 8, 1–17. doi: 10.3390/socsci8090255
La Paro, K. M., Hamre, B. K., and Pianta, R. C. (2012). Classroom assessment scoring system (CLASS) manual, toddler. Baltimore, MD: Paul H. Brookes Publishing.
Løkken, I., Broekhuizen, M., Barnes, J., Moser, T., and Bjørnestad, E. (2018). Interaction quality and children’s social-emotional competence in Norwegian ECEC. JECER 7, 338–361.
Ministry of Education and Research (2005) Lov om barnehager. [Kindergarten Act]. Available online at: https://www.regjeringen.no/no/tema/familie-og-barn/barnehager/artikler/regelverk-pa-barnehageomradet-/id620720/ (Accessed November 11, 2025).
Ministry of Education and Research (2017). Kompetanse for fremtidens barnehage. Revidert strategi for kompetanse og rekruttering 2018–2022 [Competence for the ECEC of the future. Revised strategy for competence and recruitment 2018–2022]. Available online at: https://www.regjeringen.no/contentassets/7e72a90a6b884d0399d9537cce8b801e/kompetansestrategi-for-barnehage-2018_2022.pdf (Accessed November 11, 2025).
Ministry of Education and Research (2022) Kompetanse for fremtidens Barnehage. Revidert strategi for kompetanse og rekruttering 2023–2025. [competence for the ECEC of the future. Revised strategy for competence and recruitment 2023–2025]. Available online at: https://www.regjeringen.no/contentassets/60cb8cea7014464a9cbd22bbb2e3664e/no/pdfs/kompetansestrategi-barnehage.pdf (Accessed November 11, 2025).
Ministry of Education and Research (2023) Barnehager for en ny tid. Nasjonal Barnehagestrategi mot 2030. [ECEC for a new era. National ECEC strategy towards 2030] https://www.regjeringen.no/no/dokumenter/barnehagen-for-en-nytid/id2959402/ (Accessed November 11, 2025).
Moen, K. H., and Mørreaunet, S. (2019). “Ledelse av vurdering i en lærende barnehage [Leading assessment in a learning kindergarten],” in Ledelse av en lærende barnehage [Leadership of a learning kindergarten]. 2nd ed. eds. S. Mørreaunet, K.-Å. Gotvassli, K. H. Moen, and E. Skogen (Bergen: Fagbokforlaget), 185–210.
Moss, P., and Pence, A. (1994). Valuing quality in Early childhood services. New approaches to defining quality. London: Paul Chapman.
Norwegian Directorate for Education and Training (2017). Framework plan for kindergartens. Available online at: https://www.udir.no/contentassets/5d0f4d947b244cfe90be8e6d475ba1b4/framework-plan-for-kindergartens--rammeplan-engelsk-pdf.pdf (Accessed November 11, 2025).
Official Norwegian Reports, 2023:1. (2023). Kvalitetsvurdering og kvalitetsutvikling i skolen – Et kunnskapsgrunnlag. [Quality Assessment and Quality Development in Schools - A Knowledge Base]. Oslo: Ministry of Education and Research. Available online at: https://www.regjeringen.no/no/dokumenter/nou-2023-1/id2961070/ (Accessed November 11, 2025).
Organisation for Economic Co-operation and Development (2015). Early childhood education and care policy review: Norway. Paris: OECD Publishing.
Peeters, J., and Vandenbroeck, M. (2011). “Childcare practitioners and professionalization” in Professionalization, leadership and Management in the Early Years. eds. L. Miller and C. Cable (London: Sage Publications Ltd), 62–76. doi: 10.4135/9781446288795
Phillips, A., and Fenech, M. (2023). Educators’ perceptions of Australia’s early childhood education and care quality assurance rating system. Eur. Early Child. Educ. Res. J. 31, 988–1000. doi: 10.1080/1350293X.2023.2211758
Schön, D. A. (1983). The reflective practitioner: how professionals think in action. New York: Basic Books.
Schön, D. A. (1987). Educating the reflective practitioner: toward a new design for teaching and learning in the professions. San Francisco, CA: Jossey-Bass.
Schön, D. A. (1992). The theory of inquiry: Dewey’s legacy to education. Curriculum Inquiry, 22, 119–139. doi: 10.1080/03626784.1992.11076093
Siraj, I., Kingston, K., Neilsen-Hewett, C., Howard, S. J., Melhuish, E., de Rosnay, M., et al. (2017). Fostering effective Early learning: a review of the current international evidence considering quality in Early childhood education and care Programmes - in delivery, pedagogy and child outcomes. Sydney: NSW Department of Education.
Slot, P. (2018). “Structural characteristics and process quality in early childhood education and care: a literature review,” OECD Education Working Papers No. 176. Paris: OECD Publishing.
Statistics Norway. 2025. Barnehager 2024. [Kindergartens 2024]. Available online at: https://www.ssb.no/en/utdanning/barnehager/statistikk/barnehager (Accessed September 15, 2025).
Stiberg-Jamt, R., Emstad, A. B., Meltevik, S., Buland, T. H., and Furrebø, S. 2015. Evaluering av ordningen med ekstern skolevurdering. [evaluation of the external school assessment approach]. Kristiansand: Oxford Research AS. Utdanningsdirektoratet. Available online at: https://kudos.dfo.no/documents/11138/files/11210.pdf (Accessed November 21, 2024).
Sylva, K., Melhuish, E., Sammons, P., Siraj-Blatchford, I., and Taggart, B. (2011). Pre-school quality and educational outcomes at age 11: low quality has little benefit. J. Early Child. Res. 9, 109–124. doi: 10.1177/1476718X10387900
Taggart, B., Sylva, K., Melhuish, E., Sammons, P., and Siraj, I. 2015. Effective pre-school, primary and secondary education project (EPPSE 3-16+): how pre-school influences children and young people's attainment and developmental outcomes over time (research brief). London: Department for Education. Available online at: https://dera.ioe.ac.uk/id/eprint/23344/1/RB455_Effective_pre-school_primary_and_secondary_education_project.pdf (Accessed November 11, 2024).
Vallberg-Roth, A.-C. (2012). Different forms of assessment and documentation in Swedish preschools. Nord. Barnehageforsk. 5, 1–18. doi: 10.7577/nbf.479
Vandell, D. L., Belsky, J., Burchinal, M., Steinberg, L., and Vandergrift, N. (2010). Do effects of early child care extend to age 15 years? Results from the NICHD study of early child care and youth development. Child Dev. 81, 737–756. doi: 10.1111/j.1467-8624.2010.01431.x
Vermeer, H. J., van IJzendoorn, M. H., Cárcamo, R. A., and Harrison, L. J. (2016). Quality of child care using the environment rating scales: a meta-analysis of international studies. Int. J. Early Child. 48, 33–60. doi: 10.1007/s13158-015-0154-9
Vygotsky, L. S. (1978). Mind in society: development of higher psychological processes. Cambridge: Harvard University Press.
Wagner, J. T., and Einarsdottir, J. (2006). “Nordic ideals as reflected in Nordic childhoods and early education” in Nordic childhoods and early education. eds. J. Einarsdottir and J. T. Wagner (Greenwich, Connecticut: Information Age Publishing), 1–12.
Weisner, T. S. (2008). The Urie Bronfenbrenner top 19: looking back at his bioecological perspective. Mind Cult. Act. 15, 258–262. doi: 10.1080/10749030802186785
White Paper no. 41 (2008-2009). (2009). Kvalitet i barnehagen. [quality in ECEC]. Ministry of Education and Research. Available online at: https://www.regjeringen.no/no/dokumenter/stmeld-nr-41-2008-2009-/id563868/ (Accessed November 11, 2024).
Keywords: early childhood education and care, process quality, assessing quality, improving quality, external observations, external feedback
Citation: Baustad AG, Tobiassen MLO, Smeby KW and Flormælen LS (2025) Early childhood education and care teachers’ experiences of using Early Childhood Quality External to assess process quality. Front. Educ. 10:1542263. doi: 10.3389/feduc.2025.1542263
Edited by:
Torben Næsby, University College of Northern Denmark, Denmark
Reviewed by:
Marja Syrjämäki, University of Eastern Finland, Finland
Mikkel Tore Eskildsen, University College of Northern Denmark, Denmark
Igor Shiyan, WiseHart Limited, United Kingdom
Copyright © 2025 Baustad, Tobiassen, Smeby and Flormælen. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Anne Grethe Baustad, anne.g.baustad@nord.no