- Chair of Primary School Education, Institute of Pedagogy, Faculty of Human Sciences, Julius-Maximilians-University of Würzburg, Würzburg, Germany
Introduction: Children increasingly engage with online content from an early age, but often lack the competencies to critically evaluate it. To foster these skills, suitable assessment instruments are required, yet none currently exist for the elementary school level. Drawing on a conceptual framework comprising four evaluation criteria applied across five content areas, this study addresses the need for a target group-specific operationalization. The aim was to derive concrete indicators to guide the development of an assessment instrument designed to measure elementary school children’s ability to evaluate online content.
Methods: To specify evaluation criteria within the content areas, a qualitative preliminary study was conducted. All German curricula were systematically examined with respect to evaluation competencies in digital contexts. In addition, interviews with media experts were carried out. Both the curricula and the interviews were analyzed using qualitative content analysis to refine and extend existing research with indicators tailored to elementary school children.
Results: Indicators were derived for four evaluation criteria across various forms of online content relevant to elementary school children. For design aspects, indicators include, for example, understanding the role of likes and comments in personalized content or interpreting emojis in chat messages. Indicators for credibility involve distinguishing facts from opinions and evaluating the intent behind influencer content or chain messages. Regarding closeness to reality, children are expected to differentiate real from fictional content, such as assessing YouTube pranks. Finally, identification relates to recognizing digital phenomena, for instance, distinguishing between educational videos and hidden advertising.
Discussion: This study highlights the development of indicators for assessing elementary school children’s evaluation of online content. These indicators enable the construction of standardized items that capture both knowledge and procedural skills using age-appropriate, multimodal online materials.
1 Introduction
Elementary school children engage with various online offerings on a regular basis. Among the most popular social media platforms for this age group are YouTube, TikTok, and WhatsApp, which are primarily accessed for entertainment purposes (Feierabend et al., 2023; Ofcom, 2022). In recent years, empirical research has increasingly examined the use of social media platforms and their associated risks for children and adolescents from educational, psychological, and pediatric perspectives. Depending on individual factors, social media use may negatively affect body image (Modrzejewska et al., 2022) or mental health, potentially contributing to conditions such as depression and anxiety disorders (Mojtabai, 2024). Moreover, 21.1% of children and adolescents aged 10–17 exhibit risky patterns of social media use, with 4.0% of 10- to 13-year-olds meeting ICD-11 criteria for pathological use (Wiedemann et al., 2025). Additional online risks include the rise of cyberbullying (Beitzinger and Leest, 2024), targeted advertising based on children’s personal data (Trevino and Mortin, 2019), and exposure to deceptive content such as fake news, deepfakes, or phishing messages (Dale, 2019).
Overall, the findings presented above should be interpreted in light of individual factors such as gender, self-esteem, and social support, and should not be understood as promoting fear-based or overly protective pedagogical approaches. Rather, children and adolescents need to develop digital competencies and strategies to navigate potential risks and to leverage the potential of digital environments through structured learning opportunities and interventions. One of these digital competencies is children’s ability to critically evaluate online content from an early age (Weisberg et al., 2023; Livingstone, 2014). Strategies for evaluating online content aim, among other things, to help children distinguish between true and false information (Artmann et al., 2023), verify sources or claims across different websites (Paul et al., 2019), and identify manipulated images and videos. To assess and foster such abilities, the underlying construct must first be clearly defined for the elementary school context. The conceptualization of online content evaluation for this target group serves as a theoretical foundation (Jocham and Pohlmann-Rother, 2025, in review; see section 2).
In addition, validated and standardized instruments are required to assess interventions and learning opportunities that aim to develop children’s ability to critically assess online content. Such tools are currently lacking for this age group and are crucial both for empirical research and for educators aiming to foster and assess children’s competencies. To measure children’s abilities, the operationalization must include elementary school-specific requirements and indicators. Purington Drake et al. (2023) emphasize the importance of such specifications, arguing that elementary education requires concrete contexts and situations in which children’s competencies can be assessed. Without this contextualization, items designed to measure complex competencies remain too abstract and difficult for the target group to comprehend. This study aims to address a notable gap in the existing literature by refining the criteria used to evaluate online content in elementary education. While previous academic discussions have offered valuable insights, the criteria have often been described in broad and abstract terms and remain only partially developed with respect to the specific cognitive and developmental needs of this age group. Accordingly, the study derives indicators for the construct of online content evaluation that can serve as a basis for item development in future empirical research.
2 Elementary school children evaluate online content
Given that children engage with online content with varying intentions, the ability to evaluate content in a goal- and task-oriented manner is essential. Children must be able to determine which evaluation criteria are appropriate for their specific task, such as school-related research, or for their particular purpose, such as watching videos for entertainment (Weisberg et al., 2023). In addition, multimodal online content, including visual and auditory elements, plays a significant role. This is particularly relevant, as elementary school children primarily engage with video- and image-based content, with YouTube as the most popular platform. Existing studies on online content evaluation have so far focused on the application of evaluation criteria to (multiple) online texts, such as comparing different websites to answer a question. This emphasis has primarily highlighted reading comprehension skills (e.g., Paul et al., 2018), while overlooking the (multimodal) online content with which children predominantly engage.
Based on established theoretical and normative models as well as empirical research findings, a conceptual framework for evaluating online content for elementary school children has been developed (Jocham and Pohlmann-Rother, 2025, in review). At its core, the framework comprises four evaluation criteria, which are applied across five distinct content areas (see Figure 1). These criteria include the evaluation of design aspects, credibility, closeness to reality, and the identification of characteristics relevant to evaluation. The criteria can be applied to entertaining, knowledge-transferring, commercial, deceptive, and personality-impairing online content (content areas). The following section defines the four evaluation criteria and five content areas; these definitions are not exclusively tailored to elementary school children (ages 6–10 in Germany). The research findings presented serve to further specify the criteria, showing that the context of application may vary (e.g., fake news) and that individual criteria may receive particular emphasis (e.g., the evaluation of expertise).
Figure 1. Conceptualization of online content evaluation (Jocham and Pohlmann-Rother, 2025, in review).
2.1 Evaluation criteria
2.1.1 General design aspects
The way information is presented online can be complex, increasing the challenge for elementary school children to evaluate content effectively (Macedo-Rouet et al., 2013). Critical evaluation of online content requires knowledge and awareness of digital environments, which are characterized by changes in the “language, interaction, and behavior [as part of] different social and cultural contexts” (Yeh and Swinehart, 2020, p. 1736) that form online. To account for the specific nature of digital spaces, this criterion focuses on the evaluation of various design aspects, including their potentially manipulative intent. This includes general design aspects of the internet, such as connectivity and search engine optimization (Kammerer and Gerjets, 2012), as well as features of digital platforms, including multimodality and algorithmic structuring (Cotter and Reisdorf, 2020). Despite theoretical overlaps with credibility assessments of context (Forzani, 2020) and with platform- and content-related prior knowledge (Brand-Gruwel et al., 2017; Vanwynsberghe et al., 2012), the unique characteristics of online content and platforms are better conceptualized within the criterion of design aspects rather than subsumed under credibility (Sundar, 2008). This is particularly relevant for deriving concrete criteria and indicators that can be used to assess and foster competencies in the elementary school age group.
At the core of this criterion is the awareness children need to develop regarding the fundamental characteristics of digital content—and how these characteristics can add complexity to the application of other evaluation criteria. Technical features of digital media serve as cues that trigger various heuristics, which in turn influence the users’ evaluations (Sundar, 2008). For example, the closeness to reality heuristic predicts greater trust in audiovisual modalities, as they are perceived to more closely resemble the “real” world (see also Bezemer and Jewitt, 2010). Multimodal forms of presentation—such as videos, images, and hypertext—are defining features of digital content. Studies involving children point to contradictory mechanisms: some findings suggest that videos exert a stronger influence on attitude change, while others indicate that children assign greater authority to textual content (Salmerón et al., 2020). This highlights the need for items that assess evaluative competencies based on multimodal online content. Relevant indicators can be partially derived from the following findings. However, gaps become apparent in goal- and task-specific evaluation contexts that also consider children’s patterns of interaction with online content.
First, knowledge about how online information is presented can play a significant role in the evaluation process. Unlike print media, digital environments require awareness of elements such as hyperlinks (e.g., PageRank), layout (e.g., graphics or navigation functionality), typography (e.g., vocabulary use) (Keshavarz, 2021), structure (e.g., the arrangement and formatting of visual or textual elements), and URLs (Forzani, 2020). Empirical findings highlight the strong influence of information presentation. For instance, students tend to focus more on the aesthetic design of social media content than on its actual message or source (Shabani and Keshavarz, 2022). Furthermore, third- to fifth-grade students rate websites as more trustworthy when they include more dynamic graphics (Eastin et al., 2006). A study involving fifth- to eighth-grade students also shows that learners primarily rely on superficial cues—such as typographic emphasis or keywords—when assessing the relevance of online content (Rouet et al., 2011).
Further indicators relate to knowledge about the design of specific platforms, which also becomes relevant in the evaluation process (e.g., the peer-editing structure of Wikipedia; Miller and Bartlett, 2012). Many digital platforms are intentionally designed to be more appealing and engaging to children than other leisure activities, often through reward systems and persuasive design elements (Radesky, 2021). However, these individually positive experiences are frequently leveraged to serve other—often commercial—purposes. As a result, it is essential that users learn to critically evaluate how (social media) platforms influence their own perceptions. This can be supported by introducing children to overarching persuasive strategies that are common across many (social media) platforms (Tandoc et al., 2021). This includes, for instance, developing an understanding of their digital footprint, which enables them to critically evaluate personalized content and phenomena such as filter bubbles (Cotter and Reisdorf, 2020; Weisberg et al., 2023). Moreover, it is essential to understand specific design aspects, such as characteristics of quality journalism (Association of College and Research Libraries, 2015) and persuasive techniques like image and video editing (Syam and Nurrahmi, 2020). For example, elementary school children need specific knowledge to identify and evaluate online advertising and influencer marketing. This includes understanding advertising language, the blending of entertainment and promotional content, and strategies for building relationships with consumers (Rozendaal et al., 2011; Evans et al., 2019). Although elementary school children use social media frequently, research involving this age group remains underrepresented. One possible reason is that social media platforms are not designed for this age group.
2.1.2 Credibility
The concept of credibility is defined and operationalized in various ways across theoretical and empirical studies. Credibility judgments may be made at different levels or in combination. First, credibility can be assessed with regard to specific content, such as search results or social media posts, which is interpreted in light of one’s prior knowledge and beliefs (Anttonen et al., 2023). In the context of a (research) goal framed as a problem-solving process (Kim et al., 2019), judgments about the relevance and usefulness of content are also essential (Keshavarz, 2021; Eickelmann et al., 2019). Furthermore, online content can be evaluated based on its argumentation (Forzani, 2020), using criteria such as truthfulness (Hilligoss and Rieh, 2008), accuracy (Tate, 2018), objectivity (Keshavarz, 2021), and clarity (Eickelmann et al., 2019). To assess the quality and evidential basis of online content (Forzani, 2020), the strategy of corroboration—comparing information across multiple sources—is particularly effective (Eickelmann et al., 2019). These and similar cognitively demanding strategies are more complex than evaluating the source (e.g., author, expert, or publisher) and are used less frequently by adolescents (Kiili et al., 2008). Closely related to content evaluation is the assessment of sources, which serves to verify the reliability and objectivity of information (Kim, 2019). Evaluation strategies used by skilled readers (Flanagin and Metzger, 2010) include assessing trustworthiness, for example the objectivity or impartiality of content, and expertise, including author qualifications or document type (Braasch et al., 2009; Wineburg, 1991; Fogg and Tseng, 1999; McGrew and Byrne, 2021). The concept of expertise refers to the author’s or user profile’s professional experience or qualifications in a specific domain (Forzani, 2020). This assessment is particularly challenging on digital platforms, where anyone can potentially assume the role of an expert (Chinn et al., 2020). Additionally, analyzing the source’s intent is crucial (Polanco-Levicán and Salvo-Garrido, 2022), raising questions about personal interests and political or commercial motives. Strategies for verifying source information are often grouped under the term sourcing (Wineburg, 1991), which involves consulting multiple sources to validate information (Duncan et al., 2018).
Empirical studies consistently suggest that students, university learners, and adults struggle to evaluate online content effectively. This poses a risk of being influenced by misinformation or inaccurate representations (Stanford History Education Group, 2016). As a result, the importance of targeted educational interventions is increasingly emphasized. On the one hand, elementary and high school teachers report that students tend to overestimate their ability to handle online information and often reuse inaccurate content (Miller and Bartlett, 2012). On the other hand, only a portion of students report that they learn how to handle internet-related tasks in school, as shown by recent ICILS findings (Eickelmann et al., 2024). When examining students’ source evaluation skills, it becomes evident that seventh graders often cannot determine whether an author is an expert (Coiro et al., 2015). Similarly, Kiili et al. (2018) found that half of sixth graders did not question the credibility of a commercial source. Overall, secondary school students rarely attend to source characteristics, even though they are generally capable of doing so (Paul et al., 2017). At the same time, Kuiper et al. (2008) showed that students, regardless of academic performance, possess knowledge about search and evaluation strategies, but do not apply it consistently. The reasons for this are multifaceted and influenced by various factors, including social aspects.
Given findings from studies with adults and adolescents, it is unsurprising that elementary school children struggle to distinguish between factual reporting and opinion (Kerslake and Hannam, 2022). However, this distinction is crucial for differentiating between largely neutral (journalistic) and biased information. To do so, children must be familiar with characteristics of credible and ethical journalism (Weisberg et al., 2023). Research also shows that elementary students can identify source information and assess an author’s intent and expertise in age-appropriate reading tasks. However, they struggle to transfer these skills to multiple documents (Paul et al., 2018), which is particularly relevant in digital contexts due to the interconnected nature of information. Macedo-Rouet et al. (2013) similarly found that fourth and fifth graders could identify information sources, but had difficulty evaluating the knowledge of those sources using textual cues. Following an intervention, students were better able to evaluate source credibility (Macedo-Rouet et al., 2013). Other studies also suggest that sourcing skills can be improved through targeted prompts, such as citation guidance (Paul et al., 2019), and through training, which in turn supports the development of critical thinking (e.g., Pérez et al., 2018).
A substantial portion of the literature on the evaluation of online content focuses on criteria for determining credibility. Although recent findings increasingly address the digital context, much of the research still centers on information-seeking scenarios, which represent only a fraction of elementary school children’s actual media use. Contexts in which children pursue personal goals have received significantly less attention. Moreover, the emphasis has largely been on the evaluation of (multiple) online documents, rather than multimodal online content that includes not only text but also audio, images, and video. Existing insights can be used to derive indicators for new contexts, such as multimodal online content.
2.1.3 Closeness to reality
This criterion has gained importance as a growing number of individuals receive and disseminate content through interconnected platforms and new forms of communication. On social media platforms in particular, the boundaries between consumers and producers are increasingly blurred (Vanwynsberghe, 2014), in contrast to traditional mass media, which are typically produced by trained professionals within a limited number of media organizations. In digital environments, content is shaped by individual motivations, experiences, and values, resulting in diverse representations of reality (Pangrazio and Selwyn, 2018). This diversity, along with rapid technological advancements, makes it increasingly difficult to distinguish between real, fictional, or manipulated content (Cho et al., 2022). For example, technologies such as deepfakes enable highly realistic editing of audio, images, and video. Since elementary school children primarily use videos and images on their preferred platforms, these editing capabilities make it harder for them to detect manipulation or fakes and to assess the authenticity of content (Chesney and Citron, 2019). Acquiring the complex knowledge required for this is particularly challenging for elementary school children.
Empirical findings in early childhood and elementary education primarily focus on children’s evaluations of the closeness to reality of fictional or real characters, events, people, or other aspects in television, stories, or fairy tales (e.g., Li et al., 2015). Children as young as 3 years old are capable of distinguishing between reality and fantasy at a basic level (Woolley and Ghossainy, 2013). Numerous studies show that children develop and refine these abilities with age, depending on the context. For example, their ability to reliably identify real people or events increases, while they are less likely to classify unrealistic or fictional elements as real (e.g., Bunce and Harris, 2013). Nevertheless, elementary school children may still believe in magical figures or events and may mistakenly judge unfamiliar content as impossible (Mares and Bonus, 2019). Preschool children (ages 4–5) tend to justify their distinctions between real and fictional characters based on authenticity, while children aged 5–7 also consider whether natural laws are violated (e.g., Snow White being revived by a kiss) (Bunce and Harris, 2013). As they grow older, elementary students increasingly draw on their own experiences and other sources of information to evaluate closeness to reality (Woolley and Ghossainy, 2013). This development is rooted in cognitive growth: although elementary students can sometimes make similar judgments about the closeness to reality as adults, doing so requires significantly more cognitive effort (Li et al., 2015). The significance of evaluating the perceived closeness to reality of online content becomes evident when considering its impact on the effectiveness of media literacy training programs (Cho et al., 2019; Cho et al., 2020).
Given the popularity and frequent use of YouTube (Pew Research Center, 2020), recent studies have also examined how children assess the closeness to reality of YouTube content (e.g., Hassinger-Das and Dore, 2023; Hassinger-Das et al., 2020; Martínez and Olsson, 2019). These studies highlight the challenge of multiplicity: videos cannot always be clearly categorized as real or fictional, and various aspects must be evaluated and weighed (Cho et al., 2022). One illustrative example is a video featuring a real YouTuber caring for fictional (digital) pets in an app. Initial findings with children aged 3–8 show that individuals in smartphone videos are perceived as more real than those on television. The perceived closeness to reality of individuals on YouTube is lower than in smartphone videos but higher than on television, suggesting that children may find it more difficult to evaluate the closeness to reality of YouTube content—possibly due to the platform’s diversity and complexity. Children of all ages were able to distinguish between formats. They justified their evaluations of YouTube personalities using “medium-objective” reasoning, whereas their judgments of the other two formats were primarily based on characteristics of the person (Hassinger-Das et al., 2020). These tendencies increase with age, suggesting that older children possess more differentiated knowledge about the YouTube platform or increasingly use this knowledge to evaluate the closeness to reality (Hassinger-Das and Dore, 2023). Finally, children’s preferences for certain videos could be predicted by how real they perceived the videos to be (Hassinger-Das et al., 2020). Potential indicators may therefore include findings from previous research on YouTube formats, while expanding the scope to other relevant platforms could be beneficial in covering a broader range of topics and content areas.
2.1.4 Identification
The previously described criteria focus on online-specific aspects that can support children in evaluating online content. However, a fundamental prerequisite is that users can identify key features relevant to evaluation (e.g., content genre) in the first place. Only then can they assess whether these features are relevant to their specific goals and whether they support or undermine the quality of the content (Lucassen and Schraagen, 2011). The ability to identify such features is also addressed in the revised version of Bloom’s taxonomy, where identifying is part of the learning objective recognizing and serves as a necessary condition for more complex problem-solving processes such as evaluation itself (Mayer, 2002). The taxonomy should not be interpreted as a rigid hierarchy of lower- and higher-order cognitive tasks. For example, identifying hidden features or phenomena (e.g., phishing) may be more challenging for children than evaluating familiar and obvious content (Dubs, 2009). Equally important is the consideration of how knowledge across different content areas can support the critical evaluation of online content. This assumption is supported by empirical studies showing that different evaluation competencies are required depending on the type of online content, for instance when comparing neutral and commercial content (Kiili et al., 2018). To better understand how the identification of features contributes to the evaluation itself, three research contexts are considered, from which initial indicators can be derived.
The first area concerns the identification of genre, for example blogs, wikis, or video formats, which can significantly improve online research (Leeder, 2016). This is explained by the activation of implicit knowledge, which enables predictions about complexity and other genre-specific features. As a result, cognitive resources are freed up for further evaluation processes (Santini et al., 2011). However, studies show that students often lack comprehensive knowledge of online genres that would allow for such cognitive relief (e.g., Leeder, 2016; Sidler, 2002). Moreover, research suggests that the genre of online texts can imply specific intentions and trigger expectations related to credibility evaluation (Kiili et al., 2023).
The second area involves the identification of specific phenomena in digital environments. When children suspect or identify a deepfake in a video, this influences their evaluation of other aspects, such as the perceived closeness to reality or the credibility of the content. This highlights that recognizing complex online phenomena, for instance phishing, cyber grooming, or clickbait, requires abstraction skills that are cognitively demanding. On social media, the boundaries between entertainment, information, and marketing strategies are increasingly blurred—something that is particularly difficult for children to detect (Lupiánez-Villaneuva et al., 2016; Scholl et al., 2007). However, studies on elementary school children’s developmental stages show that they are potentially capable of recognizing advertising and reflecting on aspects of commercial privacy. Between the ages of 7 and 11, information processing improves significantly (analytical stage), enabling more complex knowledge about advertising and more detached reflection (John, 1999). Kerslake and Hannam (2022), for example, found that younger children struggle to recognize covert advertising. Similar assumptions can be made for deceptive content. Qualitative studies indicate that elementary school children already possess knowledge about fake news: they are aware of its existence, can define the term, cite everyday examples, and describe identification strategies (Tamboer et al., 2024; Vartiainen et al., 2023). However, it must be noted that the children surveyed rarely named all key features of fake news (Tamboer et al., 2024) and were less able to articulate deeper evaluation strategies (e.g., quality and consistency of evidence, Vartiainen et al., 2023). Although initial intervention studies on dealing with disinformation in elementary education exist (e.g., Artmann et al., 2023), research is still in its early stages. The challenge of sustainably promoting the identification of complex phenomena in elementary school is illustrated by an intervention study by Lastdrager et al. (2017), which found no lasting improvement in the identification of phishing messages among children aged 9–12.
The third area relates to credibility evaluation, which can involve many criteria, for instance expertise, recency, or trustworthiness (see 2.1.2). Identifying credible content or sources can serve as an initial anchor for evaluation (Lucassen and Schraagen, 2011). When clear indicators are available—such as identifying the most credible source in a search result—working memory capacity is freed up for deeper information processing (Keßel, 2017).
2.2 Online context
To assess and promote the presented evaluation criteria beyond an abstract context, specific areas of application are required. The importance of online context is also emphasized by Kim (2019), who advocates for the development of context-based (credibility) models. As a result, the conceptualization integrates five content areas reflecting the online contexts of elementary school children. Prior to outlining the content areas, attention is directed to the social media platforms where such online content is commonly encountered and engaged with by elementary school children.
2.2.1 Ethical considerations for social media in elementary schools
Social media platforms are not designed for elementary school children and explicitly exclude them through their terms of use. Nevertheless, evidence shows that children increasingly engage with these platforms. This engagement is closely linked to greater access to personal devices, which enable independent use. By the end of elementary school, both the frequency of use and the proportion of children owning a mobile phone rise significantly, from 44% at age 9 to 91% at age 11 (Ofcom, 2022). This creates a dilemma: children are accessing online content that is not appropriate for their age. For instance, 11% of children aged 6–11 report having encountered content that made them feel uncomfortable, most of which was pornographic or erotic (Feierabend et al., 2023; von Soest, 2023). Research increasingly points to the negative impact of social media on children’s mental health. A large-scale longitudinal study (n = 17,409) identifies sensitive developmental windows (girls: 11–13 and 19 years; boys: 14–15 and 19 years), during which increased social media use predicts lower life satisfaction, and vice versa (Orben et al., 2022). This may explain survey findings among German adolescents aged 16–17, in which 45% supported a minimum age of 16 years for creating a personal social media account, as applied in Australia (Wedel et al., 2025). Against this backdrop, unrestricted access to social media platforms across Europe appears problematic. Educators and institutions must consider how to respond to this reality. If children in a classroom use different platforms, this should be acknowledged as part of their lived experience and integrated into educational efforts. To do so, teachers need to be aware of the learning prerequisites within their class and of the relevant platforms to foster critical evaluation competencies.
2.2.2 Online content areas in children’s media use
On social media platforms, children encounter a wide range of online content, each potentially requiring different evaluation criteria. The application of these criteria depends on the intent and purpose behind the content’s use. To derive indicators for the use of evaluation criteria that are specifically relevant for elementary school settings, the following section briefly introduces selected content areas. These areas offer a foundation for identifying concrete situations in which the promotion and assessment of evaluative competencies can be meaningfully differentiated.
2.2.2.1 Entertaining online content
Numerous qualitative and quantitative studies confirm that elementary school children primarily use the internet for school-related and entertainment purposes (e.g., Trevino and Mortin, 2019; Feierabend et al., 2023). Children describe their online activities with phrases such as “access [to] fun games and cool things” (female, age 8 years), “watch YouTube” (male, age 7 years), or “make videos” (female, age 7 years) (Donelle et al., 2021, p. 4). These activities are reflected in the popularity of platforms such as YouTube, WhatsApp, and TikTok, with short-form video content gaining increasing traction. Content preferences vary by age and gender—for example, boys tend to engage more with gaming- or sports-related content, while girls more often watch music videos, tutorials, or influencer content (Ofcom, 2022).
2.2.2.2 Knowledge-transferring online content
Beyond entertainment, the internet is the primary medium for information searches. In Germany, 71% of elementary school children actively search online for information related to schoolwork, 51% research consumer products, and 45% use online searches to solve everyday problems (Feierabend et al., 2023; von Soest, 2023). In addition to search engines used for student projects, there are other digital offerings aimed at informing or educating users—such as educational influencers (see Carpenter et al., 2023). These include news articles, tutorials, explainer videos, learning apps, or online courses. Of particular importance is the fact that children use popular platforms like YouTube to acquire knowledge through multisensory content (Donelle et al., 2021), which they perceive as having high educational value (Hassinger-Das et al., 2020). This helps explain YouTube’s longstanding dominance as both an entertainment and search platform.
2.2.2.3 Commercial and deceptive online content
During both academic and recreational online activities, children inevitably encounter commercial and deceptive content. Advertising includes influencer marketing, which many brands use—sometimes exclusively—to promote their products (De Veirman et al., 2017). In the toy industry, for instance, entertaining unboxing and toy-play videos are produced specifically to appeal to children (Radesky et al., 2020). Deceptive practices may include “lies, omission, evasion, equivocation and generating false conclusions with objectively true information” (Levine, 2014, p. 381). In digital environments, deception often manifests in the form of fake news, identity theft, phishing, financial fraud (Dale, 2019), such as Ponzi and pyramid schemes or scam cryptocurrencies (Chiluwa, 2019), AI-generated deepfakes (Weisberg et al., 2023), or cyber grooming (Singh and Gitte, 2014). However, elementary school children may not encounter all these phenomena.
2.2.2.4 Personality-impairing online content
In addition to commercial and deceptive content, the digital space presents numerous ways in which individuals’ personal development can be negatively affected. Both governmental and private actors, including companies and individuals, may infringe on informational self-determination (for example, by collecting and disclosing personal data), violate image rights, or otherwise compromise personal privacy (Bumke and Voßkuhle, 2020). Other threats to the “development and preservation of personality” (translation by the authors; Bumke and Voßkuhle, 2020, p. 88) include issues related to cyber safety, cyberbullying, and hate speech. Cyber safety refers to children’s ability to navigate the internet safely and minimize risks by managing their digital footprint and responding to challenging content (Roddel, 2006). This includes knowledge about handling one’s own personal data in the face of threats such as phishing, cyber grooming, and targeted advertising, as well as knowledge about handling others’ data, including copyright and data protection, with overlaps with the construct of data literacy. “The ability to collect, manage, evaluate, and apply data, in a critical manner” (Ridsdale et al., 2015, p. 2) is closely linked to children’s capacity to mitigate data-driven threats in cyber safety contexts (e.g., dataveillance; Lupton and Williamson, 2017) while also respecting the rights of others. Cyberbullying and hate speech represent a second major concern, as elementary school children may be both victims and offenders. Prevalence rates vary depending on the study and methodology. A recent WHO/Europe study (Cosma et al., 2024) involving 279,000 participants found that one in six children aged 11–15 had experienced cyberbullying. For elementary school children, prevalence rates range between 13 and 31% (Muller et al., 2017; Tao et al., 2022).
In summary, elementary school children encounter a wide variety of online content that presents both challenges and opportunities. To navigate these effectively, children must be familiar with and able to apply evaluation criteria across different online contexts. However, existing research often focuses on older age groups, is formulated too broadly, or lacks specificity when it comes to the needs of younger children. Therefore, the aim of this study is to specify evaluation criteria for online content that are appropriate for elementary school children. This includes linking concrete types of online content with corresponding, age-appropriate evaluation criteria.
3 Methodology
To address the research gap outlined above, this study aims to specify evaluation criteria suitable for elementary school children based on prior research. As part of this process, all German curricula were analyzed, and interviews with media experts were conducted to identify additional indicators. Accordingly, the present study represents a qualitative preliminary study (Mayring, 2001) within the context of an item development process.
3.1 Data collection
German Curricula: Each of Germany’s 16 federal states has its own curriculum and guidelines outlining subject-specific and cross-disciplinary competency goals. The integration of digital competencies in elementary education is guided by overarching national directives (KMK, 2016), which are based on the European DigComp framework (Ferrari, 2013). Since the DigComp framework is not specifically tailored to elementary education and lacks age-specific requirements, the federal states in Germany define digital competencies within subject profiles or in supplementary media and digital competency plans. These curricula serve as binding reference frameworks and form the foundation for educational practice in elementary schools. Through clearly defined goals and content across subjects and learning domains, they aim to ensure quality and comparability across schools. Competency goals are also assessed in international comparative studies such as PIRLS (German; McElvany et al., 2023) and TIMSS (mathematics; Schwippert et al., 2024). However, there is currently no standardized assessment of digital competencies at the elementary level. Teachers retain autonomy in implementing the curriculum. They determine the pedagogical methods, sequencing of content, and areas of emphasis.
The data basis for this study comprises the subject-specific and cross-disciplinary curricula of each federal state (curricula per state: M = 12.5; SD = 1.7). While implementation varies, the curricula share a common structure. All are competency-based and aim to foster methodological, social, and self-competence. They include defined thematic areas and learning objectives, often supplemented by practical implementation suggestions and sample tasks.
Media Experts: To identify additional indicators, media experts in the education sector were recruited for interviews. Selection criteria included: (1) active engagement in digital media, (2) development of online content and offerings for elementary school children, and (3) design and implementation of programs in digital media for this age group. Based on these criteria, major German child-focused search engines, public broadcasters, and well-known online formats promoting digital literacy (e.g., identifying fake news) were contacted. This resulted in a sample of seven media experts (n = 6 female). Three work in editorial teams of child search engines, two develop support programs for public broadcasting channels, and two serve as media advisors to schools. The experts were on average 45 years old (SD = 8) and predominantly hold degrees or additional qualifications in media studies, media education, information science, or special education.
Prior to the interviews, participants were informed about the study’s objectives and the voluntary nature of participation. They signed data protection and consent forms. Interviews were conducted via Zoom and recorded. On average, interviews lasted 63.7 min (SD = 13.3).
3.2 Expert interviews
The interviews were conducted using a structured guide divided into three thematic sections. As the primary aim was to generate indicators for item development (Reinders, 2022), a semi-structured interview format was chosen. This ensured that all participants received the same core questions while allowing flexibility for follow-up inquiries tailored to individual professional contexts and responses (Merriam and Tisdell, 2015). The first section focused on initial questions (Savin-Baden and Major, 2023) designed to gather background information about the interviewees, such as their educational qualifications and professional roles. The second section aimed to collect in-depth data relevant to the research question. The initial focus was on general media and digital competencies that children need to navigate the internet effectively. This was followed by questions about what experts consider when designing and producing multimodal online content for elementary school children. Subsequently, the discussion was narrowed to evaluation competencies in digital environments. Specific and often challenging types of online content and phenomena commonly encountered by elementary school children were addressed, particularly those requiring critical assessment. The final section of the interview guide was intended to conclude the interview (Reinders, 2022). Throughout the interview, follow-up questions were used to elicit concrete examples from the experts’ professional practice, which could inform the development of relevant use cases for elementary education. This approach ensured that the interviewer could fully understand the content by posing clarifying questions where necessary (Merriam and Tisdell, 2015).
All interviews were recorded, anonymized, and transcribed. Verbal responses were transcribed verbatim, without corrections for grammar, dialect, or colloquial language (literal transcription). Paralinguistic and nonverbal elements were excluded, as they were not relevant to the research focus (Misoch, 2015). The analysis primarily focused on responses related to evaluation competencies and the online contexts in which these competencies are required.
3.3 Qualitative content analysis
The interviews and curricula were analyzed using qualitative content analysis (Mayring, 2022). Since the aim of the study was to derive indicators for item development, the four evaluation criteria of the proposed framework served as the main categories. These categories also guided the development of subcategories, which represent the specific contexts in which the evaluation criteria are applied. These were defined a priori based on five content areas: advertising, knowledge transfer, entertainment, deception, and intrusions into personal integrity. This contextualization is particularly relevant for elementary school children, as indicators should not be abstract but embedded in specific online contexts (Purington Drake et al., 2023). The rationale for the deductive category development was to concretize the existing conceptual framework through empirical data and to support the operationalization of the construct for the target group. At the same time, the data could be readily assigned to the categories, as the interview guide was designed accordingly and the curricula describe evaluation competencies as part of digital literacy.
Following the deductive development of the category system, the first interviews and curricula were coded (Cohen et al., 2018). All units that related to the evaluation criteria or addressed specific internet phenomena that children might potentially need to evaluate were coded. This and all subsequent coding steps were conducted independently by the authors and two additional raters. A consensus coding process was then used to compare all codings across raters and to discuss discrepancies in detail (Kuckartz and Rädiker, 2022). The aim was to validate the deductively developed category system through empirical data and to refine definitions and coding rules to ensure conceptual coherence. Once discrepancies in the consensus coding process had decreased, the entire dataset was coded (Mayring, 2022).
This approach was intended to specify the framework for elementary school children on the basis of empirical data. Therefore, both interview and curriculum data were incorporated into the analysis. Moran-Ellis et al. (2006) refer to this as data integration, where both data sources are weighted equally and address the same research question. Triangulating the data would not have been appropriate given the research question, as the aim was not to weigh or compare the findings from the two approaches against each other, but rather to compile a set of indicators. Triangulation would have applied, for example, if teachers had been asked to what extent they perceive and concretely implement curricular requirements in a specific area. However, this would have required a different interview protocol.
4 Results
The results derived from the deductive category system are presented below in accordance with the four evaluation criteria. For each criterion, insights from both the interview data and curriculum documents are synthesized. This process includes assigning findings to specific content domains or refining those domains where necessary. The empirical data are interpreted in light of the existing theoretical and research literature. Based on this analysis, a series of competency matrices is developed, each grounded in previously established indicators and expanded through the empirical contributions of this study. Since the aim of the study is to identify additional indicators for item development, no analysis of divergences among the media experts is conducted. Any distinct content-related emphases identified in the curricula and interviews are reported.
4.1 Evaluation of general design aspects
Overall, the experts and all curricula emphasized the importance of competencies related to multimedia content, which is particularly engaging for children. Students should be able to interpret various design elements (e.g., images, videos, hypertext) and understand their interrelations (i.e., not just technical skills). One possible element in both self-produced and pre-existing media content is the effect of colors or sounds (“Structure and impact of advertising: identifying, describing, and comparing advertising strategies and design elements, such as color or shape”; MBK, 2011, p. 21). The goal is for children to understand and apply visual and textual elements as tools of advertising and communication in both analog and digital formats, and to evaluate design elements based on specific criteria—such as advertising language or the motives behind advertising in areas like health or mobility (“Design elements of a commercial versus objective information”; Der Senator für Bildung und Wissenschaft, 2007, p. 34). In the interviews, experts emphasized the importance of child-friendly design in online environments. This includes the use of different colors for distinct sections, simplified language, age-appropriate content labels, and a clear, structured layout of websites using frames, headings, subheadings, and advertising elements (“So it really needs to be right next to that ad element—advertisement or ad—although from our perspective, advertisement would actually be the better wording, something kids would understand more easily”; I3, pos. 230–232).
Criteria concerning the influence of design elements appeared almost exclusively in curriculum documents (“Recognize and evaluate design techniques used in digital media offerings”; SMK, 2017, p. 40) and were largely absent from expert interviews. In general, students are expected to develop the ability to critically describe both their own and others’ media productions and to assess them based on design aspects and intended effects using defined criteria. This also includes the evaluation of techniques and strategies used in media manipulation. Furthermore, students should be able to assess the truthfulness of media products by drawing on familiar design aspects (“Students can create and present different types of media products and, based on their understanding of design possibilities in media products, verify the truthfulness of information”; Der Senator für Bildung und Wissenschaft, 2007, p. 15). Additional examples relate to the evaluation of design elements and persuasive techniques in the context of stereotypes, gender roles, beauty ideals, and clichés. In the context of online searches, children need to understand how to interpret search results (“The other thing is how to deal with the search results. What determines a ranking like this? What comes up at the top, what appears further down”; I1, pos. 330–331).
Further considerations regarding the evaluation of design aspects focused on child-appropriate design, a topic primarily emphasized by the experts. Given that elementary school children often use platforms and search engines designed for adults, knowledge about the design of digital environments is essential. According to the experts, this includes an awareness of how websites are interconnected, and a basic understanding of how the internet is structured and functions (“A lot of kids just think, you know, the first hits on the list—that’s the answer they were looking for. Just figuring out what this whole networking thing on the internet is. Like, how do other sites link to other sites, and then those link to something else again. I mean, even a lot of adults do not really get that.”; I1, pos. 326–330). Navigation design and its level of complexity were also identified as critical factors. Regarding content, experts highlighted the need for age-appropriate topic presentation, factual accuracy, and child protection measures (e.g., exclusion of alcohol advertising). Safety features such as moderated chats and comment sections were seen as helpful in reducing risks like cyberbullying and grooming. Moreover, knowledge about algorithms, data usage, and the role of likes in content personalization is already relevant at the elementary school level (“Knowledge about algorithms. For elementary school kids—I actually think they can understand that. You can definitely break it down, and do it in a meaningful way. (.) Like, show them how an algorithm reacts faster and faster.”; I4, pos. 240–244).
Table 1 presents a synthesis of the study’s findings in light of existing theoretical and empirical research. This synthesis results in a set of indicators that serve to operationalize the evaluation of design aspects. The indicators listed under general content in the table refer to the evaluation of design aspects that apply regardless of the application context. These indicators, along with those specific to the content areas, were added to the table where applicable. Importantly, the evaluation of these design aspects is not confined to a single content area; rather, it can be meaningfully transferred across various content areas. For instance, design aspects used in advertising may be applied to other online content by incorporating specific elements from entertaining formats.
4.2 Evaluation of credibility
Educational standards emphasize the (guided) evaluation of credibility of information and sources based on selected criteria (“Critically evaluate sources of information: distinguish between informational and commercial content”; MBWFK, 2019, p. 34). Specific criteria for evaluating internet sources include the identity of the author(s), references, objectivity, and recency. Experts addressed specific criteria for the evaluation of credibility less frequently. Instead, they primarily emphasized that elementary students should be able to critically assess the presence of an imprint and the quality of sources (“So you also have to raise awareness, like, look (.) Who’s behind this site? Why might they actually be right, or credible enough? I mean, are they even, could they be experts?”; I3, pos. 496–499). This is particularly relevant given that children increasingly use social media platforms (e.g., TikTok, YouTube) to inform themselves about current events.
This evaluation applies not only to online texts but also to the credibility of images. In addition, elementary students are expected to evaluate the informational content of their research findings (“Can extract information from age-appropriate texts/media, understand it, and evaluate it within its respective context”; HMKB, 2011, p. 9) and distinguish between factual and interest-driven information. Several curricula emphasize that elementary students should be able to evaluate whether content is true or false, or (un)likely. Further criteria mentioned include balance, informational value, and supporting evidence. Experts also highlighted the importance of assessing the accuracy of content (“But what they obviously cannot do is judge whether what the person is saying is actually true or not. But yeah, you can at least ask: What do you see? What do you hear? And what do you feel?”; I6, pos. 300–302).
Another frequently mentioned criterion emerging from both interviews and curricula is the evaluation of intent in visual as well as verbal communication. Students should be able to judge the different intentions of media in relation to information, entertainment, and manipulation (“Different intentions of media regarding entertainment, advertising, and information. Understand advertising messages and intentions, become familiar with different perspectives in reporting, manipulation”; TMBJS, 2017, p. 8). This includes recognizing advertising messages and intentions, different perspectives in news reporting, and propaganda (“What really comes into play here is getting them to put themselves in the author’s perspective—whether it’s an ad message, a company doing advertising, a blogger, or an editor (.). What is this message, this text supposed to achieve? And I think that’s definitely a goal: to notice these kinds of questions and, through that, to recognize the author’s intention”; I4, pos. 161–165). It also involves analyzing and evaluating the interests of various communication service providers.
Table 2 synthesizes the study’s results with existing research to offer a comprehensive overview of indicators for evaluating credibility. The criteria described can be applied across different content areas. This holds particularly for the empty or sparsely filled cells in Table 2, to which no or only minimal findings could be assigned. Nevertheless, it remains possible to examine personality-impairing online content in terms of intent or trustworthiness, for example in the context of cyber grooming.
4.3 Evaluation of closeness to reality
Many educational frameworks address the critical evaluation of the closeness to reality of media representations, referring to both actual reality and reality as mediated through images and media (“Recognize that media and virtual constructs and environments cannot be directly transferred into reality”; MBWFK, 2019, p. 40). In line with this, students are expected to distinguish between fiction and reality and to apply criteria for differentiating fictional and non-fictional media formats and content. This objective is further reinforced by the goal of separating the real world from the media world (“Distinguish between reality and fiction when engaging with media representations of history, including computer games and cinematic portrayals”; MBS, 2021, p. 194), which includes distinguishing between realistic and fictional images and understanding the relationship between mediated and actual reality. However, the degree of closeness to reality in online content is not always transparent to children, even though this is crucial, as media images shape perceptions of reality, establish aesthetic standards, and influence individual conceptions. Experts also highlighted the growing challenge of evaluating fakes, deceptions, and tricks in multimedia contexts, particularly regarding their degree of closeness to reality (“Especially photos, pictures, tricks—that’s really hard. What’s a trick, and what’s the difference between an act in a photo and a photo trick? Like, a staged photo, a little scene, or a photo trick where someone is actually deceiving you. (.) And the thing is, the whole idea of appearance versus reality is something that only develops over time (.), and even adults still cannot really tell the difference between appearance and reality.”; I5, pos. 78–83). In this context, elementary school children are increasingly confronted with chain messages, which they are expected to evaluate for their closeness to reality.
To address these challenges, curricula recommend introducing students to manipulation techniques as a means of understanding criteria for evaluating closeness to reality. Suggested activities include analyzing film scenes, creating and evaluating image montages, and altering images through cropping or editing (“Question information, alter images by selecting specific sections or through image editing”; Der Senator für Bildung und Wissenschaft, 2007, p. 32). At the same time, experts pointed to the rapid evolution of manipulative formats such as deepfakes and memes, which further complicate the evaluation of authenticity in digital media (“And then there’s this really thin line between, you know, it’s just a funny meme and when it’s actually propaganda. That’s a really fine line”; I4, pos. 350–351).
As shown in Table 3, not all content areas could be specified to the same extent based on the combined insights from curricula, interview data, and previous research. Nevertheless, the findings can be transferred to content areas whose entries remained incomplete or vague. For example, criteria used to evaluate the closeness to reality of deceptive online content (e.g., real vs. fictional deepfakes) can also be applied to commercial online content (e.g., real vs. fictional advertising videos, such as exaggerated claims in promotional messages). The same applies to the evaluation of closeness to reality in news content, which can be assessed for its degree of factual accuracy.
4.4 Identification of relevant characteristics
German educational curricula emphasize the ability to identify relevant features of media as part of evaluative competence. Elementary students are expected to recognize statements in relation to the medium (“Compare information from newspapers, television, and the internet as well as their content”; TMBJS, 2017, p. 5). Furthermore, curricula highlight the importance of identifying advertising and understanding how it works, particularly in contrast to informational content. Students should also develop an awareness of the extent to which media influence their perception of the world (“Students recognize that their worldview is shaped by media”; MBWK M-V, 2020, p. 35). In addition, they are encouraged to uncover media manipulation in their everyday lives. Experts reinforced this perspective by emphasizing that elementary school students need knowledge about internet phenomena to identify and evaluate them. This includes recognizing advertising, memes, deepfakes, and trolls as part of understanding manipulative media practices (“When is something advertising?”; I5, pos. 128–129 and “Fake news—how do you recognize that and so on. That’s really another content-related point that’s important to us”; I2, pos. 163–164).
Curricula also address behavioral norms for digital interaction and cooperation, including the understanding of ethical principles of digital communication (“Follow basic rules of communication when using digital media under guidance, e.g., SMS, email, chat”; MBWFK, 2019, p. 36). This involves recognizing and responding to chain messages, spam emails, hate comments, and insults as part of responsible online behavior (“Respect personal rights and behave respectfully in social networks”; MBWK M-V, 2020, p. 35). Furthermore, children should be sensitized to the handling of their own data and be able to recognize prompts that violate their own or others’ privacy, such as data entry in messengers, forums, comments, or chats (“Comment sections are still pretty common on websites, but how much data do I have to give up just to leave a comment? And also, is it moderated in any way?”; I3, pos. 204–206).
Finally, the findings from interviews and curriculum analysis are synthesized with prior research, as illustrated in Table 4. For all content areas, concrete specifications were identified, which can in turn be used to supplement other content areas. For example, the identification of genres aimed at knowledge transfer can also be applied to genres intended for entertainment (e.g., reality formats).
5 Discussion
To assess knowledge about evaluation criteria and their application in elementary education, it is essential to select and continuously adapt evaluation criteria and online content that is relevant to children’s everyday lives. The development of appropriate assessment instruments requires the conceptualization and operationalization of sub-competencies of digital competence that are tailored to the target group (Siddiq et al., 2016) and go beyond purely normative considerations. Currently, only a few studies assess the status quo of specific sub-competencies in elementary school children using methods other than self-report (e.g., Godaert et al., 2022; Pedaste et al., 2023; Kong et al., 2019; Lazonder et al., 2020; Aesaert and van Braak, 2014).
Based on a conceptual framework for the evaluation of online content, this study aims to derive indicators for assessing elementary school children’s evaluation of online content, which will serve as the basis for future item development. To this end, interviews with media experts and German curricula were analyzed against the framework. The findings were then synthesized with existing research to provide an overview of indicators (Tables 1–4). A qualitative design was chosen because methods such as interviews are suitable for operationalization when there is limited prior knowledge for item construction in a given domain (Reinders, 2022). The results are discussed below, followed by educational implications and limitations.
5.1 Indicators for the evaluation of online content
As part of the operationalization process, it was necessary to define the online contexts in which the evaluation criteria should be applied at the elementary school level. Particular attention was given to specifying evaluation criteria for multimodal online content (Hassinger-Das and Dore, 2023), as this is the type of content children most frequently encounter. Wherever possible, the criteria were concretized across five overarching content areas: advertising, entertainment, knowledge transfer, deception, and intrusions into personal integrity. These areas served as umbrella categories from which a wide range of online content types and phenomena were derived. In accordance with youth protection standards, violent or sexualized online content was excluded from the operationalization, even though students may encounter such potentially harmful content during internet use (Livingstone, 2014). These topics are more appropriately addressed through pedagogical and trust-based engagement.
In integrating the findings from interviews, curricula, and the research literature, an uneven distribution of indicators across the evaluation criteria and/or content areas became apparent. One possible reason for this is that the methodological approaches emphasize different focal points. For instance, numerous German curricula address the critical evaluation of advertising, which also includes specific design aspects. Other content areas, however, are either not addressed or receive significantly less attention. The same applies to specific evaluation criteria, which often refer broadly to the assessment of online content and sources or mention criteria such as “age appropriateness, timeliness, scope, credibility” (TMBJS, 2017, p. 5). However, such references are unlikely to provide teachers with detailed guidance on what these criteria entail and how they can be applied in a task- and goal-oriented manner in elementary education. Moreover, curricula are normative documents that define educational goals and competencies in a formal and rather abstract manner. In contrast, the interview transcripts reflect subjective interpretations and expert knowledge provided by media professionals, which emerged in context- and topic-specific ways during the interviews. For example, the perspectives of media experts varied according to their professional domain. Experts working with child-oriented search engines focused more on evaluation skills in the context of online content searches, whereas those involved in developing support programs and online services for children tended to emphasize risk-related aspects such as misinformation, cyber grooming, or cyberbullying. Indicators derived from previous research primarily relate to the evaluation of multiple online texts and the assessment of sources (e.g., author expertise) by elementary school students (Paul et al., 2018). Multimodal content on social platforms, which adolescents predominantly use, has received considerably less attention in this regard (Veum et al., 2024).
When synthesizing the collected indicators, empty fields in Tables 1–4 also became apparent. These were intentionally left blank because no results emerged from the individual approaches in these areas. However, these fields can be filled by transferring indicators from other fields: indicators that specify an evaluation criterion within one content area (e.g., assessing the intention of online advertising) can be applied to other content areas (e.g., assessing the intention of personality-impairing online content). Nevertheless, the findings provide a more concrete basis for previous theoretical and normative assumptions. National and international frameworks such as DigComp (Vuorikari et al., 2022) do not specifically address the requirements of elementary school children and remain rather general with regard to the individual subdimensions of digital competence. Even media literacy plans tailored to elementary education (e.g., Medienberatung NRW, 2020) largely remain at a general level, for example stating that children should “recognize and evaluate information and its sources as well as underlying strategies and intentions, e.g., in news and advertising” (p. 15). To measure and foster digital competencies in a target-group-specific way, further specification of the individual subdimensions is therefore necessary (Siddiq et al., 2016). Previous studies, however, have operationalized and validated digital competencies using a wide range of subdimensions (e.g., Godaert et al., 2022). Since such instruments must cover many subdimensions at once, it is unsurprising that the evaluation of online content is represented in most of them by only a few items. In such cases, a comprehensive assessment of this subdimension of digital competence cannot be assumed. This study therefore aims to specify various evaluation criteria for the application context of elementary school children in order to develop multimodal items and validate them empirically.
5.2 Educational implications
In addition to the importance of specifying the construct for operationalization, the findings also offer points of reference for curricular considerations. It is important to note that the actual internet usage of elementary school children does not necessarily align with the competency goals outlined in curricula (e.g., in Germany), or is represented there only partially (e.g., social media). Against the backdrop of existing national and international frameworks (e.g., DigComp, Vuorikari et al., 2022; ISTE standards, International Society for Technology in Education, 2016), curriculum guidelines should further specify evaluation criteria, as current frameworks often refer only broadly to evaluating content in terms of ‘credibility and reliability’. This also requires that evaluation criteria be explained through specific application contexts that are relevant for acquiring knowledge and skills at this age. This includes, for example, not only content areas that are frequently integrated into curricula, such as advertising, but also the cross-cutting integration of deceptive or entertaining content that should be critically examined across different platforms (e.g., clickbait on YouTube or cyber grooming in platform chats). Particular attention should be paid to the interrelation of content areas and their underlying intentions in the context of social media use (e.g., Fernández-Gómez et al., 2024). For instance, commercial or deceptive content is often embedded within entertaining formats (e.g., unboxing videos), making it particularly difficult for children to identify. To address this, children need to be familiar with the characteristics of the different content areas and their specific phenomena (e.g., emotionalization in fake news).
Specifying evaluation criteria and their areas of application in the curriculum could potentially support teachers in identifying subject-specific connections and practical applications for their lessons. Evaluation criteria become particularly relevant in the context of (school-based) research tasks (Feierabend et al., 2023), where ideally the specific task determines the selection of appropriate criteria. Furthermore, raising awareness of the application of suitable evaluation criteria in leisure contexts appears to be important. It is therefore important that children learn to apply evaluation criteria in a task- and goal-dependent manner (Weisberg et al., 2023). To implement this in classroom practice, teachers need to be able to assess students’ current level of competence and have access to appropriate instructional materials. These materials should, for example, cover different types of media (such as blogs or news articles), characteristics of content areas (e.g., what defines deceptive content), and platform-specific features (e.g., differences between social media platforms and online newsrooms). Such efforts require that elementary school teachers can respond to their students’ usage behavior by being familiar with the potential risks and opportunities associated with the platforms and online content they engage with (Berger and Wolling, 2019). While a few primarily digital teaching resources are now available for elementary education in Germany (e.g., Fake Finder by SWR) and internationally (e.g., Be Internet Awesome by Google), these resources have rarely been empirically validated. Furthermore, it is essential that elementary school teachers have opportunities to develop their digital competencies in this and related sub-dimensions, for instance through MOOCs (e.g., European Schoolnet Academy). To ensure a consistent knowledge base, mandatory integration of digital competencies into teacher education programs could be one possible approach (OECD, 2023).
5.3 Limitations and outlook
The present study also has methodological limitations. The results of the qualitative content analysis of the German curricula and interviews were synthesized with the existing body of research. However, it is possible that not all previous findings were incorporated, as this was not a systematic literature review. Given the large number of normative requirements alone, additional relevant literature may not have been captured. A research bias toward certain evaluation criteria may also be assumed, as not all criteria have been investigated equally with elementary school children across different contexts. A systematic review would therefore be necessary in the future, one that centers on a specific research question and evaluates sources based on their quality. Additionally, the normative educational frameworks referenced stem exclusively from Germany, as the underlying objective is to develop a measurement instrument for German elementary school children. Nevertheless, additional international curricula for the elementary education sector could be analyzed to identify further specific indicators. The selection of media experts can also be critically discussed (selection bias; von Soest, 2023). The individuals interviewed volunteered to participate, which constitutes self-selection; consequently, the composition of expertise and professional domains was not systematically controlled. In additional interviews, the existing indicators could be validated and expanded, either with the same experts or with new participants.
The data presented in Tables 1–4 aim to identify indicators for elementary school children. Therefore, the data were integrated with the objective of collecting as many indicators as possible for future item development. Gaps in Tables 1–4 point to further research needs, for example regarding indicators for evaluating the closeness to reality of informational content. Moreover, the indicators represent only a snapshot of current technological developments and must be continuously updated to keep pace with the digital experiences of elementary school children. The findings of this study are also not based on the perspectives of elementary school children themselves, which could have contributed additional indicators. In the next phase, the indicators serve as the basis for constructing an initial item pool. To this end, multimodal online content was selected that is relevant to the everyday lives of elementary school children (e.g., familiar formats, influencers) and originates from platforms popular among the target group (YouTube, TikTok, WhatsApp; Ofcom, 2022; Feierabend et al., 2023). The items and indicators were first refined using the think-aloud method with elementary school children and subsequently tested in a pilot study (Theurer et al., 2024). The larger empirical study aimed at validating the items has been completed, and analyses are currently underway.
Data availability statement
The datasets presented in this article will not be made available in their original form, as some interviewees disclosed internal information about the platforms they work for. Requests to access the datasets should be directed to Tina Jocham, tina.jocham@uni-wuerzburg.de.
Ethics statement
Ethical approval was not required for the studies involving humans because at the time of data collection, no ethics committee was established at the Faculty of Human Sciences. However, the informed consent process was coordinated with the university’s data protection officer. The study involved interviews with media experts, who were informed about data anonymization and deletion procedures, among other aspects. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.
Author contributions
TJ: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Writing – original draft, Writing – review & editing. SP-R: Supervision, Validation, Writing – review & editing.
Funding
The author(s) declare that financial support was received for the research and/or publication of this article. This work was supported by the Faculty of Human Sciences of the Julius-Maximilians-University of Würzburg, Germany.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The authors declare that no Gen AI was used in the creation of this manuscript.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Aesaert, K., and van Braak, J. (2014). Exploring factors related to primary school pupils’ ICT self-efficacy: a multilevel approach. Comput. Hum. Behav. 41, 327–341. doi: 10.1016/j.chb.2014.10.006
Anttonen, R., Räikkönen, E., Kiili, K., and Kiili, C. (2023). Sixth graders evaluating online texts: self-efficacy beliefs predict confirming but not questioning the credibility. Scand. J. Educ. Res. 68, 1214–1230. doi: 10.1080/00313831.2023.2228834
Artmann, B., Scheibenzuber, C., and Nistor, N. (2023). Elementary school students’ information literacy: instructional design and evaluation of a pilot training focused on misinformation. JMLE 15, 31–43. doi: 10.23860/JMLE-2023-15-2-3
Association of College and Research Libraries. (2015). “Framework for information literacy for higher education.” Available online at: https://www.ala.org/acrl/sites/ala.org.acrl/files/content/issues/infolit/framework1.pdf.
Beitzinger, F., and Leest, U. (2024). “Cyberlife V. Spannungsfeld Zwischen Faszination Und Gefahr: Cybermobbing Bei Schülerinnen Und Schülern.” Available online at: https://buendnis-gegen-cybermobbing.de/wp-content/uploads/2024/10/Cyberlife_Studie_2024_Endversion.pdf.
Berger, P., and Wolling, J. (2019). They need more than Technology-equipped schools: teachers’ practice of fostering students’ digital protective skills. MaC 7, 137–147. doi: 10.17645/mac.v7i2.1902
Bezemer, J., and Jewitt, C. (2010). “Multimodal analysis: key issues” in Research methods in linguistics. ed. L. Litosseliti (London: Continuum), 180–197.
Braasch, J. L. G., Lawless, K. A., Goldman, S. R., Manning, F. H., Gomez, K. W., and Macleod, S. M. (2009). Evaluating search results: an empirical analysis of middle school students' use of source attributes to select useful sources. J. Educ. Comput. Res. 41, 63–82. doi: 10.2190/EC.41.1.c
Brand-Gruwel, S., Kammerer, Y., van Meeuwen, L., and Gog, T. (2017). Source evaluation of domain experts and novices during web search. J. Comput. Assist. Learn. 33, 234–251. doi: 10.1111/jcal.12162
Bunce, L., and Harris, M. (2013). “He hasn't got the real toolkit!” young children's reasoning about real/not-real status. Dev. Psychol. 49, 1494–1504. doi: 10.1037/a0030608
Carpenter, J., Shelton, C., and Schroeder, S. (2023). The education influencer: a new player in the educator professional landscape. J. Res. Technol. Educ. 55, 749–764. doi: 10.1080/15391523.2022.2030267
Chesney, B., and Citron, D. (2019). Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. California Law Review 107, 1753–1819. doi: 10.15779/Z38RV0D15J
Chiluwa, I. M. (2019). “'Truth,' lies, and deception in ponzi and pyramid schemes” in Handbook of research on deception, fake news, and misinformation online. eds. I. E. Chiluwa and S. A. Samoilenko, vol. 2019 (Information Science Reference/IGI Global), 439–458.
Chinn, C., Barzilai, S., and Duncan, R. G. (2020). Education for a “Post-truth” world: new directions for research and practice. Educ. Res. 50, 51–60. doi: 10.3102/0013189X20940683
Cho, H., Cannon, J., Lopez, R., and Li, W. (2022). Social media literacy: a conceptual framework. New Media Soc. 26, 941–960. doi: 10.1177/14614448211068530
Cho, H., Li, W., Shen, L., and Cannon, J. (2019). Mechanisms of social media effects on attitudes toward E-cigarette use: motivations, mediators, and moderators in a National Survey of adolescents. J. Med. Internet Res. 21:e14303. doi: 10.2196/14303
Cho, H., Song, C. C., and Adams, D. (2020). Efficacy and mediators of a web-based media literacy intervention for indoor tanning prevention. J. Health Commun. 25, 105–114. doi: 10.1080/10810730.2020.1712500
Cohen, L., Manion, L., and Morrison, K. (2018). Research methods in education. 8th Edn. London: Routledge.
Coiro, J., Coscarelli, C., Maykel, C., and Forzani, E. (2015). Investigating criteria that seventh graders use to evaluate the quality of online information. J. Adolesc. Adult. Lit. 59, 287–297. doi: 10.1002/jaal.448
Cosma, A., Molcho, M., and Pickett, W. (2024). “A focus on adolescent peer violence and bullying in Europe, Central Asia and Canada” in Health behaviour in school-aged children international report from the 2021/2022 survey 2 (Copenhagen: WHO Regional Office for Europe).
Cotter, K., and Reisdorf, B. C. (2020). Algorithmic knowledge gaps: a new dimension of (digital) inequality. Int. J. Commun. 14, 745–765.
Dale, T. (2019). “The fundamental roles of Technology in the Spread of fake news” in Handbook of research on deception, fake news, and misinformation online. eds. I. E. Chiluwa and S. A. Samoilenko, vol. 2019 (IGI Global), 122–137.
De Veirman, M., Cauberghe, V., and Hudders, L. (2017). Marketing through Instagram influencers: the impact of number of followers and product divergence on Brand attitude. Int. J. Advert. 36, 798–828. doi: 10.1080/02650487.2017.1348035
Der Senator für Bildung und Wissenschaft (2007). “Sachunterricht: Bildungsplan Für Die Primarstufe.” Available online at: https://www.lis.bremen.de/sixcms/media.php/13/Primar_Sachunterricht_2007.pdf.
Donelle, L., Facca, D., Burke, S., Hiebert, B., Bender, E., and Ling, S. (2021). Exploring Canadian Children’s Social Media Use, Digital Literacy and Quality of Life: Pilot Cross-Sectional Survey Study. JMIR Formative Research 5, 1–11. doi: 10.2196/18771
Dubs, R. (2009). Lehrerverhalten: Ein Beitrag zur Interaktion von Lehrenden und Lernenden im Unterricht. Pädagogik. Stuttgart: Franz Steiner Verlag. Available online at: http://www.socialnet.de/rezensionen/isbn.php?isbn=978-3-515-09304-0.
Duncan, R. G., Chinn, C. A., and Barzilai, S. (2018). Grasp of evidence: problematizing and expanding the next generation science standards’ conceptualization of evidence. J. Res. Sci. Teach. 55, 907–937. doi: 10.1002/tea.21468
Eastin, M. S., Yang, M.-S., and Nathanson, A. I. (2006). Children of the net: an empirical exploration into the evaluation of internet content. J. Broadcast. Electron. Media 50, 211–230. doi: 10.1207/s15506878jobem5002_3
Eickelmann, B., Bos, W., Gerick, J., Goldhammer, F., Schaumburg, H., Schwippert, K., et al. (Eds.) (2019). ICILS 2018 #Deutschland: Computer- Und Informationsbezogene Kompetenzen Von Schülerinnen Und Schülern Im Zweiten Internationalen Vergleich Und Kompetenzen Im Bereich Computational Thinking. Münster: Waxmann.
Eickelmann, B., Fröhlich, N., Bos, W., Gerick, J., Goldhammer, F., Schaumburg, H., et al. (2024). ICILS 2023 #Deutschland. Computer- Und Informationsbezogene Kompetenzen Und Kompetenzen Im Bereich Computational Thinking Von Schüler*innen Im Internationalen Vergleich. New York: Waxmann Verlag GmbH.
Evans, N. J., Wojdynski, B. W., and Hoy, M. G. (2019). How sponsorship transparency mitigates negative effects of advertising recognition. Int. J. Advert. 38, 364–382. doi: 10.1080/02650487.2018.1474998
Feierabend, S., Rathgeb, T., Kheredmand, H., and Glöckler, S. (2023). “KIM-Studie 2022 Kindheit, Internet, Medien: Basisuntersuchung Zum Medienumgang 6-Bis 13-Jähriger.” Available online at: https://www.mpfs.de/studien/kim-studie/2022/.
Fernández-Gómez, E., Placer, P. N., and Fernández, B. F. (2024). New mobile advertising formats targeting young audiences: an analysis of advertainment and influencers’ role in perception and understanding. Humanit. Soc. Sci. Commun. 11. doi: 10.1057/s41599-024-04003-3
Ferrari, A. (2013). “DigComp: a framework for developing and understanding digital competence in Europe.” Available online at: https://op.europa.eu/en/publication-detail/-/publication/a410aad4-10bf-4d25-8c5a-8646fe4101f1/language-en.
Flanagin, A. J., and Metzger, M. J. (2010). Kids and credibility: an empirical examination of youth, digital media use, and information credibility. Massachusetts: The MIT Press.
Fogg, B. J., and Tseng, H. (1999). “The elements of computer credibility,” in CHI 99: The CHI Is the Limit. Human Factors in Computing Systems, CHI 99 Conference Proceedings, Pittsburgh, PA, May 15–20, 1999. ed. M. G. Williams (New York, NY: ACM Press), 80–87.
Forzani, E. (2020). A three-tiered framework for proactive critical evaluation during online inquiry. J. Adolesc. Adult. Lit. 63, 401–414. doi: 10.1002/jaal.1004
Godaert, E., Aesaert, K., Voogt, J., and van Braak, J. (2022). Assessment of students’ digital competences in primary school: a systematic review. Educ. Inf. Technol. 27, 9953–10011. doi: 10.1007/s10639-022-11020-9
Hassinger-Das, B., and Dore, R. A. (2023). Sometimes people on YouTube are real, but sometimes not: children’s understanding of the reality status of YouTube. E-Learning and Digital Media 20, 618–630. doi: 10.1177/20427530221140679
Hassinger-Das, B., Dore, R. A., Aloisi, K., Hossain, M., Pearce, M., and Paterra, M. (2020). Children's reality status judgments of digital media: implications for a COVID-19 world and beyond. Front. Psychol. 11:570068. doi: 10.3389/fpsyg.2020.570068
Hilligoss, B., and Rieh, S. Y. (2008). Developing a unifying framework of credibility assessment: construct, heuristics, and interaction in context. Inf. Process. Manag. 44, 1467–1484. doi: 10.1016/j.ipm.2007.10.001
HMKB. (2011). “Bildungsstandards Und Inhaltsfelder: Das Neue Kerncurriculum Für Hessen. Primarstufe. Deutsch.” Available online at: https://kultus.hessen.de/unterricht/kerncurricula-und-lehrplaene/kerncurricula/kerncurricula-primarstufe.
International Society for Technology in Education. (2016). “ISTE standards for students.” Available online at: https://www.iste.org/standards/.
Jocham, T., and Pohlmann-Rother, S. (2025, in review). The Evaluation of Online Content – a Conceptualization for Elementary School Children.
John, D. R. (1999). Consumer Socialization of Children: A Retrospective Look at Twenty-Five Years of Research. J Consum Res. 26, 183–213.
Kammerer, Y., and Gerjets, P. (2012). Effects of search Interface and internet-specific epistemic beliefs on source evaluations during web search for medical information: an eye-Trackig study. Behav. Inf. Technol. 31, 83–97. doi: 10.1080/0144929X.2011.599040
Kerslake, L., and Hannam, J. (2022). Designing media and information literacy curricula in English primary schools: children’s perceptions of the internet and ability to navigate online information. Ir. Educ. Stud. 41, 151–160. doi: 10.1080/03323315.2021.2022518
Keshavarz, H. (2021). Evaluating Credibility of Social Media Information: Current Challenges, Research Directions and Practical Criteria. Information Discovery and Delivery 49, 269–279. doi: 10.1108/IDD-03-2020-0033
Keßel, Y. (2017). Development of interactive performance measures for two components of ICT literacy: successfully accessing and evaluating information. Dissertation. Frankfurt am Main: Johann Wolfgang Goethe-Universität.
Kiili, C., Laurinen, L., and Marttunen, M. (2008). Students evaluating internet sources: from versatile evaluators to uncritical readers. J. Educ. Comput. Res. 39, 75–95. doi: 10.2190/EC.39.1.e
Kiili, C., Leu, D. J., Utriainen, J., Coiro, J., Kanniainen, L., Tolvanen, A., et al. (2018). Reading to learn from online information: modeling the factor structure. J. Lit. Res. 50, 304–334. doi: 10.1177/1086296X18784640
Kiili, C., Räikkönen, E., Bråten, I., Strømsø, H. I., and Hagerman, M. S. (2023). Examining the structure of credibility evaluation when sixth graders read online texts. J. Comput. Assist. Learn. 39, 954–969. doi: 10.1111/jcal.12779
Kim, H. (2019). Credibility assessment of health information on social media: discovering credibility factors, operationalization, and prediction. Dissertation, Chapel Hill: The University of North Carolina at Chapel Hill University Libraries.
Kim, H. S., Ahn, S. H., and Kim, C. M. (2019). A new ICT literacy test for elementary and middle school students in Republic of Korea. Asia-Pac. Educ. Res. 28, 203–212. doi: 10.1007/s40299-018-0428-8
KMK. (2016). “Bildung in Der Digitalen Welt. Strategie Der Kultusministerkonferenz. Beschluss Der Kultusministerkonferenz Vom 08.12.2016 in Der Fassung Vom 07.12.2017.” Available online at: https://www.kmk.org/fileadmin/Dateien/veroeffentlichungen_beschluesse/2016/2016_12_08-Bildung-in-der-digitalen-Welt.pdf.
Kong, S.-C., Wang, Y.-Q., and Lai, M. (2019). “Development and validation of an instrument for measuring digital empowerment of primary school students” in Proceedings of the ACM conference on global computing education. eds. M. Zhang, B. Yang, S. Cooper, and A. Luxton-Reilly (New York, NY: ACM), 172–177.
Kuckartz, U., and Rädiker, S. (2022). Qualitative Inhaltsanalyse: Methoden, Praxis, Computerunterstützung: Grundlagentexte Methoden. 5. Auflage. Grundlagentexte Methoden. Weinheim, Basel: Beltz Juventa. Available online at: https://www.beltz.de/fileadmin/beltz/leseproben/978-3-7799-6231-1.pdf.
Kuiper, E., Volman, M., and Terwel, J. (2008). Integrating critical web skills and content knowledge: development and evaluation of a 5th grade educational program. Comput. Hum. Behav. 24, 666–692. doi: 10.1016/j.chb.2007.01.022
Lastdrager, E., Gallardo, I. C., Hartel, P., and Junger, M. (2017). “How Effective Is Anti-Phishing Training for Children?” in Proceedings of the 13th Symposium on Usable Privacy and Security (SOUPS’17), edited by USENIX Association, 229–239.
Lazonder, A. W., Walraven, A., Gijlers, H., and Janssen, N. (2020). Longitudinal assessment of digital literacy in children: findings from a large Dutch single-school study. Comput. Educ. 143, 1–8. doi: 10.1016/j.compedu.2019.103681
Leeder, C. (2016). Student misidentification of online genres. Libr. Inf. Sci. Res. 38, 125–132. doi: 10.1016/j.lisr.2016.04.003
Levine, T. R. (2014). Truth-default theory (TDT). J. Lang. Soc. Psychol. 33, 378–392. doi: 10.1177/0261927X14535916
Li, H., Boguszewski, K., and Lillard, A. S. (2015). Can that really happen? Children's knowledge about the reality status of fantastical events in television. J. Exp. Child Psychol. 139, 99–114. doi: 10.1016/j.jecp.2015.05.007
Livingstone, S. (2014). Developing social media literacy: how children learn to interpret risky opportunities on social network sites. Communications 39, 283–303. doi: 10.1515/commun-2014-0113
Lucassen, T., and Schraagen, J. M. (2011). Factual accuracy and Trust in Information: the role of expertise. J. Am. Soc. Inf. Sci. 62, 1232–1242. doi: 10.1002/asi.21545
Lupiáñez-Villanueva, F., Gaskell, G., Veltri, G., Theben, A., Folkvord, F., Bonatti, L., et al. (2016). Study on the impact of marketing through social media, online games and mobile applications on children's behaviour. European Commission, Directorate-General for Justice and Consumers.
Lupton, D., and Williamson, B. (2017). The Datafied child: the Dataveillance of children and implications for their rights. New Media Soc. 19, 780–794. doi: 10.1177/1461444816686328
Macedo-Rouet, M., Braasch, J. L. G., Britt, M. A., and Rouet, J.-F. (2013). Teaching fourth and fifth graders to evaluate information sources during text comprehension. Cogn. Instr. 31, 204–226. doi: 10.1080/07370008.2013.769995
Mares, M.-L., and Bonus, J. A. (2019). “Children’s judgment of reality and fantasy” in The international encyclopedia of media literacy. eds. R. Hobbs and P. Mihailidis (New York, NY: John Wiley & Sons), 1–6.
Martínez, C., and Olsson, T. (2019). Making sense of YouTubers: how Swedish children construct and negotiate the YouTuber Misslisibell as a girl celebrity. J. Child. Media 13, 36–52. doi: 10.1080/17482798.2018.1517656
Mayer, R. E. (2002). Rote versus meaningful learning. Theory Into Pract. 41, 226–232. doi: 10.1207/s15430421tip4104_4
Mayring, P. (2001). Kombination Und Integration Qualitativer Und Quantitativer Analyse. Forum Qual. Soc. Res. 2:11.
Mayring, P. (2022). Qualitative Inhaltsanalyse: Grundlagen Und Techniken. 13th Edn. Weinheim, Basel: Beltz.
MBK. (2011). “Kernlehrplan Bildende Kunst: Grundschule.” Available online at: https://www.saarland.de/SharedDocs/Downloads/DE/mbk/Lehrpl%C3%A4ne/Lehrplaene_Grundschule/GS_Kernlehrplan_BildendeKunst.
MBS. (2021). “Lehrplan Für Die Primarstufe in Nordrhein-Westfalen: Fach Sachunterricht.” Available online at: https://lehrplannavigator.nrw.de/system/files/media/document/file/ps_lp_su_einzeldatei_2021_08_02.pdf.
MBWFK. (2019). “Fachanforderungen Sachunterricht: Primarstufe/Grundschule.” Available online at: https://fachportal.lernnetz.de/sh/faecher/sachunterricht/fachanforderungen.html.
MBWK M-V. (2020). “Rahmenplan Für Die Primarstufe: Sachunterricht.” Available online at: https://www.bildung-mv.de/export/sites/bildungsserver/.galleries/dokumente/unterricht/rahmenplaene/RP_GS_Sachunterricht.pdf.
McElvany, N., Lorenz, R., Frey, A., Goldhammer, F., Schilcher, A., and Stubbe, T. C. (2023). IGLU 2021. Lesekompetenz von Grundschulkindern im internationalen Vergleich und im Trend über 20 Jahre. Münster: Waxmann Verlag GmbH.
McGrew, S., and Byrne, V. L. (2021). Who is behind this? Preparing high school students to evaluate online content. J. Res. Technol. Educ. 53, 457–475. doi: 10.1080/15391523.2020.1795956
Medienberatung NRW (2020). “Medienkompetenzrahmen NRW.” Available online at: https://medienkompetenzrahmen.nrw/fileadmin/pdf/LVR_ZMB_MKR_Broschuere.pdf (Accessed July 05, 2022).
Merriam, S. B., and Tisdell, E. J. (2015). Qualitative research: a guide to design and implementation. 4. Auflage. The Jossey-bass higher and adult education series. Newark: Wiley. Available online at: https://ebookcentral.proquest.com/lib/kxp/detail.action?docID=2089475.
Miller, C., and Bartlett, J. (2012). 'Digital fluency': towards young people's critical use of the internet. J. Inf. Lit. 6, 35–55. doi: 10.11645/6.2.1714
Misoch, S. (2015). Qualitative interviews. Berlin, München, Boston: Walter de Gruyter GmbH. Available online at: https://ebookcentral.proquest.com/lib/kxp/detail.action?docID=1897928.
Modrzejewska, A., Czepczor-Bernat, K., Modrzejewska, J., Roszkowska, A., Zembura, M., and Matusik, P. (2022). #childhoodobesity - a brief literature review of the role of social Media in Body Image Shaping and Eating Patterns among Children and adolescents. Front. Pediatr. 10:993460. doi: 10.3389/fped.2022.993460
Mojtabai, R. (2024). Problematic social media use and psychological symptoms in adolescents. Soc. Psychiatry Psychiatr. Epidemiol. 59, 2271–2278. doi: 10.1007/s00127-024-02657-7
Moran-Ellis, J., Alexander, V. D., Cronin, A., Dickinson, M., Fielding, J., Sleney, J., et al. (2006). Triangulation and integration: processes, claims and implications. Qual. Res. 6, 45–59. doi: 10.1177/1468794106058870
Muller, R. D., Skues, J. L., and Wise, L. Z. (2017). Cyberbullying in Australian primary schools: how victims differ in attachment, locus of control, self-esteem, and coping styles compared to non-victims. J. Psychol. Couns. Sch. 27, 85–104. doi: 10.1017/jgc.2016.5
OECD (2023). OECD digital education outlook 2023: towards an effective digital education ecosystem. Paris: OECD Publishing.
Ofcom. (2022). “Children and parents: media use and attitudes report.” Available online at: https://www.ofcom.org.uk/__data/assets/pdf_file/0024/234609/childrens-media-use-and-attitudes-report-2022.pdf.
Orben, A., Przybylski, A. K., Blakemore, S.-J., and Kievit, R. A. (2022). Windows of developmental sensitivity to social media. Nat. Commun. 13:1649. doi: 10.1038/s41467-022-29296-3
Pangrazio, L., and Selwyn, N. (2018). “It’s not like it’s life or death or whatever”: young people’s understandings of social media data. Soc. Media Soc. 4:2056305118787808. doi: 10.1177/2056305118787808
Paul, J., Cerdán, R., Rouet, J.-F., and Stadtler, M. (2018). Exploring fourth graders’ sourcing skills / un Análisis De La Capacidad De Escrutinio sobre las Fuentes De Información De Los Estudiantes De Cuarto Grado. Infanc. Aprendiz. 41, 536–580. doi: 10.1080/02103702.2018.1480458
Paul, J., Macedo-Rouet, M., Rouet, J.-F., and Stadtler, M. (2017). Why attend to source information when reading online? The perspective of ninth grade students from two different countries. Comput. Educ. 113, 339–354. doi: 10.1016/j.compedu.2017.05.020
Paul, J., Stadtler, M., and Bromme, R. (2019). Effects of a sourcing prompt and conflicts in reading materials on elementary students’ use of source information. Discourse Process. 56, 155–169. doi: 10.1080/0163853X.2017.1402165
Pedaste, M., Kallas, K., and Baucal, A. (2023). Digital competence test for learning in schools: development of items and scales. Comput. Educ. 203:104830. doi: 10.1016/j.compedu.2023.104830
Pérez, A., Potocki, A., Stadtler, M., Macedo-Rouet, M., Paul, J., Salmerón, L., et al. (2018). Fostering teenagers' assessment of information reliability: effects of a classroom intervention focused on critical source dimensions. Learn. Instr. 58, 53–64. doi: 10.1016/j.learninstruc.2018.04.006
Pew Research Center (2020). Parenting children in the age of screens. Washington: Pew Research Center.
Polanco-Levicán, K., and Salvo-Garrido, S. (2022). Understanding social media literacy: a systematic review of the concept and its competences. Int. J. Environ. Res. Public Health 19, 1–16. doi: 10.3390/ijerph19148807
Purington Drake, A., Masur, P. K., Bazarova, N. N., Zou, W., and Whitlock, J. (2023). The youth social media literacy inventory: development and validation using item response theory in the US. J. Child. Media 17, 467–487. doi: 10.1080/17482798.2023.2230493
Radesky, J. (2021). Young children's online-offline balance. Acta paediatrica 110, 748–749. doi: 10.1111/apa.15649
Radesky, J., Chassiakos, Y. L. R., Ameenuddin, N., and Navsaria, D. (2020). Digital advertising to children. Pediatrics 146, 1–8. doi: 10.1542/peds.2020-1681
Reinders, H. (2022). “Interview” in Empirische Bildungsforschung: Eine Elementare Einführung. eds. H. Reinders, D. Bergs-Winkels, A. Prochnow, and I. Post (Wiesbaden: Springer Fachmedien Wiesbaden), 211–222.
Ridsdale, C., Rothwell, J., Smit, M., Bliemel, M., Irvine, D., Kelley, D., et al. (2015). “Strategies and best practices for data literacy education knowledge synthesis report.” doi: 10.13140/RG.2.1.1922.5044
Rouet, J.-F., Ros, C., Goumi, A., Macedo-Rouet, M., and Dinet, J. (2011). The influence of surface and deep cues on primary and secondary school students' assessment of relevance in web menus. Learn. Instr. 21, 205–219. doi: 10.1016/j.learninstruc.2010.02.007
Rozendaal, E., Lapierre, M. A., van Reijmersdal, E. A., and Buijzen, M. (2011). Reconsidering advertising literacy as a defense against advertising effects. Media Psychol. 14, 333–354. doi: 10.1080/15213269.2011.620540
Salmerón, L., Sampietro, A., and Delgado, P. (2020). Using internet videos to learn about controversies: evaluation and integration of multiple and multimodal documents by primary school students. Comput. Educ. 148:103796. doi: 10.1016/j.compedu.2019.103796
Santini, M., Mehler, A., and Sharoff, S. (2011). “Riding the rough waves of genre on the web” in In genres on the web: computational models and empirical studies. eds. A. Mehler, S. Sharoff, and M. Santini (Dordrecht: Springer Netherlands).
Scholl, A., Renger, R., and Blöbaum, B. (Eds.) (2007). Journalismus Und Unterhaltung: Theoretische Ansätze Und Empirische Befunde. 1st Edn. Wiesbaden: VS Verlag für Sozialwissenschaften.
Schwippert, K., Kasper, D., Eickelmann, B., Goldhammer, F., Köller, O., Selter, C., et al., Eds. (2024). TIMSS 2023: mathematische und naturwissenschaftliche Kompetenzen von Grundschulkindern in Deutschland im internationalen Vergleich. Münster: Waxmann. Available online at: https://elibrary.utb.de/doi/book/10.31244/9783830999591.
Shabani, A., and Keshavarz, H. (2022). Media literacy and the credibility evaluation of social media information: students’ use of Instagram, WhatsApp and telegram. GKMC 71, 413–431. doi: 10.1108/GKMC-02-2021-0029
Siddiq, F., Hatlevik, O. E., Olsen, R. V., Throndsen, I., and Scherer, R. (2016). Taking a future perspective by learning from the past – a systematic review of assessment instruments that aim to measure primary and secondary school students' ICT literacy. Educ. Res. Rev. 19, 58–84. doi: 10.1016/j.edurev.2016.05.002
Sidler, M. (2002). Web research and genres in online databases: when the glossy page disappears. Comput. Compos. 19, 57–70. doi: 10.1016/S8755-4615(02)00080-4
SMK. (2017). “Medienbildung Und Digitalisierung in Der Schule.” Available online at: https://publikationen.sachsen.de/bdb/artikel/29798.
Stanford History Education Group. (2016). “Evaluating information: the cornerstone of civic online reasoning: executive summary.” Available online at: https://stacks.stanford.edu/file/druid:fv751yt5934/SHEG%20Evaluating%20Information%20Online.pdf. (Accessed August 29, 2023)
Sundar, S. (2008). “The MAIN model: a heuristic approach to understanding technology effects on credibility” in Digital media, youth, and credibility. The John D. and Catherine T. MacArthur Foundation Series on Digital Media and Learning. eds. M. J. Metzger and A. J. Flanagin (Cambridge: The MIT Press), 73–100.
Syam, H. M., and Nurrahmi, F. (2020). “I Don’t know if it is fake or real news” how little Indonesian university students understand social media literacy. JKMJC 36, 92–105. doi: 10.17576/JKMJC-2020-3602-06
Tamboer, S. L., Vlaanderen, A., Bevelander, K. E., and Kleemans, M. (2024). Do you know what fake news is? An exploration of and intervention to increase youth’s fake news literacy. Youth Soc. 56, 774–792. doi: 10.1177/0044118X231205930
Tandoc, E. C., Thomas, R. J., and Bishop, L. (2021). What is (fake) news? Analyzing news values (and more) in fake stories. MaC 9, 110–119. doi: 10.17645/mac.v9i1.3331
Tao, S., Reichert, F., Law, N., and Rao, N. (2022). Digital technology use and cyberbullying among primary school children: digital literacy and parental mediation as moderators. Cyberpsychol. Behav. Soc. Netw. 25, 571–579. doi: 10.1089/cyber.2022.0012
Theurer, C., Jocham, T., and Pohlmann-Rother, S. (2024). Digitalkompetenzen von Grundschulkindern. Unfassbar und vermessen?!. MedienPädagogik 57, 165–195. doi: 10.21240/mpaed/57/2024.04.30.X
TMBJS. (2017). “Kursplan Medienkunde in Der Grundschule.” Available online at: https://www.schulportal-thueringen.de/web/guest/media/detail?tspi=6214. (Accessed September 23, 2025)
Trevino, T., and Mortin, F. (2019). Children on social media: an exploratory study of their habits, Online Content Consumption and Brand Experiences. J. Dig. Soc. Media Mark. 7, 88–97. doi: 10.69554/MWBW4195
Vanwynsberghe, H. (2014). “How users balance opportunity and risk: a conceptual exploration of social media literacy and measurement.” Dissertation. (Accessed April 04, 2023).
Vanwynsberghe, H., Boudry, E., and Verdegem, P. (2012). The development of a conceptual framework of social media literacy. In Etmaal Van De Communicatiewetenschappen, Proceedings: Ghent University, Department of Communication studies.
Vartiainen, H., Kahila, J., Tedre, M., Sointu, E., and Valtonen, T. (2023). More than fabricated news reports: children’s perspectives and experiences of fake news. JMLE 15, 17–30. doi: 10.23860/JMLE-2023-15-2-2
Veum, A., Burgess, M. Ø., and Mills, K. A. (2024). Adolescents’ critical, multimodal analysis of social media self-representation. Lang. Educ. 38, 482–501. doi: 10.1080/09500782.2023.2287508
von Soest, C. (2023). Why do we speak to experts? Reviving the strength of the expert interview method. Perspect. Polit. 21, 277–287. doi: 10.1017/S1537592722001116
Vuorikari, R., Kluzer, S., and Punie, Y. (2022). “DigComp 2.2 - the digital competence framework for citizens: with new examples of knowledge, skills and attitudes.” JRC Publications Repository. Available online at: https://op.europa.eu/en/publication-detail/-/publication/50c53c01-abeb-11ec-83e1-01aa75ed71a1/language-en/format-PDF/source-280137285.
Wedel, K., Freundl, V., Pfaehler, F., and Wößmann, L. (2025). Zwischen Likes Und Lernen: Was Jugendliche Und Erwachsene Über Social Media Denken: Ergebnisse Des Ifo Bildungsbarometers 2025. ifo Schnelldienst 78, 37–57.
Weisberg, L., Wan, X., Wusylko, C., and Kohnen, A. (2023). Critical online information evaluation (COIE): a comprehensive model for curriculum and assessment design. JMLE 15, 14–30. doi: 10.23860/JMLE-2023-15-1-2
Wiedemann, H., Thomasius, R., and Paschke, K. (2025). “Problematische Mediennutzung Bei Kindern Und Jugendlichen in Deutschland: Ergebnisbericht 2024/2025: Ausgewählte Ergebnisse Der Siebten Erhebungswelle Im September/Oktober 2024.” Available online at: www.dak.de/mediensucht.
Wineburg, S. (1991). Historical Problem Solving: A Study of the Cognitive Processes Used in the Evaluation of Documentary and Pictorial Evidence. J. Educ. Psychol. 83, 73–87. doi: 10.1037/0022-0663.83.1.73
Woolley, J. D., and Ghossainy, M. (2013). Revisiting the fantasy-reality distinction: children as naïve skeptics. Child Dev. 84, 1496–1510. doi: 10.1111/cdev.12081
Keywords: online content, evaluation criteria, indicators, elementary school children, digital competencies
Citation: Jocham T and Pohlmann-Rother S (2025) Measuring what matters: developing indicators for online content evaluation competencies in elementary school. Front. Educ. 10:1652500. doi: 10.3389/feduc.2025.1652500
Edited by: Raona Williams, Ministry of Education, United Arab Emirates
Reviewed by: Gül Kadan, Cankiri Karatekin University, Türkiye; Juliene Madureira Ferreira, Tampere University, Finland
Copyright © 2025 Jocham and Pohlmann-Rother. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Tina Jocham, tina.jocham@uni-wuerzburg.de; Sanna Pohlmann-Rother, sanna.pohlmann-rother@uni-wuerzburg.de