
SYSTEMATIC REVIEW article

Front. Educ., 30 September 2025

Sec. Digital Education

Volume 10 - 2025 | https://doi.org/10.3389/feduc.2025.1568028

This article is part of the Research Topic “Digital Learning Innovations: Trends, Emerging Scenario, Challenges and Opportunities”.

Child engagement during interaction with digital and robotic activities: a systematic review

  • 1Department of Education, Literatures, Intercultural Studies, Languages and Psychology (FORLILPSI), University of Florence, Florence, Italy
  • 2Department of Political and Social Sciences, University of Bologna, Bologna, Italy

Introduction: In recent decades, there has been an exponential increase in information and robotic technologies for remote learning and rehabilitation. Such procedures are associated with a decrease in human interaction and “in person” control of responses, characteristics that, especially when children or youth are involved, can affect learning performance. Thus, online quantitative and qualitative indicators of a child's psychological engagement are essential to personalize the interaction with the technological device. According to the literature, studies on child engagement during digitalized or robotic tasks vary in terms of underpinning constructs, technological tools, measures, and results obtained.

Methods: This systematic review was conducted with the general aim to provide a theoretical and methodological framework of children's engagement during digitalized and robotic tasks. The review included 27 studies conducted between 2014 and 2023. The sample size ranged from 5 to 299, including typically and atypically developing children, aged between 6 and 18 years.

Results: The results suggest the need to adopt a transversal approach that simultaneously includes the emotional, behavioral, and cognitive dimensions of engagement, assessed through diverse tools such as self-report questionnaires, video recordings, and eye-trackers. Although fewer studies have examined the relationship between children's engagement and task performance, existing evidence suggests a positive association between emotional, behavioral, and cognitive engagement and both task performance and skill acquisition.

Discussion: These results have implications for setting adequate protocols when using information and robotic technologies in child education and rehabilitation.

Systematic review registration: https://www.crd.york.ac.uk/PROSPERO/view/CRD42024528719, identifier CRD42024528719.

1 Introduction

Compared to previous generations, today's children demonstrate a high level of digital proficiency and knowledge: they embrace technology as a means of learning and entertainment, dedicating a substantial amount of time to technological devices (Duradoni et al., 2022). Besides the acknowledged risks associated with this prevailing trend, information and communication technologies (ICTs) are demonstrating promising potential for enhancing learning (Di Lieto et al., 2021; Groccia, 2018; Ruffini et al., 2022; Stephen et al., 2008) and cognitive processes (Drigas et al., 2015; Noorhidawati et al., 2015) in children with typical development or special needs, within everyday settings such as households and schools. These contexts are particularly important since they are the places where children spend most of their time throughout infancy and adolescence (Horvat, 1982; Pecini et al., 2019; Nadeau et al., 2020).

Additionally, ICTs have the potential to make the learning process easier and more enjoyable, for example by gamifying learning content (Brewer et al., 2013; De Aguilera and Mendiz, 2003). Gamifying refers to the application of game design elements—such as points, challenges, and rewards—in non-game contexts to increase motivation and engagement. This strategy has been shown to improve learning outcomes by making educational activities more interactive and rewarding (Deterding et al., 2011; Albertazzi et al., 2019). It is important to distinguish between formal game-based learning, which employs structured digital games with clear goals, rules, and feedback, and more open-ended or creative digital play activities that utilize technology in less structured ways. Such activities include exploratory, imaginative, or collaborative tasks that often emphasize creativity, social interaction, or free expression rather than competition or explicit challenges. These varied forms of digital play are widely used in educational settings and contribute differently to children's learning and development, offering benefits such as fostering creativity, problem-solving, and social skills (Clark et al., 2016). Among these, digital games stand out as a particularly influential modality due to their structured nature and strong potential for engagement. Digital games represent a fundamental and developmentally appropriate modality through which children explore, learn, and interact with the world. They possess core features, such as defined goals, rules, feedback systems, challenges, interactivity, and narrative structures that naturally support attention, emotional involvement, and intrinsic motivation. These characteristics allow digital games to create immersive and engaging learning environments where children are encouraged to participate actively, solve problems, and persist in the face of difficulty. As such, game-based activities have become increasingly relevant in educational contexts, especially those involving technology, due to their capacity to enhance engagement and promote meaningful learning experiences (Breien and Wasson, 2021). Indeed, children often lack motivation to continue learning if the process is tedious, cognitively demanding, and lacks stimulation (Dykstra Steinbrenner and Watson, 2015; Macklem, 2015). Thus, using innovative technology in learning not only enhances children's knowledge and skills but also provides enjoyable experiences through activities like gaming, fostering a sense of joy and pleasure (Hui et al., 2014). Furthermore, the concept of “fun” or “enjoyment,” frequently associated with digital games and play, warrants critical consideration. While positive affect and motivation can enhance engagement and learning, effective educational experiences do not necessarily depend on constant feelings of fun. Play-based learning may also involve moments of challenge, frustration, and sustained effort, which are essential for deep learning and cognitive growth (Whitton, 2018). Therefore, the educational value of digital play lies not solely in entertainment, but in its capacity to engage learners emotionally, cognitively, and behaviorally through meaningful and sometimes demanding experiences.

Finally, the incorporation of technology within educational and intervention settings holds promising prospects even in terms of efficiency and efficacy in time and cost for families, educational and clinical institutions, and policy practitioners involved. Especially in remote format, technology can offer significant advantages by facilitating low-cost, intensive, and personalized exercises (Alexopoulou et al., 2019; Pecini et al., 2019; Rivella et al., 2023; Sandlund et al., 2009; Paneru and Paneru, 2024).

Notwithstanding, to make the use of remote ICTs services as useful and personalized as possible for students and young pupils, it is imperative to gather information concerning children's engagement during the interaction with the device, that is online quantitative and qualitative indicators of the learning process and of the child's emotional and cognitive status. Indeed, engagement research is fundamental for the creation of digital interventions (Nahum-Shani et al., 2022) and in the field of human-computer interaction (Doherty and Doherty, 2019). In this context, children's actions are strongly influenced both by situational factors, linked to the characteristics of the digital task, and by individual factors, such as personal cognitive skills, emotional needs, and motivational tendencies. The combination of these contextual and personal factors can determine different forms of engagement which then predict success in digital learning. This can be particularly important in developmental ages or in the case of neurodevelopmental disorders as they are characterized by a high intersubject variability in children's needs that can affect the successfulness of the remote intervention (Di Lieto et al., 2020). In virtual environments—particularly during play or learning activities—monitoring engagement can help address critical issues such as content personalization, improved accessibility, integration with assistive technologies, and the optimization of strategies involving augmented reality and immersive educational environments to support more effective treatment approaches. Augmented reality refers to technology that overlays digital content (such as images, sounds, or information) onto the real world, enhancing the user's perception of their environment. Immersive and interactive educational environments, on the other hand, are digitally mediated settings designed to deeply engage learners through multisensory input and real-time feedback, encouraging active participation and experiential learning. Tracking children's engagement also contributes to ensuring the reliability of data collected during interactions with robots, digital tasks, or immersive environments. This is especially important for applications involving atypical developmental conditions and for the development of algorithms for assessment and intervention (Paneru and Paneru, 2024; Paneru et al., 2024).

Nevertheless, researchers continue to face numerous challenges in understanding engagement, both due to its definition, which yields various and sometimes conflicting interpretations, and its multidimensional nature, which makes measurement challenging.

Difficulties in defining and measuring engagement can hinder the development of practical applications aimed at supporting children's learning and skill acquisition through more targeted and informed use of technology. It is therefore essential to systematize the conceptualization of the “child's engagement” within robotic and digital contexts, in order to clarify its components, identify the influencing factors, and determine the most effective methodologies for measuring it accurately and consistently. A systematic review of the existing literature can offer a theoretical framework for the construct, as well as support the identification and selection of reliable and valid tools to monitor children's engagement during interactions with digital and robotic technologies. Moreover, findings from the literature can provide valuable insights into the role of emotional, cognitive, and behavioral engagement in influencing children's performance, thereby contributing to the optimization of educational technologies in both typical and atypical development.

1.1 Engagement definition

Existing definitions of engagement vary depending on the context and the individuals involved, illustrating the lack of a universal definition (Nahum-Shani et al., 2022).

Within the frame of conversation between two agents, engagement is defined as the process through which two or more participants establish, maintain, and interrupt their perceived connection (Sidner et al., 2005). This process includes initial contact, negotiating a collaboration, verifying that the other is still taking part in the interaction, evaluating whether to remain involved, and deciding when to end the connection.

In other fields, such as education research and healthcare, engagement can be defined as the effort children devote to educationally beneficial activities to achieve desired learning outcomes (Hu and Kuh, 2002) or as actions taken by subjects to support their health (Cunningham, 2014). Furthermore, considering adulthood, engagement can also be seen as a stable characteristic of the individual, based on personality traits, and therefore as a propensity to engage or to be engaged (Barco et al., 2014).

Nowadays, engagement can have a broader meaning if one considers a single user interacting with a screen-based interface, a technological device, or a robot. A technological device refers to an integrated hardware-software system—such as a tablet, an augmented reality headset, or an educational robot—that enables users to access, navigate, and interact with digital content or immersive environments. These devices often serve as the physical interface through which playful or learning experiences take place. In the context of social media, Jaimes et al. (2011) define engagement as the phenomenon of people being fascinated and motivated by developing a relationship with a social platform and integrating it into their lives. In human-computer interaction (HCI) it is defined as the quality of users' experiences when interacting with a digital or robotic system. This includes aspects such as challenge, positive affect, usability, attention attraction and maintenance, aesthetic and sensory appeal, feedback, variety/novelty, interactivity, and user-perceived control (O'Brien and Toms, 2008). In the definition of O'Brien and Toms, engagement is therefore a dynamic process within which four discrete events are identified: the point of involvement, the period of involvement, disengagement, and re-engagement.

Beyond defining engagement as a unitary construct, it must be acknowledged that engagement is multifaceted and can include multiple processes at different levels (Bouta and Retalis, 2013; Islas Sedano et al., 2013). In fact, although there is no consensus on which dimensions are most important in defining engagement (Lee, 2014), it is acknowledged that engagement represents the simultaneous investment of emotional, cognitive, and physical energies (Rich et al., 2010).

Definitions of emotional engagement tend to emphasize the subjective nature of the experience, including attitudes and emotions that reflect intrinsic motivation, positive affects, and a sense of pleasure and interest in the task (Fredricks et al., 2004). Cognitive engagement instead primarily refers to the appropriate use of several cognitive processes such as attention, information processing, and memory (Fredricks et al., 2004). Finally, behavioral/physical engagement implies action, participation and individual conduct during the interaction with a person or a device (Bouta and Retalis, 2013). To date there is a growing literature supporting bodily engagement in learning contexts. Proponents of “embodied cognition” in fact, agree that the way people think and reason about the world is closely related to the body's interaction with the physical environment (Lindgren et al., 2016). As a consequence, body movement can have an impact on learning processes (Goldin-Meadow et al., 2009) and on degree of engagement (Anastopoulou et al., 2011).

Considering the diverse definitions of engagement and its multifaceted underlying constructs, the first objective of this review is to describe how engagement with digital tasks is conceptualized and operationalized in the educational and intervention contexts in childhood.

1.2 Tools and procedures to measure engagement

One of the most recent approaches that attempts to measure child's engagement within the educational setting is called Learning Analytics. Learning Analytics (LA) involves the measurement, collection, analysis, and reporting of data about learners and their contexts to understand and optimize learning and its environments (LAK, 2011). It offers educators and practitioners valuable information to enhance the learning experience, improve instructional design, and support performance success. LA can supply powerful tools for teachers and researchers to improve the effectiveness and the quality of children's performance as well as inform, extract and visualize real-time data about learners' engagement and their success (Macfadyen and Dawson, 2010).

In line with the LA approach, several innovative technologies and valid and reliable tools can be used to investigate engagement in technology- or game-mediated learning experiences (Abbasi et al., 2023). Researchers have used various methods to measure it such as quantitative self-report surveys, semi-structured interviews, qualitative observation, eye-trackers, artificial intelligence (AI), video recordings, physiological measures (Crescenzi-Lanna, 2020a; Sharma et al., 2020), as reported by previous literature reviews (Boyle et al., 2012; Henrie et al., 2015; Sharma et al., 2020).

Quantitative self-report surveys, often utilizing tools like the Likert scale, have been widely employed to assess learners' engagement. Survey inquiries span from evaluating self-perceived levels of engagement (Gallini and Barron, 2001) to delving into behavioral, cognitive, and emotional aspects of engagement (Chen et al., 2010; Price et al., 2007; Yang, 2011). While surveys are valuable for older students, they may not always be suitable for younger children, who may struggle to comprehend and respond to the questions directly. Additionally, data are typically collected at the end of a learning activity, which may not be ideal for those interested in developing systems that provide researchers with real-time feedback on child engagement over the course of an activity. This need may be particularly relevant in rehabilitative contexts, where the child's response to the intervention should be constantly monitored.

The second most common approach involves qualitative measures, which include direct observations via video, capturing screenshots of children's behavior during learning, interviews or focus groups, and text exchanges via other digital communication tools. These qualitative measures are particularly useful for exploratory studies characterized by uncertainty about how to measure or define engagement. Although qualitative methodologies offer in-depth data regarding student engagement, they are limited in their ability to generalize findings to a larger population. This lack of generalizability impedes the establishment of a common strategy for defining or assessing engagement.

Regarding quantitative observational measures, researchers use various indicators obtained through direct human observation, video recording, and computer-generated user activity data (e.g., log data). These methods have the advantage of allowing researchers to assess engagement in real time, avoiding interruptions or subsequent measurements.

Finally, another approach to measuring engagement involves the use of physiological sensors, which can detect children's physical responses during learning. Eye-tracking technologies, skin conductance sensors, blood pressure, and electrophysiological data (e.g., EEG) are used to assess the impact of various technological devices and interactive lessons on engagement and learning (Boucheix et al., 2013). Physiological sensors have the advantage of not interrupting the task, but their data need confirmation through self-reported measures. In addition, one challenge in using physiological sensors concerns the complexity of the technology and the associated cost. Finally, it is critical to pay attention to sensor placement and to subjects' limitations while conducting monitoring. To date, physiological sensor technology is advancing with simpler and more affordable options, making this type of measurement increasingly viable for studying engagement (Henrie et al., 2015).

It must be noted that a multimodal approach integrating various interaction modalities, such as verbal language, gestures, facial expression, body posture, and sensory data, could provide a more complete understanding of the individual's engagement. The importance of multimodal approaches in evaluating engagement in digital tasks lies precisely in their ability to capture a wider range of behavioral and emotional signals and to provide educators with the possibility of evaluating student engagement more accurately, offering valuable information to adapt and optimize online learning experiences. Additionally, using advanced data analytics techniques can help identify patterns and trends in children's engagement, allowing teachers to intervene in real time to improve learning. That is why multimodal assessments could be widely used in learning and rehabilitation: they allow for a better understanding of learning behaviors and have the potential to improve intervention and adaptation to special educational needs by supporting cognitive, affective, and metacognitive processes (Emerson et al., 2020; Checa Romero and Jiménez Lozano, 2025).

However, it is not always feasible to implement multiple valid tools to study engagement in an online learning environment. Studies often measure only one dimension of engagement, or study engagement in general (without operationalizing it in terms of cognitive, emotional, and behavioral components), or even within a single subject area (e.g., math; Henrie et al., 2015). Additionally, the tools mentioned above are often developed to measure students' engagement in real, face-to-face classrooms rather than in online learning environments. Most importantly, the studies reported in previous reviews (Henrie et al., 2015) have examined the engagement of university students, thus leaving school age uncovered.

To date, although various tools can be used to measure engagement, little is known about which tools and procedures are most used to evaluate the different types of engagement, i.e., cognitive, emotional and behavioral, during childhood and adolescence. However, considering the evident benefits of utilizing technology in gathering relevant information, it is imperative to identify informative and valuable methodologies and tools to be used by practitioners and researchers with children.

Thus, the second aim of the review is to describe the tools and procedures used to measure engagement during children's and adolescents' activities with digital technologies and interactions with robots, paying attention to differentiating the emotional, cognitive, and behavioral components.

1.3 Relationships between engagement, performances, and characteristics of the digital tasks

In recent decades, there has been a growing interest in understanding the role of engagement in children's development, especially for educational special needs (Fredricks et al., 2004). In particular, student engagement is universally recognized as one of the best indicators of success in the learning process and in personal development (Skinner et al., 2008). In the early school years engagement predicts academic achievement and test performance, while subsequently it affects students' patterns of attendance, continuity, and academic resilience (Sinclair et al., 2003), creating an important gateway to better academic achievement while in school (O'Farrell and Morrison, 2003). Engagement has also been found to act as a protective factor against risky behaviors typical of adolescence, such as substance abuse, risky sexual behavior, and delinquency (Skinner et al., 2008).

Despite substantial investments in the digitization of education, which have made information and communication technologies (ICTs) an integral part of learning, current research is rather limited when considering the impact of engagement on performance during digital learning tasks (Ferrer et al., 2011). In fact, although it is assumed that the engagement induced by digital and game-based tasks positively affects learning and task performance in children, no systematic analysis of such a relationship has ever been conducted. Moreover, it remains unclear whether different types of engagement are differently affected by the use of digital learning tasks. While emotional engagement is expected to be positively related to performance in digital tasks (Tisza et al., 2022), some hypotheses suggest that the use of technology for learning may induce cognitive fatigue, potentially negatively impacting cognitive engagement (Giannakos et al., 2020), with larger effects on the accessibility of digital learning for subjects with neurodevelopmental disabilities. In addition, it is of interest to clarify which characteristics of digital and game-based learning tasks (Crescenzi-Lanna, 2020a) favor engagement across typical and atypical developmental populations, as this may have important implications for well-considered choices in the educational and interventional fields.

Given their influence on children's engagement, the design features of digital tasks deserve careful attention to ensure they effectively support motivation and learning.

Accordingly, the third aim of the review was to verify whether the degree and type of children's engagement correlated with their task performances and if it varied according to task characteristics.

2 Methods and procedure

2.1 Eligibility criteria

Studies were included if they met the following criteria: (i) being written in English, (ii) reporting measurements of children's engagement in terms of emotion, cognition, or behavior, (iii) being a peer-reviewed article, (iv) reporting quantitative data, qualitative data, or mixed-method study designs, (v) having a sample aged between 0 and 18 years, (vi) having children complete a digital task, and (vii) being published between 2000 and 2024.

2.2 Search methodology

The review was conducted in accordance with the recommendations of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) to organize all the data retrieved from possible eligible studies (Page et al., 2021). Electronic bibliographic databases including PsycINFO, PubMed, and Scopus were searched up to January 2024 using the following full string:

child* AND engage* AND ((“learning analytics” OR “embodied learning” OR “immersive learning”) AND (“game*” OR “computer*” OR “robot*” OR “digital*” OR “tablet” OR “multimod*” OR “educational technology”)).

The search strategy string was informed by previous literature on children's engagement and utilizing commonly used key terms pertaining to each of the three categories of engagement (e.g., Crescenzi-Lanna, 2020b; Lee-Cultura et al., 2021; Kosmas et al., 2019). Population was identified by the keyword “child*,” thus including typical and atypical development; the keyword “engage*” defined the variable of interest; the keywords “learning analytics,” “embodied learning,” and “immersive learning” were used to target the procedures used to measure engagement; the keywords “game*,” “computer*,” “robot*,” “digital*,” “tablet,” “multimod*,” and “educational technology” were used to define the type of task within which engagement was measured.
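As an illustration of how such a string can be executed programmatically, the following sketch runs the query against PubMed through Biopython's Entrez E-utilities wrapper. It is not the authors' actual retrieval pipeline; the e-mail address and retmax value are placeholders, and PsycINFO and Scopus require their own interfaces.

```python
# Illustrative only: running the review's search string against PubMed via
# Biopython's Entrez E-utilities wrapper. Placeholder e-mail and retmax.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # required by NCBI; placeholder

QUERY = (
    'child* AND engage* AND (("learning analytics" OR "embodied learning" OR '
    '"immersive learning") AND ("game*" OR "computer*" OR "robot*" OR '
    '"digital*" OR "tablet" OR "multimod*" OR "educational technology"))'
)

# esearch returns matching PubMed IDs; records would then be fetched with
# efetch and screened manually by the reviewers.
handle = Entrez.esearch(db="pubmed", term=QUERY, retmax=5000)
record = Entrez.read(handle)
handle.close()
print(f"{record['Count']} records found; first IDs: {record['IdList'][:5]}")
```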

Filters were applied to include only English-language publications. The reference lists of all included studies were screened to identify additional citations of interest.

2.3 Review process

Papers were screened according to the following procedure: the principal reviewer (XX) fully compiled a list of all the papers obtained through the keywords and selected them as eligible based on the reading of the abstracts. A second reviewer (XX) independently carried out the same task and reported which papers were deemed eligible according to them based on their abstracts. A third reviewer (XX) intervened if there were any discrepancies in selecting a paper between the two other reviewers.
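A minimal sketch of this double-screening logic is shown below; it assumes a simple dictionary of include/exclude decisions per reviewer, which is an illustrative data structure rather than the tooling actually used.

```python
# Compare two reviewers' eligibility decisions; disagreements are flagged for
# the third reviewer. Illustrative data structures only.
def resolve_screening(rev_a: dict[str, bool], rev_b: dict[str, bool]) -> dict[str, str]:
    decisions = {}
    for paper_id in rev_a.keys() & rev_b.keys():
        if rev_a[paper_id] == rev_b[paper_id]:
            decisions[paper_id] = "include" if rev_a[paper_id] else "exclude"
        else:
            decisions[paper_id] = "refer_to_third_reviewer"
    return decisions

# Example: paper "P3" is disputed and goes to the third reviewer.
print(resolve_screening({"P1": True, "P2": False, "P3": True},
                        {"P1": True, "P2": False, "P3": False}))
```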

2.4 Data extraction

Data were extracted in duplicate by two independent reviewers (XX, XX). The following information was recorded for each of the included papers: reference, journal, main aim and scope, study methods, population characteristics, intervention setting, tools used, and main findings.

2.5 Study collection

Four thousand abstracts were screened to assess the potential eligibility of studies for inclusion in the systematic review. During this screening process, 89 papers were accepted based on their titles and abstracts only and were thus read in full text. Seventy-five papers were removed because they did not meet the eligibility criteria: specifically, 8 presented a sample of either university students or adults, 24 did not present any empirical data, 40 did not focus on evaluating emotional, behavioral, or cognitive engagement, and 3 did not present any digital task. As a result, 17 papers were included through this process. References were also analyzed through other sources (e.g., forward citation searches, contact with authors) to find potentially eligible studies. A total of 13 references were screened and 10 papers were added in the end. Finally, 27 papers were deemed eligible for this PRISMA systematic review (Figure 1).

Figure 1

Figure 1. Flow diagram showing the search methodology to identify papers about the engagement of children and instruments used to assess it.

2.6 Quality assessment

A risk of bias assessment was conducted independently by two reviewers (XX, XX) using a quality appraisal tool, the Mixed Methods Appraisal Tool (MMAT) version 2018 (Hong et al., 2018), which has been validated and tested on different methodologies including quantitative, qualitative, and mixed-methods study designs. The tool consists of two screening questions (i.e., “Are there clear research questions?” and “Do the collected data allow for addressing the research questions?”). If both questions receive affirmative responses, five additional questions are posed regarding sample characteristics, study design, measurement efficacy, statistical analysis, and outcome data. These five questions vary based on the study design: qualitative, quantitative, or mixed methods. For mixed-method studies, all types of questions are utilized, totaling 15 questions. Total scores are computed as the percentage of MMAT criteria met, ranging from 0% (indicating no quality) to 20% (very low quality), 40% (low quality), 60% (moderate quality), 80% (good quality), and 100% (very high quality).
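The scoring rule can be summarized in a few lines; the sketch below assumes a list of met/unmet criteria per study and maps the resulting percentage onto the quality bands named above.

```python
# Minimal sketch of the MMAT scoring described above: the score is the
# percentage of applicable criteria met, mapped onto descriptive bands.
def mmat_score(criteria_met: list[bool]) -> tuple[float, str]:
    """Return (percentage of criteria met, quality label)."""
    if not criteria_met:
        raise ValueError("at least one criterion is required")
    pct = 100 * sum(criteria_met) / len(criteria_met)
    bands = [(100, "very high quality"), (80, "good quality"),
             (60, "moderate quality"), (40, "low quality"),
             (20, "very low quality"), (0, "no quality")]
    label = next(name for threshold, name in bands if pct >= threshold)
    return pct, label

# Example: a quantitative study meeting 4 of its 5 design-specific criteria.
print(mmat_score([True, True, True, True, False]))  # (80.0, 'good quality')
```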

3 Results

3.1 Studies' setting

This systematic review synthesizes data from 27 studies on the assessment of children's emotional, cognitive, and behavioral engagement during the completion of digital activities. A brief summary of the included studies, with the reference numbers used in Section 3, is provided in Tables 1, 2.

Table 1

Table 1. Main characteristics of the studies included: APA reference, sample size, and its characteristics; type of task; type of engagement measured and the tool/technology used in the studies.

Table 2

Table 2. Methods used for each engagement category.

All studies were published between 2014 and 2023. Studies were cross-sectional in nature and most of them were conducted within the school setting. Specifically, 22 were carried out inside a school context, 2 in a university lab [12, 19], 1 in a home setting [7], and 1 in a therapeutic center [27]. One paper did not provide any setting details [24]. Additionally, 2 papers used a double setting by carrying out the experiments in a school and in a museum room [3, 13]. By location, studies were conducted in Norway (n = 6) [2, 3, 8, 12, 13, 18], Cyprus (n = 3) [6, 17, 20], the United States (n = 1) [25], the Netherlands (n = 2) [4, 9], the United Kingdom (n = 1) [22], China (n = 2) [1, 7], Brazil (n = 1) [11], Morocco (n = 1) [18], Singapore (n = 2) [14, 21], Greece (n = 1) [23], Canada (n = 1) [24], Finland (n = 1) [26], Croatia (n = 1) [5], Germany (n = 2) [10, 15], and Italy (n = 2) [16, 27].

3.1.1 Studies' participants

Sample sizes ranged from 5 to 299. In total, across the 27 studies there were 1,666 participants. Twenty-five studies included primary and middle school children aged from 6 to 18 years; only two studies had a sample of preschool children, aged between 4 and 6 years [11, 21]. Twenty-four studies included children with typical development, whereas two included children with Autism Spectrum Disorder (ASD) [25, 27], three focused exclusively on children with Special Educational Needs or Disabilities (SEND) [20, 23, 26], and one study included both typically and atypically developing children (i.e., with Learning Disabilities) [18]. In most studies children were predominantly White, with the exceptions of one study conducted in Morocco [18] and another in China [7]. As for gender distribution, most samples comprised over 50% male children; gender was not reported in seven studies [5, 11, 14, 17, 22, 25, 27].

3.1.2 Description of the digital, game-based, and robotic tasks

All studies engaged students in a digital task, through a game, a robot, or a computer, to complete an activity (Figure 2). In particular, nine papers used educational games or digital exercises whose purpose was to gamify the school learning content and make the learning process easier for the student. The game content among studies was broad, touching on different topics such as math [i.e., 6] and language [i.e., 21]. Two studies [5, 14] used augmented reality (AR) technology to create and present digital lessons to students and to promote vocabulary learning in young children. One study used robot interactions [1] through a learning instrument called LEGO Mindstorms EV3, a combination of a physical robot and a computer programming environment, including LEGO building blocks, sensors, and programmable hardware.

Figure 2

Figure 2. Graphical representation of task types categorized by associated hardware and software components used across the reviewed studies.

Game-based tasks typically rely on software interfaces emphasizing goal-oriented play and feedback. In contrast, robotic tasks engage users through embodied, multimodal human-robot interaction involving physical interaction, sensorimotor feedback, and richer multisensory engagement, differentiating them from purely screen-based or software-driven learning experiences.

Eight papers used Kinect games, also called “kinems.” Kinems are motion-based games that can support a variety of learning skills such as math (i.e., “Sea Formuli,” [13]), second language (i.e., “Suffiz,” [13]), memory (i.e., “Melody Tree,” [20]), and attention (i.e., “Body Ball,” [27]). Kinect uses a camera and sensors to detect the actions carried out by the participants in order to interact with the games. The platform can also log data about joint movements and jerks, as well as descriptive data such as time spent playing. The gameplay activity usually varies based on the type of game, whereas the difficulty can be set by the player or the researcher, making the games self-adaptive to children's needs.
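As an illustration of the kind of kinematic feature such logs afford, the sketch below computes a mean jerk (the third derivative of position) for a single tracked joint. The sampling rate and array layout are assumptions for demonstration, not the Kinect platform's actual output format.

```python
# Hedged sketch: mean jerk magnitude of one tracked joint from logged 3D
# positions. Sampling rate (fs) and data layout are illustrative assumptions.
import numpy as np

def mean_joint_jerk(positions_m: np.ndarray, fs: float = 30.0) -> float:
    """positions_m: array of shape (n_samples, 3) for one joint, in meters."""
    dt = 1.0 / fs
    velocity = np.gradient(positions_m, dt, axis=0)       # m/s
    acceleration = np.gradient(velocity, dt, axis=0)      # m/s^2
    jerk = np.gradient(acceleration, dt, axis=0)          # m/s^3
    return float(np.mean(np.linalg.norm(jerk, axis=1)))
```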

In four studies children were asked to complete a coding task within small groups of 2-3 people [4, 9, 12, 19]. During this activity, children were expected to use the Arduino platform and, after watching a tutorial, to code on a computer with the aim of crafting a small robot or creating a game. In one study children attended a lesson designed with VR technology [7]. During the study children were presented with a giant board that could project holograms and with which they could interact through the VR headset. Two studies [10, 15] used an e-book platform (i.e., ALCE). Lastly, one study used a simple computerized task on memory and executive functions [21].

3.2 Study findings

3.2.1 Engagement: definitions and construct

Through careful examination of studies, a diverse range of interpretations and conceptualizations of engagement in digital contexts have been identified. A summary of the definitions used in each study is reported in Table 1.

Specifically, 12 studies [1, 2, 3, 6, 8, 13, 16, 17, 20, 23, 25, 27] define engagement as linked to the concept of “embodied cognition” or “embodied learning,” that is, as mentioned in the introduction, an educational approach that emphasizes the importance of physical experience in learning, where active and bodily participation becomes fundamental.

Two studies [4, 9] define engagement as enjoyment, an affective state during which the user feels in control, loses perception of time and space, abandons social inhibition, and encounters the appropriate level of challenge.

In five studies [11, 18, 19, 21, 22] engagement is represented by all emotions and affective states an individual experiences during the task, including enjoyment, boredom, happiness, anger, and excitement.

One study [5] states that in the educational context, engagement includes a wide range of different dimensions including behaviors (such as perseverance, commitment, attention, and participation in challenging courses), emotions (such as interest, pride in success), and cognitive processes (such as problem solving, use of metacognitive strategies).

Two studies [7, 15] defined engagement by distinguishing the behavioral (i.e., active participation and involvement in activities), emotional (i.e., positive reactions and feelings toward teachers and work), and cognitive dimensions (i.e., effort and concentration on completing work).

One study [14] focuses on cognitive engagement, defining it in terms of four modes of engagement: Interactive, Constructive, Active, and Passive (i.e., the ICAP framework).

One study [24] used O'Brien and Toms (2008) definition that, as described in the introduction, postulates engagement as the quality of the subject's experience when interacting with a digital or robotic system.

Finally, one study implies that engagement is the intrinsic motivation of an individual interacting with a specific device [26]. Intrinsic motivation refers to situations in which actions are undertaken without any perceivable external influence. For example, an intrinsically motivated individual derives satisfaction from the activity itself and does not anticipate specific gains, such as extrinsic rewards.

In sum, conceptualizations vary greatly, ranging from affective states and emotions experienced during interaction with digital content, to active participation, embodied learning, enjoyment, and intrinsic motivation in completing digital tasks.

3.2.2 Tools and procedures

Tools and procedures have been grouped according to the type of engagement, specifically emotional, behavioral, and cognitive. No studies directly compared the results obtained by different methodologies, as most of them used one tool at a time for different emotional, cognitive, and behavioral aspects.

A detailed description of the tools used in each of the three categories is provided in the following sections and is reported in Tables 1, 2.

3.2.2.1 Emotional/affective engagement

Studies evaluating children's emotional engagement gathered data on the affective state of the children during the completion of a digital task. Overall, emotional engagement was assessed by 22 papers, making it the most thoroughly explored category. All included studies focused on measuring a wide range of positive and negative affective states (e.g., boredom, happiness, anger, excitement). Nine studies measured emotional engagement through qualitative notes [2, 5, 7, 11, 17, 20, 21, 25, 27], five used video recording analysis through Artificial Intelligence [4, 13, 18, 19, 21], two speech analysis [22, 26], six self-report methods [1, 4, 9, 10, 16, 26], six physiological indexes [2, 3, 4, 8, 13, 21], one a semi-structured interview [23], one an observational scale [24], and one log data gathered directly from the platform [22].

Video recordings focused on children's facial expressions, verbal cues (e.g., laughing, smiling), and body movements, analyzed either qualitatively through the notes reported by researchers and teachers or through AI (e.g., trained on the Facial Action Coding System taxonomy, FACS; Cohn et al., 2007).

Self-reports consisted of Likert-based questionnaires or surveys referring to children's or parents' perceptions during the game-based learning activity. They were administered after completing the activity or online, while the child was playing the game or after each play session. The FunQ questionnaire (Tisza and Markopoulos, 2023) consists of 18 items grouped into six dimensions: (a) autonomy (i.e., experiencing control over the activity), (b) challenge (i.e., feeling challenged by the activity), (c) delight (i.e., experiencing positive emotions), (d) immersion (i.e., feeling immersed in the activity and losing the sense of time and space), (e) loss of social barriers (i.e., social connectivity), and (f) stress (i.e., experiencing negative emotions during the activity). The questionnaire used by Reinhold et al. (2021, [10]) investigated perceived intrinsic motivation, competence support and autonomy, situational interest, and perceived demand.
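For illustration only, a subscale profile from such a questionnaire could be computed as below; the item-to-dimension mapping and the absence of reverse scoring are assumptions for demonstration, not the published FunQ scoring key.

```python
# Illustrative aggregation of Likert responses into the six FunQ dimensions.
# The item numbering per dimension is an assumed placeholder mapping.
from statistics import mean

FUNQ_DIMENSIONS = {
    "autonomy": [1, 2, 3], "challenge": [4, 5, 6], "delight": [7, 8, 9],
    "immersion": [10, 11, 12], "loss_of_social_barriers": [13, 14, 15],
    "stress": [16, 17, 18],
}

def funq_profile(responses: dict[int, int]) -> dict[str, float]:
    """Map {item number: Likert rating} to a mean rating per dimension."""
    return {dim: mean(responses[i] for i in items)
            for dim, items in FUNQ_DIMENSIONS.items()}
```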

In the study by Barana and Marchisio (2020, [16]) the questionnaire was composed of 35 questions inspired by the Pisa 2012 student questionnaire, including items investigating the emotional engagement on specific school topics (e.g., “I like lectures about Mathematics.”).

Finally, Ronimus et al. (2014, [26]) asked the parents to complete online a single quantitative item about the children's motivation in doing the task (i.e., “How eagerly did the child play the GraphoGame during the study?”).

Physiological indexes were used to measure children's stress and arousal (e.g., heart rate and electrodermal activity); they were collected by the Empatica E4 wristband [2, 3, 4, 8, 13, 21] or the Consensys Shimmer 3 GSR [21]. The Empatica E4 wristband is an unobtrusive bracelet that can be worn by children while they are completing the task. The Shimmer is composed of electrodes that must be worn along with a few probes and a sensor.
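A minimal sketch of how such wristband signals might be summarized into simple arousal indicators is given below; it is not the devices' official software, and the smoothing and peak-detection choices are simplifying assumptions (the sampling rate defaults to a nominal 4 Hz).

```python
# Hedged sketch: mean electrodermal activity (EDA) and a naive count of
# skin-conductance peaks from a wristband EDA trace sampled at fs Hz.
import numpy as np

def eda_summary(eda_uS: np.ndarray, fs: float = 4.0) -> dict[str, float]:
    kernel = np.ones(int(fs)) / int(fs)                   # ~1 s moving average
    smoothed = np.convolve(eda_uS, kernel, mode="same")
    diff = np.diff(smoothed)
    # Count local maxima of the smoothed trace as crude SCR peaks.
    peaks = int(np.sum((diff[:-1] > 0) & (diff[1:] <= 0)))
    minutes = len(eda_uS) / fs / 60.0
    return {"mean_eda_uS": float(np.mean(eda_uS)),
            "scr_peaks_per_min": peaks / minutes if minutes else 0.0}
```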

Qualitative field notes were collected from teachers, experienced therapists or researchers who were instructed to provide an overall evaluation of children's enjoyment or to focus on children's facial expressions and emotions [20, 25, 27]. Notes could be time-sampled (e.g., Baker-Rodrigo Ocumpaugh Monitoring Protocol method, BROMP, Ocumpaugh, 2012) and computer-assisted (e.g., Human Affect Recording Tool, HART; Ocumpaugh, 2012).

Semi-structured interviews were used with teachers and parents to verify the enjoyment experienced by children in using the Kinems intervention [23].

Speech analysis was used by two studies [22, 26] to evaluate the emotional characteristics of the interaction with the digital task, platform, or robot provided. Affective states were analyzed through specific keywords pronounced by the children and prosodic signals, which were then clustered by an algorithm into different categories (i.e., frustration, flow, boredom, confusion, and surprise).
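The lexical part of such a pipeline could, in its simplest form, look like the sketch below; the keyword lists are invented placeholders, and the prosodic features and clustering used in the reviewed studies are omitted.

```python
# Very simplified, illustrative keyword tagging of utterances with candidate
# affective states. Keyword lists are placeholders, not validated lexicons.
AFFECT_KEYWORDS = {
    "frustration": {"ugh", "stuck", "annoying"},
    "flow": {"again", "almost", "yes"},
    "boredom": {"boring", "tired"},
    "confusion": {"what", "why", "how"},
    "surprise": {"wow", "whoa"},
}

def tag_affect(utterance: str) -> list[str]:
    words = set(utterance.lower().split())
    return [state for state, cues in AFFECT_KEYWORDS.items() if words & cues]

print(tag_affect("I am stuck on this level"))  # ['frustration']
```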

An observational scale, used by Martinovic et al. (2016, [24]), was compiled by the researchers to assess emotional engagement during digital tasks. The items assessed whether the child showed amusement (i.e., laughing or smiling), frustration (i.e., sighing, groaning) or anxiety and nervousness during the task.

Log data were acquired by Grawemeyer et al. (2017, [26]) to infer emotional engagement on the basis of the child's interaction with the digital platform (e.g., exploring the functions of the platform to find a solution, stopping without interacting with the platform to think about what to do).

3.2.2.2 Behavioral engagement

Behavioral engagement was explored by 16 studies, making it the second most thoroughly investigated category. Considering the fact that the reviewed studies analyzed diverse types of behavioral signs to determine behavioral engagement, results in this category are heterogeneous.

Precisely, the behavioral signs indicating engagement that were included in this category were as follows: the time spent looking and not looking at the screen [12, 13]; time spent on the activity and other information related to it (e.g., errors, speed, completion time; [10, 11, 15, 16, 20, 23]); socially interacting and being physically active while performing the task [2, 5, 8]; behavioral reactions (e.g., making noise in the classroom) and somatic posture while completing the task (e.g., leaning forward or keeping a neutral stand; [7]); behavioral distress [27]; behavioral signs of loss of attention [27]; and behavioral signs of loss of interest [27].

The most common methodologies to assess behavioral engagement were video recordings analyzed through qualitative notes [2, 5, 7, 8, 27] and log data acquired from the digital platform [10, 11, 15, 20, 23]. Other methods included eye correlates [12, 13], qualitative notes [27], a single quantitative item [27], observational scales [24], and self-report questionnaires [1, 16].

Video recordings were used to explore diverse signs of behavioral engagement, such as: how much the children moved during the activity (e.g., too much, not at all, lazily) and whether they autonomously sought social interaction with the researchers to also engage them in that activity [2]; specific behaviors, such as jumping, dancing, and making celebratory movements [8]; purposeful and intentional movements toward the device or to reach the next goal [5]; overall behavior during virtual reality sessions (i.e., positive, normal, or misbehavior) as well as children's posture while completing the task [7]. The only study with children with Autism Spectrum Disorder [27] measured behavioral distress (e.g., clothing manipulation, teeth grinding, wobbling), loss of interest (e.g., verbal manifestation of tiredness), and loss of attention (e.g., child overstimulation, loss of movement control).

Log data from the digital platforms were collected automatically by the games and included the time spent playing on the platform, the number of mistakes, and the speed of completion of the game [10, 11, 15, 20, 23].
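A minimal sketch of deriving such indicators from platform logs is shown below; the event schema is an assumption for illustration, not the format of any specific platform used in the reviewed studies.

```python
# Illustrative derivation of time on task, error count, and completion speed
# from a list of log events. The event schema is an assumed placeholder.
from dataclasses import dataclass

@dataclass
class LogEvent:
    timestamp_s: float
    kind: str  # e.g., "answer_correct", "answer_wrong", "level_done"

def behavioral_indicators(events: list[LogEvent]) -> dict[str, float]:
    events = sorted(events, key=lambda e: e.timestamp_s)
    time_on_task = events[-1].timestamp_s - events[0].timestamp_s if events else 0.0
    errors = sum(e.kind == "answer_wrong" for e in events)
    levels = sum(e.kind == "level_done" for e in events)
    return {"time_on_task_s": time_on_task,
            "errors": float(errors),
            "levels_per_min": levels / (time_on_task / 60.0) if time_on_task else 0.0}
```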

Eye correlates were measured by the Tobii eye-tracker device [12, 13]; the skewness of saccade velocity and the blink rate were used to estimate the levels of anticipation and tiredness.

Qualitative therapist notes were used with children with Autism Spectrum Disorder to record the presence of inappropriate movements (i.e., genital manipulation, clothing manipulation, teeth grinding, running in place, wobbling, putting hands on the mouth) [27].

Observational scales filled in by the researchers were used to evaluate whether the child was distracted by looking around or eating while carrying out the task [25].

Self-reported questionnaires were used by Barana and Marchisio (2020, [16]) and included items on students' effort, completion of work, perseverance, and participation in school and social related activities.

3.2.2.3 Cognitive engagement

The “cognitive engagement” category refers to all those studies that tried to assess diverse cognitive aspects of the participating children, such as attention or cognitive load, while they were completing the digital task. Overall, 14 studies evaluated cognitive engagement. The majority of studies assessed more than one cognitive aspect: eight papers assessed cognitive load [1, 2, 3, 6, 8, 12, 13, 15], three attention/focus [12, 13, 24], three perceived difficulty [2, 3, 8], two information processing (global and local) [2, 3], two anticipation of the task (i.e., anticipation of the stimuli's appearance during the task) [12, 13], two reasoning processes (e.g., remembering, understanding, analyzing, doing evaluations) [5, 7], one private speech (i.e., language directed to oneself that guides cognitive execution; Zivin, 1979) [11], one fatigue [12], one joint attention of all children [12], one the ICAP framework (interactive, constructive, active, or passive) [14], one perceived control of success and self-regulation [15], and one concentration [26].

The methods used to evaluate cognitive engagement included eye correlates and movements (n = 5) [2, 3, 8, 12, 13], questionnaires and self-reports (n = 5) [1, 6, 16, 24, 26], video recordings of each game session (n = 5) [3, 5, 7, 11, 14], and speech analysis [15].

Eye correlates were detected through Tobii eye-tracking glasses, an unobtrusive tool considered reliable for obtaining a wide and precise range of eye data (Tobii, 2023). Several eye-tracker metrics were used to evaluate specific aspects of cognitive engagement: pupil diameter for cognitive load [2, 3, 8, 12, 13]; saccade velocity for perceived difficulty [2, 8, 12] and cognitive anticipation [12, 13]; fixations and saccade velocity to distinguish between global and local information processing [2, 3]; blink rate for cognitive fatigue [12]; simultaneous gazing for joint attention [12].
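To make these metrics concrete, the sketch below derives a pupil-diameter summary and the skewness of saccade-sample velocities from raw gaze data; it uses simplified definitions (e.g., a fixed velocity threshold to label saccade samples) and is not the Tobii software.

```python
# Hedged sketch of simplified gaze metrics: mean pupil diameter (a common
# proxy for cognitive load) and skewness of saccade-sample velocities.
import numpy as np
from scipy.stats import skew

def gaze_metrics(x_deg, y_deg, pupil_mm, t_s, saccade_thresh_deg_s=30.0):
    x, y, p, t = map(np.asarray, (x_deg, y_deg, pupil_mm, t_s))
    velocity = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)   # deg/s
    saccade_v = velocity[velocity > saccade_thresh_deg_s]      # saccade samples
    return {
        "mean_pupil_mm": float(np.nanmean(p)),
        "saccade_velocity_skew": float(skew(saccade_v)) if saccade_v.size else float("nan"),
    }
```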

Self-reported questionnaires were used by three studies [1, 6, 16] to evaluate, through Likert scales (e.g., 1 = extremely easy to 7 = extremely difficult) or direct questions (e.g., “How difficult was it for you to successfully accomplish the activity?”), the perceived cognitive load and students' effort in completing the task [16].

Parents' reports asked parents to rate children's concentration on a Likert scale and to answer a final overall question (i.e., “How well did the child concentrate while playing the GraphoGame?”) [26].

An observational scale was used by one study [24] to assess comprehension of the game instructions and attention/distraction during gameplay.

Video recordings were conducted with one [3] or more devices (e.g., one device recording the children's entire body, one recording a mirror placed in front of the children, and another recording through the tablet camera [11]). Movements and postures were used to analyze the intent to understand and master the task, such as careful reading of instructions, reasoning, actively attempting to solve problems presented in the digital lesson, and trying to understand task-related issues in discussion with fellow students or teachers [5, 7]. In the study by Wen (2021, [14]) the researchers used video recordings to see whether the children displayed passive behavior (e.g., listening to the lecture but not taking notes), active behavior (e.g., turning or inspecting objects), constructive behavior (e.g., generating new ideas), or interactive behavior (e.g., interacting with the platform).

Speech and discourse analysis were used in three studies: audio recordings capturing exclamations and other phrases related to cognitive reasoning [7]; private speech as language directed to oneself to guide cognitive execution and regulate social behavior [11]; and e-text length (i.e., the word count of students' written responses in writing-to-learn activities) as an operationalization of the cognitive effort exerted [15].

Figure 3 provides an illustrative overview of the various engagement measures during a child-robot interaction.

Figure 3

Figure 3. Example setting of (A) a child interacting with robot and (B) procedure and tools used to measure different dimensions of engagement.

3.2.3 Does engagement predict performances and vary according to digital tasks?

Through different study designs and methodologies, including qualitative, quantitative (mainly descriptive analyses), and mixed-methods studies, it has been possible to evaluate the presence of a relationship between engagement and task performance.

Out of the 22 studies that evaluated emotional engagement, only a minority concentrated on examining this correlation. Through data obtained from self-report questionnaires, the results of a study [4] indicate that there is a link between the fun that children experienced during the digital task and their learning outcomes. Indeed, the FunQ questionnaire total score correlated significantly with learning to code, suggesting that having fun while completing a digital task contributes to learning outcomes. Similarly, two studies [17, 20] suggested that the pleasure of embodied learning, through the use of motion-based educational games, can help to improve children's short-term memory. Another study [10] found that emotional engagement, measured with a self-report questionnaire, could be a unique predictor explaining a substantial portion of the variance in cognitive learning outcomes. Furthermore, through the use of an observational scale, Martinovic et al. (2016, [24]) demonstrated that children performed better in games in which they showed higher levels of fun (e.g., smiling, verbal expression of enjoyment).

Many other studies did not investigate learning improvements but analyzed the link between digital game characteristics and emotional engagement, suggesting that the use of digital tasks or robotic activities, in comparison to traditional tasks, can favor positive emotions [7] and reduce stress levels [3, 4, 8] and boredom [7, 20, 22, 27]. Moreover, Zhang et al. (2023, [1]) found that average emotional engagement scores in embodied learning contexts are higher than those obtained in non-embodied learning contexts. Two studies reported that children tend to feel more frustrated when the digital activity is too easy or too difficult [11, 24]. Three studies showed that tailoring the characteristics of an intervention to children's preferences improves their emotional engagement [20, 22, 25]. One study [13] examined the effect of different modes of self-representation of avatars on children's participation, and another study [18] analyzed emotional engagement in typically developing children and those with learning disabilities; neither study found differences between conditions in terms of emotional engagement.

Considering the high level of heterogeneity, studies on behavioral engagement have reported different conclusions. Specifically, only five studies [10, 12, 15, 24, 27] investigated the relationship between behavioral engagement and cognitive performance. Through data obtained from digital platforms, three studies found that the interaction time with the screen [10, 12] or the time on task [15] predicted student learning and academic performance. Lastly, through data obtained from video recordings and qualitative notes, Bartoli et al. (2013, [27]) observed that children with ASD tended to reduce the repetitive behaviors associated with discomfort, loss of attention, and loss of concentration when playing Kinect games.

Nevertheless, most studies have not analyzed this relation but instead investigated the link between digital task characteristics and behavioral engagement. Gong et al. (2021, [7]) reported that students participating in the digital activity were more behaviorally engaged, as shown by positive classroom behavior and a posture closer to the device. Similarly, Lee-Cultura et al. (2021, [8]) and Lee-Cultura et al. (2020, [13]) found that when children enjoyed the activity they tended to move more. Barana and Marchisio (2020, [16]) reported that the digital learning environment had a strong impact on the behavioral engagement levels of initially poorly engaged students. Kosmas et al. (2018, [20]) and Kourakli et al. (2017, [23]) found that embodied Kinect game lessons, which involve motor movements during the completion of the task, were in general enjoyable for children. Finally, five studies that used data acquired automatically by the digital platform [10, 11, 20, 23] or a single quantitative item [26] found that if digital activities are well-liked by children, they tend to play more and express the desire to repeat the experience [11, 20, 23, 26].

Considering the 14 studies assessing children's cognitive engagement (i.e., putting effort into remembering and applying the activity rules), those that investigated the relationship between cognitive engagement and task performance used eye-tracker data (saccade speed, fixation time, pupil diameter) to assess cognitive load, perceived difficulty, and anticipation. Two studies [2, 3] found that saccade speed was negatively associated with children's task performance. Three studies found that cognitive load, measured by pupil diameter and fixation time, was positively associated with children's performance, with higher cognitive load related to an overall better performance in the digital task [2, 8, 13]. Lastly, Giannakos et al. (2020, [12]) suggested that anticipation and attention levels, measured by fixation time and saccade speed, had a high predictive value for performance. Regarding the use of self-report questionnaires and observational scales, Georgiou et al. (2021, [6]) found that students who participated in the digital intervention outperformed those who participated in the non-digital intervention in terms of cognitive load, and Martinovic et al. (2016, [24]) found that increased cognitive engagement (e.g., paying attention and putting in effort) was related to better performance in computer games. Finally, Barana and Marchisio (2020, [16]) found that cognitive engagement, measured by self-reported questionnaire, is linked to self-regulation and persistence with schoolwork, and that cognitively engaged students are less likely to give up on their learning and more likely to remain engaged with school.

3.3 Methodological quality of the studies, limitations, and risk of biases

The methodological quality of the included studies was assessed following the MMAT instructions. Table 3 reports the MMAT analysis of each included study in full detail.

Table 3. Methodological quality for the included studies as conducted with the Mixed Methods Appraisal Tool (Version 2018).

Fifteen of the 27 studies obtained high-quality MMAT scores. These studies show several characteristics required to be considered reliable and valid, such as appropriate research designs to answer the research question, well-defined and representative sample populations, and adequate control of confounding factors and analysis methodologies. Eight studies obtained MMAT scores indicating some concerns: they present some characteristics of the best research studies but also show limitations or weaknesses that distinguish them from high-quality ones (e.g., the study design is adequate but not optimal, the population is representative but the sample size is small, the data have been collected with methods that are not entirely appropriate, or the results are presented incompletely or not exhaustively). Finally, four studies were of low quality, as they exhibited characteristics such as weak or inappropriate study designs, non-representative populations, unreliable data collection methods, failure to control for confounding factors, inappropriate statistical analysis, and incomplete presentation of results.

The most common limitations of the aforementioned studies included a cross-sectional study design. All but one paper had a relatively small sample size. Furthermore, studies that used self-report methodologies acknowledged that these methods can be affected by social desirability or other individual variables. Six studies reported that findings may differ based on children's age, so that older or younger participants could have yielded different results [2, 3, 8, 11, 12, 13]. Five studies also noted that using a different methodology could have produced different findings [3, 8, 12, 13, 26]. Notably, five studies recognized that many of the technical tools used (e.g., AI, Empatica E4) were not designed for children, and findings may have been impacted as a result [4, 8, 13, 18, 22]. Carrying out the intervention within the school setting provided high ecological validity to the findings, yet external variables were less easily monitored [4, 8, 9, 12, 18, 20], whereas implementing the intervention tools within schools could sometimes be challenging [7, 23].

In two studies the authors stated that quantitative data should be paired with qualitative data to provide more in-depth findings concerning engagement [4, 19].

Overall, the majority of studies were affected by the following risks of bias: (a) potential sampling bias due to the relatively small sample sizes, as children from the same country and often the same school setting were recruited, making findings less generalizable; (b) response bias due to the self-report measures used; and (c) measurement bias due to the fact that the digital tools used (e.g., AIs, Empatica E4) were not designed or adapted for children. More specific details about limitations and risks of bias are reported in Table 4.

Table 4. Main findings, study limitations, and risk of biases of the papers included in the review.

4 Discussion

The main research objectives of this systematic review were: (1) to describe the most commonly used conceptualizations of children's engagement in digital and robotic contexts; (2) to understand which tools and procedures are widely used to measure the three main types of engagement, that is, emotional, behavioral, and cognitive, in children and adolescents performing digital and robotic tasks; and (3) to investigate the relationship between engagement, children's performance, and task characteristics.

Through a selection process conducted according to the PRISMA method, 27 studies were deemed eligible.

The review includes a diverse selection of studies from different continents, albeit mainly from Europe: America (n = 3), Asia (n = 4), Africa (n = 1), and Europe (n = 19). Except for Norway, which accounts for 6 of the 27 studies considered, the distribution across countries is quite even, but there are few studies from each one.

Regarding the population included in the selected studies, the review examined a wide age range, from 6 to 18 years, with only two studies (Crescenzi-Lanna, 2020a; Sridhar et al., 2018) including a sample of preschool children (4–6 years). Given that no study investigated the effect of age on the results obtained, such a wide age range may prevent the generalization of the findings to different developmental periods. Indeed, both the conceptualization of engagement and the tools and methodologies used to measure it may vary between preschoolers or school-age children and adolescents, and thus may not be uniformly adaptable to all ages included in the review. Additionally, most of the studies focused on typically developing children, with only a few considering children with atypical development (Bartoli et al., 2013; Bhattacharya et al., 2015; Ouherrou et al., 2019; Ronimus et al., 2014), leaving open the issue of whether the engagement conceptualizations and measures are also suitable for different needs. Future studies should include a larger number of children with atypical development. In fact, with advancements in ICTs, tele-intervention, tele-assessment, and tools like AI or sensors, it is now possible to gather more information about these children's level of engagement and improve their accessibility and training. Another important consideration is that most of the tools used in these studies were not specifically designed for children (Grawemeyer et al., 2017; Lee-Cultura et al., 2020, 2021; Ouherrou et al., 2019; Tisza et al., 2022). This could have affected the quality of the data and the suitability of the tools for assessing engagement in children. Future research should focus on developing and using tools that are specifically tailored to children's needs, characteristics, and ages.

Additionally, none of the included studies compared different settings, as most were conducted in schools. Despite the high ecological validity of carrying out the studies in such settings, children's engagement might have been influenced by external factors, such as environmental noise or unexpected events (Giannakos et al., 2020; Lee-Cultura et al., 2021; Ouherrou et al., 2019; Tisza and Markopoulos, 2021; Tisza et al., 2022). Participants from different regions and age groups might also produce different results; therefore, it is important to consider and control these external factors to obtain more reliable and valid data.

In the future, more specific studies focusing on interindividual differences or different subgroups might help implement engagement measurements that are customizable to each individual child.

The review also encompasses four studies classified as low quality (Gong et al., 2021; Wen, 2021; Kosmas et al., 2018; Kourakli et al., 2017), whose results are therefore poorly interpretable due to methodological limitations.

4.1 Engagement conceptualizations

The first research question focused on analyzing the conceptualization of engagement, with particular attention to the developmental age, a period in which the concept of engagement may have different facets compared to adulthood.

The presence of multiple interpretations has highlighted numerous theories and approaches that analyze different aspects of the same construct. This variety of perspectives and theoretical approaches provides a rich and complex framework for understanding the nature and the importance of engagement in different digital contexts, thus contributing to a deeper and more articulate view of this fundamental phenomenon in educational dynamics.

Although many approaches are used, it is important to note that no theory of engagement is universally recognized as the most effective. The complexity of the phenomenon requires a more transversal exploration that takes numerous factors into account and is able to capture the complexity of the interactions between children and digital platforms. Engagement is a polyhedral, multicomponential construct: it involves different dimensions and manifestations and, to fully understand it, we cannot limit ourselves to a single perspective or a single aspect (Tisza et al., 2022; Ronimus et al., 2014).

As represented in Figure 4, the constructs utilized in the selected studies can be conceptualized through three main components of engagement.

Figure 4. A three-dimensional model of engagement and the respective measurement tools: emotional engagement (positive and negative emotions), behavioral engagement (gestures and actions), and cognitive engagement (attention and fatigue) interact and are measured through self-report questionnaires, video recordings, eye-tracking data, qualitative notes, and log data. Solid arrows indicate the most frequent and evidence-based measures.

The first component is emotional engagement (Tisza et al., 2022; Drljević et al., 2024; Tisza and Markopoulos, 2021; Crescenzi-Lanna, 2020a; Reinhold et al., 2020; Ouherrou et al., 2019; Sharma et al., 2019; Sridhar et al., 2018; Grawemeyer et al., 2017). It concerns the emotions experienced by children while performing a task and interacting with a device, or at the end of the interaction. It includes both positive and negative emotions, such as sadness, happiness, fear, surprise, anger, and disgust, as well as other affective states, such as enjoyment, fun, boredom, frustration, and motivation. Measuring emotional engagement during interaction with digital tasks and robotics can be particularly important in children: unlike adults, who generally have clear goals they want to achieve and therefore their own motivation to perform tasks, for children emotional activation represents a driving factor in exercise and learning (Tisza et al., 2022). In fact, positive emotions are essential to promote engagement, and they also shape how children and adolescents imagine goals and challenges, guide their behavior, and influence group dynamics and interactions (Sharma et al., 2019). Therefore, when designing effective interventions aimed at promoting engagement and improving children's experiences, it is important to consider the role of affective states.

The second dimension is behavioral/bodily engagement (i.e., Barana and Marchisio, 2020; Bartoli et al., 2013; Bhattacharya et al., 2015; Drljević et al., 2024; Georgiou et al., 2021; Kosmas et al., 2019; Lee-Cultura et al., 2020, 2021, 2022; Reinhold et al., 2021; Sharma et al., 2022; Zhang et al., 2023). It refers to the physical actions and active participation of the child during the activity and includes the time spent in front of the screen or robot, the interaction time with the task, posture (e.g., closeness to the device), the verbal expressions used, the frequency of movements, and the actions performed. As illustrated by some studies (Bartoli et al., 2013; Giannakos et al., 2020; Reinhold et al., 2021), while log data can be indicators of this type of engagement in both adults and children, in the latter movements during task performance can also be a positive indicator of engagement and promote learning through processes of embodied cognition.

Lastly, the third fundamental aspect of engagement is the cognitive dimension, which focuses on mental processing and understanding of the task (Drljević et al., 2024; Martinovic et al., 2016; Reinhold et al., 2020). Cognitive engagement implies attentional aspects that go beyond superficial participation and translate into the ability to analyze, synthesize, and apply information in a meaningful way. Digital and robotic environments could induce higher levels of cognitive engagement (such as attention and cognitive load), essential to promote a real acquisition of knowledge and skills and allowing individuals to develop the critical thinking and problem-solving skills that are essential for learning. However, in the developmental age, the appropriateness of using digital tools and robotics for learning and intervention, given children's cognitive immaturity, is still a debated issue (Vedechkina and Borgonovi, 2021). Although none of the included studies assessed the effects of age on different dimensions of cognitive engagement, such as attentional engagement and cognitive fatigue, the results of the selected studies emphasize the importance of measuring cognitive engagement during children's interaction with digital tools and robotics.

As suggested by the model proposed in Figure 4, it is important to note that these three main dimensions of engagement are all relevant in childhood, as they can interact dynamically and influence each other as well as the child's overall level of engagement.

4.2 Tools and procedures to measure engagement in childhood

The second research question focuses on the evaluation of the tools used to measure the three main components of engagement. In this analysis, a dual approach was adopted. First, we evaluated the extent to which the tools provided a direct and sensitive measure of engagement, with particular attention to the interpretability of the results. Subsequently, we considered feasibility by examining the ease of use with children of different ages.

The model represented in Figure 4, alongside the visual depiction of the three components of engagement described in the section above, offers a comprehensive overview of the most commonly used tools and measures for each component.

In terms of the emotional dimension, there is a widely accepted consensus regarding the tools and methodologies mainly used to measure it. Indeed, the operationalization of emotions is firmly established, as emotions such as joy or sadness have clear definitions and known behavioral indicators (e.g., smiling, laughing). The majority of the studies (Barana and Marchisio, 2020; Lee-Cultura et al., 2020; Ouherrou et al., 2019; Reinhold et al., 2021; Ronimus et al., 2014; Sharma et al., 2019; Sridhar et al., 2018; Tisza et al., 2022; Tisza and Markopoulos, 2021; Zhang et al., 2023) used self-report questionnaires or video recordings coded by AI to measure the emotions and/or facial expressions children experienced during the execution of the digital tasks.

Self-report questionnaires offer children the opportunity to directly express their perceptions and communicate autonomously, without external interpretation. Furthermore, by completing the questionnaires, children are encouraged to reflect on their educational experiences, preferred learning methods, and any obstacles encountered. From a usability point of view as well, self-report questionnaires are a particularly suitable option for children, as they are easy to use and understand. Their intuitive structure and clear questions make them accessible even to younger children or those with limited language skills. Furthermore, visual formats with images or smileys are often used, which further simplify completion and encourage the active participation of all children.

Video recordings coded by AI allow for the codification of children's facial expressions associated with the emotions they feel (Lee-Cultura et al., 2020; Ouherrou et al., 2019; Sharma et al., 2019; Tisza et al., 2022). The use of facial detection systems provides continuous monitoring of emotional state with real-time feedback. Furthermore, these tools only require the child to be present and visible to the camera, with no need to integrate them with sensors. Lastly, they can be configured to respect children's privacy, by not storing or recording facial images and by processing data anonymously or pseudonymously.
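A minimal sketch of such a privacy-respecting pipeline is shown below, assuming OpenCV is installed and a session recording exists at the hypothetical path session.mp4; the expression classifier is a placeholder to be replaced by a real facial-expression model (e.g., a FACS-based or deep-learning coder), and only timestamped labels, never images, are written to disk.

```python
# Illustrative sketch only: logging AI-coded facial expressions without storing
# any frames or face images, in line with the privacy considerations above.
import csv
import cv2

VIDEO_PATH = "session.mp4"  # hypothetical recording of a child-device session

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def classify_expression(face_img) -> str:
    """Placeholder: substitute a real facial-expression classifier here."""
    return "neutral"

cap = cv2.VideoCapture(VIDEO_PATH)
with open("emotion_log.csv", "w", newline="") as log_file:
    writer = csv.writer(log_file)
    writer.writerow(["timestamp_s", "label"])
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        timestamp = cap.get(cv2.CAP_PROP_POS_MSEC) / 1000.0  # seconds into video
        for (x, y, w, h) in faces:
            label = classify_expression(gray[y:y + h, x:x + w])
            writer.writerow([round(timestamp, 2), label])  # label only, no image
cap.release()
```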

In addition to self-report questionnaires and video recordings coded by AI, other tools are used to assess emotional engagement, such as speech analysis, observational scales, and qualitative notes (Bartoli et al., 2013; Crescenzi-Lanna, 2020a; Drljević et al., 2024; Grawemeyer et al., 2017; Kosmas et al., 2019; Lee-Cultura et al., 2022; Martinovic et al., 2016; Sharma et al., 2019; Sridhar et al., 2018). These tools provide valuable information on children's facial and verbal expressions, but they require a certain level of expertise and preparation by researchers, and the interpretation of the information collected can be influenced by their perspective. They can therefore be considered tools to be used in combination with self-report questionnaires or video recordings to provide a more complete and in-depth view of children's emotional engagement (see Figure 4).

Finally, physiological tools, although they offer an innovative approach to assessing engagement in children, present significant challenges related to their complexity of use. Furthermore, the interpretation of physiological data can be subjective and vary based on different factors. For instance, Sharma et al. (2022) suggested that higher arousal is associated with greater stress, while another study (Tisza et al., 2022) indicated that higher arousal is associated with a higher level of engagement. Therefore, despite their potential to provide detailed information about children's emotional engagement, physiological tools are often considered more complex to use and require more care in interpreting results.

Although only a few of the selected studies involved children with special educational needs, the tools mentioned above could be particularly useful to measure emotional engagement in children with emotional dysregulation, for whom the use of digital or robotic devices for learning could be highly challenging (Paneru and Paneru, 2024; Ribeiro Silva et al., 2024).

In terms of behavioral engagement, researchers collected heterogeneous data by assessing different behaviors based on the digital tasks used. Indeed, in contrast to the standardized operationalization of emotional engagement, there is a lack of consensus in the literature regarding the optimal motor or bodily indicators of behavioral engagement in childhood. The most commonly used tool in this category is video recording (Bartoli et al., 2013; Drljević et al., 2024; Lee-Cultura et al., 2021, 2022), which accurately and comprehensively records children's interactions with the digital environment, capturing gestures, body movements, reactions, verbal expressions, and postures. Thanks to its ability to capture a wide range of behaviors, video recording can be a valuable tool for understanding children's behavioral engagement and interactive dynamics with digital and robotic devices. From a usability point of view, it offers direct and non-invasive observation of children's actions, and it allows researchers to analyze behaviors both during the interaction with digital devices and at a later time.

In addition, other important methodologies, although less direct, can provide valuable insights into children's behavioral engagement in digital contexts. These include observational scales, qualitative therapist notes, and log data, which can be used in combination with video recordings to obtain a more complete measure (see Figure 4). Among the studies that analyze data from digital platforms (Reinhold et al., 2020, 2021), some consider execution time, while others consider the number of tasks completed, the number of exercises carried out, and the time taken to solve the problem.
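The following minimal Python sketch illustrates how raw platform logs of this kind could be aggregated into behavioral indicators (time on task, tasks completed, mean time to solve a problem); the event log, column names, and values are hypothetical and do not correspond to any specific platform used in the reviewed studies.

```python
# Illustrative sketch only: aggregating a hypothetical platform event log into
# behavioral engagement indicators.
import pandas as pd

# Hypothetical event log exported by a digital learning platform
events = pd.DataFrame({
    "child_id":  ["c1", "c1", "c1", "c2", "c2"],
    "task_id":   ["t1", "t1", "t2", "t1", "t1"],
    "event":     ["task_start", "task_complete", "task_start",
                  "task_start", "task_complete"],
    "timestamp": pd.to_datetime([
        "2024-01-10 09:00:00", "2024-01-10 09:04:30", "2024-01-10 09:05:00",
        "2024-01-10 09:01:00", "2024-01-10 09:07:15",
    ]),
})

starts = events[events["event"] == "task_start"]
ends = events[events["event"] == "task_complete"]
merged = starts.merge(ends, on=["child_id", "task_id"], suffixes=("_start", "_end"))
merged["solve_time_s"] = (
    merged["timestamp_end"] - merged["timestamp_start"]
).dt.total_seconds()

indicators = merged.groupby("child_id").agg(
    tasks_completed=("task_id", "nunique"),
    total_time_on_task_s=("solve_time_s", "sum"),
    mean_solve_time_s=("solve_time_s", "mean"),
)
print(indicators)
```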

As for emotional engagement, most of the studies showed a shared consensus regarding the definition and operationalization of cognitive engagement, as well as its associated indicators, such as attention, cognitive load, and fatigue (e.g., Lee-Cultura et al., 2022; Giannakos et al., 2020; Sharma et al., 2022).

The most frequently used tool for this type of engagement was the eye tracker, which allowed the collection of data on fixation time, saccade speed, and pupil diameter (Giannakos et al., 2020; Lee-Cultura et al., 2020, 2021, 2022; Sharma et al., 2022). This tool provides objective and quantitative measurements of children's eye movements as they engage in digital tasks. It collects accurate data on the child's attention and fatigue and allows researchers to assess children's cognitive engagement in real time. By analyzing fixation and saccade patterns, it is possible to identify which elements of the digital interface capture the child's attention the most and which may be less stimulating or engaging. Modern eye trackers are also designed to be non-invasive and easy to use. They can be integrated into devices, allowing children to engage naturally in tasks without feeling disturbed or restricted, and they can be used with children of different ages and needs.
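As an illustration of how such metrics can be derived, the sketch below applies a simplified dispersion-threshold (I-DT) procedure to raw gaze samples to extract fixation durations; the gaze stream, sampling rate, and thresholds are hypothetical, and commercial eye-tracking software typically provides these metrics directly.

```python
# Illustrative sketch only: simplified dispersion-threshold identification
# (I-DT) of fixations from raw gaze samples (degrees of visual angle, seconds).
import numpy as np

def detect_fixations(x, y, t, max_dispersion=1.0, min_duration=0.1):
    """Return (start_time, duration) pairs for detected fixations."""
    fixations, i, n = [], 0, len(t)
    while i < n:
        j = i
        # Grow the window while gaze points stay within the dispersion limit
        while j + 1 < n:
            xs, ys = x[i:j + 2], y[i:j + 2]
            dispersion = (xs.max() - xs.min()) + (ys.max() - ys.min())
            if dispersion > max_dispersion:
                break
            j += 1
        duration = t[j] - t[i]
        if duration >= min_duration:
            fixations.append((t[i], duration))
            i = j + 1
        else:
            i += 1
    return fixations

# Hypothetical 60 Hz gaze stream: the gaze shifts to a new target after 1 s
t = np.arange(0, 2, 1 / 60)
x = np.where(t < 1.0, 5.0, 12.0) + np.random.default_rng(1).normal(0, 0.1, t.size)
y = np.full_like(t, 3.0) + np.random.default_rng(2).normal(0, 0.1, t.size)

for start, dur in detect_fixations(x, y, t):
    print(f"fixation at {start:.2f}s lasting {dur:.2f}s")
```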

Additionally, besides the eye tracker, other tools are used to evaluate cognitive engagement in children, such as self-report questionnaires and qualitative notes (Barana and Marchisio, 2020; Georgiou et al., 2021; Martinovic et al., 2016; Zhang et al., 2023). Self-report questionnaires can be easily administered to children in formats suited to their age and level of understanding. However, it is important to note that these tools are based on subjective responses and depend on children's reporting abilities, which can vary with age and individual experience. Therefore, integrating these questionnaires with objective tools, such as the eye tracker, can provide a more complete and accurate assessment of cognitive engagement in children during digital tasks (see Figure 4).

4.3 Engagement-performances relationship

Regarding the relationship between engagement measures and task performance, the reviewed literature was scarce and thus, despite its critical relevance to the construct, only limited conclusions can be drawn. The digital and robotic tasks used in the selected studies were developed primarily to entertain and amuse players, with an emphasis on immersion and enjoyment rather than on learning and performance. A few studies have shown that being emotionally, behaviorally, and cognitively engaged during a digital activity can lead to an improvement in children's performance or in a certain area of interest (Bartoli et al., 2013; Giannakos et al., 2020; Kosmas et al., 2019; Lee-Cultura et al., 2020, 2021, 2022; Martinovic et al., 2016; Reinhold et al., 2020, 2021; Sharma et al., 2022; Tisza et al., 2022). However, due to the limited number of findings available on the topic, it is not currently possible to draw definitive conclusions about this relationship. Therefore, although games and activities with robots can be effective in promoting engagement and maintaining children's interest during learning activities, it is important to critically evaluate how they can be integrated into educational contexts to maximize their impact on learning. This requires careful design to ensure that digital games are used effectively as tools to support learning.

This gap in research presents an opportunity for future investigation. Indeed, it would be beneficial to better understand how engagement affects learning and how different components of engagement correlate with performance, in order to personalize digital or robotic activities to each child's individual characteristics. Furthermore, a better understanding of this link could provide important information to optimize the design of digital tasks and improve the overall user experience.

The characteristics of the digital task can play a significant role in influencing children's engagement, and many studies have analyzed this aspect (Grawemeyer et al., 2017; Lee-Cultura et al., 2021; Sharma et al., 2022; Tisza et al., 2022). Compared with non-digital environments, children who interact with digital and robotic tasks tend to show more positive emotions, greater enjoyment, and a lower propensity for boredom. In agreement with the framework of embodied cognition, games that require more physical and bodily activation were more engaging from the behavioral point of view (Lee-Cultura et al., 2020, 2021). This underlines the importance of actively engaging both the body and the mind in educational experiences, which influences not only engagement during the activity but also the motivation to participate in the future and the enthusiasm to repeat the experience.

Lastly, in relation to the characteristics of digital games, one study highlighted that children preferred games that contained clear and concise instructions, objectives matching the player's skill level, appropriate use of sound and color, increasingly challenging and pleasantly frustrating gameplay, immediate feedback, and opportunities for success (Martinovic et al., 2016).

In summary, the characteristics of the digital task can have a significant impact on children's engagement during the activity, highlighting the importance of carefully considering the design of digital environments to maximize engagement and improve the overall user experience.

4.4 Potential implications and impact

To our knowledge, this is one of the first systematic reviews that synthesizes evidence on the conceptualizations and existing tools, measures, and variables used to measure emotional, behavioral, and cognitive engagement during children's performance on a digital task or robotic activity. Previous reviews have focused more on specific approaches to explore children's engagement, such as multimodal data (Crescenzi-Lanna, 2020b; Sharma and Giannakos, 2020), or explored engagement as a general construct without focusing on the different types and aspects of it (Mangaroska and Giannakos, 2018; Sharma and Giannakos, 2020). Conversely, the comprehensive nature of the current systematic review extends beyond the mere documentation of assessment methodologies to encompass an exploration of the wide array of variables considered in evaluating emotional (e.g., facial expression), cognitive (e.g., cognitive load, focused attention), and behavioral (e.g., motor movements) engagement. This synthesis holds substantial value, as it equips different stakeholders, including policy practitioners, researchers, and educators, with rich evidence on the methodologies that can be directly applied. Additionally, this synthesis highlights the importance of a multidimensional approach to engagement assessment (e.g., Fiorini et al., 2024), promoting a holistic understanding of the diverse ways that each type of engagement can be assessed and providing evidence for the development of more nuanced interventions tailored to children's needs.

The findings of the present review hold promise for facilitating future advancements, not only within the realm of research but also at a practical level, by supporting the development of tailored educational tools and interventions. When applied in a coordinated and synergic manner, technologies can provide immersive, interactive, and adaptive environments that meet the specific cognitive and sensory needs of each child, thus enhancing language acquisition, promoting social interaction, and increasing engagement during interventions, while ensuring accessibility and usability (Bhattacharya et al., 2015; Grawemeyer et al., 2017; Sridhar et al., 2018; Paneru and Paneru, 2024). Considering the interindividual differences in children's development, not all digital and robotic tools will be effective for everyone. Consequently, selecting adequate tools to gain insight into the specific elements that decrease motivation, elicit fatigue, or distract during digital tasks can prove valuable to personalize ICT and robotic tools to children's needs as much as possible.

Given this heterogeneity in the effectiveness of the tools, the use and development of cutting-edge technologies and Generative AI algorithms could prove particularly useful for the automated processing of large datasets and for identifying functioning clusters, thus enabling the identification of more appropriate measures of children's engagement, an essential aspect to ensure the reliability of the collected data.

It is thereby of the utmost importance for clinicians and researchers to pay attention to making treatments for children with special educational needs more enjoyable and sustainable over the long term. In fact, such information can find application in tele-assessment or tele-intervention settings, enabling an enhanced comprehension of children's behavior observed through webcams or during virtual interactions (Kheirollahzadeh et al., 2024). Despite promising initial findings, further research is needed to address challenges such as content customization, interface accessibility, and seamless technological integration. Optimizing these aspects may significantly improve the effectiveness of digital interventions and contribute to better developmental outcomes and overall quality of life for children with neurodevelopmental conditions (Paneru and Paneru, 2024).

Finally, it is worth noting that, although not directly investigated in the present review, the adoption of emerging technologies such as augmented reality (AR), virtual reality (VR), and generative artificial intelligence (AI) is transforming early childhood education by offering innovative tools to enhance learning, motivation, and personalization.

For example, AR applications can support language acquisition and increase engagement in young children through multisensory interaction (Demirdağ et al., 2024). Similarly, the structured use of these technologies in educational settings has been shown to foster deeper learning and improve perceived learning effectiveness (Demirdağ et al., 2024).

Nonetheless, it remains essential to address issues of accessibility and inclusive design (Ahmed, 2021), as well as to ensure the ethical and safe implementation of such technologies. The integration of comprehensive engagement monitoring measures may contribute significantly to achieving these goals, while also assuring the reliability of data processed by AI-based algorithms.

4.5 Limitations and future studies

Despite its strengths, the current review has several limitations that need to be acknowledged. Firstly, there is high heterogeneity across the reviewed studies, with each intervention targeting different objectives or learning outcomes (e.g., language, memory). This heterogeneity poses challenges in providing a cohesive analysis and synthesis of the reviewed findings. Secondly, as previously mentioned, the broad age range of the children in the reviewed studies, including adolescents, may introduce additional variability and affect the generalizability of the findings. Furthermore, there are no comparative studies across the different types of engagement and the methods most suitable to assess each of them. Another significant limitation is the limited number of studies that specifically investigate the relationship between engagement, performance, and task characteristics. This lack of research hinders the ability to draw solid, generalizable conclusions about the effect of engagement on performance. Lastly, there is a need for further research that includes children with atypical development, utilizes tools designed specifically for children, explores different settings, and focuses on customizable engagement measurements for more effective interventions and understanding of individual differences.

To address these needs, it is crucial to conduct more studies and research specifically focused on children with special needs or atypical neurodevelopment. This will provide a better understanding of their engagement patterns and enable the development of tailored interventions for these types of populations. In addition, increasing the sample size in future quantitative and mixed method studies is important to enhance the generalizability of findings and to capture a wider range of individual differences. Comparisons between different settings, such as laboratory settings and more ecologically valid environments (e.g., classrooms), should be conducted to examine the impact of external variables on engagement results in ecological settings. This will help determine the extent to which engagement measures are influenced by specific contextual factors and provide insights into the ecological validity of the findings. Furthermore, using a variety of instruments and measures to collect data on engagement will provide a more comprehensive assessment. Instead of focusing solely on one aspect of engagement, incorporating multiple dimensions and perspectives will contribute to a richer understanding of children's engagement experiences.

In this context, future reviews should consider studies that incorporate the latest advancements in technology, including AR, VR and generative AI-based approaches. These emerging technologies are significantly reshaping the environments in which children develop, presenting new opportunities for enhancing educational engagement, therapeutic interventions, and promoting social inclusion. By facilitating more immersive, adaptive, and individualized experiences, these technologies have considerable potential to support the development of both typically developing children and those with special educational needs. It will be essential to integrate these innovations into future research to comprehensively capture the evolving landscape of child-technology interactions and to inform the design of advanced measurement tools and interventions (Neugnot-Cerioli and Laurenty, 2024).

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author contributions

VM: Writing – original draft, Writing – review & editing. AM: Writing – original draft, Writing – review & editing. EB: Writing – original draft, Writing – review & editing. DG: Writing – original draft, Writing – review & editing. SS: Writing – original draft, Writing – review & editing. AG: Writing – original draft, Writing – review & editing. CP: Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that financial support was received for the research and/or publication of this article. This publication was produced with the co-funding of European Union—Next Generation EU, in the context of the National Recovery and Resilience Plan, Investment 1.5 Ecosystems of Innovation, Project Tuscany Health Ecosystem (THE), CUP: B83C22003920001.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that no Gen AI was used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Abbasi, M., Ghamoushi, M., and Mohammadi Zenouzagh, Z. (2023). EFL learners' engagement in online learning context: development and validation of potential measurement inventory. Universal Access Inf. Soc. 23, 1467–1481. doi: 10.1007/s10209-023-00993-0

Ahmed, M. E. K. (2021). Architectural education in light of the universal design approach. Int. Design J. 11, 121–126. doi: 10.21608/idj.2021.162503

Albertazzi, D., Ferreira, M. G. G., and Forcellini, F. A. (2019). A wide view on gamification. Technol. Knowl. Learn. 24, 191–202. doi: 10.1007/s10758-018-9374-z

Alexopoulou, A., Batsou, A., and Drigas, A. S. (2019). Effectiveness of assessment, diagnostic and intervention ICT tools for children and adolescents with ADHD. Int. J. Recent Contrib. Eng. Sci. IT 7, 51–63. doi: 10.3991/ijes.v7i3.11178

Anastopoulou, S., Sharples, M., and Baber, C. (2011). An evaluation of multimodal interactions with technology while learning science concepts. Br. J. Educ. Technol. 42, 266–290. doi: 10.1111/j.1467-8535.2009.01017.x

*Barana, A., and Marchisio, M. (2020). An interactive learning environment to empower engagement in Mathematics. Interact. Design Architect. 45, 302–321. doi: 10.55612/s-5002-045-014

Barco, A., Albo-Canals, J., and Garriga, C. (2014). “Engagement based on a customization of an iPod-LEGO robot for a long-term interaction for an educational purpose,” in Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction (New York, NY: Association for Computing Machinery (ACM)), 124–125. doi: 10.1145/2559636.2563697

*Bartoli, L., Corradi, C., Garzotto, F., and Valoriani, M. (2013). “Exploring motion-based touchless games for autistic children's learning,” in Proceedings of the 12th International Conference on Interaction Design and Children (New York, NY: Association for Computing Machinery (ACM)), 102–111. doi: 10.1145/2485760.2485774

*Bhattacharya, A., Gelsomini, M., Pérez-Fuster, P., Abowd, G. D., and Rozga, A. (2015). “Designing motion-based activities to engage students with autism in classroom settings,” in Proceedings of the 14th International Conference on Interaction Design and Children (New York, NY: Association for Computing Machinery (ACM)), 69–78. doi: 10.1145/2771839.2771847

Boucheix, J.-M., Lowe, R. K., Putri, D. K., and Groff, J. (2013). Cueing animations: dynamic signaling aids information extraction and comprehension. Learn. Instruct. 25, 71–84. doi: 10.1016/j.learninstruc.2012.11.005

Bouta, H., and Retalis, S. (2013). Enhancing primary school children collaborative learning experiences in maths via a 3D virtual environment. Educ. Inf. Technol. 18, 571–596. doi: 10.1007/s10639-012-9198-8

Boyle, E. A., Connolly, T. M., Hainey, T., and Boyle, J. M. (2012). Engagement in digital entertainment games: a systematic review. Comput. Human Behav. 28, 771–780. doi: 10.1016/j.chb.2011.11.020

Breien, F. S., and Wasson, B. (2021). Narrative categorization in digital game-based learning: engagement, motivation and learning. Br. J. Educ. Technol. 52, 91–111. doi: 10.1111/bjet.13004

Brewer, R., Anthony, L., Brown, Q., Irwin, G., Nias, J., and Tate, B. (2013). “Using gamification to motivate children to complete empirical studies in lab environments,” in Proceedings of the 12th International Conference on Interaction Design and Children (New York, NY: Association for Computing Machinery (ACM)), 388–391. doi: 10.1145/2485760.2485816

Checa Romero, M., and Jiménez Lozano, J. M. (2025). Video games and metacognition in the classroom for the development of 21st century skills: a systematic review. Front. Educ. 9:1485098. doi: 10.3389/feduc.2024.1485098

Chen, P.-S. D., Lambert, A. D., and Guidry, K. R. (2010). Engaging online learners: the impact of Web-based learning technology on college student engagement. Comput. Educ. 54, 1222–1232. doi: 10.1016/j.compedu.2009.11.008

Clark, D. B., Tanner-Smith, E. E., and Killingsworth, S. S. (2016). Digital games, design, and learning: a systematic review and meta-analysis. Rev. Educ. Res. 86, 79–122. doi: 10.3102/0034654315582065

Cohn, J. F., Ambadar, Z., and Ekman, P. (2007). “Observer-based measurement of facial expression with the Facial Action Coding System,” in The Handbook of Emotion Elicitation and Assessment, Vol. 1, eds. J. A. Coan and J. J. B. Allen (New York, NY: Oxford University Press), 203-221. doi: 10.1093/oso/9780195169157.003.0014

*Crescenzi-Lanna, L. (2020a). Emotions, private speech, involvement and other aspects of young children's interactions with educational apps. Comput. Human Behav. 111:106430. doi: 10.1016/j.chb.2020.106430

Crescenzi-Lanna, L. (2020b). Multimodal learning analytics research with young children: a systematic review. Br. J. Educ. Technol. 51, 1485–1504. doi: 10.1111/bjet.12959

Cunningham, P. (2014). Patient engagement during medical visits and smoking cessation counseling. JAMA Intern. Med. 174:1291. doi: 10.1001/jamainternmed.2014.2170

De Aguilera, M., and Mendiz, A. (2003). Video games and education: (education in the face of a “parallel school”). Comput. Entertain. 1, 1–10. doi: 10.1145/950566.950583

Demirdağ, M. C., Kucuk, S., and Tasgin, A. (2024). An investigation of the effectiveness of augmented reality technology supported English language learning activities on preschool children. Int. J. Hum. Comput. Interact. 41, 1–14. doi: 10.1080/10447318.2024.2323278

Deterding, S., Dixon, D., Khaled, R., and Nacke, L. (2011). “From game design elements to gamefulness: defining ‘gamification',” in MindTrek'11 Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments (New York, NY: ACM), 9–15. doi: 10.1145/2181037.2181040

Di Lieto, M. C., Castro, E., Pecini, C., Inguaggiato, E., Cecchi, F., Dario, P., et al. (2020). Improving executive functions at school in children with special needs by educational robotics. Front. Psychol. 10:2813. doi: 10.3389/fpsyg.2019.02813

Di Lieto, M. C., Pecini, C., Brovedani, P., Sgandurra, G., Dell'Omo, M., Chilosi, A. M., et al. (2021). Adaptive working memory training can improve executive functioning and visuo-spatial skills in children with pre-term spastic diplegia. Front. Neurol. 11:601148. doi: 10.3389/fneur.2020.601148

Doherty, K., and Doherty, G. (2019). Engagement in HCI: conception, theory and measurement. ACM Comput. Surv. 51, 1–39. doi: 10.1145/3234149

Drigas, A., Kokkalia, G., and Lytras, M. D. (2015). ICT and collaborative co-learning in preschool children who face memory difficulties. Comput. Human Behav. 51, 645–651. doi: 10.1016/j.chb.2015.01.019

*Drljević, N., Botički, I., and Wong, L. H. (2024). Observing student engagement during augmented reality learning in early primary school. J. Comput. Educ. 11, 181–213. doi: 10.1007/s40692-022-00253-9

Duradoni, M., Serritella, E., Avolio, C., Arnetoli, C., and Guazzini, A. (2022). Development and validation of the digital life balance (DLB) scale: a brand-new measure for both harmonic and disharmonic use of ICTs. Behav. Sci. 12:489. doi: 10.3390/bs12120489

Dykstra Steinbrenner, J. R., and Watson, L. R. (2015). Student engagement in the classroom: the impact of classroom, teacher, and student factors. J. Autism Dev. Disord. 45, 2392–2410. doi: 10.1007/s10803-015-2406-9

Emerson, A., Cloude, E. B., Azevedo, R., and Lester, J. (2020). Multimodal learning analytics for game-based learning. Br. J. Educ. Technol. 51, 1505–1526. doi: 10.1111/bjet.12992

Ferrer, F., Belvís, E., and Pàmies, J. (2011). Tablet PCs, academic results and educational inequalities. Comput. Educ. 56, 280–288. doi: 10.1016/j.compedu.2010.07.018

Fiorini, L., Scatigna, S., Pugi, L., Adelucci, E., Cavallo, F., Bruni, A., et al. (2024). “Exploring emotional and cognitive engagement in school age children: an in-depth analysis of interaction with the NAO social robot during the storytelling activity,” in Italian Forum of Ambient Assisted Living (Cham: Springer), 243–252. doi: 10.1007/978-3-031-77318-1_16

Fredricks, J. A., Blumenfeld, P. C., and Paris, A. H. (2004). School engagement: potential of the concept, state of the evidence. Rev. Educ. Res. 74, 59–109. doi: 10.3102/00346543074001059

Gallini, J. K., and Barron, D. (2001). Participants' perceptions of web-infused environments: a survey of teaching beliefs, learning approaches, and communication. J. Res. Technol. Educ. 34, 139–156. doi: 10.1080/15391523.2001.10782341

*Georgiou, Y., Ioannou, A., and Kosmas, P. (2021). Comparing a digital and a non-digital embodied learning intervention in geometry: can technology facilitate? Technol. Pedagogy Educ. 30, 345–363. doi: 10.1080/1475939X.2021.1874501

*Giannakos, M. N., Papavlasopoulou, S., and Sharma, K. (2020). Monitoring children's learning through wearable eye-tracking: the case of a making-based coding activity. IEEE Pervasive Comput. 19, 10–21. doi: 10.1109/MPRV.2019.2941929

Goldin-Meadow, S., Cook, S. W., and Mitchell, Z. A. (2009). Gesturing gives children new ideas about math. Psychol. Sci. 20, 267–272. doi: 10.1111/j.1467-9280.2009.02297.x

*Gong, J., Han, T., Guo, S., Li, J., Zha, S., Zhang, L., et al. (2021). “Holoboard: a large-format immersive teaching board based on pseudo holographics,” in The 34th Annual ACM Symposium on User Interface Software and Technology (New York, NY: Association for Computing Machinery (ACM)), 441–456. doi: 10.1145/3472749.3474761

*Grawemeyer, B., Mavrikis, M., Holmes, W., Gutiérrez-Santos, S., Wiedmann, M., and Rummel, N. (2017). Affective learning: improving engagement and enhancing learning with affect-aware feedback. User Model. User-adapt. Interact. 27, 119–158. doi: 10.1007/s11257-017-9188-z

Groccia, J. E. (2018). “What is student engagement?” in New Directions for Teaching and Learning 2018 (Hoboken, NJ: Wiley-Blackwell), 11–20. doi: 10.1002/tl.20287

Henrie, C. R., Halverson, L. R., and Graham, C. R. (2015). Measuring student engagement in technology-mediated learning: a review. Comput. Educ. 90, 36–53. doi: 10.1016/j.compedu.2015.09.005

Hong, Q. N., Fàbregues, S., Bartlett, G., Boardman, F., Cargo, M., Dagenais, P., et al. (2018). The Mixed Methods Appraisal Tool (MMAT) version 2018 for information professionals and researchers. Educ. Inform. 34, 285–291. doi: 10.3233/EFI-180221

Horvat, M. A. (1982). Effect of a home learning program on learning disabled children's balance. Percept. Mot. Skills. 55:1158. doi: 10.2466/pms.1982.55.3f.1158

Hu, S., and Kuh, G. D. (2002). Being (dis)engaged in educationally purposeful activities: the influences of student and institutional characteristics. Res. High. Educ. 43, 555–575. doi: 10.1023/A:1020114231387

Hui, L. T., Hoe, L. S., Ismail, H., Foon, N. H., and Michael, G. K. O. (2014). “Evaluate children learning experience of multitouch flash memory game,” in 2014 4th World Congress on Information and Communication Technologies (WICT 2014) (Piscataway, NJ: IEEE), 97–101. doi: 10.1109/WICT.2014.7077309

Islas Sedano, C., Leendertz, V., Vinni, M., Sutinen, E., and Ellis, S. (2013). Hypercontextualized learning games: fantasy, motivation, and engagement in reality. Simul. Gam. 44, 821–845. doi: 10.1177/1046878113514807

Jaimes, A., Lalmas, M., and Volkovich, Y. (2011). First international workshop on social media engagement (SoME 2011). ACM SIGIR Forum 45, 56–62. doi: 10.1145/1988852.1988863

Kheirollahzadeh, M., Azad, A., Saneii, S. H., and Alizadeh Zarei, M. (2024). Comparing telerehabilitation and in-person interventions in school-based occupational therapy for specific learning disorder a randomized controlled trial. Iran. J. Child Neurol. 18, 83–101. doi: 10.22037/ijcn.v18i2.43985

*Kosmas, P., Ioannou, A., and Retalis, S. (2018). Moving bodies to moving minds: a study of the use of motion-based games in special education. TechTrends 62, 594–601. doi: 10.1007/s11528-018-0294-5

*Kosmas, P., Ioannou, A., and Zaphiris, P. (2019). Implementing embodied learning in the classroom: effects on children's memory and language skills. EMI. Educ. Media Int. 56, 59–74. doi: 10.1080/09523987.2018.1547948

*Kourakli, M., Altanis, I., Retalis, S., Boloudakis, M., Zbainos, D., and Antonopoulou, K. (2017). Towards the improvement of the cognitive, motoric and academic skills of students with special educational needs using Kinect learning games. Int. J. Child Comput. Interact. 11, 28–39. doi: 10.1016/j.ijcci.2016.10.009

LAK (2011). 1st International Conference on Learning Analytics and Knowledge. Banff, AB. Available online at: http://www.wikicfp.com/cfp/servlet/event.showcfp?eventid=11606

Lee, J.-S. (2014). The relationship between student engagement and academic performance: is it a myth or reality? J. Educ. Res. 107, 177–185. doi: 10.1080/00220671.2013.807491

*Lee-Cultura, S., Sharma, K., Cosentino, G., Papavlasopoulou, S., and Giannakos, M. (2021). “Children's play and problem solving in motion-based educational games: synergies between human annotations and multi-modal data,” in Interaction Design and Children (New York, NY: Association for Computing Machinery (ACM)), 408–420. doi: 10.1145/3459990.3460702

*Lee-Cultura, S., Sharma, K., and Giannakos, M. (2022). Children's play and problem-solving in motion-based learning technologies using a multi-modal mixed methods approach. Int. J. Child Comput. Interact. 31:100355. doi: 10.1016/j.ijcci.2021.100355

*Lee-Cultura, S., Sharma, K., Papavlasopoulou, S., Retalis, S., and Giannakos, M. (2020). “Using sensing technologies to explain children's self-representation in motion-based educational games,” in Proceedings of the Interaction Design and Children Conference New York, NY: Association for Computing Machinery (ACM), 541–555. doi: 10.1145/3392063.3394419

Lindgren, R., Tscholl, M., Wang, S., and Johnson, E. (2016). Enhancing learning and engagement through embodied interaction within a mixed reality simulation. Comput. Educ. 95, 174–187. doi: 10.1016/j.compedu.2016.01.001

Macfadyen, L. P., and Dawson, S. (2010). Mining LMS data to develop an “early warning system” for educators: a proof of concept. Comput. Educ. 54, 588–599. doi: 10.1016/j.compedu.2009.09.008

Macklem, G. L. (2015). Boredom in the Classroom: Addressing Student Motivation, Self-regulation, and Engagement in Learning, Vol. 1. Germany: Springer International Publishing. doi: 10.1007/978-3-319-13120-7

Mangaroska, K., and Giannakos, M. (2018). Learning analytics for learning design: a systematic literature review of analytics-driven design to enhance learning. IEEE Transac. Learn. Technol. 12, 516–534. doi: 10.1109/TLT.2018.2868673

*Martinovic, D., Burgess, G. H., Pomerleau, C. M., and Marin, C. (2016). Computer games that exercise cognitive skills: what makes them engaging for children? Comput. Human Behav. 60, 451–462. doi: 10.1016/j.chb.2016.02.063

Nadeau, M. F., Massé, L., Argumedes, M., and Verret, C. (2020). Education for students with neurodevelopmental disabilities-resources and educational adjustments. Handb. Clin. Neurol. 174, 369–378. doi: 10.1016/B978-0-444-64148-9.00027-2

Nahum-Shani, I., Shaw, S. D., Carpenter, S. M., Murphy, S. A., and Yoon, C. (2022). Engagement in digital interventions. Am. Psychol. 77, 836–852. doi: 10.1037/amp0000983

Neugnot-Cerioli, M., and Laurenty, O. M. (2024). The future of child development in the AI era. Cross-disciplinary perspectives between AI and child development experts. arXiv preprint arXiv:2405.19275. doi: 10.48550/arXiv.2405.19275

Noorhidawati, A., Ghalebandi, S. G., and Hajar, R. S. (2015). How do young children engage with mobile apps? Cognitive, psychomotor, and affective perspective. Comput. Educ. 87, 385–395. doi: 10.1016/j.compedu.2015.07.005

O'Brien, H. L., and Toms, E. G. (2008). What is user engagement? A conceptual framework for defining user engagement with technology. J. Am. Soc. Inf. Sci. Technol. 59, 938–955. doi: 10.1002/asi.20801

Ocumpaugh, J. (2012). Baker-Rodrigo Observation Method Protocol (BROMP) 1.0 Training Manual (version 1.0 Oct. 17, 2012). Chicago, IL: Institute of Learning Sciences, University of Illinois at Chicago.

O'Farrell, S. L., and Morrison, G. M. (2003). A factor analysis exploring school bonding and related constructs among upper elementary students. Calif. Sch. Psychol. 8, 53–72. doi: 10.1007/BF03340896

*Ouherrou, N., Elhammoumi, O., Benmarrakchi, F., and El Kafi, J. (2019). Comparative study on emotions analysis from facial expressions in children with and without learning disabilities in virtual learning environment. Educ. Inf. Technol. 24, 1777–1792. doi: 10.1007/s10639-018-09852-5

Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., et al. (2021). Updating guidance for reporting systematic reviews: development of the PRISMA 2020 statement. J. Clin. Epidemiol. 134, 103–112. doi: 10.1016/j.jclinepi.2021.02.003

Paneru, B., and Paneru, B. (2024). The Nexus of AR/VR, large language models, UI/UX, and robotics technologies in enhancing learning and social interaction for children: a systematic review. arXiv preprint arXiv:2409.18162

Paneru, B., Shah, K. B., Shrestha, A., Poudyal, R., and Poudyal, K. N. (2024). Autism spectrum disorder prediction using hybrid deep learning model and a recommendation system application for autistic patient. J. Comput. Sci. 20, 1040–1050. doi: 10.3844/jcssp.2024.1040.1050

Pecini, C., Spoglianti, S., Bonetti, S., Di Lieto, M. C., Guaran, F., Martinelli, A., et al. (2019). Training RAN or reading? A telerehabilitation study on developmental dyslexia. Dyslexia 25, 318–331. doi: 10.1002/dys.1619

Price, L., Richardson, J. T. E., and Jelfs, A. (2007). Face-to-face versus online tutoring support in distance education. Stud. Higher Educ. 32, 1–20. doi: 10.1080/03075070601004366

*Reinhold, F., Hoch, S., Schiepe-Tiska, A., and Reiss, K. (2021). Motivational and emotional orientation, engagement, and achievement in mathematics. A case study with one sixth-grade classroom working with an electronic textbook on fractions. Front. Educ. 6:588472. doi: 10.3389/feduc.2021.588472

*Reinhold, F., Strohmaier, A., Hoch, S., Reiss, K., Böheim, R., and Seidel, T. (2020). Process data from electronic textbooks indicate students' classroom engagement. Learn. Individ. Differ. 83–84:101934. doi: 10.1016/j.lindif.2020.101934

Ribeiro Silva, L., Maciel Toda, A., Chalco Challco, G., Chamel Elias, N., Ibert Bittencourt, I., and Isotani, S. (2024). Effects of a collaborative gamification on learning and engagement of children with Autism. Universal Access Inf. Soc. 24, 911–932. doi: 10.1007/s10209-024-01119-w

Rich, B. L., Lepine, J. A., and Crawford, E. R. (2010). Job engagement: antecedents and effects on job performance. Acad. Manage. J. 53, 617–635. doi: 10.5465/amj.2010.51468988

Rivella, C., Ruffini, C., Bombonato, C., Capodieci, A., Frascari, A., Marzocchi, G. M., et al. (2023). TeleFE: a new tool for the tele-assessment of executive functions in children. Appl. Sci. 13:1728. doi: 10.3390/app13031728

Ronimus, M., Kujala, J., Tolvanen, A., and Lyytinen, H. (2014). Children's engagement during digital game-based learning of reading: the effects of time, rewards, and challenge. Comput. Educ. 71, 237–246. doi: 10.1016/j.compedu.2013.10.008

Ruffini, C., Tarchi, C., Morini, M., Giuliano, G., and Pecini, C. (2022). Tele-assessment of cognitive functions in children: a systematic review. Child Neuropsychol. 28, 709–745. doi: 10.1080/09297049.2021.2005011

Sandlund, M., McDonough, S., and Häger-Ross, C. (2009). Interactive computer play in rehabilitation of children with sensorimotor disorders: a systematic review. Dev. Med. Child Neurol. 51, 173–179. doi: 10.1111/j.1469-8749.2008.03184.x

Sharma, K., and Giannakos, M. (2020). Multimodal data capabilities for learning: what can multimodal data tell us about learning? Br. J. Educ. Technol. 51, 1450–1484. doi: 10.1111/bjet.12993

Sharma, K., Giannakos, M., and Dillenbourg, P. (2020). Eye-tracking and artificial intelligence to enhance motivation and learning. Smart Learn. Environ. 7:13. doi: 10.1186/s40561-020-00122-x

*Sharma, K., Lee-Cultura, S., and Giannakos, M. (2022). Keep calm and do not carry-forward: toward sensor-data driven AI agent to enhance human learning. Front. Artif. Intell. 4:198. doi: 10.3389/frai.2021.713176

*Sharma, K., Papavlasopoulou, S., and Giannakos, M. (2019). “Joint emotional state of children and perceived collaborative experience in coding activities,” in Proceedings of the 18th ACM International Conference on Interaction Design and Children (New York, NY: Association for Computing Machinery (ACM)), 133–145. doi: 10.1145/3311927.3323145

Sidner, C. L., Lee, C., Kidd, C. D., Lesh, N., and Rich, C. (2005). Explorations in engagement for humans and robots. Artif. Intell. 166, 140–164. doi: 10.1016/j.artint.2005.03.005

Sinclair, M. F., Christenson, S. L., Lehr, C. A., and Anderson, A. R. (2003). Facilitating student engagement: lessons learned from Check & Connect longitudinal studies. Calif. Sch. Psychol. 8, 29–41. doi: 10.1007/BF03340894

Skinner, E., Furrer, C., Marchand, G., and Kindermann, T. (2008). Engagement and disaffection in the classroom: part of a larger motivational dynamic? J. Educ. Psychol. 100, 765–781. doi: 10.1037/a0012840

*Sridhar, P. K., Chan, S. W., and Nanayakkara, S. (2018). “Going beyond performance scores: understanding cognitive-affective states in kindergarteners,” in Proceedings of the 17th ACM Conference on Interaction Design and Children (New York, NY: Association for Computing Machinery (ACM)), 253–265. doi: 10.1145/3202185.3202739

Stephen, C., McPake, J., Plowman, L., and Berch-Heyman, S. (2008). Learning from the children: exploring preschool children's encounters with ICT at home. J. Early Childhood Res. 6, 99–117. doi: 10.1177/1476718X08088673

*Tisza, G., and Markopoulos, P. (2021). Understanding the role of fun in learning to code. Int. J. Child Comput. Interact. 28:100270. doi: 10.1016/j.ijcci.2021.100270

*Tisza, G., and Markopoulos, P. (2023). FunQ: measuring the fun experience of a learning activity with adolescents. Curr. Psychol. 42, 1936–1956. doi: 10.1007/s12144-021-01484-2

Tisza, G., Sharma, K., Papavlasopoulou, S., Markopoulos, P., and Giannakos, M. (2022). “Understanding fun in learning to code: a multi-modal data approach,” in Interaction Design and Children (New York, NY: Association for Computing Machinery (ACM)), 274–287. doi: 10.1145/3501712.3529716

Tobii (2023). Tobii Pro Glasses 3: Latest in Wearable Eye Tracking. Tobii. Available online at: https://www.tobii.com/products/eye-trackers/wearables/tobii-pro-glasses-3 (Accessed April 21, 2023).

Vedechkina, M., and Borgonovi, F. (2021). A review of evidence on the role of digital technology in shaping attention and cognitive control in children. Front. Psychol. 12:611155. doi: 10.3389/fpsyg.2021.611155

*Wen, Y. (2021). Augmented reality enhanced cognitive engagement: designing classroom-based collaborative learning activities for young language learners. Educ. Technol. Res. Dev. 69, 843–860. doi: 10.1007/s11423-020-09893-z

Whitton, N. J. (2018). Playful learning: tools, techniques, and tactics. Res. Learn. Technol. 26, 1–12. doi: 10.25304/rlt.v26.2035

Yang, Y.-F. (2011). Engaging students in an online situated language learning environment. Comput. Assist. Lang. Learn. 24, 181–198. doi: 10.1080/09588221.2010.538700

*Zhang, X., Chen, Y., Li, D., Hu, L., Hwang, G.-J., and Tu, Y.-F. (2023). Engaging young students in effective robotics education: an embodied learning-based computer programming approach. J. Educ. Comput. Res. 62, 532–558. doi: 10.1177/07356331231213548

Zivin, G. (1979). The Development of Self-regulation Through Private Speech, Vol. 6. New York, NY: Wiley.

Keywords: child, engagement, learning analytics, digital task, robotic task, games, child-computer interaction

Citation: Margheri V, Martucci A, Bei E, Graziani D, Scatigna S, Guazzini A and Pecini C (2025) Child engagement during interaction with digital and robotic activities: a systematic review. Front. Educ. 10:1568028. doi: 10.3389/feduc.2025.1568028

Received: 28 January 2025; Accepted: 02 September 2025;
Published: 30 September 2025.

Edited by:

Amna Mirza, University of Delhi, India

Reviewed by:

Grazia Maria Giovanna Pastorino, University Magna Graecia of Catanzaro, Italy
Katriina Heljakka, University of Turku, Finland

Copyright © 2025 Margheri, Martucci, Bei, Graziani, Scatigna, Guazzini and Pecini. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Chiara Pecini, chiara.pecini@unifi.it

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.