
ORIGINAL RESEARCH article

Front. Educ., 02 February 2026

Sec. Higher Education

Volume 11 - 2026 | https://doi.org/10.3389/feduc.2026.1677494

This article is part of the Research Topic: Enhancing Learning with Online Educational Videos in the Web 2.0 Era: Learner Engagement, Learning Processes and Outcomes.

From data to teaching: video lessons and learning analytics in blended university contexts

Federica Pelizzari1*, Elena Tassalini2 and Flavia Maria Scott2
  • 1CREMIT, Catholic University of the Sacred Heart, Milan, Italy
  • 2ILAB, Catholic University of the Sacred Heart, Milan, Italy

This study investigates how students use video lessons within a blended master's degree program (LM-93), addressing a persistent research gap: most studies investigate perceived usefulness or self-reported engagement, whereas far fewer analyze trace-based behavioral data in authentic higher education contexts. Understanding real usage patterns is crucial, as pedagogical effectiveness depends not only on content quality but also on how students interact with video-based materials. Drawing on evidence-based research on educational innovation, we analyzed 5,278 viewing sessions tracked through the Panopto platform across 14 courses over two academic years. A multi-step quantitative approach was applied, combining descriptive statistics, correlations, manual segmentation, and K-means clustering (Jain, 2010). Results show substantial heterogeneity in viewing behaviors: for example, only 18% of sessions fall into an "in-depth" profile, characterized by long viewing time, high completion, and frequent interactions. Four stable behavioral profiles emerged (in-depth, fast, partial, and discontinuous), highlighting diverse strategies in time management and engagement. These findings suggest that asynchronous video lessons can support engagement and self-regulation for a subset of students, but also risk reinforcing superficial or fragmented use for others. The study offers actionable implications for instructional design and inclusivity in blended learning: designing shorter and more navigable video lessons, integrating checkpoints and formative feedback, and using Learning Analytics dashboards to support data-informed and Universal Design for Learning–oriented practices. Limitations include the single-institution context, reliance on platform logs, and the absence of qualitative triangulation, which point to future research integrating behavioral, outcome, and qualitative data. The study also highlights the need for transparent and ethically informed uses of Learning Analytics when interpreting behavioral traces.

1 Introduction

Over the last decade, and especially after the COVID-19 pandemic, video lessons have become a central component of university teaching, both in fully online and in blended models. Students increasingly rely on recorded lectures to catch up on missed sessions, review complex concepts, and organize their study time more flexibly. Evidence from institutional reports indicates a steady growth in video usage in higher education, accompanied by expectations of increased accessibility, personalization, and efficiency in learning. However, the pedagogical effectiveness of this tool is not automatic; it depends on multiple factors related to design, use, and context. Despite this widespread adoption, our understanding of how students engage with video lessons in authentic blended university contexts remains limited. Most existing literature focuses on perceived usefulness, satisfaction, or overall course outcomes, rather than on detailed behavioral data such as viewing duration, completion rates, playback speed, or interaction patterns (Means et al., 2014; Rienties et al., 2016). This creates a significant gap between the promise of video-based blended learning and empirical evidence on real usage behaviors, especially in European master's programs. Addressing this gap is crucial for both research and practice. It can inform more realistic models of student engagement, self-regulation, and instructional design in blended environments (Boelens et al., 2017).

This study is part of an increasingly important line of research investigating the potential of video lessons as a lever for educational innovation and student engagement. It focuses specifically on their integration in blended environments at the university level. Through the systematic multi-level analysis of tracking data related to video viewing, we aim to explore students’ actual behavior, identify differences between courses, and identify emerging usage profiles. The goal is to provide operational guidelines for designing data-driven learning experiences.

2 Theoretical framework

The adoption of video lessons in university contexts reflects a broader rethinking of teaching practices. This shift is linked to the spread of blended models and the need to respond to growing demand for flexibility, personalization, and accessibility (Bransford et al., 2000). In this scenario, video lessons serve as key tools for activating asynchronous learning paths that integrate synchronous and face-to-face moments, promoting greater organizational and cognitive autonomy among students (Garrison and Vaughan, 2008). From an instructional alignment perspective, video lessons designed and analyzed using data can serve as significant hubs for authentic, flexible, and monitorable learning (Gašević et al., 2017). To fully understand the potential and critical issues of video lessons in blended environments, we must place them in a multidimensional conceptual framework that considers pedagogical, cognitive, technological, and design variables. We detail four fundamental theoretical axes below: (1) the video lesson as a designed pedagogical object, (2) the asynchronous experience and self-regulated learning, (3) the evidence-based approach through Learning Analytics, and (4) personalization, inclusivity, and Universal Design for Learning. Each axis addresses a distinct dimension of how video lessons function in blended learning, while together they form an integrated foundation for our study.

2.1 The video lesson as a designed pedagogical object

The first dimension concerns the educational design of video lessons. Within blended learning, video lessons have become a central component of course design, with research documenting a substantial increase in the use of digital learning resources and educational videos in higher education. Students frequently combine institutional platforms with open resources such as YouTube to complement curriculum delivery. Systematic and scoping reviews demonstrate that integrating video into existing instructional methods tends to improve learning outcomes when videos are pedagogically aligned, clearly structured, and integrated with other activities rather than used as isolated content. However, most research still focuses on perceived effectiveness or experimental comparisons between formats, while fewer studies examine large-scale log data on how students distribute, sequence, and personalize their viewing across a real blended curriculum (Almasi and Zhu, 2019). Moving beyond aggregate measures requires us to map diverse patterns of use over time and across courses. Research emphasizes that video lesson effectiveness depends on specific design choices: optimal duration (10–20 min), clarity of communication, content segmentation, intentional use of images and words, and possible inclusion of the instructor in the video (Guo et al., 2014). According to Mayer's (2009) Theory of Multimedia Learning, learning is most effective when verbal and visual information integrate coherently and non-redundantly, following principles such as contiguity, segmentation, and signaling. Cognitive load theory (Sweller, 1988) is equally crucial: poorly designed video lessons can generate cognitive overload, which hinders deep processing.

Instructional Design (Thalheimer, 2017) principles provide the foundation for treating video lessons not as mere "recorded content" but as training artifacts. Engagement elements, such as narration, personal tone, rhetorical questions, and reflective pauses, should be integrated throughout to stimulate active and conscious participation (Clark and Mayer, 2016). These elements become even more important in asynchronous contexts lacking direct teacher interaction. The effectiveness of asynchronous learning also depends on perceived teaching presence: the ability of the instructor to make themselves visible, available, and approachable through pre-recorded materials. Derived from the Community of Inquiry Framework (Garrison et al., 2000), this concept calls for video lessons that include "teaching presence" elements such as human interfaces, communicative tone, and consistency between videos and related activities. Recent work on hypervideos demonstrates how interactive architecture can support more flexible navigation, multimodal representation, and data-informed instructional design (Carenzio et al., 2025). In this study, we examine whether design features related to these principles (such as video duration, segmentation, and interactive elements) correlate with student viewing patterns and engagement.

2.2 The asynchronous experience and self-regulated learning models

Building on these design considerations, the second axis addresses how students autonomously engage with video lessons. Self-regulated learning (SRL) refers to learners’ ability to plan, monitor, and reflect on their own learning processes, a capacity crucial in technology-enhanced and blended environments (Zimmerman, 2002; Bandura, 1986). The ability to access video lessons at any time and from any location introduces a new balance between freedom and responsibility: autonomy is valued but can also lead to decision overload (Kirschner et al., 2006). In these contexts, students continuously decide when, how, and to what extent they engage with digital resources. Metacognitive strategies, such as study planning, self-monitoring, and post-viewing reflection, become central to effective learning (Panadero, 2017). Video lessons designed with checkpoints, guided breaks, activating questions, or support materials can facilitate these processes, helping students develop cross-curricular skills related to self-directed learning (Fredricks et al., 2004).

Research on flipped classrooms and blended courses reveals that simply providing video resources does not guarantee deep learning. Learners need structured guidance, clear expectations, and scaffolds to avoid procrastination and superficial viewing (Means et al., 2014; Rienties et al., 2016; Boelens et al., 2017). Effective video lessons are not those that passively transmit content, but those that include moments of cognitive activation (reflective pauses, questions, invitations to perform tasks) that support self-regulated learning processes (Czerkawski and Lyman, 2016). Recent reviews of SRL in online and hybrid settings (Bergdahl et al., 2024) highlight how tools such as learning dashboards, video platforms, and Learning Management Systems (LMS) provide both opportunities and challenges. These systems can scaffold goal setting and monitoring, but they also place higher demands on learners for self-management, especially when instruction is largely asynchronous. Analyzing concrete traces of how students use video lessons (such as pacing, revisiting key segments, or adjusting playback speed) offers a window into their self-regulatory strategies (Chen and Wu, 2015). In this study, we examine these behavioral traces to infer students' regulatory approaches and their relationship to learning outcomes.

2.3 The evidence-based approach through learning analytics

Analyzing student behavior requires robust methodological tools. Learning Analytics (LA) provides these by transforming digital traces generated by students’ interactions with video platforms (clickstream data, viewing duration, playback speed, and interaction events) into aggregate indicators and models that inform teaching and course design (Siemens and Long, 2011; Ferguson, 2012). Raw behavioral data thus serve not as an end in themselves but as a starting point for evidence-informed teaching. This approach enables identification of usage patterns, behavioral profiles, critical moments of dropout or acceleration, and correlations with educational success. Automated clustering techniques (such as K-means) or manual segmentation allow us to differentiate student approaches, moving from an “average student” logic to a more fine-grained and personalized view (Papamitsiou and Economides, 2014).

However, LA must develop within robust ethical frameworks. As Drachsler and Greller (2016) note, analyzing digital traces requires balancing potential pedagogical improvement with protection of students’ rights. In this study, we adopt a deliberately restrictive interpretation: we anonymize log data, reduce them to a limited set of behavioral indicators, and use them exclusively to understand and improve teaching and learning (Pan et al., 2024). Rather than aiming at predictive risk models or individual monitoring, we focus on describing collective patterns and deriving design-relevant insights for instructors. We adhere to the following operational principles:

- Transparency: we inform students about what data we collect, for what purposes, and how they may benefit from analysis;

- Anonymization and data minimization: we store and process only information strictly necessary for educational purposes, in forms that do not allow individual identification wherever possible;

- Educational finality: we use data exclusively to support teaching, learning, and student support, never for surveillance or punitive purposes;

- Data literacy: we engage learners in understanding and reflecting on their own data through dashboards or feedback, helping them interpret behaviors and make more informed study choices.

This work positions itself within the broader movement toward evidence-based teaching in higher education, which advocates systematic use of empirical findings to inform instructional decisions while acknowledging data’s limits. We do not assume that behavioral profiles automatically correspond to “deep learning,” but rather ask what we can reasonably infer from observable video usage patterns and how these insights can refine blended course design in a cautious, context-sensitive way.

2.4 Personalization, inclusivity, and universal design for learning

The final theoretical axis extends from individual learning patterns to broader considerations of equity and accessibility. Universal Design for Learning (UDL), as outlined in the CAST (2018, 2024) guidelines, provides a framework for proactively designing learning environments that reduce barriers and support diverse learners. Rather than a fixed checklist of techniques, UDL conceptualizes design principles aimed at fostering learner agency that is purposeful, resourceful, and strategic. We apply UDL’s three fundamental principles directly to video lesson design:

- Multiple means of representation: Video lessons should include captions, transcripts, and accessible visual supports addressing diverse sensory and linguistic needs. We present information in varied formats (text, video, images, maps), enabling learners to access content through their preferred modalities;

- Multiple means of action and expression: We provide learners opportunities to demonstrate understanding in different ways through integrated low-stakes quizzes, reflection prompts, or short tasks. We organize videos into short, clearly labeled segments with navigation cues, enabling learners to flexibly control pacing and focus;

- Multiple means of engagement: We design for choice in pacing and pathway, supporting self-regulation. Learners can flexibly control their viewing experience by adjusting pace, revisiting segments, and using interactive elements.

UDL serves in this study as a heuristic lens to interpret behavioral patterns and formulate design implications rather than as a fully operationalized intervention (Almeqdad et al., 2023). We acknowledge critical discussions on UDL operationalization, particularly the risk of arbitrary or superficial alignment with checkpoints. We describe in detail how these principles informed our design recommendations without claiming exhaustive implementation (AlRawi et al., 2021). Inclusivity is not merely a regulatory or ethical requirement but a criterion of pedagogical quality enabling us to reach a wider, more diverse student audience (Liu and Khalil, 2023). Inclusion reflects a broader vision of pedagogical flexibility capable of responding to the diversity of cognitive styles, socio-technological conditions, and learning preferences (Espada-Chavarría et al., 2023). In this study, we examine whether video design features aligned with UDL principles, such as captions, segmentation, and interactive elements, correlate with more inclusive usage patterns across diverse student populations.

The four theoretical axes (design, self-regulation, analytics, and inclusivity) interconnect to form an integrated framework for understanding video lessons in blended learning. The theoretical framework outlined here highlights how the analysis of video lessons cannot be separated from an integrated vision of instructional design, learning models, strategic use of data, and principles of educational equity. Video lessons should therefore be understood not as neutral or secondary tools but as complex cognitive and communicative environments capable of conveying content, as well as modes of interaction, representations of knowledge, and engagement strategies (Brame, 2016). This study contributes to this reflection by offering empirical evidence useful for methodological innovation in the university setting (Figure 1).

Figure 1
Graphic with four sections illustrating educational concepts. 1. Video lesson design: Uses instructional design, cognitive load, and multimedia learning theories. 2. Asynchronous learning: Enhances self-regulation and metacognition. 3. Learning analytics: Tracks and adapts educational strategies. 4. Universal design: Focuses on personalization, inclusivity, and accessibility for diversified learning paths.

Figure 1. Key pedagogical dimensions of video lectures.

Taken together, these contributions suggest that it is important to move beyond self-reported measures of satisfaction or perceived effectiveness, and to examine actual usage patterns captured by platform logs. In particular, understanding how behaviors vary across courses, roles, and time, and whether recurring profiles emerge, can provide a more grounded basis for discussing self-regulation, inclusivity, and the design of blended learning.

3 Research methods and tools

This study aimed to analyze the impact and modalities of use of video lessons in a blended master’s degree program, focusing on both their instructional design and the behavioral patterns of students.

In this study, didactic teaching refers to the set of planned instructional choices (such as content organization, pacing, use of examples, and visual supports) that shape how video lessons are designed and integrated into the course. We use the term behavior in a strictly operational sense to denote observable interactions with the video platform (e.g., minutes viewed, completion rate, playback speed, interaction events). These behavioral traces are interpreted as indicators of how students manage time and attention rather than as direct measures of cognitive processes.

Viewing time, completion rate, playback speed, and number of events are treated as consumption variables, as they describe how learners “consume” video content in terms of duration, completeness, pace, and interaction. These four variables were selected because they are widely recognized behavioral proxies for engagement, persistence, and self-regulated learning strategies in digital environments (Guo et al., 2014; Giannakos, 2015).

3.1 Context and program description

The study was conducted within the Master's Degree in Media Education (LM-93) at the Catholic University of the Sacred Heart (Milan campus). The program integrates education and communication through technology, training multi-professional figures for institutional and organizational contexts. Teaching is delivered through a blended model in which 50% of activities occur in person and 50% online (synchronous and asynchronous). The curriculum combines didactic teaching (video lessons, self-learning, e-tivities) with interactive teaching methods (exercises, case studies, simulations, webinars). A large proportion of enrolled students are working professionals, making the flexibility of asynchronous video lessons particularly relevant for participation and success. This study acknowledges its limited generalizability due to the single-institution context.

3.2 Research focus, research questions, and objectives

The study focused on two areas: the design of video lessons (structure, content, integration with synchronous and asynchronous activities) and students’ actual use of video lessons through quantitative analysis of platform-generated behavioral data.

The study had three main objectives:

1) To analyze the design and structure of video lessons in a blended university context.

2) To investigate students’ actual use of video lessons through quantitative data.

3) To interpret usage data pedagogically to identify patterns, usage profiles, and implications for inclusive, data-informed instructional design.

Grounded in the theoretical framework on self-regulated learning, blended learning, Learning Analytics, and Universal Design for Learning, the study addressed the following research questions:

- What are the prevailing behaviors of students when using video lessons (viewing duration, completion rate, playback speed, interaction events)?

- What differences emerge between courses regarding viewing time, engagement, and performance?

- How is the use of video lessons distributed over days of the week and time slots?

- Are there significant relationships between behavioral variables (duration, speed, events, completion)?

- Is it possible to identify recurring behavioral clusters, and how can these be interpreted pedagogically in relation to instructional design and inclusivity?

3.3 Dataset and data collection

The dataset includes 5,278 viewing sessions, each representing a unique interaction between a user and a video lesson across 14 courses over two academic years. Sessions were associated with anonymized user identifiers and course codes. Viewing data were automatically collected via the Panopto platform, which tracks granular behavioral traces such as: start and end time of viewing, total minutes viewed, percentage of content completed, average playback speed, interaction events (pauses, rewinds, skips). Data were exported in CSV format and cross-checked with course records to ensure consistency in video counts and student enrollment.
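For illustration, the consistency checks on the exported logs can be reproduced with a few lines of Python. This is a minimal sketch: the file name and column names (course_id, start_time, minutes_viewed, completion_pct, playback_speed, events) are hypothetical placeholders rather than the actual Panopto export schema.

```python
import pandas as pd

# Minimal sketch: load a Panopto-style CSV export and run basic consistency checks.
# The file name and column names are hypothetical placeholders, not the export schema.
sessions = pd.read_csv("panopto_sessions.csv", parse_dates=["start_time"])

# Cross-check session and course counts against course records.
print(len(sessions), "viewing sessions across", sessions["course_id"].nunique(), "courses")

# Quick ranges for the four consumption variables used in the analysis.
print(sessions[["minutes_viewed", "completion_pct", "playback_speed", "events"]].describe())
```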

To assess the reliability of the log data, a structured verification procedure was implemented. A randomly selected subset of 120 viewing sessions was manually inspected, comparing system-logged events (pauses, rewinds, skips, minutes viewed) with expected values from the video timeline. The match rate exceeded 95%, indicating high internal consistency of the behavioral traces. Although Panopto logs cannot capture the entire learning context (e.g., multitasking, distractions, simultaneous use of other materials), they remain a widely adopted and valid proxy for observable engagement in Learning Analytics research.

3.4 Data cleaning and analytical strategy

A systematic data cleaning procedure was applied:

- removal of duplicate entries

- exclusion of sessions shorter than 10 s, considered accidental or erratic access

- removal of technical outliers (e.g., unrealistically long sessions)

Missing data were rare (<2%). Management procedures included listwise exclusion when missingness affected non-critical variables and mean imputation when only one variable was missing in otherwise complete records. Sensitivity checks confirmed that these decisions did not substantially alter variable distributions (Figure 2).
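As an illustration, the cleaning steps above can be expressed as a short Python sketch. Column names, the specific outlier cut-off, and the handling of missing values are assumptions made for readability, not the exact rules used in the study.

```python
import pandas as pd

# Minimal cleaning sketch mirroring the steps above; column names are hypothetical.
df = pd.read_csv("panopto_sessions.csv")

# Remove duplicate entries.
df = df.drop_duplicates()

# Exclude sessions shorter than 10 seconds (accidental or erratic access).
df = df[df["minutes_viewed"] * 60 >= 10]

# Remove technical outliers (the 180-minute cut-off is illustrative only).
df = df[df["minutes_viewed"] <= 180]

# Rare missing data: drop rows missing key fields, mean-impute single missing values
# in otherwise complete records (one possible reading of the procedure above).
df = df.dropna(subset=["minutes_viewed", "completion_pct"])
for col in ["playback_speed", "events"]:
    df[col] = df[col].fillna(df[col].mean())

df.to_csv("panopto_sessions_clean.csv", index=False)
```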

Figure 2
Circular flowchart illustrating a cyclical process with five stages.

Figure 2. Process model illustrating the iterative phases of development and implementation.

The methodological approach followed the iterative cycle of Learning Analytics (Siemens and Long, 2011; Rajenthiram, 2025).

The analysis proceeded in sequential steps:

1) Descriptive statistics: calculation of means, standard deviations, and distributions for all variables, accompanied by graphical visualizations.

2) Correlations: Pearson bivariate correlations were computed to explore relationships between viewing time, completion rate, playback speed, and interaction events, following recommendations in empirical Learning Analytics research (Ferguson, 2012; Papamitsiou and Economides, 2014).

3) Manual rule-based segmentation (exploratory): an exploratory behavioral categorization was conducted based on empirically derived thresholds informed by previous research on video engagement (Guo et al., 2014; Brinton et al., 2016). As recommended in cluster-analysis literature (Everitt et al., 2011; Hennig et al., 2015), these rule-based categories were used as heuristic, non-fixed types, serving only as preliminary descriptors later validated through K-means.

4) K-means clustering: four standardized variables (viewing time, completion, speed, interaction events) were used to identify recurring behavioral profiles. The selection of k = 4 was based on elbow inspection of inertia reduction, the average silhouette score (0.52), consistency with the exploratory segmentation, and the pedagogical interpretability of the clusters. This silhouette value indicates moderate but acceptable separation, typical for behavioral data with partially overlapping patterns.

5) Inferential analyses: to verify whether observed differences were statistically significant, the following tests were applied: one-way ANOVA for differences across courses, Welch's t-tests for differences between students and teachers, and chi-square tests for the association between behavioral clusters and user role, together with effect sizes (η², Cohen's d) to assess practical significance.
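As a minimal illustration of steps 1 and 2 above, the descriptive statistics and Pearson correlations can be computed as follows; the file and column names are hypothetical placeholders for the cleaned dataset, not the study's actual scripts.

```python
import pandas as pd
from scipy import stats

# Minimal sketch of steps 1-2: descriptive statistics and Pearson correlations.
df = pd.read_csv("panopto_sessions_clean.csv")
usage_vars = ["minutes_viewed", "completion_pct", "playback_speed", "events"]

# Step 1: means, standard deviations, and ranges for all variables.
print(df[usage_vars].agg(["mean", "std", "min", "max"]).round(2))

# Step 2: pairwise Pearson correlations with p-values.
for i, a in enumerate(usage_vars):
    for b in usage_vars[i + 1:]:
        r, p = stats.pearsonr(df[a], df[b])
        print(f"{a} vs {b}: r = {r:.2f}, p = {p:.3g}")
```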

4 Results

The following analyses were systematically conducted to answer the research questions, aiming to investigate the intensity, quality, and modes of use of video lessons in a blended university context. Descriptive differences across courses are reported here, while their pedagogical interpretation is addressed in the Discussion section. The analyses were structured on several levels.

4.1 Descriptive results

This first phase provided an overall picture of usage trends:

• The average viewing time per session was 15.06 min. However, the analysis showed marked variability between users and between courses: some sessions were viewed for only a few seconds (minimum recorded: 0 min), while others exceeded 60 min (maximum recorded: over 90 min). This heterogeneity reflects quite different individual behaviors and, from an educational standpoint, suggests that some courses are more engaging for students, while others fail to retain their attention over time. Further, the standard deviation of viewing time is 26.6 min, indicating a wide dispersion around the average (Figure 3).

• The average completion rate stands at 76.13%, suggesting that, although students tend on average to complete a significant proportion of each video lesson, in many cases they do not watch the entire content. The standard deviation for the completion rate is approximately 37.15%, with a minimum value of 0% and a maximum of 100%. This indicates that alongside highly engaged students, some view content only partially or in a fragmented manner, which could reflect difficulties in concentrating, disinterest in the specific content, or lack of time.

• The average playback speed is 1.12x, with significant use of the speed-up function. This behavior, recorded systematically in the dataset, indicates that many students speed up the playback of video lessons, with peaks above 1.5x. This suggests intentional time optimization strategies, typical of efficient but potentially more superficial use. This choice may depend on familiarity with the content, the desire to quickly review familiar sections, or the need to catch up on material in a brief time, for example, before exams or tests (Figure 4).

• The average number of events (interactions with the video) is 14.35 per session, indicating active and non-linear behavior. In particular, this high average, combined with a standard deviation of 26.9, reveals that many students do not watch the video passively, but actively manipulate it: they pause, resume, rewind, or fast forward. Such use highlights more engaging cognitive modes, potentially associated with self-regulated, selective, or deep understanding-oriented learning strategies. Some users generated over 100 events in a single session, indicating particularly dynamic interaction with the content (Table 1; Figure 5).

Figure 3
Bar chart of average viewing time per course.

Figure 3. Average viewing time of video lectures by course.

Figure 4
Bar chart showing mean and standard deviation of viewing metrics for 5278 samples. Viewing Time: 15.06 minutes. Completion: 76.13 percent. Playback Speed: 1.12. Events: 14.35.

Figure 4. Mean and standard deviation of key viewing metrics.

Table 1

Table 1. Pearson correlations among usage variables.

Figure 5
Bar chart showing average completion percentages for ten courses. Educational Research Methods has the highest at 81.2%, followed by Film Forms and Genres at 79.8%. The mean completion is 54%, marked by a red dashed line. Seminar Theology has the lowest at 10.1%.

Figure 5. Average completion percentage of video lectures by course.

Pearson correlations between behavioral variables were generally small to moderate but statistically significant due to the large sample size. Viewing time correlated positively with completion (r = 0.21, p < 0.001), playback speed (r = 0.26, p < 0.001), and number of interaction events (r = 0.30, p < 0.001). Completion showed small positive correlations with speed (r = 0.12, p < 0.001) and events (r = 0.14, p < 0.001), while speed and events were only weakly related (r = 0.05, p < 0.001). These patterns suggest that longer sessions tend to be more complete and more interactive, but high interaction or high speed do not automatically imply deep engagement (Table 2).

Table 2

Table 2. Average viewing metrics by course.

4.2 Comparison between courses

Table 2 presents the main average indicators recorded in the five courses with the highest and lowest viewing times.

The average performance for each course was analyzed in terms of average viewing time, percentage completed, playback speed, and number of events. The results highlighted differences between the various courses: for example, the course “Forms and Genres of Cinema” recorded an average viewing time of over 42 min with an average completion rate of close to 80%, values significantly higher than the overall average. In contrast, courses such as “Theology—Seminar Course” show an average viewing time of less than 3 min and a completion rate of 10%. Table 3 reports the ANOVA results for course-level differences.

Table 3

Table 3. One-way ANOVA for course differences (course ID).

A one-way ANOVA revealed significant differences in average viewing time between courses, F(42, 5,235) = 16.72, p < 0.001, η2 = 0.12, and in completion rate, F(42, 5,235) = 21.64, p < 0.001, η2 = 0.15. These medium-sized effects indicate that engagement with video lessons varied substantially across the curriculum, with some courses showing consistently higher viewing time and completion than others (Table 4).
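For readers who wish to reproduce this type of analysis, a minimal sketch of a one-way ANOVA on viewing time across courses with an eta-squared effect size is shown below; the column names are hypothetical placeholders and the snippet is not the exact script used in the study.

```python
import pandas as pd
from scipy import stats

# Minimal sketch: one-way ANOVA on viewing time across courses, with eta-squared.
df = pd.read_csv("panopto_sessions_clean.csv")

groups = [g["minutes_viewed"] for _, g in df.groupby("course_id")]
f_stat, p_value = stats.f_oneway(*groups)

# Eta-squared = between-group sum of squares / total sum of squares.
grand_mean = df["minutes_viewed"].mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_total = ((df["minutes_viewed"] - grand_mean) ** 2).sum()
eta_sq = ss_between / ss_total

print(f"F = {f_stat:.2f}, p = {p_value:.3g}, eta^2 = {eta_sq:.2f}")
```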

Table 4

Table 4. Average viewing metrics by user role.

4.3 Temporal distribution of viewing sessions

The distribution of sessions was examined based on the day of the week and time slot to understand when students tend to view video lessons the most. The analysis showed that:

• Use is mainly concentrated on weekdays, with a marked peak on Fridays (over 950 sessions in total), while there is a sharp drop during the weekend (less than 200 sessions in total on Saturdays and Sundays).

• The busiest hours are between 9:00 a.m. and 6:00 p.m., with a noticeable increase in the afternoon, particularly between 2:00 p.m. and 5:00 p.m., which alone accounts for over 35% of total sessions (Figure 6).

Figure 6
Two bar charts display viewing sessions. The left chart shows sessions by day of the week, with Friday having the highest number (954 sessions) and Sunday the lowest (88 sessions). The right chart represents sessions by time slot, peaking at 1,010 sessions between 14:00 and 16:00, and the lowest at 421 sessions between 08:00 and 10:00.

Figure 6. Distribution of viewing sessions by day of the week (left) and time slot (right).

4.4 Behavioral segmentation (manual)

To identify recurring behaviors in the use of video lessons, behavioral segmentation was performed based on thresholds defined empirically from the dataset. The process consisted of an initial exploratory analysis of the distributions of key variables (time spent, percentage completed, viewing speed, number of events) and the definition of interpretative thresholds consistent with the literature on video usage models. This manual segmentation is exploratory and does not aim at producing statistically validated categories. The thresholds were derived from distributional inspection rather than from theoretical or psychometric criteria, and therefore serve only as a preliminary descriptive device. As recommended in the cluster-analysis methodological literature (Everitt et al., 2011; Hennig et al., 2015), the interpretations derived from manual thresholds should be treated with caution and used only as complements to the validated K-means analysis reported in Section 4.7.

Five distinct categories were identified:

“In-depth”: includes sessions with time spent >20 min and percentage completed >90%. These represent highly engaged users who consume video lessons in their entirety or almost so, with a linear and continuous approach. These sessions account for approximately 15% of the total.

“Partial”: time spent between 5 and 20 min, with percentage completed between 40 and 80%. This indicates average interaction, often selective or discontinuous, potentially associated with review or verification strategies for specific sections. This is the largest category, accounting for approximately 35% of sessions.

“Discontinuous”: time spent less than 5 min and percentage completed <30%. This reflects very short sessions, probably exploring or interrupted, sometimes random or indicative of disinterest. It covers about 25% of the sample.

“Fast”: includes sessions with an average speed >1.5x, regardless of total time. Indicates strategic acceleration behavior, even on complete content. This category accounts for approximately 10% of sessions.

“Other”: includes cases that cannot be classified according to the previous thresholds or with missing data. It represents approximately 15% of the dataset.

Although simplified, this classification has enabled us to build an initial map of usage profiles, which is useful for subsequent cross-analysis with courses, roles, schedules, and performance (Figure 7).
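A minimal sketch of this rule-based segmentation is shown below. The thresholds follow the categories described above, while the precedence of the "fast" rule over the other categories and the column names are illustrative assumptions rather than the study's exact implementation.

```python
import pandas as pd

# Minimal sketch of the exploratory rule-based segmentation described above.
# Thresholds follow the text; rule precedence and column names are assumptions.
def classify_session(row):
    if pd.isna(row["minutes_viewed"]) or pd.isna(row["completion_pct"]):
        return "other"
    if row["playback_speed"] > 1.5:  # fast, regardless of total time
        return "fast"
    if row["minutes_viewed"] > 20 and row["completion_pct"] > 90:
        return "in-depth"
    if 5 <= row["minutes_viewed"] <= 20 and 40 <= row["completion_pct"] <= 80:
        return "partial"
    if row["minutes_viewed"] < 5 and row["completion_pct"] < 30:
        return "discontinuous"
    return "other"

df = pd.read_csv("panopto_sessions_clean.csv")
df["category"] = df.apply(classify_session, axis=1)
print(df["category"].value_counts(normalize=True).round(2))
```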

Figure 7
Bar chart of the distribution of behavioral categories across viewing sessions.

Figure 7. Distribution of behavioral categories of video lecture viewing sessions (manual segmentation).

4.5 Analysis by role

The analysis distinguished between students and teachers based on the pattern of the institutional email address (e.g., students@ vs. firstname.lastname@). This enabled the exploration of whether and how the user's role influences their behavior when using video lessons.
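A minimal sketch of this role attribution is shown below; the decision rule and the column name are illustrative placeholders and do not reproduce the institution's actual account structure.

```python
import pandas as pd

# Minimal sketch of role attribution from the account address pattern described above;
# the decision rule and column name are illustrative placeholders.
def infer_role(email: str) -> str:
    local_part = email.split("@")[0]
    # Student accounts follow a generic pattern; staff accounts use firstname.lastname.
    return "teacher" if "." in local_part else "student"

df = pd.read_csv("panopto_sessions_clean.csv")
df["role"] = df["user_email"].apply(infer_role)
print(df["role"].value_counts())
```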

Table 4 presents a comparison between students and teachers based on the video lesson usage indicators.

Independent-samples t-tests with Welch correction showed systematic differences between students and teachers. Students spent significantly more time watching video lessons (M = 15.85 min, SD = 26.99) than teachers (M = 4.14, SD = 14.85), t(559) = 13.46, p < 0.001, Cohen’s d = 0.44. Students also completed a higher proportion of each video (M = 80.17%, SD = 34.38) than teachers (M = 25.38%, SD = 33.35), t(421) = 30.13, p < 0.001, d = 1.60. Average playback speed was slightly higher for students (M = 1.12, SD = 0.29) than for teachers (M = 1.02, SD = 0.22), t(464) = 8.26, p < 0.001, d = 0.35, and students generated more interaction events per session (M = 15.16, SD = 30.17) than teachers (M = 3.98, SD = 7.55), t(1,568) = 19.10, p < 0.001, d = 0.38.
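As an illustration, a Welch t-test and Cohen's d for viewing time by role can be computed as follows; column names are hypothetical placeholders, and the pooled-standard-deviation formula is one common way to obtain d, not necessarily the exact variant used in the study.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Minimal sketch: Welch t-test and Cohen's d for viewing time by role.
df = pd.read_csv("panopto_sessions_clean.csv")
students = df.loc[df["role"] == "student", "minutes_viewed"]
teachers = df.loc[df["role"] == "teacher", "minutes_viewed"]

# Welch correction: variances are not assumed to be equal.
t_stat, p_value = stats.ttest_ind(students, teachers, equal_var=False)

# Cohen's d based on the pooled standard deviation.
n1, n2 = len(students), len(teachers)
pooled_sd = np.sqrt(((n1 - 1) * students.std() ** 2 +
                     (n2 - 1) * teachers.std() ** 2) / (n1 + n2 - 2))
d = (students.mean() - teachers.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.3g}, d = {d:.2f}")
```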

4.6 Correlations between variables

The correlation matrix among the four behavioral variables showed small to moderate but statistically significant associations (all p < 0.001). Correlations were computed using Pearson’s r after verifying linearity assumptions based on scatterplots. Viewing time was positively correlated with completion (r = 0.21), playback speed (r = 0.26), and interaction events (r = 0.30), indicating that longer sessions tended to be more complete, slightly faster, and more interactive. Completion showed small positive correlations with speed (r = 0.12) and events (r = 0.14), suggesting that users who completed a larger portion of the video were also somewhat more likely to interact with it or accelerate playback. Playback speed and interaction events were only weakly related (r = 0.05). Overall, these correlations indicate that usage behaviors are not random but reflect consistent patterns: longer viewing tends to co-occur with higher completion and more interaction, whereas the use of increased playback speed appears to function mainly as a time-management strategy rather than a marker of reduced engagement (Figure 8).

Figure 8
Correlation matrix of viewing behavior metrics for a sample size of five thousand two hundred seventy. The matrix shows positive and negative relationships between viewing time, completion percentage, interaction events, and playback speed, with color gradations from blue to red indicating correlation strength.

Figure 8. Correlation matrix of viewing behavior metrics.

4.7 Behavioral clustering (K-means)

To identify recurring behavioral patterns in an unsupervised manner, we performed a K-means clustering analysis following established methodological guidelines for exploratory cluster analysis (Ketchen and Shook, 1996; Hennig et al., 2015).

The process was developed in several stages:

1) Variable selection: Four key quantitative variables were selected: viewing time (in minutes), percentage of content completed, average playback speed, and number of interactive events.

2) Cleaning and normalization: The data were filtered to remove outliers, and missing values were imputed. Subsequently, the variables were standardized with StandardScaler to ensure proper comparability.

3) Application of K-means: K-means was run with multiple random initializations (n_init = 10) and a fixed random_state for reproducibility. The choice of k = 4 was based on a combined evaluation of inertia (elbow criterion), the silhouette coefficient, and the pedagogical interpretability of the resulting groupings, as recommended for exploratory clustering (Hennig et al., 2015).

4) Cluster profiling: Each cluster was described by calculating the averages of the selected variables.
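A minimal sketch of this procedure, using scikit-learn with the settings described above (StandardScaler, n_init = 10, a fixed random_state, k = 4), is shown below; the column names and the specific random_state value are illustrative assumptions.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Minimal sketch of the clustering procedure described above; column names and the
# specific random_state value are illustrative assumptions.
df = pd.read_csv("panopto_sessions_clean.csv")
features = ["minutes_viewed", "completion_pct", "playback_speed", "events"]

# Standardize the four behavioral variables for comparability.
X = StandardScaler().fit_transform(df[features])

kmeans = KMeans(n_clusters=4, n_init=10, random_state=42)
df["cluster"] = kmeans.fit_predict(X)

# Profile each cluster on the original (unstandardized) variables and report sizes.
print(df.groupby("cluster")[features].mean().round(2))
print(df["cluster"].value_counts(normalize=True).round(2))
```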

The resulting profiles are as follows:

1) Cluster 0—In-depth: sessions with high viewing time (>25 min), high completion rate (>85%), and numerous events. This group represents about 18% of total sessions and denotes in-depth and reflective interaction with the content.

2) Cluster 1—Fast: Characterized by an average speed >1.4x, moderate viewing time, and high completion rate, it represents about 22% of the sample and reflects efficient acceleration strategies.

3) Cluster 2—Discontinuous: includes short sessions (<5 min), low completion rate, and few events. It is the smallest cluster (about 15%) but significant for understanding early dropout or disinterest.

4) Cluster 3—Partial: represents the intermediate category, with average values across all variables. It accounts for approximately 45% of the total and reflects selective behavior, neither completely passive nor particularly in-depth (Table 5).

Table 5

Table 5. ANOVA results for cluster validation (TIPO_UTENTE).

To validate the behavioral profiles obtained through K-means clustering, we compared clusters on the four usage variables. ANOVAs revealed significant differences between clusters for viewing time, F(4, 5,273) = 1,041.25, p < 0.001, η2 = 0.44, completion, F(4, 5,273) = 628.04, p < 0.001, η2 = 0.32, playback speed, F(4, 5,273) = 279.87, p < 0.001, η2 = 0.18, and number of interaction events, F(4, 5,273) = 191.60, p < 0.001, η2 = 0.13. As expected, the in-depth profile showed the highest viewing time and completion, with more frequent interactions, whereas the discontinuous profile was characterized by very short viewing time and low interaction.

These clusters were then semantically labeled based on their quantitative characteristics and visualized using comparative graphs. Clustering helped to reinforce and validate the manual segmentation, providing a more objective reading of the observed behaviors and helping to identify latent usage patterns.

Analyses were performed to answer the research questions (Figure 9).

Figure 9
Bar chart of the distribution of viewing behavior clusters.

Figure 9. Distribution of viewing behavior clusters (K-means segmentation).

To assess the quality of the solution, internal validation metrics were examined. Inertia showed a marked decrease up to k = 4, and the average silhouette score was 0.52, indicating moderate but acceptable separation between clusters. This level of cohesion is typical for behavioral usage data, where partially overlapping patterns are expected rather than sharply distinct types. The four-cluster solution was therefore retained on the basis of statistical adequacy, stability across initializations, and pedagogical interpretability (Figure 10).
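A minimal sketch of this internal validation, comparing inertia and silhouette scores across candidate values of k, is shown below; the range of k values and the column names are illustrative assumptions.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Minimal sketch of the internal validation described above (inertia and silhouette).
df = pd.read_csv("panopto_sessions_clean.csv")
X = StandardScaler().fit_transform(
    df[["minutes_viewed", "completion_pct", "playback_speed", "events"]])

for k in range(2, 8):
    model = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    sil = silhouette_score(X, model.labels_)
    print(f"k = {k}: inertia = {model.inertia_:.0f}, silhouette = {sil:.2f}")
```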

Figure 10
Four panels represent different viewing types using icons and short descriptions. “In-depth viewing” is characterized by complete viewing and frequent interaction. “Fast viewing” involves increased playback speed. “Partial viewing” is shorter and incomplete. “Discontinuous viewing” consists of fragmented sessions with low continuity and limited engagement.

Figure 10. Categories of video lecture viewing behavior.

We examined whether behavioral profiles were distributed differently across students and teachers. A chi-square test indicated a significant association between cluster membership and user role, χ2(4) = 328.94, p < 0.001 (Table 6).

Table 6

Table 6. Distribution of behavioral profiles by user role (students vs. teachers).

Discontinuous usage was relatively more frequent among teachers than among students, whereas in-depth and partial profiles were predominantly associated with student accounts. This pattern suggests that teachers tend to access videos in a more fragmented and instrumental way, while students are more likely to engage in sustained viewing.
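As an illustration, the chi-square test of association between cluster membership and user role can be computed from a contingency table as follows; the column names are hypothetical placeholders.

```python
import pandas as pd
from scipy import stats

# Minimal sketch: chi-square test of association between cluster membership and role.
df = pd.read_csv("panopto_sessions_clean.csv")
contingency = pd.crosstab(df["cluster"], df["role"])

chi2, p_value, dof, expected = stats.chi2_contingency(contingency)
print(f"chi2({dof}) = {chi2:.2f}, p = {p_value:.3g}")
```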

5 Discussion

The findings of this study confirm the strong heterogeneity of students’ video usage behaviors in blended higher education, while also revealing important tensions that require a critical interpretation of Learning Analytics results. Although clustering helped identify recurring behavioral configurations, such profiles should not be interpreted as fixed “types” of learners: the moderate silhouette score (0.52) indicates that boundaries between patterns are partially overlapping, as is typical in naturalistic educational data. This implies that clusters represent context-dependent tendencies rather than stable psychological traits, reinforcing the need for caution when interpreting behavioral traces and highlighting the risk of reifying fluid behaviors into rigid categories. At the same time, the results speak to broader debates on the interpretation of learning traces. Consistent with critiques of “behaviorist reductionism” in Learning Analytics (Kitto et al., 2017; Slade and Prinsloo, 2013), behavioral logs, such as viewing time, interaction events, or playback speed, do not capture the richness of cognitive, emotional, and motivational processes. In alignment with Zimmerman’s (2002) model of self-regulated learning, students’ behavioral patterns must be understood as partial manifestations of deeper regulatory strategies involving planning, monitoring, and reflection, dimensions that log traces alone cannot fully reveal. High interaction or long viewing time, therefore, cannot be taken as direct indicators of deep learning; interpretations remain necessarily tentative and require triangulation with qualitative evidence, self-reports, or performance data. These limitations also underline the risk of overinterpreting behavioral traces. Cluster membership cannot be used to infer learning quality, mastery, or academic potential, as log-derived profiles represent surface-level patterns rather than stable learner traits. Behavioral categories must therefore be understood as heuristic and context-dependent, not as diagnostic classifications. Behavioral traces should therefore be interpreted descriptively rather than normatively: neither longer viewing nor greater interaction can be assumed to reflect higher-quality learning without contextual evidence.

A first limitation concerns the nature of the data available. Panopto logs provide granular behavioral traces but cannot capture multitasking, concurrent resource use, emotional engagement, or the cognitive strategies underlying observable actions. The platform’s monitoring is also subject to technical bias (e.g., buffering artifacts, device differences, accidental clicks), reminding us that log data are situated, partial, and fallible. Furthermore, platform-generated metrics reflect design assumptions embedded in the technology itself (e.g., how events are defined, how completion is calculated), introducing platform bias into the analytical process. This calls for LA literacy among instructors, who must be trained to interpret data critically rather than automatically equating metrics with learning processes.

Despite these limitations, the analysis reveals meaningful variability in video lesson usage. Students adopt diverse strategies that reflect different needs, time constraints, metacognitive skills, and self-regulation approaches. The differentiation of behaviors—from “in-depth” to “fast,” “partial,” and “discontinuous”—suggests that flexibility in blended courses enables students to personalize their engagement, but may also exacerbate fragmentation for those with limited motivation, weak self-regulatory skills, or high external constraints. In line with Garrison and Vaughan’s (2008) conceptualization of blended learning as an integration of autonomy and structure, these findings suggest that asynchronous video affords agency but also creates vulnerabilities when structure is insufficient.

Differences between courses further highlight the central role of instructional design, reinforcing the view of assessment as an integral component of learning design rather than a separate evaluative moment (Low, 2025). Courses with segmented content, clear narrative structure, visual signaling, and coherent pacing elicited deeper and more sustained viewing, consistent with multimedia learning principles (Mayer, 2009) and evidence from microlearning research (Brame, 2016). Conversely, theoretical or transmissive lessons prompted accelerated or selective consumption, suggesting a mismatch between content design and learners’ regulatory strategies.

Pedagogical, didactic, and logistical factors jointly shape usage patterns. Experiential or workshop-based courses, which provide stronger contextual anchors, appear to foster more complete viewing. Meanwhile, course timing, video duration, and assessment type influence how students allocate time, reinforcing that instructional design decisions are directly reflected in behavioral traces. The temporal distribution of sessions aligns with students’ autonomous study routines, primarily on weekday afternoons, reflecting the affordances of blended learning for self-paced organization, especially for working students.

Role-related differences confirm that video lessons are primarily a student-centered resource. Students watch longer, complete more content, and generate more interaction events, whereas teachers show short, low-interaction sessions. This justifies the analytic separation of roles and underscores the importance of avoiding aggregated metrics that combine heterogeneous users. “In-depth” users demonstrate the potential of asynchronous content when it is well-structured and aligned with learners’ needs. Conversely, “fast” users, who maintain high completion despite accelerated playback, illustrate strategic time management, which may reflect efficient self-regulation or, alternatively, surface processing or cognitive avoidance (Sweller, 1988; Tempelaar et al., 2015). “Partial” and “discontinuous” behaviors highlight challenges related to attention, cognitive load, motivation, or design misalignment. Early dropouts resemble known patterns of disengagement in online learning (Kizilcec et al., 2014) and reveal structural vulnerabilities in video design that may disproportionately affect learners with higher cognitive load or lower digital competence.

Clustering formalized these behaviors into interpretable profiles, offering a structured lens for designing targeted pedagogical interventions. Yet the moderate cluster separability requires that these profiles be treated as heuristic categories rather than diagnostic classifications. Their educational value lies not in labeling learners but in informing design decisions, such as embedding checkpoints, scaffolds, reflective prompts, or multimodal cues to support diverse regulatory strategies, in line with formative feedback principles emphasizing the importance of timely, task-focused, and non-evaluative feedback (Shute, 2008). Beyond offering a descriptive map of student behaviors, each cluster suggests specific instructional implications. In-depth viewers benefit from structured opportunities for reflection, such as guided prompts, brief summaries, or metacognitive checkpoints that help consolidate learning. Fast users may require strategically placed checkpoints, conceptual anchors, or embedded quizzes to meaningfully slow down processing without compromising autonomy. Partial users appear to respond better to short, modular video units, enabling selective access while maintaining coherence. Discontinuous users may need clearer upfront orientation, explicit learning goals, brief introductions, and navigational cues, to reduce early dropout and support sustained engagement. These differentiated implications highlight the value of tailoring video design to diverse regulatory strategies. Building on these differentiated strategies, Table 7 summarizes cluster-specific instructional implications to support targeted redesign of asynchronous video lessons.

Table 7

Table 7. Summary of behavioral clusters and instructional implications.

From a didactic perspective, these findings emphasize that video lessons should not be treated merely as repositories of content but as components of a broader learning ecosystem that supports active processing, self-regulation, and inclusive access. Integrating reflection prompts, quizzes, feedback mechanisms (Banihashem et al., 2022), or game-informed features can enhance engagement across profiles (Shute, 2011). The diversity of behaviors underscores the need for personalization and flexibility, consistent with Universal Design for Learning (CAST, 2018). Students with varied cognitive, linguistic, and technological needs benefit from modular content, multiple representations, optional paths, and navigational freedom. The discontinuous profile, characterized by early dropout and fragmented access, may disproportionately affect learners facing accessibility barriers, precisely the population UDL aims to support. This suggests that UDL-aligned design features (e.g., clear navigation, explicit learning goals, multimodal cues) could mitigate early disengagement.

A promising direction concerns the integration of the student voice. Evidence from student–staff partnership research (Cook-Sather, 2009; Bovill et al., 2011; Jääskelä et al., 2017) suggests that involving learners in the co-design and iterative improvement of video lessons enhances relevance, motivation, and teaching presence, particularly in asynchronous contexts where disengagement risks are higher.

A broader interpretation of these behavioral patterns requires anchoring the findings in the theoretical frameworks outlined above. First, the differentiated strategies observed, such as selective viewing, acceleration, revisiting specific segments, or prolonged sessions, can be read through the lens of self-regulated learning. According to Zimmerman's (2002) cyclical model, learners continuously shift between forethought, performance, and self-reflection phases, adjusting their strategies based on goals, perceived task difficulty, and available time. The wide variability in consumption behaviors reflects these micro-regulatory decisions: acceleration may serve as a time-management strategy, pauses and rewinds may indicate monitoring and control, and discontinuous viewing may reflect breakdowns in regulation or contextual constraints. Importantly, log data capture only the visible dimension of regulation, not the metacognitive or motivational processes that underpin it, reinforcing the need for interpretative caution. Second, the findings align with key principles of blended learning. Garrison and Vaughan (2008) emphasize that effective blended environments hinge on purposeful integration of synchronous and asynchronous components to support cognitive presence, teaching presence, and social presence. The differentiated usage patterns observed here demonstrate how asynchronous video lessons function as flexible learning resources embedded within students' personal study ecologies, particularly in a population of working learners. However, the same flexibility can also amplify fragmentation when videos are not adequately structured or when students lack the regulatory resources to manage self-paced learning. This speaks to the importance of designing blended environments that balance autonomy with scaffolding, providing clear pathways and structured moments of interaction. Third, these results underscore the need for Learning Analytics literacy, both at the institutional and instructional levels. The interpretation of log traces involves epistemic risks: behavioral data are partial, selective, and potentially biased representations of learning processes. Without adequate interpretive competencies, instructors may overestimate engagement based on superficial metrics (e.g., completion or interaction counts) or, conversely, misinterpret efficient strategic behaviors as disengagement; analytics outputs should instead be interpreted as pedagogical signals supporting professional judgment rather than as deterministic indicators of learning quality or student ability (Youngs et al., 2025). Developing Learning Analytics literacy means equipping educators to read, contextualize, and question analytics outputs; to distinguish actionable patterns from noise; and to integrate quantitative traces with pedagogical judgment. This aligns with recent ethical frameworks that emphasize transparency, proportionality, and dialogic interpretation of student data (Slade and Prinsloo, 2013; Kitto et al., 2017). Finally, the use of behavioral data in instructional decision-making raises critical ethical considerations. Logs may contain inherent biases (e.g., penalizing students with unstable internet, limited devices, or high workload), thereby risking unwarranted inferences about competence or engagement.
Ethical learning analytics require institutions to articulate transparent communication policies, ensure students understand what data are collected and why, and provide opportunities to contest or reinterpret analytics representations. In this sense, Learning Analytics should not be used as instruments of surveillance or control, but as shared tools to foster dialogue, self-awareness, and agency in blended learning environments.
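
To make this discussion concrete, the following sketch (a minimal illustration, not the analysis pipeline used in this study) shows how session-level indicators of the kind discussed above, such as completion, mean playback speed, and pause or rewind counts, might be derived from platform viewing logs. The event labels and field names are hypothetical and do not correspond to the actual Panopto export schema.

```python
import pandas as pd

# Hypothetical event-level log: one row per player event within a viewing session.
# Column names and event labels are illustrative, not the Panopto export schema.
events = pd.DataFrame([
    {"session_id": "s1", "event": "play",  "position_s": 0,    "speed": 1.0},
    {"session_id": "s1", "event": "pause", "position_s": 420,  "speed": 1.0},
    {"session_id": "s1", "event": "seek",  "position_s": 300,  "speed": 1.0},
    {"session_id": "s1", "event": "play",  "position_s": 300,  "speed": 1.5},
    {"session_id": "s1", "event": "stop",  "position_s": 1800, "speed": 1.5},
])
VIDEO_LENGTH_S = 2400  # assumed total video duration in seconds


def session_indicators(df: pd.DataFrame) -> pd.Series:
    """Aggregate raw player events into session-level behavioral indicators."""
    rewinds = (df["event"] == "seek") & (df["position_s"].diff() < 0)
    return pd.Series({
        "pauses": int((df["event"] == "pause").sum()),
        "rewinds": int(rewinds.sum()),
        "mean_speed": float(df.loc[df["event"] == "play", "speed"].mean()),
        # Rough completion proxy: furthest position reached over total video length.
        "completion": float(df["position_s"].max() / VIDEO_LENGTH_S),
        "events": int(len(df)),
    })


indicators = events.groupby("session_id").apply(session_indicators)
print(indicators)
```

Even such simple aggregations embed interpretive choices (for example, treating the furthest position reached as "completion"), which is precisely why the resulting metrics should be read as signals for dialogue rather than as verdicts on students.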

Nevertheless, the study presents several limitations. Generalizability is constrained by the single-institution, single-program context. Data sources are limited to behavioral logs and do not incorporate learning outcomes, assessments, or qualitative perspectives. Cluster validity was assessed internally but not externally, and relationships between behavioral profiles and academic performance remain unexplored. Future research should adopt mixed-methods designs integrating log data with interviews, surveys, think-aloud protocols, and performance indicators to deepen interpretability and pedagogical relevance. Moreover, platform-generated logs may contain inherent biases related to device type, connectivity stability, playback buffering, or uneven digital competencies, all of which can introduce distortions in interpreting behavioral patterns. These factors need to be acknowledged when using log data for pedagogical decision-making. Although the present study did not include academic performance, retention, or satisfaction data, future work should examine whether the identified behavioral profiles map onto meaningful learning outcomes. Establishing these connections is essential to avoid overinterpreting behavioral traces and to assess the real educational significance of such patterns.
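
Relatedly, the internal validity check mentioned above can be illustrated with a brief sketch: comparing candidate numbers of clusters on standardized session features using the silhouette coefficient. The random data and feature count below are placeholders rather than the study's actual pipeline; the sketch only shows the type of check involved.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# Placeholder matrix standing in for session features (e.g., viewing time,
# completion, interaction count); real analyses would use the platform logs.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
X_std = StandardScaler().fit_transform(X)

# Internal validity: compare candidate cluster counts; higher silhouette values
# indicate more cohesive, better-separated clusters.
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_std)
    print(f"k={k}: silhouette={silhouette_score(X_std, labels):.3f}")
```

External validation, by contrast, would require relating cluster membership to independent criteria such as grades or retention, which is exactly the kind of data this study lacked.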

Ethically, Learning Analytics raises issues of privacy, consent, interpretability, and potential surveillance. While the data were anonymized and used exclusively for formative purposes, institutions should adopt transparent policies and engage students in discussions about how their data are collected, processed, and used. Providing students with accessible dashboards that show how data inform teaching improvements can foster trust, agency, and participatory data cultures. Such transparency is essential to mitigate the risks of opacity and misinterpretation and to promote responsible, educationally meaningful uses of analytics. A proactive approach to transparency also requires informing students not only about what data are collected, but also about how behavioral indicators are interpreted and with what limitations (Norušis, 2011). Students should have the possibility to question, reinterpret, or contextualize their own data, reinforcing a participatory and accountable approach to analytics. In parallel, instructors require a solid level of Learning Analytics literacy to correctly interpret behavioral metrics and avoid simplistic or normative readings of the data. Developing educators’ critical data competences is essential to ensure that analytics are used supportively and not prescriptively.

5.1 Educational and teaching implications

The results of this research provide a solid basis for advanced pedagogical reflection on video lessons in blended university contexts. Behavioral segmentation and data clustering suggest the importance of adopting a differentiated perspective in the design and integration of video lessons into training courses.

1) Differentiated design for user profiles: The diversification of the behaviors observed (in-depth, partial, fast, discontinuous) highlights the need for instructional design that accommodates a plurality of usage strategies and learner needs. According to the Instructional Design for Diverse Learners model (Rose and Dalton, 2009), video lessons should offer multi-level paths, with flexible navigation, optional content, and metacognitive supports, such as concept maps or dynamic indexes.

2) Active engagement and deep learning: Students who interact more with videos (more events, longer duration) show behaviors associated with engaged learning. To support this engagement, it is useful to integrate interactive elements that stimulate active information processing: guiding questions, reflection activities, and narratives structured according to the Managed Cognitive Load Model (Mayer and Moreno, 2003).

3) Monitoring and personalized feedback: Learning Analytics allow video lessons to be transformed into tools for continuous formative feedback. As Dai et al. (2023) point out, effective feedback is specific, timely, and constructive. Personalized dashboards for students and teachers can help develop forms of self-regulation and digital tutoring, in line with the assessment as learning model (Earl, 2003); a sketch of this kind of per-student aggregation follows this list.

4) Balance between flexibility and structure: Data show that students appreciate the ability to access content independently (speed, selectivity); however, the absence of structure can also lead to early dropout or discontinuous use. In this sense, it is useful to combine the freedom of asynchrony with moments of synchronous anchoring, such as Q&A sessions or guided discussions, consistent with the concept of blended connectedness (Smith and Evans, 2024).

5) Personalization and inclusivity: The integration of adaptive tools and modular courses reflects the principles of Universal Design for Learning (Pardo et al., 2019), which promotes accessibility and educational equity. Digital environments must respond to diverse cognitive, linguistic, emotional, and technological needs, offering multiple resources, differentiated support, and diverse opportunities for expression.
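
As anticipated in point 3, the sketch below shows one minimal way of aggregating session-level indicators into per-student dashboard metrics, with flags intended as prompts for formative follow-up rather than as classifications. Column names and thresholds are hypothetical assumptions, not metrics produced by the platform.

```python
import pandas as pd

# Hypothetical session-level table: one row per viewing session per student.
sessions = pd.DataFrame({
    "student_id": ["a", "a", "b", "b", "b", "c"],
    "completion": [0.95, 0.80, 0.30, 0.25, 0.40, 0.60],
    "n_segments": [1, 2, 5, 6, 4, 2],       # distinct viewing bursts per session
    "duration_min": [45, 38, 8, 6, 11, 22],
})

# Per-student dashboard metrics: session count plus means across sessions.
dashboard = sessions.groupby("student_id").agg(
    sessions=("completion", "size"),
    mean_completion=("completion", "mean"),
    mean_segments=("n_segments", "mean"),
    mean_duration_min=("duration_min", "mean"),
)

# Flags meant as conversation starters for feedback, not student classifications.
dashboard["low_completion_flag"] = dashboard["mean_completion"] < 0.5
dashboard["fragmented_flag"] = dashboard["mean_segments"] > 3
print(dashboard)
```

Thresholds of this kind should be set, and periodically revisited, with pedagogical judgment and in dialogue with students, consistent with the transparency principles discussed above.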

A further development concerns the direct involvement of students in the design and review of video lessons. Integrating the student voice, i.e., systematically collecting qualitative feedback and suggestions for improvement from users, enables the creation of more relevant, responsive, and motivating learning paths (Cook-Sather, 2009; Bovill et al., 2011). This logic of co-designing teaching promotes not only better alignment between content and expectations but also greater awareness of metacognitive and regulatory processes, stimulating engagement and a sense of agency in asynchronous environments (Jääskelä et al., 2017; Seale, 2016). The active involvement of students as partners in educational design is particularly strategic in digital contexts, where the risk of cognitive disconnection or alienation can be mitigated by listening, adaptation, and shared revision of materials.

6 Conclusion and prospects

This study examined how students in a blended Master’s program engage with asynchronous video lessons and showed that video usage is far more diverse and complex than often assumed in higher education. By analyzing 5,278 viewing sessions across 14 courses, we identified four recurring behavioral configurations that reflect different ways of managing time, attention, and interaction with content. These findings challenge the idea of video lessons as neutral add-ons and instead position them as central components of blended learning design.

The study contributes to the field in three main ways. First, it provides empirical evidence on actual, fine-grained usage behaviors, addressing a persistent gap in the literature dominated by self-reported perceptions rather than behavioral traces. Second, it demonstrates how Learning Analytics can support pedagogical reflection when used cautiously and in combination with instructional design principles. Third, it highlights the importance of distinguishing user roles and contextual factors when interpreting platform data, an aspect often overlooked in institutional dashboards.

Practically, the results suggest that video lessons should be conceived as part of a broader learning ecosystem that values flexibility, navigability, and alignment with other course components. While the identified profiles cannot be interpreted as stable learner “types,” they offer instructors useful indications for designing more accessible, segmented, and purposeful video resources and for integrating them with synchronous moments or formative checkpoints (Moriña et al., 2025). The study also underlines the importance of developing educators’ data literacy so that behavioral analytics can be interpreted appropriately and used to support (not classify) students. Overall, the heterogeneity of usage patterns confirms that asynchronous video lessons activate diverse self-regulatory strategies, consistent with Zimmerman’s (2002) model of self-regulated learning. These strategies, however, require intentional design support rather than being assumed as given.

This research has limitations related to its single-institution scope and reliance on log data. These constraints restrict generalizability and prevent conclusions about underlying cognitive processes or academic outcomes. Nevertheless, the findings offer a foundation for future work that integrates behavioral traces with qualitative methods, multimodal data, and cross-institutional comparisons. Advancing this line of inquiry will be essential for developing more ethical, transparent, and pedagogically grounded uses of Learning Analytics in blended higher education.

Future institutional efforts should not only integrate multimodal data sources but also develop faculty development programs focused on Learning Analytics literacy, ensuring that behavioral data are interpreted cautiously, transparently, and in ways that support (rather than classify) students.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author.

Author contributions

FP: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Writing – original draft, Writing – review & editing. ET: Writing – original draft. FS: Writing – review & editing.

Funding

The author(s) declared that financial support was not received for this work and/or its publication.

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that Generative AI was used in the creation of this manuscript. Generative AI tools were used solely to assist with language editing, refining phrasing, and improving the clarity of expression. No generative AI tools were used to create, analyze, or interpret research data, nor to generate original content or ideas beyond language polishing.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2026.1677494/full#supplementary-material

Footnotes

1^During the three-hour in-person lesson, the course is not limited to a lecture and content delivery but also includes opportunities for discussion among students and/or review of online work. The approach adopted balances content presentation with practice and discussion, facilitating learning based on exploration and learning from mistakes, and promoting problem-solving skills. The classroom lesson aims to provide concrete examples linked to real problems in future professional life.

2^Available online at: https://www.panopto.com/it/.

References

Almasi, M., and Zhu, C. (2019). Studying teaching presence in relation to learner performance in blended learning courses in a Tanzanian university: a mixed design approach. Proceedings of the 8th Teaching & Education Conference, Vienna. New York, NY: Springer.

Almeqdad, Q. I., Alodat, A. M., Alquraan, M. F., Mohaidat, M. A., and Al-Makhzoomy, A. K. (2023). The effectiveness of universal Design for Learning: a systematic review and meta-analysis. Cogent Educ. 10:2218191. doi: 10.1080/2331186X.2023.2218191

AlRawi, J. M., Almekhlafi, A. G., Ibrahim, R. M., and Ahmed, M. M. (2021). Universal design for learning for educating students with intellectual disabilities: a review. Front. Educ. 6:999065. doi: 10.3389/feduc.2021.768658

Bandura, A. (1986). Social foundations of thought and action: a social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall.

Banihashem, S. K., Aliabadi, K., Pourroostaei Ardakani, S., Delaver, A., and Nili Ahmadabadi, M. (2022). A systematic review of the role of learning analytics in improving feedback practices in technology-mediated higher education. Comput. Educ. 188:104571. doi: 10.1016/j.compedu.2022.104571

Bergdahl, N., Nouri, J., Karunaratne, T., Afzaal, M., and Cerratto-Pargman, T. (2024). Unpacking student engagement in higher education: learning analytics approaches to engagement and disengagement. Educ. Technol. Res. Dev. 72, 1–25. doi: 10.1186/s41239-024-00493-y

Boelens, R., De Wever, B., and Voet, M. (2017). Four key challenges to the design of blended learning: a systematic literature review. Educ. Res. Rev. 22, 1–18. doi: 10.1016/j.edurev.2017.06.001

Boud, D., and Falchikov, N. (2007). “Developing assessment for informing judgement” in Rethinking assessment for higher education: Learning for the longer term. eds. D. Boud and N. Falchikov (London, UK: Routledge), 181–197.

Bovill, C., Cook-Sather, A., and Felten, P. (2011). Students as co-creators of teaching approaches, course design, and curricula: implications for academic developers. Int. J. Acad. Dev. 16, 133–145. doi: 10.1080/1360144X.2011.568690

Brame, C. J. (2016). Effective educational videos: principles and guidelines for maximizing student learning from video content. CBE Life Sci. Educ. 15:es6. doi: 10.1187/cbe.16-03-0125,

Bransford, J. D., Brown, A. L., and Cocking, R. R. (2000). How people learn: brain, mind, experience, and school. Washington, DC: National Academy Press.

Brinton, C. G., Chiang, M., Jain, S., Lam, H., Liu, Z., and Wong, F. M. F. (2016). Learning about social learning in MOOCs: from statistical analysis to generative model. IEEE Trans. Learn. Technol. 9, 117–130. doi: 10.1109/TLT.2015.2453026

Carenzio, A., Pelizzari, F., and Rivoltella, P. C. (2025). “Gli hypervideo: concetto, architettura e spendibilità. Analisi di un’esperienza nella higher education” in Apprendere con le tecnologie tra presenza e distanza. eds. A. Di Pace, C. Panciroli, and P. C. Rivoltella (Brescia, Italy: Morcelliana Scholè), 324–339.

CAST (2018). Universal design for learning guidelines version 2.2. Wakefield, MA: Author. Available online at: https://udlguidelines.cast.org

CAST. (2024). Universal design for learning guidelines version 3.0. CAST. Available online at: https://udlguidelines.cast.org (Accessed January 10, 2026).

Chen, C.-M., and Wu, C.-H. (2015). Effects of different video lecture types on sustained attention, emotion, cognitive load, and learning performance. Comput. Educ. 80, 108–121. doi: 10.1016/j.compedu.2014.08.015

Clark, R. C., and Mayer, R. E. (2016). E-learning and the science of instruction: proven guidelines for consumers and designers of multimedia learning. 4th Edn. Hoboken, NJ: Wiley.

Cook-Sather, A. (2009). From traditional accountability to shared responsibility: the benefits and challenges of student consultants gathering midcourse feedback in college classrooms. Assess. Eval. High. Educ. 34, 231–241. doi: 10.1080/02602930801955944

Czerkawski, B., and Lyman, E. W. (2016). An instructional design framework for fostering student engagement in online learning environments. TechTrends 60, 532–539. doi: 10.1007/s11528-016-0110-z

Dai, C. P., Ke, F., Dai, Z., and Pachman, M. (2023). Improving teaching practices via virtual reality-supported simulation-based learning: scenario design and the duration of implementation. Br. J. Educ. Technol. 54, 836–856. doi: 10.1111/bjet.13296

Drachsler, H., and Greller, W. (2016). Privacy and analytics: it’s a DELICATE issue. In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK ’16) (pp. 89–98). Amsterdam, Netherlands: Springer.

Earl, L. (2003). Assessment as learning: using classroom assessment to maximize student learning. Thousand Oaks, CA: Corwin Press.

Espada-Chavarría, R., Mosquera-González, M. J., and González-Fernández, D. (2023). Universal design for learning (UDL) and universal design for instruction (UDI): effective strategies for inclusive education. Educ. Sci. 13:620. doi: 10.3390/educsci13060620

Everitt, B. S., Landau, S., Leese, M., and Stahl, D. (2011). Cluster analysis. 5th Edn. Chichester, UK: Wiley.

Ferguson, R. (2012). The state of learning analytics in 2012: a review and future challenges. UK: Technical Report KMI-12-01 Knowledge Media Institute, The Open University.

Fredricks, J. A., Blumenfeld, P. C., and Paris, A. H. (2004). School engagement: potential of the concept, state of the evidence. Rev. Educ. Res. 74, 59–109. doi: 10.3102/00346543074001059

Garrison, D. R., Anderson, T., and Archer, W. (2000). Critical inquiry in a text-based environment: computer conferencing in higher education. Internet High. Educ. 2, 87–105. doi: 10.1016/S1096-7516(00)00016-6

Garrison, D. R., and Vaughan, N. D. (2008). Blended learning in higher education: framework, principles, and guidelines. San Francisco, CA: Jossey-Bass.

Gašević, D., Dawson, S., Rogers, T., and Gasevic, D. (2017). Learning analytics should not promote one size fits all: the effects of instructional conditions in predicting academic success. Internet High. Educ. 28, 68–84. doi: 10.1016/j.iheduc.2015.10.002

Giannakos, M. N. (2015). Exploring students' engagement with video-based learning. Br. J. Educ. Technol. 46, 1259–1273. doi: 10.1111/bjet.12263

Guo, P. J., Kim, J., and Rubin, R. (2014). How video production affects student engagement: an empirical study of MOOC videos. In Proceedings of the First ACM Conference on Learning @ scale Conference (pp. 41–50) New York, NY.

Hennig, C., Meilă, M., Murtagh, F., and Raftery, A. E. (2015). Handbook of cluster analysis. Boca Raton, FL: CRC Press/Taylor & Francis Group.

Jääskelä, P., Häkkinen, P., and Rasku-Puttonen, H. (2017). Teacher beliefs regarding learning, pedagogy and the role of digital technology: a diagnostic tool. Educ. Media Int. 54, 223–236. doi: 10.1080/09523987.2017.1373069

Jain, A. K. (2010). Data clustering: 50 years beyond K-means. Pattern Recogn. Lett. 31, 651–666. doi: 10.1016/j.patrec.2009.09.011

Ketchen, D. J., and Shook, C. L. (1996). The application of cluster analysis in strategic management research: an analysis and critique. Strateg. Manag. J. 17, 441–458. doi: 10.1002/(SICI)1097-0266(199606)17:6

Kirschner, P. A., Sweller, J., and Clark, R. E. (2006). Why minimal guidance during instruction does not work: an analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educ. Psychol. 41, 75–86. doi: 10.1207/s15326985ep4102_1

Kitto, K., Cross, S., Waters, Z., and Lupton, M. (2017). “Learning analytics: a community of practice approach” in Handbook of learning analytics (New York, NY: Society for Learning Analytics and Knowledge), 104–119.

Kizilcec, R. F., Piech, C., and Schneider, E. (2014). Deconstructing disengagement: analyzing learner subpopulations in massive open online courses. Proceedings of the Third International Conference on Learning Analytics and Knowledge, New York, NY, 170–179.

Liu, Q., and Khalil, M. (2023). Understanding privacy and data protection issues in learning analytics: a systematic review. Br. J. Educ. Technol. 54, 1715–1747. doi: 10.1111/bjet.13353

Low, Y. C. (2025). “Embedding formative assessment in the flipped statistics classroom” in Formative assessment and feedback in post-digital learning environments (London, UK: Routledge), 158–163.

Mayer, R. E. (2009). Multimedia learning. 2nd Edn. Cambridge, UK: Cambridge University Press.

Mayer, R. E., and Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educ. Psychol. 38, 43–52. doi: 10.1207/S15326985EP3801_6

Means, B., Toyama, Y., Murphy, R., Bakia, M., and Jones, K. (2014). The effectiveness of online and blended learning: a meta-analysis of the empirical literature. New York, NY: Teachers College Record.

Moriña, A., Carballo, R., and Doménech, A. (2025). Transforming higher education: a systematic review of faculty training in UDL and its benefits. Teach. High. Educ. 30, 1722–1739. doi: 10.1080/13562517.2025.2465994

Norušis, M. J. (2011). IBM SPSS statistics 19 statistical procedures companion. Upper Saddle River, NJ: Pearson.

Pan, Z., Biegley, L., Taylor, A., and Zheng, H. (2024). A systematic review of learning analytics–incorporated instructional interventions on LMSs. J. Learn. Anal. 11, 52–72. doi: 10.18608/jla.2023.8093

Panadero, E. (2017). A review of self-regulated learning: six models and four directions for research. Front. Psychol. 8:422. doi: 10.3389/fpsyg.2017.00422,

Papamitsiou, Z., and Economides, A. A. (2014). Learning analytics and educational data mining in practice: a systematic literature review of empirical evidence. Educ. Technol. Soc. 17, 49–64. doi: 10.1016/j.compedu.2014.04.010

Pardo, A., Jovanović, J., Dawson, S., Gašević, D., and Mirriahi, N. (2019). Using learning analytics to scale the provision of personalised feedback. Br. J. Educ. Technol. 50, 128–138. doi: 10.1111/bjet.12592

Rajenthiram, K. (2025) Optimizing data analytics workflows through user-driven experimentation: progress and updates. In 2025 IEEE/ACM 4th International Conference on AI Engineering–Software Engineering for AI (CAIN) (pp. 236–240). Singapore: IEEE.

Rienties, B., Cross, S., and Zdrahal, Z. (2016). Implementing a learning analytics intervention and evaluation framework: what works? J. Learn. Anal. 3, 5–28. doi: 10.1007/978-3-319-06520-5_10

Rose, D. H., and Dalton, B. (2009). Learning to read in the digital age. Mind Brain Educ. 3, 74–83. doi: 10.1111/j.1751-228X.2009.01057.x

Seale, J. (2016). Student voice and digital technologies. London, UK: Routledge.

Shute, V. J. (2008). Focus on formative feedback. Rev. Educ. Res. 78, 153–189. doi: 10.3102/0034654307313795

Shute, V. J. (2011). “Stealth assessment in computer-based games to support learning” in Computer games and instruction. eds. S. Tobias and J. D. Fletcher (New York, NY: IAP), 503–524.

Siemens, G., and Long, P. (2011). Penetrating the fog: analytics in learning and education. EDUCAUSE Rev. 46, 30–40. doi: 10.1016/j.educause.2011.09.001

Slade, S., and Prinsloo, P. (2013). Learning analytics: ethical issues and dilemmas. Am. Behav. Sci. 57, 1509–1528. doi: 10.1177/0002764213479366

Smith, J. S., and Evans, M. (2024). “Incorporating video modules in simulation education” in 2024 Winter Simulation Conference (WSC) (London, UK: IEEE), 3154–3162.

Sweller, J. (1988). Cognitive load during problem solving: effects on learning. Cogn. Sci. 12, 257–285. doi: 10.1207/s15516709cog1202_4

Tempelaar, D. T., Rienties, B., and Nguyen, Q. (2015). Towards actionable learning analytics using dispositions. J. Interact. Media Educ. 1, 1–10. doi: 10.1109/TLT.2017.2662679

Thalheimer, W. (2017). Microlearning: evidence-based perspectives for learning professionals. Somerville, MA: Work-Learning Research.

Youngs, P., Foster, J. K., Watson, G. S., Korban, M., and Acton, S. T. (2025). “Why instructional activities within classroom activity structures matter and how teacher dashboards can support advancements in instruction” in Research handbook on classroom observation (New York, NY: Edward Elgar Publishing), 327–340.

Zimmerman, B. J. (2002). Becoming a self-regulated learner: an overview. Theory Pract. 41, 64–70. doi: 10.1207/s15430421tip4102_2

Keywords: blended learning, instructional design, learning analytics, pedagogical innovation, personalization, usage behavior, video lessons

Citation: Pelizzari F, Tassalini E and Scott FM (2026) From data to teaching: video lessons and learning analytics in blended university contexts. Front. Educ. 11:1677494. doi: 10.3389/feduc.2026.1677494

Received: 31 July 2025; Revised: 04 December 2025; Accepted: 16 January 2026;
Published: 02 February 2026.

Edited by:

Songxin Tan, South Dakota State University, United States

Reviewed by:

Isaiah T. Awidi, University of Southern Queensland, Australia
Edgar R. Eslit, St. Michael’s College (Iligan), Philippines

Copyright © 2026 Pelizzari, Tassalini and Scott. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Federica Pelizzari, federica.pelizzari@unicatt.it
