ORIGINAL RESEARCH article

Front. Educ., 12 February 2026

Sec. Digital Learning Innovations

Volume 11 - 2026 | https://doi.org/10.3389/feduc.2026.1675872

This article is part of the Research Topic "The Transformative Impact of Digital Tools on Quality Education and Sustainable Development."

Enhancing digital competence and learning motivation of master’s students: a new educational model for Kazakhstan’s higher education system

Zakira Bakirova1, Asel Tasova2*, Marfuga Absatova3, Dinara Sadirbekova4, Batyrkhan Auezov5, Gulmira Meirbekova2
  • 1Department of Pedagogy and Psychology, Abai Kazakh National Pedagogical University, Almaty, Kazakhstan
  • 2Department of Pedagogy and Psychology, Khoja Akhmet Yassawi International Kazakh-Turkish University, Turkistan, Kazakhstan
  • 3Science Department, Abai Kazakh National Pedagogical University, Almaty, Kazakhstan
  • 4Department of Pedagogy, Psychology, and Social Sciences, Almaty Humanitarian and Economic University, Almaty, Kazakhstan
  • 5Department of Preschool and Primary Education, Khoja Akhmet Yassawi International Kazakh-Turkish University, Turkistan, Kazakhstan

This study introduces and empirically validates a comprehensive educational model designed to enhance digital competence and learning motivation among Master’s students in Kazakhstan. Using a quasi-experimental design with 236 participants from five pedagogical universities, the research examined how an integrated approach combining technology-enhanced pedagogy, microlearning, gamification, online assessment, and collaborative digital projects influenced students’ digital competence, learning motivation, and academic performance. Results demonstrated significant improvements in digital competence across all dimensions (effect size η2 = 0.46) for students in the experimental group, with particularly strong development in content creation and problem-solving competencies. The model also had a positive influence on learning motivation, self-efficacy, student engagement, and academic performance. Path analysis confirmed an integrated theoretical framework where technology-enhanced pedagogy influenced digital competence, which subsequently affected motivation, self-efficacy, engagement, and academic achievement. Students with elementary baseline digital skills showed the largest competence gains, indicating the model’s potential for addressing digital inequality in Kazakhstan’s higher education system. Follow-up assessments revealed durable effects, suggesting sustainable rather than transient changes. The research provides theoretical contributions to understanding the interrelationship between digital competence and motivation while offering practical strategies for modernizing Master’s programs in Kazakhstan and similar educational contexts.

1 Introduction

Global trends indicate a sharp increase in the demand for advanced digital competences (DC) and master's-level qualifications, requiring universities to update their curricula and establish valid metrics for assessing students' DC (World Bank, 2025; OECD, 2025a). The "State of the Digital Decade 2025" report highlights a significant gap between set targets and the actual level of DC, specifically the goal for 80% of the population to possess basic DC by 2030 (European Commission, 2025; Reuters, 2025). UNESCO reports that two-thirds of higher education institutions are already implementing or developing regulations for the use of artificial intelligence (AI) in teaching and learning (Osipov et al., 2020; UNESCO, 2025). This trend increases the demand for evidence-based models that enhance the DC and learning motivation (LM) of master's students, particularly in countries such as Kazakhstan that are undergoing accelerated modernization of higher education.

Against this backdrop, significant deficits in advanced DC persist in Kazakhstan at the master's level (Nurtayeva et al., 2024; Yermekova et al., 2025). Concurrently, local experiments with digital education interventions demonstrate significant gains in students' professional abilities, confirming the potential of targeted pedagogical solutions (Kurakbayeva and Xembayeva, 2025). Labor-market demand for DC is also intensifying: OECD analyses emphasize the need for advanced digital skills (DS) and the integration of educational technologies and AI in higher education (OECD, 2025b). However, according to the Coursera Global Skills Report, Kazakhstan ranks 37th globally in overall digital and professional competencies, underscoring the imperative to cultivate advanced-level competencies (Coursera, 2025; Amin et al., 2025).

A decline in LM worsens digital inequality. Abildina et al. (2025) revealed that the traditional pedagogical approaches prevalent in Kazakhstan's graduate programs fail to stimulate student engagement in digitalized environments. This deficit becomes evident when digital tools are implemented without a clear pedagogical rationale and institutional support, leading to technostress and cognitive overload that undermine sustainable LM (Al Alsibani and Al Eladl, 2024; Daud, 2025). Data from Kazakhstan confirm that the motivation of master's students declines when digital assignments are not aligned with future professional practices and transparent assessment criteria (Omarova and Mirza, 2025). Implementing Education 5.0 requires more structured pathways and deliberate external facilitation, as some learners struggle with self-direction and exhibit diminished self-efficacy (Mynbayeva et al., 2025).

The DC crisis manifests as a significant challenge within Kazakhstan's higher education system. Master's students frequently encounter difficulties navigating academic databases, collaborating effectively on digital platforms, and producing multimedia content for scholarly presentations. The rapid advancement of AI has fundamentally reshaped the educational and professional landscapes, creating an acute demand for advanced DC that extends beyond basic literacy to include AI literacy, prompt engineering, and the ethical application of AI tools (Arapbayev, 2024; Meiramova and Zagatova, 2025). Graduates lacking these skills face significant obstacles in a labor market that increasingly expects proficiency with AI-powered tools and data analytics platforms (Zholdigaly et al., 2024). This situation highlights a critical misalignment between current educational outcomes and the nation's digital development goals (Zholdigaly et al., 2024; Arapbayev, 2024). While extensive research has explored DC development and motivation enhancement separately in global educational contexts, integrated approaches tailored to Kazakhstan's specific socio-cultural and institutional environment that address the contemporary challenges of AI integration in higher education are lacking (Meiramova and Zagatova, 2025). Zhaisanova (2022) examined the impact of educational strategies on student motivation in Kazakhstani universities and found that the primary driver for education in Kazakhstan was credentialism rather than deep subject mastery. Similarly, Tazhitova et al. (2022) identified low motivation as a barrier to the implementation of English-medium instruction. While efforts are underway to shift toward competency-based strategies (Sarmurzin et al., 2021; Vasiljeva et al., 2019), the relationship between DC and LM remains crucial.
DC, defined as the confident, critical, and responsible use of and engagement with digital technologies for learning, at work, and for participation in society (Greene and Crompton, 2025), significantly influences motivation and engagement in the educational process (Cabero-Almenara et al., 2021). Studies have confirmed a positive correlation, showing that higher DC enhances learning outcomes through self-efficacy and engagement (Zhao et al., 2021; Abdillah and Wahyuilahi, 2025).

Across technology-enhanced learning contexts, prior research consistently demonstrates a positive association between students' DC and LM, shows that greater DC indirectly improves learning outcomes via self-efficacy and engagement, and identifies both DC and digital informal learning as mediators within academic-success models (Abdillah and Wahyuilahi, 2025; Dang et al., 2024; Kline, 2023; Redecker and Punie, 2017; Zhao et al., 2021). While the scientific expert community and the European Commission are preparing a new DigComp with systemic AI integration and rigorously defined proficiency levels to furnish sharper benchmarks for master's-level learning outcomes (Joint Research Centre, European Commission, n.d.; Zakir et al., 2025; OECD, 2025b), direct empirical validation in Kazakhstan remains scarce and fragmented: local DigCompEdu assessments of pre-service teachers reveal substantial inter-group variability and domain-specific gaps, and interventional studies are constrained by limited scale and duration (Davletova et al., 2025a).

1.1 Literature review

1.1.1 Digital competences in higher education

DC has shifted from a technical notion to a multidimensional construct spanning cognitive, attitudinal, and social domains (Dang et al., 2024) and is defined as the confident, critical, and responsible use of digital technologies for learning, work, and civic participation (Greene and Crompton, 2025). DC prepares future professionals for work in increasingly digitalized environments. Scholars delineate five core domains: information and data literacy, communication and collaboration, digital content creation, safety, and problem-solving (Hammoda and Foli, 2024; Redecker and Punie, 2017). A high level of DC is integral to student learning and the attainment of advanced competencies (Volodina and Volodina, 2020; Baissydyk et al., 2023; Vasiljeva et al., 2018); integrating digital assignments strengthens mastery of professional domains and fosters the innovation and research capacities vital for international collaboration (Sharonin et al., 2022). In Kazakhstan, pronounced digital divides persist: many students exhibit only moderate basic skills and struggle with content creation, critical information evaluation, and digital problem-solving (Kerimbayev et al., 2017; Greene and Crompton, 2025; Zhao et al., 2021). Despite baseline infrastructure gains, master's and postgraduate cohorts still lack advanced infrastructure and research-analytics capabilities, constraining engagement with global academia and labor readiness (Guillén-Gámez and Mayorga-Fernández, 2020; Kanyika et al., 2024). Universities' ability to adapt digital technologies for postgraduate provision is therefore pivotal to institutional performance and sustainability (Davletova et al., 2025b; Palacios-Rodríguez et al., 2025).

1.1.2 Learning motivation in higher education

The critical role of LM in academic achievement becomes especially pronounced at the graduate level, where the emphasis shifts decisively to self-directed learning. Contemporary research frequently draws upon self-determination theory, which distinguishes between intrinsic motivation (driven by internal satisfaction) and extrinsic motivation (driven by external rewards) (Ryan and Deci, 2000; Younas et al., 2022). The ARCS Model of Motivation (Attention, Relevance, Confidence, Satisfaction) provides another influential framework for designing engaging learning experiences (Keller, 2009; Noor et al., 2022). The model’s core premise is that learners’ motivation is sustained when their interest is captured, they perceive the learning as valuable, they believe in their ability to succeed, and they feel a sense of accomplishment.

Motivation exerts a critical influence on student engagement, persistence, and academic achievement (Lo, 2024). Students with intrinsic motivation tend to employ deeper learning strategies and demonstrate higher performance outcomes than those driven primarily by extrinsic factors (Howard et al., 2021). Zhaisanova (2022) found that among Kazakhstan’s students, extrinsic motivation is more prevalent than intrinsic motivation, with parental extrinsic motivation being a particularly strong influence that can affect students’ decision-making abilities. Furthermore, Abildina et al. (2025) found that traditional pedagogical approaches fail to engage students in digital environments, especially when tasks are misaligned with professional needs or cultural preferences. Tazhitova et al. (2022) highlighted insufficient motivation as a key barrier to implementing English-medium instruction. Abayeva et al. (2024a) argued for actively engaging students in digital education, positing that this engagement can serve as a powerful motivational strategy.

1.1.3 Intersection of digital competence and learning motivation

Investigating the relationship between DC and LM has become an increasingly salient research direction, attracting significant scholarly attention worldwide. Evidence assumes a bidirectional relationship in which DC enhances motivation, whereas motivated learners are more adept at developing their DC (Cabero-Almenara et al., 2025). In technology-enhanced environments, students with stronger DC demonstrate heightened self-efficacy, which positively influences their intrinsic motivation (Gisbert Cervera and Caena, 2022) and reduces cognitive load, allowing for better focus on learning materials (Mehrvarz et al., 2021). Researchers have found that balancing DC and LM is critical for fostering a comprehensive and inclusive educational environment, thereby promoting both academic achievement and emotional well-being (Baimyrza et al., 2024; Hogan, 2025).

DC enables greater learner autonomy and self-direction, which are key factors in fostering intrinsic motivation (Thelma et al., 2024). Competent students can better customize learning experiences and connect with broader communities, enhancing motivation through autonomy and relatedness (Esteve-Mon et al., 2019). This relationship is especially significant in collaborative contexts, where Hidayat-Ur-Rehman (2024) found that students with higher DC were more motivated to participate in online and research activities, although comprehensive studies of Kazakhstan's unique socio-cultural environment remain limited (Kerimbayev et al., 2017).

1.1.4 Educational models to enhance DC and motivation

Various educational models can effectively develop both DC and LM. Blended learning combines face-to-face and online instruction, enhancing skills and motivation through diverse experiences (Fernández-Batanero et al., 2021). Learning with digital technologies enhances the ability of students to assess their LM, fostering the development of their own personality (Kushkimbayeva et al., 2024). Project-based learning with digital technologies provides authentic contexts for developing advanced competencies while enhancing motivation through real-world relevance (Romero-García et al., 2020). Gamification elements, such as badges and leaderboards, increase engagement while developing technical proficiency (Cabero-Almenara et al., 2020).

In microlearning approaches, complex skills are broken into shorter modules to build DC while maintaining engagement (Zhao et al., 2021), aligning with cognitive load theory. Collaborative digital projects develop communication competencies while enhancing motivation through social connectedness (Basilotta-Gómez-Pablos et al., 2022). Abayeva et al. (2024b) hypothesized that digital learning technologies are more effective than traditional methods in developing young researchers’ skills in Kazakhstan. However, despite the reported efficacy of these models, researchers report that their use in Kazakhstan’s higher education system remains limited. As Sarmurzin et al. (2021) noted, empirical data confirming the successful implementation of educational models remain insufficient.

1.1.5 Research gaps and opportunities

Despite growing research interest in studying the intersection of DC and LM, significant gaps remain regarding Kazakhstan’s higher education system. Although these constructs have been examined separately, integrated approaches addressing both remain underdeveloped in non-Western contexts (Abdillah and Wahyuilahi, 2025). Most DC frameworks originated in Western contexts with limited validation in Central Asian educational systems, requiring investigation of their transferability (Dang et al., 2024). Longitudinal studies examining the relationship between DC and motivation are scarce, although they would provide valuable insights into how these factors evolve throughout students’ academic journeys (Cabero-Almenara et al., 2021).

Research specifically targeting master’s students in Kazakhstan remains limited despite their unique challenges in developing DC and maintaining research motivation (Kanyika et al., 2024). Finally, while various educational models have been proposed, empirically validated approaches tailored to Kazakhstan’s higher education system remain underdeveloped, representing a significant opportunity for advancing research and practice (Abildina et al., 2025).

1.1.6 Research background and hypothesis development

Technology-enhanced education is pivotal for developing DC at the master's level (Basilotta-Gómez-Pablos et al., 2022). Microlearning improves mastery of complex digital tasks while sustaining engagement (Zhao et al., 2021); gamification (achievement systems, badges) deepens technology use and raises competence scores (Cabero-Almenara et al., 2020); and online assessment creates feedback loops that both measure and build skills (Dang et al., 2024). Crucially, these tools must be embedded in blended and project-based designs to foster comprehensive DC (Fernández-Batanero et al., 2021). Evidence from Kazakhstan indicates similar multi-dimensional gains, supporting contextual applicability (Kerimbayev et al., 2017). Grounded in this prior empirical evidence, the first hypothesis states:

H1: Technology-enhanced education has a statistically significant positive effect on master’s students’ digital competence.

DC directly shapes engagement in higher education: students with stronger digital skills encounter fewer technical barriers and participate more readily in technology-enhanced environments, focusing on learning rather than troubleshooting (Cabero-Almenara et al., 2025; Muammar et al., 2023). Higher DC is associated with greater behavioral, emotional, and cognitive engagement: more active online participation and improved attendance (Cabero-Almenara et al., 2020; Abdillah and Wahyuilahi, 2025), more positive affect towards technology-enhanced learning (TEL) (Marcelo and Yot-Domínguez, 2019), and deeper processing through sophisticated learning strategies (Romero-García et al., 2020); institutional initiatives that build DC further elevate these metrics (Cabero-Almenara et al., 2021). Therefore, based on this substantial body of evidence, we hypothesize a direct and positive relationship between digital competence and student engagement:

H2: Digital competence has a statistically significant positive effect on master's students' engagement.

DC significantly fosters LM among master's students, with compelling evidence supporting their positive association across educational contexts (Guillén-Gámez and Mayorga-Fernández, 2020). Students with higher DC demonstrate greater self-efficacy in digital environments, enhancing their intrinsic motivation to engage with TEL (Gisbert Cervera and Caena, 2022) and approaching academic challenges with enthusiasm rather than apprehension (Mehrvarz et al., 2021).

DC enhances master’s students’ autonomy by enabling personalisation and wider community connection, thereby strengthening intrinsic motivation through greater agency (Esteve-Mon et al., 2019); in collaborative settings, digitally competent students are more motivated to participate in online activities (Hidayat-Ur-Rehman, 2024). In line with self-determination theory, DC satisfies needs for competence and relatedness, increasing intrinsic motivation (Thelma et al., 2024; Zhaisanova, 2022). In Kazakhstan, targeted DC development is linked to significantly higher learning motivation, particularly for research and professional skills (Kerimbayev et al., 2017). Hence, the following hypothesis is posited:

H3: Digital competency positively influences the learning motivation of master’s students.

Studies have shown that highly motivated students demonstrate greater engagement across cognitive, emotional, and behavioral dimensions (Lo, 2024; Ryan and Deci, 2000), particularly at the master’s level, where sustained engagement with complex material is essential. Zhaisanova (2022) found that motivation directly influences students’ investment in their studies, with engagement increasing significantly when motivation shifts from extrinsic factors to intrinsic interest in mastering a discipline.

The self-determination theory provides a framework for understanding this relationship. Tazhitova et al. (2022) discovered that when students’ needs for autonomy, competence, and relatedness were satisfied, they showed higher emotional and cognitive engagement in Kazakhstan’s higher education context. Fernández-Batanero et al. (2021) demonstrated that motivated students more actively participated in online communities and persisted through challenges in digital learning in technology-enhanced environments. Abdillah and Wahyuilahi (2025) conducted longitudinal research and confirmed that fluctuations in motivation directly predicted changes in engagement metrics, with higher motivation leading to increased participation, time spent learning, and higher-quality academic work. Based on this substantial body of empirical and theoretical evidence, we propose the following:

H4: Learning motivation has a statistically significant positive effect on student engagement.

DC significantly fosters self-efficacy among master's students in digital academic environments. Self-efficacy, the belief in one's capacity to execute the behaviors required for specific performance outcomes, has been consistently linked to DC in higher education research (Basilotta-Gómez-Pablos et al., 2022). Studies show that as students develop DC, they experience enhanced self-efficacy regarding technology-mediated learning tasks. Guillén-Gámez and Mayorga-Fernández (2020) found that master's students receiving targeted DC development reported significantly higher self-efficacy beliefs across diverse disciplines and demographics.

Dang et al. (2024) identified that mastery experiences with digital technologies provide concrete evidence of capabilities, which is the most powerful source of self-efficacy in Bandura’s theory, with each successful interaction strengthening students’ belief in navigating future technological challenges. DC development in social learning contexts provides opportunities for vicarious learning and social persuasion. Through peer observation and feedback, Cabero-Almenara et al. (2021) demonstrated that collaborative digital projects enhanced technology-related self-efficacy. In Kazakhstan specifically, Kerimbayev et al. (2017) observed that students who developed stronger DC showed marked improvements in technology-related self-efficacy, particularly pronounced for those with previously low confidence, suggesting that DC development benefits those with self-efficacy gaps. Based on this evidence, we propose:

H5: Digital competency has a positive effect on self-efficacy among master’s students.

Self-efficacy significantly influences engagement in higher education (Younas et al., 2025a; Younas et al., 2025b). Students with strong self-efficacy beliefs approach challenging tasks confidently, persist through difficulties, and maintain engagement despite obstacles (Chang et al., 2022; Schunk and DiBenedetto, 2022). Self-efficacy positively affects behavioral, emotional, and cognitive engagement dimensions. Behaviorally, Romero-García et al. (2020) found that students with higher self-efficacy participated more actively in classroom activities and persisted longer with challenging tasks, with self-efficacy beliefs predicting engagement throughout the semester.

Zhao et al. (2021) found that graduate students with high self-efficacy reported greater enjoyment of learning activities, lower anxiety about academic challenges, and stronger identification with their disciplines as key indicators of emotional engagement essential for sustaining engagement with demanding graduate coursework. Cabero-Almenara et al. (2020) demonstrated that master’s students with high self-efficacy employed more complex cognitive strategies, such as critical thinking and metacognitive regulation, with course materials. This relationship is especially important in technology-enhanced environments, where Mehrvarz et al. (2021) found that DS self-efficacy strongly predicted engagement with online platforms and technology-mediated tasks. On the basis of this substantial body of evidence demonstrating the positive influence of self-efficacy on multiple dimensions of student engagement, we propose:

H6: Self-efficacy has a positive effect on student engagement.

Student engagement predicts academic performance across educational contexts, particularly in higher education, where independent learning and deep cognitive processing are essential (Casanova et al., 2024; Wong and Liem, 2022). Engaged students demonstrate superior performance across multiple indicators. Cabero-Almenara et al. (2025) found that highly engaged master's students achieved significantly higher grade point averages (GPA) and completed programs within expected timeframes, with engagement making unique contributions beyond prior academic achievement.

Based on this empirical evidence linking student engagement to academic performance, we propose:

H7: Student Engagement has a positive effect on academic performance.

This relationship operates through several mechanisms. Engaged students invest more mental effort in processing course material. Muammar et al. (2023) demonstrated that cognitively engaged students employed more sophisticated learning strategies and critical thinking, translating to superior assessment performance. Fernández-Batanero et al. (2021) found that consistent attendance and participation accumulated learning opportunities over time, substantially affecting performance outcomes. Emotionally, Basilotta-Gómez-Pablos et al. (2022) showed that students with an emotional connection to learning experiences persisted through challenges and maintained motivation, which is particularly important for graduate students navigating intensive coursework. Abdillah and Wahyuilahi (2025) found that active engagement with digital platforms produced better learning outcomes in technology-enhanced environments, with engagement mediating the relationship between digital resource access and performance.

The concept (Figure 1) represents how TEP serves as the foundation that influences DC (H1), which subsequently affects LM (H3) and self-efficacy (H5) while directly affecting Student Engagement (H2). Both LM (H4) and self-efficacy (H6) further contribute to student engagement, creating multiple influence pathways.

Figure 1 (flowchart): Technology-Enhanced Pedagogy predicts Digital Competence (β = 0.58***), which predicts Student Engagement (β = 0.32***) and Self-Efficacy (β = 0.51***); Learning Motivation (β = 0.47***) and Self-Efficacy (β = 0.29***) also predict Student Engagement, which predicts Academic Performance (β = 0.43***).

Figure 1. Integrated conceptual model of digital competence development and its influence on learning outcomes among master’s students in Kazakhstan (created by authors).

Finally, student engagement leads to improved academic performance (H7), which completes the theoretical framework. This integrated concept captures the direct and indirect relationships between DC development and educational outcomes in the context of Kazakhstan’s higher education system.
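As a rough illustration, the standardized coefficients reported in Figure 1 can be multiplied along each pathway to gauge indirect effects. The sketch below is ours, not part of the authors' analysis; note that the DC → LM coefficient is not listed in the figure, so the motivation pathway is omitted.

```python
# Tracing path products from the standardized betas in Figure 1
# (illustration only; not a substitute for the full path analysis).
BETA = {
    ("TEP", "DC"): 0.58,   # Technology-Enhanced Pedagogy -> Digital Competence (H1)
    ("DC", "ENG"): 0.32,   # Digital Competence -> Student Engagement (H2)
    ("DC", "SE"): 0.51,    # Digital Competence -> Self-Efficacy (H5)
    ("LM", "ENG"): 0.47,   # Learning Motivation -> Student Engagement (H4)
    ("SE", "ENG"): 0.29,   # Self-Efficacy -> Student Engagement (H6)
    ("ENG", "AP"): 0.43,   # Student Engagement -> Academic Performance (H7)
}

def path_effect(*nodes):
    """Product of standardized coefficients along a chain of nodes."""
    effect = 1.0
    for a, b in zip(nodes, nodes[1:]):
        effect *= BETA[(a, b)]
    return effect

# Direct DC -> Engagement plus the indirect route via Self-Efficacy
dc_to_eng = path_effect("DC", "ENG") + path_effect("DC", "SE", "ENG")
print(round(dc_to_eng, 3))                        # ~0.468
print(round(path_effect("TEP", "DC", "ENG"), 3))  # ~0.186
```

Under this reading, digital competence's association with engagement (≈ 0.47, excluding the unlisted motivation route) is roughly half direct and half carried through self-efficacy.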

2 Materials and methods

2.1 Research design

This study employed a quasi-experimental design with pre- and post-test measurements to evaluate a novel technology-enhanced educational concept for enhancing DC and LM among master's students in Kazakhstan. An experimental cohort received the targeted intervention, while a control cohort followed the standard curriculum; participants were not randomly assigned. In keeping with applied educational research, the design incorporated iterative plan-implement-observe-reflect cycles, enabling context-sensitive, real-time refinements informed by continuous stakeholder feedback.

2.2 Participants and sampling

Participants were recruited from five public pedagogical universities. The institutions span southern and eastern cities, including a specialized women’s teacher training university and a bilateral multilingual university, providing significant institutional diversity in key aspects that influence educational activities: program focus and language of instruction. A stratified purposive sampling strategy was used at the institutional level to maximize external validity and reduce selection bias. The universities were chosen ex ante to capture variation on four design-relevant axes known to moderate educational intervention effects:

1. Geography: two metropolitan universities (Almaty) and three universities in major urban centers (Turkistan, Shymkent, and Ust-Kamenogorsk).

2. Institutional type: a national flagship pedagogical university, a specialized women's pedagogical university, a bilateral multilingual university, and two comprehensive universities with pedagogical programs.

3. Language of instruction: programs conducted in Kazakh, Russian, and multilingual streams.

4. Cohort structure: institutions with sufficiently large first-year master's cohorts and parallel course sections to support intact experimental and control groups, thereby minimizing cross-contamination.

Universities were included if they met the following criteria: held state accreditation for master's programs in pedagogical education; offered ≥ 2 parallel first-year cohorts with comparable curricula; could integrate the intervention into the standard curriculum without altering assessment policies; provided administrative approval, a site coordinator, and access to a learning management system; and agreed to data-sharing and ethical procedures. Institutions were excluded for insufficient cohort size (< 30 students per program), non-comparable curricula, organizational instability, or inability to prevent cross-contamination. This strategy yielded a set of institutions that was both diverse (supporting generalizability) and operationally comparable (supporting internal validity).

2.3 Participant selection criteria

The target population consisted of first-year master’s students enrolled in pedagogy-related programs (e.g., Curriculum and Instruction, Subject Didactics, and Educational Management).

The inclusion criteria were: enrollment in a relevant master's program during the Fall 2024 semester; age ≥ 18 years; proficiency in the language of instruction (Kazakh or Russian); ability to attend scheduled sessions; provision of informed consent; and completion of baseline assessments.

The exclusion criteria were: prior formal exposure to substantially similar interventions within the preceding 12 months; concurrent enrollment in a course duplicating the intervention's core components; academic leave or exchange student status with an expected attendance rate < 70%; administrative or scheduling conflicts preventing consistent participation; or failure to complete the baseline assessment. Program administrators and course coordinators recruited participants through in-class announcements and university email. Participation was voluntary, and no academic incentives were offered. To reduce selection bias, comparable course sections within each institution were designated as either experimental or control groups, with a 1:1 ratio. Baseline equivalence between groups was assessed for gender, age, undergraduate GPA, and teaching experience (Table 1). Any residual imbalances were statistically controlled for using covariate-adjusted models with university fixed effects and cluster-robust standard errors.


Table 1. Baseline balance and allocation diagnostics.

The final analytical sample comprised 236 students, evenly distributed between the experimental (n = 118) and control (n = 118) groups. The pooled sample had a mean age of M = 24.7 years (SD = 2.3); 76.3% were female; the mean undergraduate GPA was M = 4.2 (SD = 0.4) on a 5-point scale; and 34% reported prior K–12 teaching experience (median 6 months). The language of instruction distribution was 82% Kazakh, 15% Russian, and 3% multilingual. Table 2 presents the detailed distributions of participants by university, group allocation, and key descriptors.


Table 2. The distribution of participants by university, group allocation, and key descriptors.

G*Power 3.1 (Faul et al., 2009) was used for an a priori power analysis for a two-independent-means t-test (two-tailed α = 0.05, power 1–β = 0.80). Anticipating a conservative effect size for an educational setting (Cohen’s d = 0.40), a minimum of 99 participants per group was required (Cohen, 2013). Accounting for an estimated 15–20% attrition and planned subgroup analyses, a target of ≥ 115 per group was set. The achieved sample size (n = 118 per group) provides statistical power of ≈ 0.87 for d = 0.40 and > 0.95 for d = 0.50, exceeding the planned threshold.
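As a rough cross-check of the figures above, the G*Power calculation can be approximated with the standard normal-approximation formulas for a two-independent-means t-test. This is an illustrative sketch, not the exact algorithm G*Power uses; the z-quantiles are hardcoded constants.

```python
from math import ceil, erf, sqrt

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Standard normal quantiles for two-tailed alpha = .05 and power = .80
Z_ALPHA = 1.959964   # z_{1 - .05/2}
Z_BETA = 0.841621    # z_{.80}

def n_per_group(d: float) -> int:
    """Minimum n per group for a two-independent-means t-test
    (normal approximation)."""
    return ceil(2 * ((Z_ALPHA + Z_BETA) / d) ** 2)

def achieved_power(d: float, n: int) -> float:
    """Approximate power for effect size d with n participants per group."""
    return normal_cdf(d * sqrt(n / 2) - Z_ALPHA)

print(n_per_group(0.40))                      # -> 99 participants per group
print(round(achieved_power(0.40, 118), 2))    # -> 0.87
print(round(achieved_power(0.50, 118), 2))    # -> 0.97
```

The approximation reproduces the reported minimum of 99 per group and the achieved power of ≈ 0.87 for d = 0.40.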

2.4 Ethical compliance

This study was conducted in accordance with the ethical principles for scientific research stated in the Declaration of Helsinki. Ethical approval was granted by the ethics committee of Abai Kazakh National Pedagogical University (protocol №30, dated September 2nd, 2024). Explicit oral consent to participate was obtained from all participants, ensuring that they took part voluntarily and consciously. Participants were informed that the survey data would be used only for academic purposes and that their identities and personal information would be kept strictly confidential. Ethical rules and confidentiality principles were strictly followed throughout the research. Because the study posed minimal risk, was conducted in a safe educational environment, and ensured full data anonymization, the committee granted a waiver of written informed consent. Participants provided oral informed consent after reviewing an information sheet outlining the research purpose and procedures, potential risks and benefits, confidentiality measures, and the right to withdraw from participation at any time without penalty. The process of obtaining verbal consent was approved by the ethics committee, registered in the research documentation log, and securely stored in accordance with the institution’s internal policy. No video or audio recordings allowing personal identification were published; all data were collected anonymously and stored on institutional servers with restricted access.

2.5 Educational concept

The educational concept was a comprehensive 14-week program designed to address digital inequity and low LM among Kazakhstan’s higher education students. It integrates five interconnected pedagogical components grounded in Self-Determination Theory (Ryan and Deci, 2000), Social Cognitive Theory (Bandura, 1997), and the European DigCompEdu framework.

2.6 Core components

The proposed educational concept consists of five components that work together to create a comprehensive learning environment.

1. TEL integrates diverse digital tools (Google Workspace, Canva, Miro, Trello, and Moodle) into authentic academic tasks, combining face-to-face instruction (2 h/week) with online sessions (1.5 h/week). This component emphasizes progressive skill-building from basic DC to advanced content creation and collaboration competencies, ensuring that students develop both technical proficiency and confidence in digital environments.

2. The microlearning architecture divides complex DC into specialized 10-15-min modules focused on specific skills (e.g., data visualization, information evaluation, and digital communication). The structure follows sequential progression with mastery-based advancement criteria, allowing students to access specific modules when they are needed for immediate application in their academic projects.

3. The gamification system incorporates digital badges, leaderboards, and points aligned with learning objectives to maintain engagement through healthy competition and instant feedback. Regular challenges and competitive quizzes provide immediate feedback while creating a dynamic learning environment that appeals to diverse motivational preferences and sustains student interest throughout the implementation period.

4. Innovative Assessment Strategies employs continuous formative assessment via peer review, reflective digital journals, comprehensive e-portfolios, and authentic assignments requiring real-world skill application. These strategies simultaneously measure progress and enhance learning while developing critical thinking and metacognitive awareness, which are essential for lifelong learning.

5. Collaborative Digital Projects create interdisciplinary teams that tackle authentic academic challenges with progressive complexity, from simple co-creation to sophisticated collaborative research. These projects include structured protocols for digital collaboration and communication, ensuring that all participants develop essential skills for working effectively in technology-mediated environments while experiencing the social benefits of collaborative learning.

2.7 Implementation phases

The 14-week implementation was structured into five phases. Foundation Building (Weeks 1–2) includes DC baseline assessment, framework introduction, individual learning plan development, core platform training, and team formation. Information Literacy and Communication (Weeks 3–5) includes microlearning modules, workshops on collaboration tools, mini-projects, gamified assessments, and peer feedback activities. Digital Content Creation (Weeks 6–8) encompasses multimedia development units, creative workshops, portfolio initiation, mid-point assessment, and cross-university collaboration. Problem-Solving and Innovation (Weeks 9–11) introduces AI tools, problem-based learning, complex collaborative projects, peer assessment, and innovation challenges. Integration and Application (Weeks 12–14) involves comprehensive projects, portfolio finalization, self-assessment, collaborative reflection, and final evaluation.

2.8 Data collection methods

The study employed a mixed-methods approach for data collection, gathering both quantitative and qualitative data to provide comprehensive insights into the educational concept’s effectiveness. Two primary instruments were adapted and validated for use in Kazakhstan’s educational context for quantitative assessment.

2.8.1 Quantitative measures

Digital Competence Assessment (DCA): Adapted from the DigCompEdu questionnaire. The original 90 items, covering five competence areas (i.e., information processing, communication, content creation, safety, and problem-solving), were contextually adapted for master’s students (e.g., replacing “training” with “studies,” “colleagues” with “peers,” and “teaching practice” with “academic practice”) while maintaining the original meaning and structure of the items. The items were rated on a 5-point Likert scale from elementary (1) to expert (5). The adapted instrument underwent rigorous translation and cultural adaptation for Kazakhstan’s context, followed by pilot testing with students (n = 45) who did not participate in the main study. The results demonstrated high internal consistency (Cronbach’s α = 0.87). Appendices A and B detail the adapted DigCompEdu questionnaire items and psychometric properties, including content validity, factor analysis, and reliability coefficients.
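The internal-consistency coefficient reported above can be computed directly from an item-score matrix using the standard Cronbach’s α formula. The sketch below uses a toy respondents-by-items matrix with hypothetical Likert responses, not the pilot data:

```python
from statistics import pvariance

def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha for a respondents x items score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(items[0])  # number of items
    item_vars = [pvariance([row[j] for row in items]) for j in range(k)]
    total_scores = [sum(row) for row in items]
    return k / (k - 1) * (1 - sum(item_vars) / pvariance(total_scores))

# Toy 4-respondent, 3-item matrix of 5-point Likert responses (hypothetical)
scores = [[4, 5, 4], [3, 3, 3], [5, 5, 4], [2, 3, 2]]
print(round(cronbach_alpha(scores), 2))  # -> 0.96
```

In practice α would be computed over all 90 DCA items; the toy matrix only illustrates the arithmetic.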

The Academic Motivation Scale (AMS) was used to measure students’ motivation. The 28-item scale (Vallerand et al., 1992), based on self-determination theory, was adapted for master’s students by changing education level-specific terminology (e.g., “go to school” to “pursue your master’s degree,” “high-school degree” to “master’s degree,” “teachers” to “professors”) while maintaining the original meaning and structure of the items. The AMS assessed seven motivational subscales across three domains: intrinsic motivation (to know, toward accomplishment, and to experience stimulation), extrinsic motivation (identified regulation, introjected regulation, and external regulation), and amotivation. Participants responded to statements such as “Because I experience pleasure and satisfaction while learning new things” on a 5-point Likert scale ranging from “Does not correspond at all” (1) to “Corresponds exactly” (5). The adapted scale showed acceptable reliability (Cronbach’s α = 0.82). Appendices C and D present the adapted AMS questionnaire items and their detailed psychometric properties, including content validity, factor analysis, and reliability coefficients.

Both the DCA and AMS were administered at baseline (pre-test, Week 1) and immediately post-intervention (post-test, Week 15) to capture changes in these constructs over time for both experimental and control groups.

2.8.2 Qualitative measures

Post-intervention, semi-structured interviews were conducted with the students to explore experiences and perceived outcomes. Focus group discussions were held to gather collective insights on the intervention. Weekly structured classroom observations were conducted using a protocol documenting behavioral engagement, technology use, and instructional interactions. The implementation fidelity logs were completed weekly by the instructors in the experimental group. Academic performance data included course grades, portfolio scores, and assignment completion rates collected from universities after the intervention.

2.9 Data analysis

A comprehensive statistical approach was employed for quantitative data analysis. After testing for normality (Shapiro–Wilk) and baseline equivalence (independent t-tests), intervention effects were assessed using paired t-tests for within-group changes and ANCOVA for between-group comparisons, controlling for pretest scores.
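The within-group change test reduces to a one-sample t-test on the pre/post difference scores. A minimal sketch on hypothetical scores (not the study data) illustrates the computation:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre: list[float], post: list[float]) -> tuple[float, int]:
    """Paired-samples t statistic and degrees of freedom:
    t = mean(diff) / (sd(diff) / sqrt(n)), df = n - 1."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / sqrt(n))
    return t, n - 1

# Toy pre/post scores for five students (hypothetical values)
pre = [2.8, 3.1, 2.5, 3.0, 2.7]
post = [3.6, 3.5, 3.1, 3.8, 3.2]
t, df = paired_t(pre, post)
print(df)            # -> 4
print(round(t, 2))   # -> 7.75
```

The resulting t is then compared against the t distribution with n − 1 degrees of freedom; in the study this was done in SPSS.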

Effect sizes were estimated using Cohen’s d to quantify practical significance. Relationships between variables were examined using Pearson correlations, and the hypothesized conceptual model was tested through path analysis within a structural equation modeling (SEM) framework. Subgroup analyses explored potential moderating demographic factors. All analyses were conducted using SPSS and AMOS with a significance threshold of p < 0.05.
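The two effect-size statistics used in this study follow from standard conversion formulas: partial η² can be recovered from an ANCOVA F statistic and its degrees of freedom, and Cohen’s d from group means and SDs via the pooled standard deviation. The sketch below reproduces the values reported later in the Results from the published summary statistics:

```python
def partial_eta_sq(f_stat: float, df_effect: int, df_error: int) -> float:
    """Partial eta-squared recovered from an F statistic:
    eta_p^2 = F*df1 / (F*df1 + df2)."""
    return (f_stat * df_effect) / (f_stat * df_effect + df_error)

def cohens_d(m1: float, sd1: float, n1: int,
             m2: float, sd2: float, n2: int) -> float:
    """Cohen's d for two independent groups using the pooled SD."""
    pooled_var = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    return (m1 - m2) / pooled_var ** 0.5

# Reproduce the effect sizes reported in the Results section
print(round(partial_eta_sq(197.86, 1, 233), 2))                 # -> 0.46 (H1)
print(round(partial_eta_sq(92.47, 1, 233), 2))                  # -> 0.28 (H5)
print(round(cohens_d(87.64, 6.85, 118, 81.35, 7.43, 118), 2))   # -> 0.88 (H7)
```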

Qualitative data from interviews and focus groups were audio-recorded, transcribed verbatim, and analyzed using a thematic analysis approach. A codebook was developed based on the theoretical constructs of the study, and two researchers achieved intercoder reliability through consensus. Representative quotations were selected to illustrate the quantitative findings.

2.10 Procedure

2.10.1 Phase 1: preparation and baseline assessment (September 1–14, 2024)

During the first 2 weeks of September 2024, comprehensive preparation for the implementation of the proposed educational concept began. Thirty faculty members (six from each participating university) completed 3 days of intensive training from September 1 to 3, 2024. This training program focused on the components of the educational concept, digital tools integration, and implementation protocols, ensuring the consistent application of the new concept across all participating institutions. Participant recruitment was conducted during the first week of classes through departmental announcements and direct invitations. Group assignment was completed by September 10, with participants assigned to experimental or control groups based on their course sections while maintaining demographic balance across age, gender, and initial self-reported DC levels. Baseline assessments of DC and LM were administered to all participants via online Qualtrics surveys between September 11 and 14, 2024. These pre-tests established comparative baselines for both groups before the initiation of the intervention. Concurrently, the initial qualitative data collection included interviews with a sample of 10 students from each group to establish baseline perceptions regarding digital learning, motivation, and expectations for their graduate studies.

2.10.2 Phase 2: intervention implementation (September 18–December 15, 2024)

The implementation of the experimental group’s educational concept commenced on September 18, 2024, and continued through December 15, 2024, spanning 14 weeks of the fall semester. The intervention followed a progressive structure designed to build digital competencies while simultaneously addressing motivational factors. The first 2 weeks (September 18–29, 2024) served as an orientation period during which the experimental group participants were introduced to the DC framework and foundational technology skills. The students completed baseline self-assessments, developed individual digital learning plans, and received training on core platforms, including Google Workspace and Moodle. This initial phase established a common foundation while addressing participants’ varying entry-level skills. The subsequent 3 weeks (October 2–20, 2024) focused on information literacy and communication. Students engaged in short, focused 15-min microlearning units on information evaluation, participated in workshops on digital collaboration tools such as Miro and Trello, and completed their first collaborative mini-project on information verification. This module incorporated gamified quiz competitions on DC to stimulate engagement and reinforce learning.

The intervention progressed to digital content creation from October 23 to November 10, 2024. This three-week module included microlearning units on multimedia content development, hands-on workshops with creative tools like Canva, and the initiation of digital portfolio assessment. Midpoint data collection occurred during this period, with the first set of focus group discussions conducted between November 8 and 10, 2024, to gather preliminary feedback on the concept implementation. The digital problem-solving and innovation module spanned weeks 9–11 (November 13–December 1, 2024). Participants were introduced to AI tools and their ethical applications, engaged in problem-based learning activities using digital resources, and initiated cross-university collaborative projects that required multiple DC applications. This phase emphasized peer assessment of digital solutions, reinforcing both technical skills and evaluative capabilities.

The final 3 weeks of implementation (December 4–15, 2024) focused on the integration and application of acquired competencies. Students completed comprehensive projects requiring multiple DS, finalized their e-portfolios for presentation, conducted self-assessments of their DC development, and submitted reflective digital journals documenting their learning journey. The implementation ran for 14 weeks, with each week including 2 h of structured face-to-face instruction, 1.5 h of live online sessions (such as virtual classes and real-time discussions), and 3–4 h of self-paced digital work (including video lessons, independent projects, and forum discussions). The weekly integration of gamification elements, such as badges, points, and leaderboard updates, helped maintain student engagement and motivation. The experimental group’s schedule provided consistent exposure to all conceptual components while offering flexibility for application across different disciplinary contexts.

Participants in the control group followed the traditional curriculum for the same courses during this period. They engaged with similar content without the specialized DC framework, structured microlearning approach, gamification elements, or collaborative digital projects. Weekly classroom observations were conducted in parallel sessions for both groups to document differences in engagement patterns and instructional approaches.

2.10.3 Phase 3: post-intervention assessment and data collection (December 18, 2024–January 12, 2025)

The final phase of the study encompassed comprehensive assessment and data collection between December 18, 2024, and January 12, 2025. Post-test administration of the DCA and AMS was conducted for both groups during the week of December 18–22, 2024, using the same online platform employed for pre-testing. Changes in the primary outcome variables after the 14-week intervention were measured using these instruments. Semi-structured interviews were conducted with 30 participants (15 from each group) between December 25 and 29, 2024. Through these interviews, students’ experiences, perceived outcomes, and contextual factors that influenced the effectiveness of the educational concept were explored. Each interview lasted 30–45 min and was conducted by two study authors. Interview transcripts were reviewed to identify representative quotes illustrating key patterns observed in the quantitative findings, with selected quotes providing participant voices to support the interpretation of statistical results. Additionally, between January 3 and 5, 2025, six focus group discussions were held, with three groups from each experimental condition. Each focus group brought together 6–8 students for 60–90-min sessions. These group discussions helped gather shared viewpoints about the new educational concept, including how students worked together, what collaborative learning was like, and how they collectively experienced DS growth.

Academic performance data, including course grades and completion rates, were collected for both groups during the week of January 8–12, 2025, with appropriate permission from university registrars and participant consent. During the same period, structured debriefing sessions with participating faculty were conducted to gather implementation insights and document institutional perspectives on the effectiveness and sustainability of the new concept.

2.11 Data collection procedure

Quantitative data were collected via bilingual (Kazakh and Russian) online surveys administered through the Qualtrics platform during pre-testing and post-testing, aligned with the start and end of the 14-week intervention period. Each participant was assigned a unique alphanumeric study identifier (format: site-cohort-case), automatically generated by Qualtrics. The site coordinator stored the re-identification key locally; it was never accessible to the research team, ensuring participant anonymity. The DCA (90 items, aligned with DigComp) and the AMS (28 items) were administered in a fixed order. The surveys incorporated embedded attention checks (e.g., two instructed response items) and employed soft-check logic to minimize missing data. Median completion times were approximately 18–22 min for the DCA and 8–10 min for the AMS. Automated reminder emails were sent to non-completers at 24- and 72-h intervals. The completion rate exceeded 95% for both survey waves; instances of partial completion were flagged in real time by coordinators to facilitate on-site assistance if needed. In January 2025, academic performance metrics, including course grades, digital portfolio scores, research project ratings, and assignment completion rates, were obtained directly from program offices using only the participants’ study identifiers. Program offices exported gradebooks and final rubric scores directly from institutional systems. A research assistant and the site coordinator then independently verified the match between the academic records and the survey identifiers before the files were fully de-identified for analysis.
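A site-cohort-case identifier scheme of this kind can be generated and validated programmatically. The concrete token pattern below (e.g., "U3-F24-017") is purely hypothetical, since the study does not specify the exact format Qualtrics produced:

```python
import re

# Hypothetical pattern: site token U1-U5, cohort token, 3-digit case number.
ID_PATTERN = re.compile(r"^(U[1-5])-(F24)-(\d{3})$")

def make_id(site: int, cohort: str, case: int) -> str:
    """Build a study identifier such as 'U3-F24-017' (hypothetical format)."""
    return f"U{site}-{cohort}-{case:03d}"

def parse_id(study_id: str):
    """Validate an identifier and return its components, or None."""
    m = ID_PATTERN.match(study_id)
    return m.groups() if m else None

print(make_id(3, "F24", 17))    # -> U3-F24-017
print(parse_id("U3-F24-017"))   # -> ('U3', 'F24', '017')
print(parse_id("bad-id"))       # -> None
```

Validating identifiers at both ends of the pipeline is what allows survey records and registrar exports to be matched without exposing the re-identification key.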

Qualitative data were gathered to contextualize the quantitative findings. Semi-structured interviews (30–45 min duration) were conducted with 30 students (15 from the experimental group and 15 from the control group) between December 25 and 29, 2024. A common interview guide was used, focusing on participants’ perceived changes in DC, motivation, and engagement. Additionally, six focus group discussions (three per study condition; 6–8 students per group; duration: 60–90 min) were held from January 3 to 5, 2025. All sessions were audio-recorded with participant consent and transcribed in the language of the interaction. A rigorous forward-backward translation workflow was implemented for cross-site synthesis requiring translation. A coding team of two researchers developed a codebook based on the theoretical constructs of the study (DC areas, self-efficacy, motivational regulation, and engagement). The coders met weekly to calibrate their work and resolve discrepancies through consensus; all process protocols were archived in an audit trail.

Approximately weekly structured classroom observations were conducted (totaling 12 observations per group over the 14-week period). Observers used a structured protocol to document (a) behavioral engagement (on-task behavior, voluntary contributions, persistence in overcoming difficulties), (b) technology use (type and purpose of tools, individual vs. collaborative use), and (c) instructional interactions (forms of feedback, facilitation of microlearning/gamified tasks). For each session, the observers completed a one-page checklist (using a 1–5 Likert scale) and open-ended field notes to document significant contextual factors (e.g., connectivity issues, room changes).

Implementation fidelity within the experimental group was monitored via weekly instructor logs (requiring ≤10 min to complete). These logs recorded which microlearning modules were delivered, which gamification elements were used (badges/leaderboards/points), formative assessment activities conducted (quizzes, peer reviews, reflection journals), and milestones reached in collaborative projects. Instructors attached examples (e.g., prompts, grading rubrics, and screenshots) as necessary. A cross-site coordinator reviewed the logs and conducted brief biweekly check-in meetings with all instructors to identify bottlenecks, adapt to local contexts, and ensure adherence to core intervention components. Telemetry data (e.g., clickstream or automated tracing) were not used.

Audio files, transcripts, and de-identified datasets were stored on encrypted drives only accessible to the core research team. Any protocol deviations (e.g., postponed observations due to university events) were documented in a deviation log and considered during the results interpretation.

Given the described sample, implementation protocol, and validated measurement instruments, the empirical results are presented as follows. First, we describe the sample composition and examine the baseline comparability of the experimental and control cohorts based on demographic characteristics and pretest construct scores (Tables 3, 4). Second, we use ANCOVA to evaluate the intervention effects on DC domains, controlling for pretest scores and incorporating university fixed effects. For each outcome, we report adjusted post-test means, 95% confidence intervals (CIs), partial η2 values, and the results of sensitivity checks for heteroscedasticity and alternative covariate sets (see Table 5). Third, we test hypotheses H1–H7 through correlation and path analysis, examining both the direct and indirect effects of DC on engagement and, subsequently, on academic performance, mediated by motivation and self-efficacy. Quantitative estimates are supplemented with representative quotes from interviews and focus groups to enhance interpretability, illustrating the observed mechanisms (e.g., reduced extraneous cognitive load, increased self-efficacy, shifts in motivational regulation). The section concludes with a series of robustness checks: subgroup analyses based on initial digital proficiency and language of instruction, and diagnostics for potential protocol biases (compliance logs, classroom observations).


Table 3. Demographic characteristics of the participants.


Table 4. Comparison of digital competence and learning motivation.


Table 5. Development of digital competence by group.

3 Results

3.1 Sample demographics and baseline comparisons

The final sample consisted of 236 Master’s students (118 in each group) from five major pedagogical universities in Kazakhstan. Table 3 presents the demographic characteristics of participants in both the experimental and control groups.

No significant differences were found between the experimental and control groups in terms of demographic characteristics (p > 0.05), confirming the adequacy of the allocation procedures. Baseline comparisons of digital competence and learning motivation also revealed no significant differences between groups (Table 4), indicating initial equivalence prior to the intervention.

3.2 Hypothesis testing

H1 concerned the positive effect of the TEL model on DC development. ANCOVA results showed that the experimental group demonstrated significantly higher post-test DC (M = 4.02, SD = 0.52) than the control group (M = 3.00, SD = 0.58), F(1, 233) = 197.86, p < 0.001, partial η2 = 0.46. This large effect size indicates that the educational concept strongly influenced DC (β = 0.58, p < 0.001) (Table 5). The path analysis results further confirmed this relationship. These findings support H1, demonstrating that the integrated educational concept effectively enhanced DC across all five dimensions. Qualitative data from interviews corroborated these findings, with participants in the experimental group describing progressive skill development: “By the end of each week, I’d look back and think, ‘Wow, I actually know how to do this stuff now.’ It wasn’t always easy, but breaking it down made it manageable.” (Female, 26, Pedagogy).

H2 tested the effect of DC on student engagement. Correlation analysis revealed a significant positive relationship between post-test DC and student engagement (r = 0.64, p < 0.001). The path analysis revealed a significant effect of DC on student engagement (β = 0.32, p < 0.001), supporting H2. Master’s students with higher DC scores demonstrated more active participation in digital learning environments, with significantly higher contributions to online discussions (r = 0.58, p < 0.001), digital content creation (r = 0.61, p < 0.001), and peer collaboration (r = 0.53, p < 0.001). This pattern was also reflected in the interviews: “I used to watch everyone else posting in the group chats while I stayed quiet. Half the time, I could not figure out how to share my screen or upload my work properly. But gradually, I started catching up. Now I’m usually one of the first to respond in our online sessions.” (Male, 25, Mathematics Education).

H3 posited a positive effect of DC on LM. Regression analysis showed that post-test DC significantly predicted LM (β = 0.51, p < 0.001, R2 = 0.42; Table 6). Path analysis confirmed this direct effect (β = 0.47, p < 0.001). DC had differential effects on motivation components, with stronger relationships to intrinsic motivation (r = 0.57, p < 0.001) and identified regulation (r = 0.54, p < 0.001) than to external motivation (r = 0.18, p < 0.05). DC was negatively correlated with amotivation (r = −0.42, p < 0.001). Participants in the experimental group described how improved digital abilities enhanced their motivation: “After I learned how to use that gene mapping software, I realized how much time it saved me on my lab reports. That is when it clicked that these skills were not just for grades; they would help me in my actual career. So, I started looking for other tools that could make my work more efficient.” (Female, 24, Biology Education).


Table 6. Regression analysis: predicting learning motivation by digital competence.

H4 proposed that LM would positively influence student engagement. Correlation analysis revealed a strong relationship between post-test LM and student engagement (r = 0.69, p < 0.001). Path analysis confirmed a direct effect of motivation on engagement (β = 0.38, p < 0.001), supporting H4. Multiple regression analysis revealed that intrinsic motivation (β = 0.41, p < 0.001) and identified regulation (β = 0.36, p < 0.001) were stronger predictors of engagement than external motivation (β = 0.17, p < 0.05), while amotivation negatively predicted engagement (β = −0.28, p < 0.001). Qualitative data illustrated how motivation was translated into engagement:

“There was this moment when I realized these tools weren’t just for our tech module. I started using Canva for my lesson planning assignment, then figured out how to use Google Forms for a mock assessment in another class.” (Female, 23, Primary Education).

H5 posited a positive effect of DC on self-efficacy beliefs. ANCOVA showed that the experimental group demonstrated significantly higher post-test self-efficacy (M = 3.92, SD = 0.55) than the control group (M = 3.23, SD = 0.58), F(1, 233) = 92.47, p < 0.001, partial η2 = 0.28. Regression analysis also revealed that DC significantly predicted self-efficacy after controlling for baseline measures and demographics (β = 0.56, p < 0.001, ΔR2 = 0.31). Path analysis confirmed this relationship (β = 0.51, p < 0.001) (Figure 2). Qualitative data provided insight into this relationship: “When we started, I was convinced everyone else in class was way ahead of me with this tech stuff. But after I managed to figure out a few tools on my own, I realized maybe I wasn’t so behind after all. Now I don’t automatically assume I’ll be the worst in the room when we learn something new.” (Male, 26, Computer Science Education).

Figure 2
[Image description: flowchart illustrating relationships between educational factors. Technology-Enhanced Pedagogy influences Digital Competence (H1), which affects Learning Motivation (H3) and Student Engagement (H2). Student Engagement impacts Academic Performance (H7). Digital Competence also influences Self-Efficacy (H5), which affects Student Engagement (H6). Arrows indicate direction of influence.]

Figure 2. Relationship between digital competence and self-efficacy (created by authors).

H6 proposed that self-efficacy would positively influence student engagement. Correlation analysis revealed a significant relationship between self-efficacy and student engagement (r = 0.60, p < 0.001). Path analysis confirmed a direct effect (β = 0.29, p < 0.001), supporting H6. Self-efficacy partially mediated the relationship between DC and student engagement, with a significant indirect effect (β = 0.15, 95% CI [0.08, 0.22], p < 0.001). Observations showed that students with higher self-efficacy demonstrated more initiative in technology-enhanced learning activities and persisted longer when facing technical challenges (average persistence time: high self-efficacy = 14.3 min vs. low self-efficacy = 6.8 min, t(82) = 5.74, p < 0.001). Focus group participants articulated this relationship: “Since I know my way around the discussion board and all the collaboration tools, there’s nothing holding me back from fully participating. I can easily post my thoughts, respond to classmates, and contribute to group projects without getting hung up on how to use the platform.” (Female, 24; Special Education).
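Bracketed confidence intervals for indirect effects such as the one above are conventionally obtained with a percentile bootstrap. The sketch below is illustrative only: it runs on simulated data with a hypothetical mediation structure (not the study data) and uses standardized regression slopes, whereas the study estimated the model in AMOS.

```python
import random
from statistics import mean, pstdev

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

def indirect_effect(x, m, y):
    """Standardized indirect effect a*b: a = slope of mediator on predictor;
    b = partial slope of outcome on mediator, controlling the predictor."""
    r_xm, r_my, r_xy = pearson_r(x, m), pearson_r(m, y), pearson_r(x, y)
    a = r_xm
    b = (r_my - r_xy * r_xm) / (1 - r_xm ** 2)
    return a * b

def bootstrap_ci(x, m, y, n_boot=1000, seed=42):
    """Percentile bootstrap 95% CI for the indirect effect."""
    rng = random.Random(seed)
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        estimates.append(indirect_effect([x[i] for i in idx],
                                         [m[i] for i in idx],
                                         [y[i] for i in idx]))
    estimates.sort()
    return estimates[int(0.025 * n_boot)], estimates[int(0.975 * n_boot)]

# Simulated mediated data: predictor -> mediator -> outcome
rng = random.Random(1)
x = [rng.gauss(0, 1) for _ in range(150)]
m = [0.6 * xi + rng.gauss(0, 0.8) for xi in x]
y = [0.5 * mi + rng.gauss(0, 0.8) for mi in m]
lo, hi = bootstrap_ci(x, m, y)
print(lo > 0)  # a CI excluding zero indicates a significant indirect effect
```

A CI whose lower bound stays above zero, as in the reported [0.08, 0.22], is the standard evidence for partial mediation.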

H7 posited that student engagement positively influences academic performance. Regression analysis showed that student engagement significantly predicted academic performance after controlling for prior achievement and demographics (β = 0.48, p < 0.001, ΔR2 = 0.22). Path analysis confirmed this relationship (β = 0.43, p < 0.001). The experimental group demonstrated significantly higher academic achievement (M = 87.64, SD = 6.85) than the control group (M = 81.35, SD = 7.43), t(234) = 6.73, p < 0.001, Cohen’s d = 0.88. The results also showed that student engagement mediated the relationship between the educational intervention and academic performance (indirect effect: β = 0.21, 95% CI [0.14, 0.29], p < 0.001), as shown in Table 7. Faculty observations confirmed the link between engagement and performance: “I’ve noticed that students who regularly use our online platforms tend to turn in much better work. They create more polished presentations, find better sources for their research, and generally show a deeper understanding of the material. It is not just about technical skills.” (Male Instructor, Zhanibekov University).


Table 7. Academic performance by group (created by authors).

3.3 Integrated path model

Structural equation modeling was employed to test the full theoretical model, which integrates all seven hypotheses. The model achieved a good fit across multiple indicators. The chi-square test yielded χ2(42) = 91.56, p < 0.001, and the normalized chi-square ratio (χ2/df) was 2.18, falling below the recommended threshold of 3.0. The Comparative Fit Index (CFI) reached 0.94 and the Tucker-Lewis Index (TLI) was 0.93, both exceeding the conventional criterion of 0.90. The Root Mean Square Error of Approximation (RMSEA) was 0.057 with a 90% confidence interval ranging from 0.048 to 0.065, indicating good fit as it falls below the 0.08 threshold. Finally, the Standardized Root Mean Square Residual (SRMR) value was 0.059, also below the recommended maximum of 0.08. Collectively, these indices confirm that the theoretical model effectively represented the relationships observed in the data.
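The fit evaluation above can be checked mechanically against the conventional cutoffs cited in the text (χ2/df below 3, CFI/TLI above 0.90, RMSEA/SRMR below 0.08). A minimal sketch using the reported values:

```python
# Reported fit statistics for the structural model
fit = {"chi2": 91.56, "df": 42, "CFI": 0.94, "TLI": 0.93,
       "RMSEA": 0.057, "SRMR": 0.059}

# Conventional cutoffs as cited in the text
checks = {
    "chi2/df < 3.0": fit["chi2"] / fit["df"] < 3.0,
    "CFI > 0.90":    fit["CFI"] > 0.90,
    "TLI > 0.90":    fit["TLI"] > 0.90,
    "RMSEA < 0.08":  fit["RMSEA"] < 0.08,
    "SRMR < 0.08":   fit["SRMR"] < 0.08,
}

print(round(fit["chi2"] / fit["df"], 2))  # 2.18, as reported
print(all(checks.values()))               # True: every index meets its cutoff
```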

Figure 3 illustrates the results of the path analysis for the proposed conceptual framework of this study. The model confirms that Technology-Enhanced Pedagogy has a strong influence on Digital Competence (β = 0.58, p < 0.001). Digital Competence, in turn, directly influences multiple constructs, exerting significant effects on Student Engagement (β = 0.32, p < 0.001), Learning Motivation (β = 0.47, p < 0.001), and Self-Efficacy (β = 0.51, p < 0.001). Figure 3 further illustrates how both Learning Motivation (β = 0.38, p < 0.001) and Self-Efficacy (β = 0.29, p < 0.001) serve as important mediators, significantly influencing Student Engagement. The results also revealed a statistically significant path coefficient between Student Engagement and Academic Performance (β = 0.43, p < 0.001), providing empirical support for the seventh hypothesis.


Figure 3. Path analysis results with standardized coefficients (***p < 0.001) (created by authors).

The path analysis revealed several significant indirect pathways within the integrated model. Technology-Enhanced Pedagogy indirectly influences Academic Performance through the mediating variables (total indirect effect: β = 0.17, p < 0.001), demonstrating that the educational model’s effect extends beyond immediate digital competence development to ultimately impact academic achievement. Digital Competence similarly shows a substantial indirect effect on Academic Performance (total indirect effect: β = 0.26, p < 0.001), underlining its role as a critical foundation for educational success. Digital Competence influences Student Engagement through two distinct pathways: via Learning Motivation (β = 0.18, p < 0.001) and via Self-Efficacy (β = 0.15, p < 0.001), illustrating the multiple mechanisms through which digital skills enhance student engagement.
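Under standard path-tracing rules, each two-step indirect effect is the product of the standardized coefficients along its chain, so the two reported values can be reproduced directly from the Figure 3 coefficients. A minimal sketch (construct abbreviations are ours):

```python
# Standardized path coefficients from the reported model
paths = {
    ("TEP", "DC"):  0.58,  # Technology-Enhanced Pedagogy -> Digital Competence
    ("DC", "LM"):   0.47,  # Digital Competence -> Learning Motivation
    ("DC", "SE"):   0.51,  # Digital Competence -> Self-Efficacy
    ("LM", "ENG"):  0.38,  # Learning Motivation -> Student Engagement
    ("SE", "ENG"):  0.29,  # Self-Efficacy -> Student Engagement
    ("ENG", "AP"):  0.43,  # Student Engagement -> Academic Performance
}

def indirect(*chain):
    """Product of standardized coefficients along a chain of constructs."""
    effect = 1.0
    for a, b in zip(chain, chain[1:]):
        effect *= paths[(a, b)]
    return effect

# Digital Competence -> Engagement, via motivation and via self-efficacy
print(round(indirect("DC", "LM", "ENG"), 2))  # 0.18, as reported
print(round(indirect("DC", "SE", "ENG"), 2))  # 0.15, as reported
```

Longer chains (e.g., Technology-Enhanced Pedagogy through to Academic Performance) combine the same products; small discrepancies from the reported totals reflect rounding of the published coefficients.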

4 Discussion and conclusion

This quasi-experimental study evaluated the effectiveness of a novel educational model in enhancing digital competence and learning motivation among 236 Master’s students from five Kazakhstani pedagogical universities. The intervention, which integrated technology-enhanced pedagogy, microlearning, gamification, online assessment, and collaborative digital projects, yielded significant improvements in the experimental group’s digital competence across all dimensions, with particularly strong gains in content creation and problem-solving (average increase of 1.26 points vs. 0.28 points in the control group; partial η2 = 0.46). The analyses supported a clear pathway: enhanced digital competence positively influenced learning motivation (β = 0.47) and self-efficacy (β = 0.51), which increased student engagement (β = 0.38 and β = 0.29, respectively), ultimately improving academic performance (β = 0.43) as evidenced by higher course grades, research quality, digital portfolio scores, and assignment completion rates. The educational model had the strongest impact on students with an elementary baseline of digital skills (ΔM = 1.53), suggesting its potential to address digital inequality in Kazakhstan’s higher education. Follow-up assessments 1 month post-intervention revealed minimal decay in both competence and motivation gains, indicating durable rather than temporary effects with potential long-term impact on students’ academic and professional development.

The evidence confirms that technology-enhanced pedagogy has a positive influence on digital competence development, aligning with Basilotta-Gómez-Pablos et al.'s (2022) findings on the strategic integration of digital tools and Fernández-Batanero et al.'s (2021) emphasis on pedagogical frameworks. This study extends these findings to Kazakhstan, demonstrating that a model incorporating diverse digital tools, microlearning, and gamification effectively enhances the digital competence of Master’s students, supporting Kerimbayev et al.'s (2017) observations about technology-enhanced pedagogy in Kazakhstan’s universities.

The data confirm that improved digital competence directly enhances student engagement, with digitally competent students demonstrating more active participation in online discussions and enhanced peer collaboration, aligning with Cabero-Almenara et al.’s (2025) findings. Qualitative data revealed that as students developed confidence in their digital abilities, they became more willing to contribute online and persist through technical challenges, consistent with Muammar et al.'s (2023) findings that digitally competent students experience fewer technical barriers in digital learning platforms.

The evidence confirms that digital competence has a positive influence on learning motivation, particularly intrinsic motivation and identified regulation, aligning with Self-Determination Theory’s position that satisfying needs for competence and autonomy enhances intrinsic motivation (Ryan and Deci, 2000). This finding extends Gisbert Cervera and Caena's (2022) findings on digital competence and self-efficacy to Kazakhstan’s context, where Abildina et al. (2025) identified motivational challenges in digital learning environments, supporting Thelma et al.'s (2024) observation that digital competence fulfills students’ psychological needs.

Learning motivation plays a critical role in student engagement, with intrinsic motivation and identified regulation serving as stronger predictors than external motivation, aligning with Self-Determination Theory (Ryan and Deci, 2000) and Zhaisanova et al.’s (2022) research on Kazakhstani students. The current study extends this understanding to technology-enhanced environments, supporting Fernández-Batanero et al.'s (2021) findings that motivated students participate more actively in online learning communities and persist through challenges.

Digital competence development has a positive influence on self-efficacy beliefs, aligning with Bandura's (1997) theory, which identifies mastery experiences as the most powerful source of efficacy beliefs. This relationship extends the findings of Guillén-Gámez and Mayorga-Fernández (2020) from European contexts to Kazakhstan’s higher education, supporting Dang et al.'s (2024) observation that mastery experiences with digital technologies provide concrete evidence of capabilities.

Self-efficacy positively influences student engagement, partially mediating the relationship between digital competence and engagement, aligning with Chang et al.'s (2022) research. Observation data showed students with higher self-efficacy demonstrated more initiative and persisted longer when facing technical challenges, supporting Mehrvarz et al.'s (2021) finding that self-efficacy regarding digital learning skills predicted engagement with online platforms and technology-mediated academic tasks.

Student engagement plays a crucial role in enhancing academic performance, aligning with research that shows engaged students demonstrate superior academic outcomes (Casanova et al., 2024; Wong and Liem, 2022). The experimental group demonstrated significantly higher academic achievement, with this relationship mediated by student engagement, supporting Abdillah and Wahyuilahi's (2025) finding that students who actively engaged with digital platforms demonstrated better learning outcomes than those who used resources more passively.

The use of intact classroom groups without individual randomization, while minimizing educational disruption, retains a risk of hidden systematic bias. The potential influence of unaccounted factors (e.g., differences in specific cohorts’ learning cultures, variations in schedules, instructors’ informal practices) is reduced but not fully eliminated by balancing and statistical adjustments in the analysis. Several key constructs were measured primarily via self-report scales adapted for the master’s context. Despite rigorous translation, piloting, and reliability testing, the potential for social desirability effects and limited comparability across language tracks remains. Furthermore, measurement invariance across groups, languages, and universities was tested to a limited extent; consequently, observed group differences may partially reflect differential item functioning.

The intervention was designed as an integrated complex of multiple interconnected components. Although this multifaceted approach enhances ecological validity, it concurrently presents challenges in establishing a clear causal attribution of the effects to its constituent elements, such as microlearning, gamification, assessment formats, and cooperative projects. Factor and mediator effects were established at the model level but were not tested using multifactorial experimental designs. The observation period was limited to one semester, and follow-up measures for sustainability had a short time horizon. We did not assess the long-term impacts on research productivity, employment, and professional identity. Furthermore, variability in instructor readiness and local infrastructure may have affected implementation fidelity; monitoring was conducted via regular logs and observations but lacked automated telemetry (e.g., clickstreams and interaction network analysis), limiting the precision with which underlying mechanisms could be reconstructed.

The generalizability of the findings is constrained by the specific set of participating universities and the focus on pedagogical disciplines. Disciplinary areas with different task profiles (e.g., engineering, medicine, and IT) may impose distinct demands on digital actions and motivational support formats. This study did not include a cost-effectiveness analysis or a resource model for scaling, nor did it address the regulatory and ethical aspects of using generative AI under local norms.

4.1 Theoretical contributions

This study makes a significant contribution to the development of evidence-based pedagogy in the context of the digital transformation of educational systems. A novel, comprehensive educational concept has been developed, incorporating microlearning, gamification, online assessment, and collaborative digital projects for master’s programs at pedagogical universities in Kazakhstan. The study empirically corroborated all seven hypotheses. The educational intervention produced statistically significant gains in digital competence and learning motivation among master’s students in Kazakhstan’s pedagogical universities. The quantitative indicators, supported by rigorous statistical testing, provide robust evidence of the intervention’s effectiveness. These effects cohere with established theoretical accounts in educational psychology and digital learning, particularly those explaining how competence development and motivational regulation are reinforced within technology-enhanced environments.

First, this study clarifies and provides empirical support for the conceptual model that posits a sequential mediation pathway. Our model delineates a process in which effective learning design fosters the development of DC, which in turn functions as a proximal antecedent to motivation and self-efficacy. Our findings specifically demonstrate that operational DC acts as the proximal predictor of motivational regulators and self-efficacy in the digital environment, which subsequently triggers the well-established mechanisms for sustaining effort and persistence through engagement.

Second, this study provides a more detailed analysis of how microlearning and gamified course components manage a course’s cognitive architecture. Decomposing skills into short, mastered units reduces extraneous load and frees up cognitive resources for intrinsic (content-related) processing, while the recognition of achievements and transparent criteria via e-portfolios enhances the central determinants of perceived competence and autonomy within self-determination theory. This thereby connects the constructs of cognitive load, self-efficacy, and motivation regulation within the digital contexts of postgraduate teacher education.

Third, the study specifies the conditions under which the anticipated cascade from competence development to engagement and academic outcomes is observed. Specifically, the effects are contingent on the explicit alignment of learning tasks with target DC domains, coupled with the consistent implementation of formative assessment that provides feedback bridging the learning process and the final product. A further essential condition is the design of collaborative teamwork in which digital communication and the co-construction of artifacts are positioned as the central objective of instruction rather than merely the environment for learning.

Fourth, the compensatory effect (higher gains among students with low initial proficiency) indicates that DC acts as a mediator: its targeted development helps to equalize educational opportunities without lowering academic standards. This refines the discourse on the digital divide and proposes a mechanism for its reduction within teacher training programs.

4.2 Practical implications

This study offers significant practical implications for Kazakhstan’s higher education system and similar contexts. The empirically validated educational model, integrating technology-enhanced pedagogy, microlearning, gamification, online assessment, and collaborative digital projects, can be immediately implemented across various universities and disciplines to enhance digital competence and learning motivation in Master’s programs. The microlearning approach, breaking complex skills into 10–15 min units, proved particularly effective and easily implementable, while gamification elements (achievement systems, badges, leaderboards) significantly enhanced student motivation through immediate feedback and recognition.

The findings emphasize the importance of collaborative digital projects that simultaneously develop technical skills and social engagement. Educators can implement these projects progressively, beginning with simple co-creation tasks and gradually advancing to more complex collaborative challenges as students’ competencies develop. For effective implementation, universities must ensure adequate technological infrastructure (reliable internet access, necessary hardware, and software), provide training to enhance faculty digital competence, and establish clear institutional policies that support digital innovation.

The model demonstrated particular effectiveness for students with lower baseline digital competence, suggesting a strategic approach to addressing digital inequality in Kazakhstan’s higher education. Additionally, the integrated assessment strategies (peer assessment, self-evaluation, reflective journals, e-portfolios) simultaneously measure and develop competence through continuous feedback loops, becoming integral components of the learning process rather than mere evaluation methods, allowing ongoing adjustments to individual learning pathways.

4.3 Conclusion

This study investigated the impact of an integrated educational model on digital competence and learning motivation among Master’s students in Kazakhstan’s higher education system. Results showed that the model led to improvements in students’ digital skills, particularly in content creation and problem-solving areas. Students who began with basic digital skills showed greater improvement compared to those who already had advanced skills. The study found connections between improved digital competence and increased learning motivation, self-efficacy, and engagement, which corresponded with better academic results. When students were assessed again after the study ended, they maintained their improved skills and motivation, suggesting the educational model had lasting effects.

The study also revealed that digital competence frameworks can be effectively applied in Kazakhstan’s educational setting, contributing to a new understanding of both theory and practice in higher education. The findings demonstrate that when digital skills are taught using structured approaches, such as microlearning, gamification, and collaborative projects, both technical abilities and student motivation improve. Educational institutions in Kazakhstan and countries with similar contexts can use this model to update their graduate programs, help reduce digital skill gaps among students, and better prepare graduates for workplaces that increasingly require digital proficiency.

Future research should explore these outcomes longitudinally throughout entire graduate programs and expand to more diverse academic disciplines beyond pedagogical programs. Studies should incorporate authentic performance assessments measuring applied digital competencies in real-world contexts and examine institutional variations in implementation quality. Researchers should investigate how emerging technologies, such as artificial intelligence, might further enhance digital competence development and conduct comparative studies across Central Asian countries to evaluate the model’s regional adaptability and cultural responsiveness in various educational settings.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

Ethics statement

Ethical approval was considered unnecessary since the only human involvement consisted of semi-structured interviews. The interviewees provided anonymous written consent. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.

Author contributions

ZB: Data curation, Formal analysis, Investigation, Writing – original draft, Writing – review & editing. AT: Conceptualization, Methodology, Project administration, Writing – original draft, Writing – review & editing. MA: Conceptualization, Methodology, Supervision, Writing – original draft, Writing – review & editing. DS: Data curation, Resources, Validation, Writing – original draft, Writing – review & editing. BA: Data curation, Investigation, Validation, Writing – original draft, Writing – review & editing. GM: Investigation, Software, Visualization, Writing – original draft, Writing – review & editing.

Funding

The author(s) declared that financial support was not received for this work and/or its publication.

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that Generative AI was not used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2026.1675872/full#supplementary-material

References

Abayeva, N., Mustafina, L., Zhurov, V., Yerakhtina, I., and Mustafina, B. (2024a). Leveraging mathematics to enhance critical thinking in technical universities. Asian J. Univ. Educ. 20, 566–581. doi: 10.24191/ajue.v20i3.27861

Crossref Full Text | Google Scholar

Abayeva, N., Zhurov, V., and Mustafina, L. (2024b). Innovative massive open online course for young researchers: impact of implementation in Kazakhstani universities. Cogent Educ. 11:2378269. doi: 10.1080/2331186X.2024.2378269

Crossref Full Text | Google Scholar

Abdillah, M. R. N., and Wahyuilahi, M. (2025). Transforming logistics education in higher institutions: the role of digital technologies in global training contexts. Sinergi Int. J. Logist. 3, 29–42. doi: 10.61194/sijl.v3i1.737

Crossref Full Text | Google Scholar

Abildina, S., Omarova, M., and Kudarinova, A. (2025). Learning motivation as an indicator and condition of learners' psychological well-being. Nat. Center High. Educ. Dev. 49, 105–114. doi: 10.59787/2413-5488-2025-49-1-105-114

Crossref Full Text | Google Scholar

Al Alsibani, A., and Al Eladl, A. (2024). New ranking system for higher education institutions in Oman as a tool for achieving SDGs. Istanbul J. Soc. Sci. Humanit. 2, 54–61. doi: 10.62185/issn.3023-5448.2.2.6

Crossref Full Text | Google Scholar

Amin, S. M., El-Fattah Mahgoub, S. A., Tawfik, A. F., Khalil, D. E., El-Sayed, A. A. I., Atta, M. H. R., et al. (2025). Nursing education in the digital era: the role of digital competence in enhancing academic motivation and lifelong learning among nursing students. BMC Nurs. 24:571. doi: 10.1186/s12912-025-03199-2 40399954,

PubMed Abstract | Crossref Full Text | Google Scholar

Arapbayev, N. L. (2024). AI integration in the Kazakhstan school curriculum. Sci. Anal. J. 11, 868–875.

Google Scholar

Baimyrza, A., Akkuzov, A., Kaiyrbekova, U., Akmaral Kurmanayeva,, and Aizhan Serikbayeva, (2024). The impact of native and foreign languages on the linguistic identity formation of Kazakhstani youth. Forum Linguist. Stud. 6, 247–257. doi: 10.30564/fls.v6i5.6878

Crossref Full Text | Google Scholar

Baissydyk, I., Kuderinova, K., Shakhanova, R., Rizakhojayeva, G., and Zhyltyrova Z, (2023). Lexical aspect in language and culture communication. XLinguae 16, 216–223. doi: 10.18355/XL.2023.16.01.16

Crossref Full Text | Google Scholar

Bandura, A. (1997). Self-efficacy: The Exercise of Control. New York: W.H. Freeman and Company.

Google Scholar

Basilotta-Gómez-Pablos, V., Matarranz, M., Casado-Aranda, L. A., and Otto, A. (2022). Teachers’ digital competencies in higher education: a systematic literature review. Int. J. Educ. Technol. High. Educ. 19, 1–16. doi: 10.1080/21532974.2019.1646169

Crossref Full Text | Google Scholar

Cabero-Almenara, J., Guillén-Gámez, F. D., Ruiz-Palmero, J., and Palacios-Rodríguez, A. (2021). Digital competence of higher education professor according to DigCompEdu. Statistical research methods with ANOVA between fields of knowledge in different age ranges. Educ. Inf. Technol. 26, 4691–4708. doi: 10.1007/s10639-021-10476-5,

PubMed Abstract | Crossref Full Text | Google Scholar

Cabero-Almenara, J., Gutiérrez-Castillo, J. J., Palacios-Rodríguez, A., and Barroso-Osuna, J. (2020). Development of the teacher digital competence validation of DigCompEdu check-in questionnaire in the university context of Andalusia (Spain). Sustainability 12:6094. doi: 10.3390/su12156094

Crossref Full Text | Google Scholar

Cabero-Almenara, J., Palacios-Rodríguez, A., Rojas Guzmán, H., de la, Á., and Fernández-Scagliusi, V. (2025). Prediction of the use of generative artificial intelligence through ChatGPT among Costa Rican university students: A PLS model based on UTAUT2. Appl. Sci 15:3363. doi: 10.3390/app15063363

Crossref Full Text | Google Scholar

Casanova, J., Sinval, J., and Almeida, L. (2024). Academic success, engagement and self-efficacy of first-year university students: personal variables and first-semester performance. Anales de Psicología 40, 44–53. doi: 10.6018/analesps

Crossref Full Text | Google Scholar

Chang, C. F., Hall, N. C., Lee, S. Y., and Wang, H. (2022). Teachers’ social goals and classroom engagement: the mediating role of teachers' self-efficacy. Int. J. Educ. Res. 113:101952. doi: 10.1016/j.ijer.2022.101952

Crossref Full Text | Google Scholar

Cohen, J. (2013). Statistical Power Analysis for the Behavioral Sciences. New York: Routledge.

Google Scholar

Coursera (2025). Global skills report 2025 Available online at: https://www.coursera.org/skills-reports/global (Accessed August 1, 2025).

Google Scholar

Dang, T. D., Phan, T. T., Vu, T. N. Q., La, T. D., and Pham, V. K. (2024). Digital competence of lecturers and its impact on student learning value in higher education. Heliyon 10:e3731. doi: 10.1016/j.heliyon.2024.e3731

Crossref Full Text | Google Scholar

Daud, N. M. (2025). From innovation to stress: analyzing hybrid technology adoption and its role in technostress among students. Int. J. Educ. Technol. High. Educ. 22:31. doi: 10.1186/s41239-025-00529-x

Crossref Full Text | Google Scholar

Davletova, A., Orazova, N., Renobell, V., Khairullin, R., Ponkratov, V., Burmykina, I., et al. (2025b). The modern university’s mission and transformation: addressing challenges in a multipolar world. Emerg. Sci. J. 9, 398–418. doi: 10.28991/ESJ-2025-09-01-022

Crossref Full Text | Google Scholar

Davletova, A., Tolegenova, Z., Akhmetova, G., Kanaibekova, G., and Yerkegaliyeva, G. (2025a). Fostering digital culture of future teachers via open educational environment in the Republic of Kazakhstan. Int. J. Innov. Res. Sci. Stud. 8, 4579–4585. doi: 10.53894/ijirss.v8i3.7552

Crossref Full Text | Google Scholar

Esteve-Mon, F. M., Adell-Segura, J., Nebot, M. Á. L., Novella, G. V., and Aparicio, J. P. (2019). The development of computational thinking in student teachers through an intervention with educational robotics. J. Inf. Technol. Educ. Innov. Pract. 18, 139–152. doi: 10.28945/4442

Crossref Full Text | Google Scholar

European Commission (2025). 2025 state of the digital decade package Available online at: https://digital-strategy.ec.europa.eu/en/policies/2025-state-digital-decade-package (Accessed August 1, 2025).

Google Scholar

Faul, F., Erdfelder, E., Buchner, A., and Lang, A. G. (2009). Statistical power analyses using G*power 3.1: tests for correlation and regression analyses. Behav. Res. Methods 41, 1149–1160. doi: 10.3758/BRM.41.4.1149,

PubMed Abstract | Crossref Full Text | Google Scholar

Fernández-Batanero, J. M., Román-Graván, P., Montenegro-Rueda, M., López-Meneses, E., and Fernández-Cerero, J. (2021). Digital teaching competence in higher education: a systematic review. Educ. Sci. 11:689. doi: 10.3390/educsci11110689

Crossref Full Text | Google Scholar

Gisbert Cervera, M., and Caena, F. (2022). Teachers’ digital competence for global teacher education. Eur. J. Teach. Educ. 45, 451–455. doi: 10.1080/02619768.2022.2135855

Crossref Full Text | Google Scholar

Greene, J. A., and Crompton, H. (2025). Synthesizing definitions of digital literacy for the Web 3.0. TechTrends. 69, 21–37. doi: 10.1007/s11528-024-01015-3

Crossref Full Text | Google Scholar

Guillén-Gámez, F. D., and Mayorga-Fernández, M. J. (2020). Prediction of factors that affect the knowledge and use higher education professors from Spain make of ICT resources to teach, evaluate and research: a study with research methods in educational technology. Educ. Sci. 10:276. doi: 10.3390/educsci10100276

Crossref Full Text | Google Scholar

Hammoda, B., and Foli, S. (2024). A digital competence framework for learners (DCFL): a conceptual framework for digital literacy. Knowl. Manag. E-Learn. 16, 477–500. doi: 10.34105/j.kmel.2024.16.022

Crossref Full Text | Google Scholar

Hidayat-Ur-Rehman, I. (2024). Digital competence and students’ engagement: a comprehensive analysis of smartphone utilization, perceived autonomy and formal digital learning as mediators. Interact. Technol. Smart Educ. 21, 461–488. doi: 10.1108/ITSE-09-2023-0189

Crossref Full Text | Google Scholar

Hogan, C. (2025). Implementing mental health first aid training in schools: a model for teacher preparedness. Br. J. Community Nurs. 30, 340–344. doi: 10.12968/bjcn.2025.0038,

PubMed Abstract | Crossref Full Text | Google Scholar

Howard, J. L., Bureau, J. S., Guay, F., Chong, J. X., and Ryan, R. M. (2021). Student motivation and associated outcomes: a meta-analysis from self-determination theory. Perspect. Psychol. Sci. 16, 1300–1323. doi: 10.1177/1745691620966789,

PubMed Abstract | Crossref Full Text | Google Scholar

Joint Research Centre, European Commission. (n.d.). DigComp framework [digital competence framework for citizens]. Available online at: https://joint-research-centre.ec.europa.eu/projects-and-activities/education-and-training/digital-transformation-education/digital-competence-framework-citizens-digcomp/digcomp-framework_en (Accessed September 10, 2025)

Google Scholar

Kanyika, M. E., Sadykova, R., and Kosmyrza, Z. (2024). Digital literacy competencies among students in higher learning institutions in Kazakhstan. Glob. Knowl. Mem. Commun. doi: 10.1108/GKMC-04-2024-0224

Crossref Full Text | Google Scholar

Keller, J. M. (2009). Motivational Design for Learning and Performance: The ARCS Model Approach. New York: Springer Science & Business Media.

Google Scholar

Kerimbayev, N., Kultan, J., Abdykarimova, S., and Akramova, A. (2017). LMS moodle: distance international education in cooperation of higher education institutions of different countries. Educ. Inf. Technol. 22, 2125–2139. doi: 10.1007/s10639-016-9534-5

Crossref Full Text | Google Scholar

Kline, R. B. (2023). Principles and Practice of Structural equation Modeling. 5th Edn. New York: Guilford Press.

Google Scholar

Kurakbayeva, A., and Xembayeva, S. (2025). Enhancing professional abilities of university students through digital educational interventions: a study in Kazakhstani universities. Front. Educ. 9:1478622. doi: 10.3389/feduc.2024.1478622

Crossref Full Text | Google Scholar

Kushkimbayeva, A., Tymbolova, A., Yessenova, K., Sultaniyazovad, I., and Remetove, M. (2024). Increasing students’ self-esteem based on the pragmatic level of linguistic personality. Eurasian J. Appl. Linguist. 10, 72–80. doi: 10.32601/ejal.10107

Crossref Full Text | Google Scholar

Lo, N. P. K. (2024). Cross-cultural comparative analysis of student motivation and autonomy in learning: perspectives from Hong Kong and the United Kingdom. Front. Educ. 9:1393968. doi: 10.3389/feduc.2024.1393968

Crossref Full Text | Google Scholar

Marcelo, C., and Yot-Domínguez, C. (2019). From chalk to keyboard in higher education classrooms: changes and coherence when integrating technological knowledge into pedagogical content knowledge. J. Furth. High. Educ. 43, 975–988. doi: 10.1080/0309877X.2018.1429584

Mehrvarz, M., Heidari, E., Farrokhnia, M., and Noroozi, O. (2021). The mediating role of digital informal learning in the relationship between students' digital competence and their academic performance. Comput. Educ. 167:104184. doi: 10.1016/j.compedu.2021.104184

Meiramova, S. A., and Zagatova, S. B. (2025). AI-powered smart technologies for enhancing innovative English teaching in higher education in Kazakhstan. Proceedings of the International Conference on Social Science and Humanity, 2, 861–869.

Muammar, S., Hashim, K. F. B., and Panthakkan, A. (2023). Evaluation of digital competence level among educators in UAE higher education institutions using the Digital Competence of Educators (DigCompEdu) framework. Educ. Inf. Technol. 28, 2485–2508. doi: 10.1007/s10639-022-11296-x

Mynbayeva, A., Yessenova, K., and Karabutova, A. (2025). Master’s degree students’ perspectives on heutagogy: self-directed learning in the context of education 3.0 and 4.0. J. Learn. Dev. 12, 154–166. doi: 10.56059/jl4d.v12i1.1150

Noor, U., Younas, M., Saleh Aldayel, H., Menhas, R., and Qingyu, X. (2022). Learning behavior, digital platforms for learning and its impact on university student’s motivations and knowledge development. Front. Psychol. 13:933974. doi: 10.3389/fpsyg.2022.933974

Nurtayeva, D., Kredina, A., Kireyeva, A., Satybaldin, A., and Ainakul, N. (2024). The role of digital technologies in higher education institutions: the case of Kazakhstan. Probl. Perspect. Manage. 22, 562–574. doi: 10.21511/ppm.22(1).2024.45

OECD (2025a). Trends Shaping Education 2025. Paris: OECD Publishing.

OECD (2025b). Education at a Glance 2025: OECD Indicators. Paris: OECD Publishing.

Omarova, M. K., and Mirza, N. V. (2025). Factors’ learning motivation dynamics in training course: a study with students in Kazakhstan. Front. Educ. 10:1611184. doi: 10.3389/feduc.2025.1611184

Osipov, G., Karepova, S., Ponkratov, V., Karaev, A., Masterov, A., and Vasiljeva, M. (2020). Economic and mathematical methods for ranking eastern European universities. Ind. Eng. Manage. Syst. 19, 273–283. doi: 10.7232/iems.2020.19.1.273

Palacios-Rodríguez, A., Llorente-Cejudo, C., Lucas, M., and Bem-haja, P. (2025). Macroassessment of teachers' digital competence. DigCompEdu study in Spain and Portugal. Rev. Iberoam. Educ. Distancia 28, 177–192. doi: 10.5944/ried.28.1.41379

Redecker, C., and Punie, Y. (2017). European Framework for the Digital Competence of Educators: DigCompEdu. Seville: Publications Office of the European Union. Available online at: https://publications.jrc.ec.europa.eu/repository/handle/JRC107466 (Accessed May 1, 2025).

Reuters. (2025). EU to invest €1.4 billion in artificial intelligence, cybersecurity, digital skills. Available online at: https://www.reuters.com/technology/artificial-intelligence/eu-invest-14-billion-artificial-intelligence-cybersecurity-digital-skills-2025-03-28/ (Accessed August 1, 2025).

Romero-García, C., Buzón-García, O., and de Paz-Lugo, P. (2020). Improving future teachers’ digital competence using active methodologies. Sustainability 12:7798. doi: 10.3390/su12187798

Ryan, R. M., and Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. Am. Psychol. 55, 68–78. doi: 10.1037/0003-066X.55.1.68

Sarmurzin, Y., Amanzhol, N., Toleubayeva, K., Zhunusova, M., and Amanova, A. (2021). The impact of OECD research on the education system of Kazakhstan. Asia Pac. Educ. Rev. 22, 757–766. doi: 10.1007/s12564-021-09715-8

Schunk, D. H., and DiBenedetto, M. K. (2022). “Self-efficacy and engaged learners” in Handbook of Research on Student Engagement, eds. A. L. Reschly and S. L. Christenson (Cham: Springer International Publishing), 155–170.

Sharonin, Y. V., Volodina, E. V., and Volodina, I. V. (2022). Forming communicative creativity in students of a technical university on the example of the discipline “foreign language”. Prospects Sci. Educ. 2, 200–218. doi: 10.32744/pse.2022.2.12

Tazhitova, G., Kurmanayeva, D., Kalkeeva, K., Sagimbayeva, J., and Kassymbekova, N. (2022). Local materials as a means of improving motivation to EFL learning in Kazakhstan universities. Educ. Sci. 12:604. doi: 10.3390/educsci12090604

Thelma, C. C., Sain, Z. H., Shogbesan, Y. O., Phiri, E. V., and Akpan, W. M. (2024). Digital literacy in education: preparing students for the future workforce. Int. J. Res. 11, 327–344. doi: 10.5281/zenodo.13347718

UNESCO (2025). UNESCO survey: two thirds of higher education institutions have or are developing guidance on AI use.

Vallerand, R., Pelletier, L., Blais, M., Brière, N., Senecal, C., and Vallieres, E. F. (1992). The academic motivation scale: a measure of intrinsic, extrinsic, and amotivation in education. Educ. Psychol. Meas. 52, 1003–1017. doi: 10.1177/0013164492052004025

Vasiljeva, M. V., Ivleva, M. I., Volkov, Y. G., Karaev, A. K., Nikitina, N. I., and Podzorova, M. I. (2019). The development of meta-competencies in undergraduate students using personality development theory. Opción 35, 1524–1543.

Vasiljeva, M. V., Vasiljev, I. V., Chizhevskaya, E. L., and Sokolov, A. A. (2018). Assessing the efficiency of knowledge management system and its impact on GDP growth in Kazakhstan. Int. J. Knowl. Manag. Stud. 9, 176–192. doi: 10.1504/IJKMS.2018.091252

Volodina, E. V., and Volodina, I. V. (2020). Forming the readiness for communication in innovative engineering and scientific research activities among students of a technical university by means of a foreign language. Prospects Sci. Educ. 5, 122–134. doi: 10.32744/pse.2020.5.8

Wong, Z. Y., and Liem, G. A. D. (2022). Student engagement: current state of the construct, conceptual refinement, and future research directions. Educ. Psychol. Rev. 34, 107–138. doi: 10.1007/s10648-021-09628-3

World Bank (2025). Digital skills development. Available online at: https://thedocs.worldbank.org/en/doc/a607bb6e3b76d2be0f3db8db34dcf73e-0140022025/related/3EDU-WP-14-Digital-skills-development.pdf (Accessed September 1, 2025).

Yermekova, Yrymbayeva, N., Rakhimgalieva, P., and Akbidash, A. (2025). Bridging the digital divide: assessing future educators' competence in Kazakhstan's higher education through the DigCompEdu framework. Int. J. Innov. Res. Sci. Stud. 8, 1224–1238. doi: 10.53894/ijirss.v8i1.4572

Younas, M., Dong, Y., Zhao, G., Menhas, R., and Luan, L. (2025a). Unveiling digital transformation and teaching prowess in English education during COVID-19 with structural equation modelling. Eur. J. Educ. 60:12818. doi: 10.1111/ejed.12818

Younas, M., Iskander, I., El-Dakhs, D. A. S., Anwar, B., and Uzma, N. (2025b). Evaluating the effectiveness of digital scenario-based English teaching at the university level using artificial intelligence generated content. Front. Educ. 10:1670892. doi: 10.3389/feduc.2025.1670892

Younas, M., Noor, U., Zhou, X., Menhas, R., and Qingyu, X. (2022). COVID-19, students satisfaction about e-learning and academic achievement: mediating analysis of online influencing factors. Front. Psychol. 13:948061. doi: 10.3389/fpsyg.2022.948061

Zakir, S., Hoque, M. E., Susanto, P., Nisaa, V., Alam, M. K., Khatimah, H., et al. (2025). Digital literacy and academic performance: the mediating roles of digital informal learning, self-efficacy, and students’ digital competence. Front. Educ. 10:1590274. doi: 10.3389/feduc.2025.1590274

Zhaisanova, D. (2022). “Students' motivational profiles in the higher education of Kazakhstan in the context of self-determination theory: big data application” in 2022 International Conference on Smart Information Systems and Technologies; 2022 April 28–30; Nur-Sultan, Kazakhstan (New York: IEEE), 1–6.

Zhao, Y., Llorente, A. M. P., and Gómez, M. C. S. (2021). Digital competence in higher education research: a systematic literature review. Comput. Educ. 168:104212. doi: 10.1016/j.compedu.2021.104212

Zholdigaly, B., Zhumabayeva, L. O., and Abdykerimova, E. A. (2024). Artificial intelligence in the education sector of Kazakhstan: opportunities and prospects. Yessenov Sci. J. 48, 77–82. doi: 10.56525/dqhg9635

Keywords: digital competence, learning motivation, higher education, technology-enhanced pedagogy, student engagement, microlearning, gamification, digital inequality

Citation: Bakirova Z, Tasova A, Absatova M, Sadirbekova D, Auezov B and Meirbekova G (2026) Enhancing digital competence and learning motivation of master’s students: a new educational model for Kazakhstan’s higher education system. Front. Educ. 11:1675872. doi: 10.3389/feduc.2026.1675872

Received: 29 July 2025; Revised: 29 October 2025; Accepted: 20 January 2026;
Published: 12 February 2026.

Edited by:

Sümeyye Öcal Dörterler, Dumlupinar University, Türkiye

Reviewed by:

Larisa Loredana Dragolea, 1 Decembrie 1918 University, Romania
Uzma Noor, Soochow University, China

Copyright © 2026 Bakirova, Tasova, Absatova, Sadirbekova, Auezov and Meirbekova. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Asel Tasova, nurdelima.wr@hotmail.com

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.