- 1 Department of Vocational Training and Fine Arts, South Kazakhstan Pedagogical University named after O. Zhanibekov, Shymkent, Kazakhstan
- 2 Department of Fine Arts, Technology and Music Education, South Kazakhstan University named after M. Auezov, Shymkent, Kazakhstan
- 3 Department of Design, Zh. A. Tashenev University, Shymkent, Kazakhstan
- 4 Department of Informatics, Kazakh National Pedagogical University named after Abai, Almaty, Kazakhstan
Objective: In university-level Information and Communication Technology (ICT) training, a persistent gap remains between students’ declarative proficiency with digital tools and the actual quality of their educational graphics. The aim of the study was to enhance the graphic digital competence of undergraduate students enrolled in vocational teacher education programs in Kazakhstan. A three-component approach was introduced, combining cognitive principles, instrumental practice, and reflective analysis.
Methods: A quasi-experimental design was implemented at a single university (n = 86; experimental group = 44; control group = 42) over the course of an academic year (~120 academic hours). Diagnostics included a self-assessment questionnaire (Cronbach’s α = 0.82), performance-based tasks in graphic editors, and expert evaluation of students’ artifacts using an analytic rubric. Statistical analyses followed established procedures: paired t-tests for pre–post comparisons within groups, and independent t-tests for posttest comparisons between groups.
Results: The experimental group (EG) demonstrated a significant pre–post gain on the integrated graphic digital competence (GDC) index: M₁ = 2.09 (SD = 0.44) → M₂ = 2.32 (SD = 0.51), t(43) = 2.74, p < 0.01; no significant change was observed in the control group (CG) (p > 0.05). At posttest, the EG showed higher mean scores than the CG (M = 2.32 vs. 2.11); the independent t-test approached significance: t(≈84) = 1.95, p = 0.055. Distributional shifts within the EG indicated an upward trend: the share of students at a high GDC level increased from 13.6% to 40.9%, while the low-level share decreased from 34.1% to 9.1%. The most pronounced growth occurred in the instrumental component, though the cognitive and reflective components also improved steadily.
Conclusion: A modular blend of micro-theory, targeted practicum activities, and reflective portfolio tasks leads to measurable improvement in the quality of students’ educational graphics. This work contributes to the literature by focusing specifically on the graphic dimension of digital competence and offers replicable materials for integration into ICT-based courses.
Introduction
In the context of accelerated digital transformation, universities in Kazakhstan are gradually shifting from a decorative approach to visual materials toward the design of meaningful visual explanations. For students of vocational teacher education, this shift is particularly relevant: visual artifacts function not only as tools for communication but also as instruments for constructing meaning when working with complex systems, algorithms, and data.
In response to this challenge, we developed a three-component framework: cognitive, instrumental, and reflective. Although the three-component structure may appear to echo the general logic of DigCompEdu, the framework used in this study has a different functional purpose and a more specific operational focus. DigCompEdu maps broad professional competencies of educators, whereas the present model isolates a single domain, graphic digital competence, and translates it into concrete, observable learning processes. The cognitive component in our model refers not to generic digital knowledge but specifically to visual grammar, perceptual load, and multimedia design principles. The instrumental component is likewise domain-specific, emphasizing production workflows in professional graphic editors rather than general tool operation. Finally, the reflective component is tied to the evaluation of visual artifacts and their pedagogical alignment, which DigCompEdu does not operationalize at this granularity. This differentiation allows the model to function not as an adaptation of DigCompEdu but as an application-level extension tailored to the challenges of instructional graphics in vocational teacher education.
The contemporary higher education system is undergoing an active phase of digital transformation, which requires students to possess not only basic ICT skills but also the ability to create visually rich, graphically precise, and methodologically grounded digital resources. Infographics, data visualizations, interactive diagrams, and multimedia content are becoming essential components of the learning process, enhancing clarity, engagement, and the assimilation of complex professional knowledge.
However, an analysis of current university practices in Kazakhstan reveals that students’ levels of graphic digital competence remain uneven. A significant proportion rely solely on ready-made presentation templates and basic graphic tools, which no longer meet the demands of modern educational programs or labor market expectations. A key contradiction emerges: on the one hand, the need to develop graphic digital competence is acknowledged at policy and strategic levels (e.g., national digitalization programs); on the other hand, there is still no clearly defined model for fostering this competence through university instruction.
The academic literature increasingly addresses the issue of digital competence (Redecker and Punie, 2017; Ilomäki et al., 2016), emphasizing its layered structure that includes technical, methodological, and creative components. However, studies focusing specifically on the graphic dimension tend to prioritize technical training in software usage, often neglecting the pedagogical and methodological contexts of applying visual tools in learning environments. Within Kazakhstan, relevant research is still fragmented, underscoring the need for experimental studies aimed at integrating graphic digital tools into student training.
Accordingly, the development and empirical validation of pedagogical conditions and instructional strategies aimed at fostering university students’ graphic digital competence is driven by both global challenges in educational digitalization and the specific priorities of Kazakhstan’s higher education system.
The purpose of this study was to examine whether a structured instructional framework can improve undergraduate students’ graphic digital competence. To address this aim, the study implemented an integrated sequence of theoretical instruction, hands-on training in professional graphic editors, and reflective portfolio activities. This approach was designed to bridge the gap between students’ declarative ICT skills and their ability to create high-quality instructional graphics.
Research objectives
1. To operationalize the components of graphic digital competence for the purposes of this study.
2. To implement a structured instructional framework designed to enhance this competence.
3. To evaluate the effectiveness of the framework through quantitative and qualitative evidence.
It was hypothesized that the application of the proposed framework, which integrates cognitive, instrumental, and reflective components, would contribute to a measurable improvement in students’ graphic digital competence compared to the control group.
Research questions
Q1: What are the key structural components of graphic digital competence among university students?
Q2: In what ways do targeted instructional practices enhance students’ ability to use and apply graphic digital tools in educational contexts?
Q3: What pedagogical conditions facilitate the consistent and meaningful development of this competence across diverse learning environments?
The proposed three-component framework responds to the observed gap between students’ declarative proficiency with digital tools and the actual quality of their educational visual products. Its originality lies in the deliberate integration of cognitive visualization principles, structured practice using professional graphic editors, and reflective portfolio-based assessment. This approach shifts the pedagogical emphasis from surface-level formatting toward the purposeful design of explanatory visual materials. It also demonstrates that, even in settings with limited institutional resources, measurable improvements can be achieved through modular course structure, explicit quality criteria, and systematic feedback mechanisms.

The relevance of this study is shaped by several characteristics of the Kazakhstani educational context. National digitalization programs emphasize infrastructure development, yet visual design literacy remains insufficiently addressed in teacher-education curricula. University courses tend to prioritize general ICT skills, leaving little space for domain-specific work with instructional graphics. Moreover, the linguistic and cultural diversity of the student population places additional demands on the clarity, multimodality, and accessibility of instructional visuals. These contextual factors make the development of a dedicated model for graphic digital competence not only timely but also structurally necessary for the national teacher-education system.
The practical value of this framework is found in its scalability: the instructional scenarios, evaluation rubric, and assignments can be readily adapted and applied to related ICT-oriented courses with minimal modification.
Literature review
In higher education, digital competence is increasingly conceptualized as a multi-component construct that integrates instrumental skills, pedagogical-design strategies, and reflective-ethical orientations. At the same time, this construct remains a “borderline” category, often difficult to operationalize and compare across programs and institutions (Ilomäki et al., 2016). One widely used reference is the Digital Competence of Educators (DigCompEdu) framework: although originally designed for educators, its domains are frequently adapted to student contexts when properly aligned with instructional goals (Redecker and Punie, 2017; Spante et al., 2018; Zhao et al., 2021). Within this study, graphic digital competence is defined as the ability to design and evaluate instructional visual artifacts (presentations, infographics, and diagrams) based on cognitive principles and the conventions of visual language.
The theoretical foundation of the graphic dimension draws upon scholarship in visual literacy, understood as the capacity to interpret and create visual messages as a form of academic communication in which compositional choices and representational modes are tailored to a given task and audience (Avgerinou and Pettersson, 2011). These principles are further articulated through the notion of visual grammar, which enables the operationalization of quality criteria such as hierarchy, modality, legibility, and coherence (Kress and van Leeuwen, 2006). A cognitive perspective enriches this view, particularly through studies of diagrams as external tools for reasoning (Tversky, 2011) and through multimedia learning principles (segmentation, signaling, channel alignment, and redundancy control) that demonstrably enhance the assimilation of complex content (Clark and Mayer, 2016; Mayer, 2020). The Design, Functions, and Tasks (DeFT) model of multiple external representations offers a further explanation of how coordinated use of diagrams, graphs, and animations builds robust conceptual connections and facilitates knowledge transfer (Ainsworth, 2006). Collectively, this body of work shifts the focus from mere software proficiency to pedagogically motivated visual explanation design.
At the level of assessment, the literature consistently highlights a persistent issue: the dominance of self-reporting, which tends to inflate students’ perceived skills and shows only a moderate correlation with the actual quality of their produced artifacts (Spante et al., 2018; Zhao et al., 2021). Validation studies of short instruments based on DigCompEdu confirm acceptable internal structure and reliability, provided they are culturally contextualized and combined with independent assessment tools (Ghomi and Redecker, 2019; Llorente-Cejudo et al., 2022). Mixed-methods designs that integrate questionnaires with artifact analysis and expert review have demonstrated greater diagnostic power, particularly when targeting the graphic dimension of digital competence (Tzafilkou et al., 2023). Additional tools such as the Visualization Literacy Assessment Test (VLAT) are increasingly being adopted as independent measures of students’ ability to read common visualizations and interpret data (Lee et al., 2017). Embedding such tools into the design cycle (“task → data → design → validation”) is consistent with research methodologies in data visualization and strengthens the evidence base for instructional impact (Sedlmair et al., 2012). Empirical findings on the memorability of visualizations also inform the quality criteria applied in expert rubrics, highlighting aspects such as structural clarity, salience of key elements, and overall composition (Borkin et al., 2013).
A parallel line of inquiry concerns the effectiveness of instructional strategies in adjacent ICT domains. Meta-analytic evidence favors modular instructional architectures (“micro-theory → focused practice → project/portfolio → reflection”) when paired with explicit quality criteria and consistent feedback (Scherer et al., 2020). This configuration closely aligns with the three-part model of competence development (cognitive–instrumental–reflective) and facilitates a pedagogical shift from superficial formatting to meaningful visual explanation. Supporting findings also emerge from digital skills research, which confirms that clarity of purpose, audience awareness, and task alignment significantly enhance the instructional value of student-created artifacts (Siddiq et al., 2016).
Finally, the sustainability of the observed effects is closely tied to broader ecosystemic conditions. International reports consistently stress the importance of institutional strategies, adequate infrastructure, and quality monitoring systems for digital content, without which local initiatives rarely scale beyond isolated courses (OECD, 2023). One widely used tool for digital self-assessment and transformation planning is Self-reflection on Effective Learning by Fostering Innovation through Educational technologies (SELFIE), whose diagnostic value increases when combined with subject-specific metrics, including expert-designed rubrics and visualization tasks (European Commission, 2024). For national and regional systems, a core challenge remains the cultural and institutional adaptation of international frameworks and indicators; without such adaptation, the metrics risk losing both their validity and their managerial utility (Rakisheva and Witt, 2023).
Taken together, the reviewed literature outlines a coherent trajectory. Theories of visual literacy and multimedia learning define the skillset required (Avgerinou and Pettersson, 2011; Clark and Mayer, 2016; Mayer, 2020); the DeFT model and diagram research elucidate the cognitive mechanisms that support understanding (Ainsworth, 2006; Tversky, 2011); design-based research and memorability studies refine the quality criteria for visual artifacts (Sedlmair et al., 2012; Borkin et al., 2013); and the combination of self-reports with artifact analysis and independent testing helps close diagnostic gaps (Ghomi and Redecker, 2019; Llorente-Cejudo et al., 2022; Lee et al., 2017; Tzafilkou et al., 2023). This conceptual logic directly supports the study’s three-block model and its chosen evaluation procedures, ensuring strong internal coherence across the introduction, methodology, and interpretation of results (OECD, 2023; Redecker and Punie, 2017).
Visual literacy has traditionally been studied within communication theory and cognitive psychology, but its pedagogical relevance becomes evident when examined through the lens of vocational teacher education. In practice-oriented programs, teachers frequently rely on diagrams, workflows, instructional sequences, and safety protocols: forms of professional communication that are inherently visual. Therefore, the ability to design clear, accurate, and pedagogically aligned visual materials is not an auxiliary skill but a core element of vocational pedagogy. By connecting visual-design principles with instructional planning, assessment tasks, and competency-based teaching approaches, the present model embeds visual literacy within authentic pedagogical activity. This integration demonstrates that graphic digital competence is not a technical add-on but an essential component of professional teacher readiness in the vocational sector.
Methodology and materials
Research design
The quasi-experimental design included an experimental group (EG) and a control group (CG). Diagnostic instruments combined self-assessment questionnaires aligned with the DigCompEdu framework and multimedia learning principles, performance-based tests in graphic editors (Canva, Photoshop, AutoCAD, CorelDRAW), and expert evaluation using a standardized rubric. Reliability was confirmed with Cronbach’s alpha (α = 0.82).
Participants
The study involved 86 undergraduate students enrolled in a vocational teacher education program with a major in Information and Communication Technologies at a single higher education institution. Participants were between 18 and 22 years old. Group allocation ensured comparability in year of study, baseline ICT proficiency, and academic performance. The experimental group (EG) included 44 students, while the control group (CG) comprised 42 students.
Instruments and procedures
To assess the initial and final levels of graphic digital competence, three types of instruments were used:
1. A self-assessment questionnaire covering the cognitive, instrumental, and reflective components.
2. Performance tasks measuring proficiency in professional graphic editors (Canva, Photoshop, AutoCAD, CorelDRAW).
3. A rubric-based evaluation of student work using five criteria on a 0–3 scale (see Table 1 for descriptors).
Evaluation was carried out by three independent experts who were not involved in the instructional process. Prior to assessment, the experts underwent a brief calibration session using a set of benchmark samples; discrepancies were resolved through consensus based on rubric criteria. All artifacts were anonymized using encrypted codes, ensuring blind evaluation with no group identifiers. Final scores were calculated as the average across the five rubric criteria, and detailed criterion-specific profiles were additionally analyzed.
The formative stage consisted of three interrelated components (see Figure 1).
The self-assessment questionnaire was structured into thematic blocks corresponding to the three components of the proposed framework. Example statements (rated on a 1–5 Likert scale) included, for the cognitive component, “I can explain when the segmentation principle reduces cognitive load”; for the instrumental component, “I am proficient in using layers and masks to create infographics in Photoshop/CorelDRAW”; and for the reflective component, “Before publishing, I check whether the visual emphasis aligns with the explanatory goals.” The overall index was calculated as the average across the blocks; the internal consistency of the scale was satisfactory (Cronbach’s α = 0.82).
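As an illustrative supplement (not part of the study's published analysis pipeline), an internal-consistency coefficient of this kind can be computed directly from an item-response matrix. The sketch below implements the standard Cronbach's alpha formula in Python; the responses are hypothetical, since the study's raw questionnaire data are not published.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # sample variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Hypothetical 1-5 Likert responses (86 respondents, 12 items) for illustration only
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(86, 1))                          # common "trait" level per respondent
responses = np.clip(base + rng.integers(-1, 2, (86, 12)), 1, 5)  # correlated item scores
print(f"alpha = {cronbach_alpha(responses):.2f}")
```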
The expert rubric included the following criteria: “structure and hierarchy,” “alignment with objectives and audience,” “data handling and chart accuracy,” “typography and color,” and “technical quality of export.” Scores were assigned using a four-level scale (low–medium–good–high), and an overall score was computed as the mean across all criteria.
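A minimal sketch of this aggregation step follows, assuming hypothetical scores from the three experts for a single anonymized artifact: each criterion is averaged across experts, and the overall score is the mean across the five criteria, as described above.

```python
import numpy as np

criteria = ["structure and hierarchy", "alignment with objectives and audience",
            "data handling and chart accuracy", "typography and color",
            "technical quality of export"]

# Hypothetical scores: 3 experts x 5 criteria on the 0-3 (low-medium-good-high) scale
scores = np.array([[2, 3, 1, 2, 2],   # expert 1
                   [2, 2, 1, 3, 2],   # expert 2
                   [3, 2, 2, 2, 2]])  # expert 3

criterion_profile = scores.mean(axis=0)  # per-criterion means across experts
overall = criterion_profile.mean()       # overall score = mean across criteria

for name, value in zip(criteria, criterion_profile):
    print(f"{name}: {value:.2f}")
print(f"overall score: {overall:.2f}")
```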
To ensure consistency during the formative stage, all sessions followed standardized instructional scenarios, including presentation templates, checklists for hands-on activities, and a curated set of exemplary and counter-exemplary visualizations. Each session had clearly defined learning outcomes tied to rubric criteria (e.g., “after the workshop, the student applies the signaling principle by highlighting key relationships in an infographic”) (see Table 2). Attendance and completion of a minimum task set were tracked via the learning management system (LMS); students who missed a session were required to complete asynchronous alternative tasks. This structure helped minimize variability unrelated to the intervention and increased the reproducibility of the results.
The formative stage was designed to span one academic year and encompassed approximately 120 academic hours delivered through a variety of formats, including in-person seminars, hands-on trainings, online workshops, lectures, and self-directed project work. This blended structure ensured a balanced integration of theoretical understanding, practical application, and reflective analysis, thereby supporting the comprehensive development of graphic digital competence.
Data analysis methods
The within-group dynamics were assessed using the paired Student’s t-test by comparing pre- and post-test scores. Between-group differences in post-test outcomes were evaluated using the independent samples t-test. A significance level of α = 0.05 (two-tailed) was adopted for all tests. Descriptive statistics are reported as M ± SD, with corresponding t(df) and p values. It is important to note that t-values from paired comparisons (within-group pre–post) are not transferable to between-group post-test analyses. The reliability of the self-assessment questionnaire was verified using Cronbach’s alpha (α = 0.82), indicating a high level of internal consistency.
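For readers who wish to replicate this pipeline, a minimal Python sketch follows. The score vectors are simulated stand-ins (the raw data are not published), roughly matched to the reported means and SDs; the analysis calls, scipy's ttest_rel for the within-group pre–post comparison and ttest_ind for the between-group posttest comparison, mirror the procedures described above.

```python
import numpy as np
from scipy import stats

# Simulated stand-ins for the unpublished raw scores, roughly matched to reported M/SD
rng = np.random.default_rng(1)
eg_pre = rng.normal(2.09, 0.44, 44)            # EG pretest, n = 44
eg_post = eg_pre + rng.normal(0.23, 0.35, 44)  # EG posttest (same students)
cg_post = rng.normal(2.11, 0.49, 42)           # CG posttest, n = 42

# Within-group dynamics: paired t-test on the same students' pre/post scores
t_paired, p_paired = stats.ttest_rel(eg_post, eg_pre)

# Between-group posttest difference: independent-samples (Welch) t-test
t_ind, p_ind = stats.ttest_ind(eg_post, cg_post, equal_var=False)

print(f"paired (EG pre-post):   t = {t_paired:.2f}, p = {p_paired:.4f}")
print(f"independent (EG vs CG): t = {t_ind:.2f}, p = {p_ind:.4f}")
```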
Qualitative data, including student portfolios and reflective essays, were analyzed using content analysis. The resulting categories reflected key dimensions of graphic digital competence development: functionality, creativity, and integration into the educational process.
Ethical considerations
The study adhered to established principles of academic ethics. Participation was voluntary, and all respondents signed informed consent forms. Data were collected and analyzed in aggregate form only. The research project received approval from the university’s Academic and Methodological Council.
Results
Baseline diagnostics
The baseline assessment encompassed three interrelated components: cognitive (a self-assessment questionnaire evaluating knowledge of visualization and multimedia learning principles), instrumental (practical test tasks using Canva, Photoshop, AutoCAD, and CorelDRAW), and reflective (students’ awareness of the pedagogical value of graphics, corroborated by expert analysis of their initial digital products).
Findings indicated that students in both groups predominantly demonstrated a moderate level of graphic digital competence. Their understanding of visualization principles was fragmented, practical skills were largely limited to basic use of entry-level tools such as Canva, and meaningful integration of visual materials into educational projects was mostly absent. Fewer than 10% of participants demonstrated a high level of competence.
A comparative analysis of mean scores between the control group (M = 2.08) and the experimental group (M = 2.09) revealed no statistically significant difference (p > 0.05), confirming equivalence in initial conditions (see Table 3). These findings are consistent with prior studies highlighting the gap between students’ self-perceived competence and the actual quality of their digital outputs (Norhagen et al., 2024; Smestad et al., 2023).
Figure 2 illustrates that the majority of students in both groups demonstrated a medium level of graphic digital competence. A high level was observed in only a small fraction of students (approximately 12–14%), underscoring the need for targeted development of this competence within the framework of university training.
Figure 3 illustrates the distribution of indicators across the three components of competence: cognitive, instrumental, and reflective. In both groups, a similar pattern can be observed: the cognitive and reflective components are closer to the medium level (57–60%), while the instrumental component shows lower values (52–53%). This suggests that students generally recognize the value of visualization and possess basic theoretical knowledge, but face difficulties in the practical application of professional graphic tools.
Qualitative data (baseline diagnostics)
The analysis of student artifacts (presentations, infographics, and digital diagrams), together with their reflective essays, revealed several important patterns not captured through quantitative measures alone. Expert evaluations indicated that most student work was highly templated, relying on pre-designed visual solutions with little or no adaptation to the specific content of their instructional projects. Common issues included excessive text, lack of visual hierarchy, and the use of inappropriate color schemes.
Students’ reflective responses confirmed these observed challenges. Many cited time constraints and limited skills as barriers to creating original visual materials, and tended to perceive visuals primarily as “decoration” rather than as pedagogical tools to enhance clarity and comprehension:
“I usually use ready-made templates because creating something from scratch takes too much time and effort.” (Participant #14, control group)
“For me, visualization is mainly about making a presentation look nice, not necessarily a way to explain the material.” (Participant #27, experimental group)
At the same time, a few participants demonstrated awareness of the instructional value of visual tools and expressed a desire to develop these skills more deliberately:
“I want to learn how to make diagrams and infographics because it’s easier to understand the content when it’s presented visually.” (Participant #33, experimental group)
These qualitative findings support the results of the pre-assessment: students generally possess fragmented knowledge and basic technical skills, but do not yet associate visual tools with improved learning effectiveness. The weakest area is the instrumental component, while the cognitive and reflective elements are at a moderate level, consistent with findings reported in earlier studies (Smestad et al., 2023; Norhagen et al., 2024).
Overall dynamics
A follow-up assessment conducted 1 year later revealed a statistically significant increase in the level of graphic digital competence among students in the experimental group. In contrast, no substantial changes were observed in the control group (p > 0.05). The most pronounced improvements were recorded in the instrumental component, while the cognitive and reflective components showed steady yet moderate growth.
Figure 4 illustrates the post-intervention results of graphic digital competence across three components, presented as a percentage of the maximum score. The experimental group demonstrates a consistent profile ranging from 70.3 to 73.3%, with scores of 70.3% in the cognitive component, 72.3% in the instrumental, and 73.3% in the reflective. In contrast, the control group shows markedly lower outcomes: 61% (cognitive), 54% (instrumental), and 58.7% (reflective). Even at the descriptive level, the data indicate a stable advantage for the experimental group: +9.3 percentage points in the cognitive component, +18.3 in the instrumental, and +14.6 in the reflective.
The most substantial improvement is observed in the instrumental component, which directly reflects the design of the formative phase, focused on regular hands-on practice with graphic editors, layer manipulation, grid alignment, typography, and output optimization.
Qualitative interpretation of the experimental group’s profile suggests a meaningful shift from superficial decorative design toward purposeful visual communication. The cognitive improvement (70.3%), accompanied by a statistically significant increase (p < 0.01), confirms the acquisition and transfer of core multimedia learning principles into academic contexts. The instrumental growth (72.3%), which shows the most pronounced gap with the control group (p < 0.001), demonstrates the impact of skill-based workshops and project assignments. Students in the experimental group learned to reduce visual clutter and apply professional layout principles, which had previously been weak points.
The high reflective scores (73.3%) and a significant lead over the control group (+14.6 percentage points, p < 0.05) indicate that portfolio analysis and collaborative reflection effectively encouraged deliberate design choices, rather than incidental or decorative use of visuals. This outcome is particularly important, as instrumental proficiency alone rarely translates into long-term competence without reflective support.
Meanwhile, the control group maintained the structure observed at the initial stage: cognitive scores remained slightly above reflective ones, while the instrumental component lagged behind. The changes, though present, did not substantially alter the quality of student work, which remained at a medium level. By contrast, the experimental group demonstrated balanced growth across all three components, validating the framework’s threefold structure: the theoretical-methodological block (explaining the purpose and principles), the practical-technological block (providing tools and training), and the reflective-analytical block (promoting conscious application).
Taken together, the results shown in Figure 4 reflect not only measurable gains over the academic year but also qualitative shifts in how students approach the design and pedagogical use of visual materials.
In the experimental group, the proportion of students demonstrating a high level of competence increased more than threefold compared to the baseline assessment (see Table 4), while the number of those with a low level of competence decreased by nearly three times.
The analysis of the final assessment (Figure 5) reveals a clear contrast between the control and experimental groups. In the control group, the distribution of competence levels remained largely unchanged, with mean scores showing no significant deviation from the pre-test values. In contrast, the experimental group demonstrated substantial improvement: the proportion of students with a high level of competence more than tripled, while the share of those at the low level decreased nearly threefold. This shift confirms the effectiveness of the proposed framework, which facilitated comprehensive development across cognitive, instrumental, and reflective components of competence.
Figure 5. Final assessment of graphic digital competence levels among students in the control and experimental groups.
Cognitive component
The average scores increased from M = 2.05 to M = 2.41 (p < 0.01), indicating a significant advancement in the understanding of multimedia learning principles and visual literacy. Student portfolios showed a notable increase in structured projects, where visual elements were employed not merely for decoration but as meaningful tools for organizing content. These improvements can be attributed to the implementation of the theoretical and methodological block of the program.
Instrumental component
The most pronounced gains were observed in the instrumental domain: M = 1.98 → 2.47 (p < 0.001). While many students initially struggled with layering, scaling, and layout principles, by the end of the intervention they demonstrated confident use of software such as Photoshop and CorelDRAW, along with basic competencies in AutoCAD. Expert evaluation confirmed a reduction in common design errors and an overall improvement in the quality of visual outputs evidence of the effectiveness of the practice-oriented block.
Reflective component
Final scores in the reflective dimension also showed growth (M = 2.12 → 2.38, p < 0.05). Students’ reflective essays illustrated a shift in their conceptualization of visual materials—from esthetic enhancement to functional communication tools. One student, for instance, noted: “Now I design diagrams so that both classmates and instructors can more easily grasp the logic of the topic” (Participant #27). These developments were largely fostered by the reflective and evaluative block, which emphasized portfolio development and group discussions.
Comparative analysis (t-test)
As shown in Table 5, the experimental group demonstrated a statistically significant pre–post gain on the overall competence score: t(43) = 2.74, p < 0.01. In contrast, no significant change was observed in the control group (p > 0.05). In the post-test, the mean score in the experimental group (M = 2.32, SD = 0.51) exceeded that of the control group (M = 2.11, SD = 0.49). However, an independent samples t-test revealed only marginal statistical significance: t(≈84) = 1.95, p = 0.055 (two-tailed). It is important to note that the t value of 2.74 pertains solely to the within-group comparison in the experimental condition and should not be misinterpreted as evidence of between-group differences.
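As a consistency check, the between-group statistic can be reproduced from the published summary statistics alone. The short calculation below applies the Welch (unequal-variances) t formula and the Welch–Satterthwaite degrees of freedom to the posttest means, SDs, and group sizes, recovering t ≈ 1.95 with df ≈ 84, consistent with the values reported above.

```python
import math

# Reported posttest summary statistics
m_eg, sd_eg, n_eg = 2.32, 0.51, 44   # experimental group
m_cg, sd_cg, n_cg = 2.11, 0.49, 42   # control group

v_eg, v_cg = sd_eg**2 / n_eg, sd_cg**2 / n_cg
t = (m_eg - m_cg) / math.sqrt(v_eg + v_cg)

# Welch-Satterthwaite approximation of the degrees of freedom
df = (v_eg + v_cg) ** 2 / (v_eg**2 / (n_eg - 1) + v_cg**2 / (n_cg - 1))

print(f"t = {t:.2f}, df = {df:.1f}")  # t = 1.95, df = 84.0
```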
Figure 6 illustrates a clear redistribution of competence levels within the experimental group (n = 44). At baseline, the profile was skewed toward the lower range, with 34.1% of students at a low level and only 13.6% at a high level. After the intervention, the proportion of low-level students declined to 9.1%, while the proportion at a high level rose to 40.9%. This demonstrates a substantial upward shift, with a significant number of students moving directly from the low to the high category. Such results confirm the effectiveness of the intervention, particularly in the instrumental component, where practical workshops and portfolio tasks had the greatest impact.
Figure 6. Experimental group: distribution of graphic digital competence levels before and after the formative stage, % (n = 44).
This pattern aligns with previously demonstrated component-level shifts: the most pronounced gains were seen in the instrumental component (mastery of layers, grids, typography, and proper export techniques), accompanied by consistent improvement in cognitive and reflective elements. The combination of targeted practical training and reflective portfolio work embedded in the framework enabled not just a rise in average scores but a qualitative leap in the nature of student work toward structurally sound, data-appropriate, audience-oriented visual artifacts.
Equally important is what the graph does not show: the middle-level proportion barely increased. This can be interpreted in two ways. On one hand, a substantial group of students consolidated within the middle range, a likely zone of proximal development requiring finer differentiation of tasks and feedback. On the other, some students advanced directly to the highest level, reflecting the impact of the modular cycle: “micro-theory → hands-on workshop → project → peer review and revision.” In sum, Figure 6 encapsulates the qualitative effect of the intervention: a contraction of the “lower tail” and expansion of the “upper shelf” of the distribution, thus confirming the effectiveness of the formative phase and cohering with both the quantitative (pre–post t-tests) and qualitative (portfolios, essays) evidence presented in the Results section.
Discussion
The quasi-experimental results demonstrated a significant pre–post improvement within the experimental group, while the control group remained unchanged. While the between-group comparison approached but did not reach conventional statistical significance (p = 0.055), the observed patterns are consistent with pedagogically important improvements. In particular, the reduction of “low-level” students and the emergence of a stable group at the “high” level, confirmed by expert evaluations of artifacts, underline the practical value of the framework. These findings align with previous studies emphasizing the importance of modular design and reflective practices in digital competence development. The results obtained provide answers to the research questions and support the effectiveness of the proposed framework for developing graphic digital competence among students of vocational teacher education.
Structural components of graphic digital competence
The diagnostic phase revealed that this competence is composed of three interrelated components: cognitive, instrumental, and reflective. This aligns with current views of digital competence as a multidimensional construct (Ilomäki et al., 2016; Redecker and Punie, 2017) and resonates with research in visual literacy and multimedia learning (Avgerinou and Pettersson, 2011; Mayer, 2020). The instrumental component appeared to be the weakest at baseline, indicating a gap between the recognized value of visualization and students’ actual technical skills; similar findings have been reported by Smestad et al. (2023) and Norhagen et al. (2024).
Impact of the framework on competence development
The quasi-experimental design confirmed a statistically significant improvement in the experimental group, where the proportion of students at a high competence level more than tripled. The cognitive component improved through the acquisition of multimedia design principles, the instrumental component showed the most substantial gains due to regular hands-on training in graphic editors, and the reflective component benefited from the use of portfolios and collective discussion. The study contributes to the literature by demonstrating how targeted pedagogical interventions can effectively enhance graphic digital competence in vocational teacher education. While international research has emphasized the importance of digital competence more broadly (Instefjord and Munthe, 2017; Gudmundsdottir and Hatlevik, 2018), this study shows that focused work on the graphic dimension, combining theory, tools, and reflection, can yield substantial improvements in students’ readiness to create instructional visuals.
Conditions for sustainable competence development
The effectiveness of the framework was supported by three interrelated factors: (1) the regularity and variety of learning formats (in-person and online), (2) the inclusion of reflective practices, and (3) a focus on the production of real student-created visual content (presentations, diagrams, infographics). This comprehensive approach supports the findings of Deschênes et al. (2024), who emphasize the importance of integrating practice-based projects into university curricula, and is in line with OECD (2023) recommendations on systemic support for digital competence development.
Novelty and contextual contribution
Unlike most studies that examine general digital literacy among students, this research specifically addresses the graphic dimension and its role in preparing ICT specialists in Kazakhstan. The key contribution lies in the development and empirical validation of a targeted framework, thus addressing a gap in the existing literature (Rakisheva and Witt, 2023).
The combination of modularity (micro-theory → targeted practice → project → reflection), clearly defined quality criteria, and a mandatory portfolio proved to be essential for sustainable improvements. Importantly, all tasks were linked to the core content of ICT courses (e.g., algorithms, computer architecture, networks), rather than being disconnected exercises. This ensured transferability of visual strategies into subsequent coursework. For scaling purposes, it is recommended that the assessment rubric be formally integrated into course regulations and that each class includes short-cycle feedback loops—an approach that was found to be especially effective in boosting the instrumental component.
Limitations
This study has several limitations. First, it was conducted at a single institution with a limited sample size, which restricts the generalizability of the findings to other educational programs and contexts. Second, the intervention lasted for one academic year, making it difficult to assess the long-term sustainability of the observed improvements. Third, reliance on self-assessment instruments, even when combined with expert evaluation, may introduce subjective bias. Future studies should therefore extend the scope to multiple institutions, adopt longitudinal designs, and employ additional independent measures.
Conclusion
The present study confirmed the relevance of developing graphic digital competence among students in ICT-related programs, as initially framed by the contradiction between strategic demands for digitalization and the actual level of students’ preparedness. The designed and tested framework, encompassing cognitive, instrumental, and reflective components, demonstrated its effectiveness within the quasi-experimental design: a significant improvement was recorded across all components in the experimental group, with the strongest gains observed in the instrumental domain. These findings address the first research question concerning the structure and dynamics of competence.
The data further indicate that the proposed framework enhances students’ proficiency in using graphic digital tools and facilitates their integration into academic projects and practical tasks, responding to the second research question. The combination of diverse instructional formats (in-person lectures and workshops, online master classes, and independent learning), reliance on multimedia learning theory, and the inclusion of reflective practices created favorable conditions for sustainable competence development, thereby answering the third research question.

Thus, the study contributes to the scholarly understanding of digital competence in higher education by refining its graphic dimension and addressing a gap in the literature, particularly in the context of Kazakhstan, where such frameworks had not yet undergone empirical validation. The practical significance lies in the framework’s potential application within ICT and related educational programs to purposefully foster students’ graphic digital competence. At the same time, the results should be interpreted in light of the study’s limitations, namely the local character of the sample (one university and one specialization), the short duration of the intervention, and the need for further validation of the diagnostic tools. Future research directions include testing the framework across broader and more diverse student populations, adapting it to interdisciplinary contexts, and examining the long-term sustainability of the acquired competences.
Beyond the Kazakhstani context, the proposed framework demonstrates potential for application in other ICT-related disciplines and higher education settings. Because the instructional scenarios, rubric, and reflective practices are not bound to local specifics, they can be adapted to courses in computer science, engineering, and vocational training internationally. This scalability underlines the framework’s practical significance and suggests that it can support digital competence development in diverse institutional and cultural environments.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding authors.
Ethics statement
Ethical approval was not required for the studies involving humans. The studies were conducted in accordance with the local legislation and institutional requirements. Written informed consent for participation was not required from the participants or the participants’ legal guardians/next of kin in accordance with the national legislation and institutional requirements.
Author contributions
SB: Conceptualization, Writing – original draft, Investigation, Writing – review & editing. SZho: Writing – original draft, Writing – review & editing, Methodology, Investigation. UK: Writing – review & editing, Writing – original draft, Resources, Visualization. GK: Funding acquisition, Writing – original draft, Writing – review & editing, Software. SZha: Methodology, Validation, Writing – review & editing, Writing – original draft, Project administration. AT: Writing – review & editing, Supervision, Writing – original draft, Validation, Visualization.
Funding
The author(s) declared that financial support was not received for this work and/or its publication.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that Generative AI was not used in the creation of this manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Ainsworth, S. (2006). DeFT: a conceptual framework for considering learning with multiple representations. Learn. Instr. 16, 183–198. doi: 10.1016/j.learninstruc.2006.03.001
Avgerinou, M. D., and Pettersson, R. (2011). Toward a cohesive theory of visual literacy. J. Vis. Liter. 30, 1–19. doi: 10.1080/23796529.2011.11674687
Borkin, M. A., Vo, A. A., Bylinskii, Z., Isola, P., Sunkavalli, S., Oliva, A., et al. (2013). What makes a visualization memorable? IEEE Trans. Vis. Comput. Graph. 19, 2306–2315. doi: 10.1109/TVCG.2013.234
Clark, R. C., and Mayer, R. E. (2016). E-learning and the science of instruction: proven guidelines for consumers and designers of multimedia learning. London: Wiley.
Deschênes, M., Dionne, L., and Parent, S. (2024). Supporting digital competency development for vocational education student teachers in distance education. Front. Educ. 9:1452445. doi: 10.3389/feduc.2024.1452445
European Commission (2024). About SELFIE. Available online at: https://education.ec.europa.eu/selfie/about-selfie (Accessed September 5, 2024).
Ghomi, M., and Redecker, C. (2019). Digital competence of educators (DigCompEdu): development and evaluation of a self-assessment instrument for teachers’ digital competence. In Proceedings of the 11th International Conference on Computer Supported Education, 541–548. Portugal: SCITEPRESS.
Gudmundsdottir, G. B., and Hatlevik, O. E. (2018). Newly qualified teachers’ professional digital competence: implications for teacher education. Eur. J. Teach. Educ. 41, 214–231. doi: 10.1080/02619768.2017.1416085
Ilomäki, L., Paavola, S., Lakkala, M., and Kantosalo, A. (2016). Digital competence—an emergent boundary concept for policy and educational research. Educ. Inf. Technol. 21, 655–679. doi: 10.1007/s10639-014-9346-4
Instefjord, E. J., and Munthe, E. (2017). Educating digitally competent teachers: a study of integration of professional digital competence in teacher education. Teach. Teach. Educ. 67, 37–45. doi: 10.1016/j.tate.2017.05.016
Kress, G., and van Leeuwen, T. (2006). Reading images: the grammar of visual design. New York: Routledge.
Lee, S., Kim, S.-H., and Kwon, B.-C. (2017). VLAT: development of a visualization literacy assessment test. IEEE Trans. Vis. Comput. Graph. 23, 551–560. doi: 10.1109/TVCG.2016.2598920
Llorente-Cejudo, C., Barragán-Sánchez, R., Puig-Gutiérrez, M., and Romero-Tena, R. (2022). Validation of the DigCompEdu check-in questionnaire in higher education. Educ. Inf. Technol. 27, 7927–7948. doi: 10.1007/s10639-022-11020-3
Norhagen, S. L., Krumsvik, R. J., and Røkenes, F. M. (2024). Developing professional digital competence in Norwegian teacher education: a scoping review. Front. Educ. 9:1363529. doi: 10.3389/feduc.2024.1363529
OECD (2023). OECD digital education outlook 2023: Towards an effective digital education ecosystem. France: OECD Publishing.
Rakisheva, A., and Witt, A. (2023). Digital competence frameworks in teacher education: a literature review. Issues Trends Learn. Technol. 11:5205. doi: 10.2458/itlt.5205
Redecker, C., and Punie, Y. (2017). European framework for the digital competence of educators: DigCompEdu. Luxembourg: Publications Office of the European Union.
Scherer, R., Siddiq, F., and Viveros, B. S. (2020). A meta-analysis of teaching and learning computer programming: effective instructional approaches and conditions. Comput. Human Behav. 109:106349. doi: 10.1016/j.chb.2020.106349
Sedlmair, M., Meyer, M., and Munzner, T. (2012). Design study methodology: reflections from the trenches and the stacks. IEEE Trans. Vis. Comput. Graph. 18, 2431–2440. doi: 10.1109/TVCG.2012.213
Siddiq, F., Scherer, R., and Tondeur, J. (2016). Teachers’ emphasis on developing students’ digital information and communication skills (TEDDICS): a new construct in 21st century education. Comput. Educ. 92, 1–14. doi: 10.1016/j.compedu.2015.10.006
Smestad, B., Hatlevik, O. E., Johannesen, M., and Øgrim, L. (2023). Examining dimensions of teachers’ digital competence: a systematic review pre- and during COVID-19. Heliyon 9:e16677. doi: 10.1016/j.heliyon.2023.e16677
Spante, M., Hashemi, S. S., Lundin, M., and Algers, A. (2018). Digital competence and digital literacy in higher education research: systematic review of concept use. Cogent Educ. 5:1519143. doi: 10.1080/2331186X.2018.1519143
Tversky, B. (2011). Visualizing thought. Top. Cogn. Sci. 3, 499–535. doi: 10.1111/j.1756-8765.2010.01113.x
Tzafilkou, K., Perifanou, M., and Economides, A. A. (2023). Teachers’ digital competence: a European perspective using the DigCompEdu framework. Comput. Educ. 193:104673. doi: 10.1016/j.compedu.2023.104673
Keywords: analytic rubric, graphic digital competence, multimedia learning, quasi-experiment, students of vocational teacher education, visual literacy
Citation: Bitemirova S, Zholdasbekova S, Konakbayeva U, Karataev G, Zhanbirshiyev S and Turalbayeva A (2026) Developing graphic digital competence in future vocational education teachers: from theory to practice. Front. Educ. 11:1721205. doi: 10.3389/feduc.2026.1721205
Edited by:
Aiedah Khalek, Monash University Malaysia, Malaysia
Reviewed by:
Elsa Ribeiro-Silva, University of Coimbra, Portugal
Agung Listiadi, Surabaya State University, Indonesia
Alejandro Higuera Zimbron, Universidad Autónoma del Estado de México, Mexico
Copyright © 2026 Bitemirova, Zholdasbekova, Konakbayeva, Karataev, Zhanbirshiyev and Turalbayeva. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Saule Zholdasbekova, saulez.63@mail.ru
†ORCID: Sholpan Bitemirova, orcid.org/0000-0003-3142-7763
Saule Zholdasbekova, orcid.org/0000-0003-2857-7939
U. Konakbayeva, orcid.org/0000-0002-5017-7459
G. Karataev, orcid.org/0000-0001-5673-3199
S. Zhanbirshiyev, orcid.org/0000-0002-9335-4988
A. Turalbayeva, orcid.org/0000-0002-7147-9528