- 1 Research, Education, Science, and Technology Group - GIECYT, Universidad Técnica del Norte, Ibarra, Ecuador
- 2 Network Science Research Group e-CIER, Universidad Técnica del Norte, Ibarra, Ecuador
- 3 Faculty of Education, Universidad de Salamanca, Salamanca, Spain
Teaching Information and Communication Technologies (ICT) in higher education faces the challenge of engaging students accustomed to interactive and digital learning environments. Although gamification has emerged as a promising pedagogical strategy to enhance motivation, participation, and learning, there is still no consensus on which gamification elements and strategies should be prioritized in ICT courses. To address this gap, this study aimed to identify, through expert consensus, the most relevant gamification components for ICT teaching in higher education. A modified Delphi study with a mixed-methods design was conducted in two rounds: an open exploratory phase and a structured round using a five-point Likert scale. Twelve experts were selected based on a competence coefficient (K ≥ 0.75). Quantitative analyses included means, standard deviations, interquartile range (IQR), coefficient of variation (CV), and Kendall’s coefficient of concordance (W), complemented by qualitative thematic coding with an inter-coder reliability of κ = 0.82. Consensus thresholds followed current Delphi standards (≥ 80% agreement, IQR ≤ 1, CV ≤ 0.30, and W ≥ 0.60). The results indicate that experts prioritized challenges and quests (83% agreement; W = 0.66), followed by points and rewards (67%) and the complementary use of badges and leaderboards. Strong consensus was reached regarding the potential of gamification to strengthen problem-solving, critical thinking, and collaboration (100%; W = 0.90), and to increase active participation (100%) and promote deep understanding (83%). However, heterogeneity persisted in relation to the selection of technological platforms (W = 0.27) and specific strategies (W = 0.28), and the effects on academic performance were reported as inconsistent. Overall, the findings suggest that gamification is most effective when systematically integrated into ICT curricula, supported by teacher training and immediate feedback mechanisms. 
As no universal technological platform is recommended, the use of contextual selection criteria is essential. These results provide a conceptually validated reference framework for the design of gamified learning experiences in higher education and underscore the need for further empirical validation across diverse educational contexts.
1 Introduction
Teaching ICT in higher education faces increasing challenges related to promoting active, meaningful, and student-centered learning for learners immersed in digital environments. Several studies have demonstrated that traditional approaches focused on passive content transmission are insufficient to meet the learning preferences and expectations of contemporary students (Aguilera Meza et al., 2020; Arias and Mon, 2022). Consequently, there has been a growing emphasis on adopting innovative pedagogical approaches that foster autonomy, participation, and collaborative knowledge construction in hybrid and online settings (Azogue-Punina and Barrera-Erreyes, 2020; Londoño and Rojas, 2020). Recent reviews have further highlighted that the integration of motivational digital resources—particularly gamification—can enhance student engagement and reduce disengagement across educational levels (Faure-Carvallo et al., 2022; Game et al., 2024).
Over the past decade, gamification has emerged as one of the most influential strategies in educational innovation, given its potential to integrate game design principles for pedagogical purposes and stimulate intrinsic motivation, sustained engagement, and self-regulated learning (Majuri et al., 2018; Dreimane, 2019; García-Martín and García-Sánchez, 2020). These benefits are particularly evident in technology-related subjects, where progressive challenges, levels, and immediate feedback enable students to interact with complex content in engaging and iterative ways (Hong et al., 2024; Sangucho et al., 2020). Bibliometric studies confirm this growing academic interest, showing a sharp increase in publications on gamification in higher education between 2020 and 2025, yet revealing persistent theoretical and methodological inconsistencies that limit systematic application (Guerrero-Alcedo et al., 2022; Magpusao, 2024).
Empirical evidence consistently supports gamification’s capacity to strengthen essential cognitive and digital competencies, such as critical thinking, problem-solving, and collaboration (Guanoluisa et al., 2022; Mamekova et al., 2021; Navarro-Mateos et al., 2024). However, its effectiveness depends heavily on the coherence between gamified elements and intended learning outcomes. This has led to an important conceptual distinction between superficial gamification, primarily focused on extrinsic rewards (points, badges, leaderboards), and meaningful gamification, which nurtures intrinsic motivation through narrative, autonomy, challenge, and continuous feedback (Munblit et al., 2022; Koivisto and Hamari, 2019). While the former can foster short-term engagement, the latter promotes deep and sustained learning aligned with constructivist and self-regulated learning principles (Werbach and Hunter, 2020).
The theoretical foundations of gamification can be explained through well-established motivational and learning frameworks. Self-Determination Theory (SDT) (Deci and Ryan, 1985; Ryan and Deci, 2020) posits that intrinsic motivation emerges when individuals experience autonomy, competence, and relatedness. Gamified environments that provide freedom of choice, clear goals, and meaningful feedback can satisfy these psychological needs, resulting in sustained engagement and satisfaction (Van Roy and Zaman, 2015). Similarly, Flow Theory (Csikszentmihalyi, 1990) explains optimal learning as a balance between challenge and skill that leads to immersive and enjoyable experiences. Game mechanics such as progressive challenges, immediate feedback, and goal clarity are particularly effective in fostering flow states (Koivisto and Hamari, 2019). Complementary frameworks, including Goal Orientation Theory and Expectancy-Value Theory, emphasize how achievement goals and perceived task value shape persistence and learning outcomes (Dichev and Dicheva, 2017).
Despite this solid theoretical base, the literature still reflects a significant methodological gap: the absence of validated criteria to guide the selection and prioritization of gamification elements according to learning objectives, disciplinary context, and student profiles (Carbajal et al., 2022; Fitria, 2022; Pérez and Muñoz, 2024; Sánchez, 2022). This gap limits comparability across studies and hinders the development of coherent, evidence-based instructional designs. Therefore, there is a growing need for structured methodologies capable of integrating expert judgment to build validated and contextually grounded frameworks for gamified learning design.
The Delphi method offers a robust and systematic approach to addressing this need, as it enables the aggregation of expert knowledge and the establishment of consensus on emerging educational practices through iterative rounds of consultation and controlled feedback (Beiderbeck et al., 2021). Its iterative and evidence-driven nature makes it particularly suitable for identifying and validating design principles in technology-mediated learning contexts (Oxley et al., 2025; Ramírez Chávez and Ramírez Torres, 2024; Rüetschi et al., 2022; Sablatzky, 2022).
Accordingly, the aim of this study is to reach expert consensus, through a modified Delphi method, on the most relevant gamification elements for ICT education in higher education. The findings are expected to provide a validated instructional design framework that integrates motivational theory with pedagogical innovation and supports the effective implementation of gamified learning experiences in university-level ICT courses.
2 Methodology
2.1 Study design
This study adopted a modified Delphi method with a mixed-methods design to identify, prioritize, and validate the most relevant gamification elements for ICT teaching in higher education. The Delphi technique was chosen for its ability to structure expert judgment through iterative rounds, ensuring anonymity, controlled feedback, and systematic analysis of convergence and divergence (Beiderbeck et al., 2021; Navarro-Espinosa et al., 2022).
Its use in educational research has proven effective for the validation of instruments, identification of competencies, and development of pedagogical frameworks (Baldrich et al., 2024; Pérez and Muñoz, 2024).
2.2 Expert selection
The panel was composed of 12 experts, selected through purposeful sampling based on competence criteria to ensure the inclusion of professionals with recognized expertise in gamification, instructional design, ICT teaching, and educational innovation. Selection criteria included at least 5 years of teaching experience, involvement in educational technology projects, and relevant academic production.
This approach aligns with recent recommendations advocating for multidimensional panels to enhance interpretive coherence and diversity (Guerrero-Alcedo et al., 2022). The literature emphasizes that the quality of Delphi studies depends largely on the rigor of participant selection (Arias and Mon, 2022; Oxley et al., 2025).
The panel size (n = 12) is consistent with methodological standards for Delphi studies in educational research, which recommend between 10 and 18 experts to ensure both diversity of perspectives and statistical stability of consensus results (Hsu and Sandford, 2007; Okoli and Pawlowski, 2004).
Disciplinary heterogeneity was also ensured to strengthen the validity of consensus, as highlighted by Cardoso et al. (2025). All 12 experts met the selection criteria and participated voluntarily. The process was conducted virtually, maintaining anonymity and independence among panelists (Sablatzky, 2022; Pinchover et al., 2024).
The expert panel consisted of 12 participants with proven experience in higher education teaching, ICT, curriculum development, and gamification. Seven experts were affiliated with the Universidad Técnica del Norte (UTN): one held a PhD degree and the remaining members held master’s degrees, primarily specialized in ICT and curriculum studies. Additionally, two experts from the Universidad Técnica de Ambato (UTA) participated, one holding a PhD degree and the other a master’s degree, both with expertise in ICT. The panel was further complemented by a Brazilian gamification expert currently based in Mexico, a Peruvian gamification expert certified by Google, and a doctoral candidate affiliated with the Universidad UIIX in Mexico. This composition enabled the integration of academic, technological, and pedagogical perspectives, while also reflecting a predominant representation of the Latin American higher education context, which should be considered when interpreting and generalizing the findings.
2.3 Procedure
2.3.1 Round 1: exploratory phase
In the first round, an open-ended question was used to identify perceptions, opportunities, and recommendations regarding the use of gamification in ICT teaching within higher education.
This strategy allowed the emergence of categories without imposing predefined structures, following the methodological guidelines of Wang et al. (2022).
Responses were analyzed using descriptive statistics and frequency analysis with SPSS v.25, in order to detect initial patterns and reduce conceptual dispersion. Subsequently, the findings were organized conceptually around the research questions (RQ1–RQ8), serving as the foundation for developing the structured instrument used in the second round. This process was grounded in recent studies addressing the use of gamification in university contexts (Faure-Carvallo et al., 2022; Lorenzo-Lledó et al., 2023).
The qualitative results obtained in the first Delphi round were subjected to a systematic process of analysis and refinement in order to construct the items used in the second round. First, the experts’ open-ended responses were coded thematically, identifying recurring ideas and common conceptual patterns. Next, proposals with equivalent or highly similar meanings were merged into a single item, while ambiguous, redundant, or off-topic suggestions were discarded. Finally, some statements were reworded to improve their semantic clarity and conceptual coherence while maintaining the original meaning expressed by the experts. This process reduced and structured the initial contributions into a final set of 25 items, which were subjected to quantitative assessment in the second Delphi round. The definition of these criteria ensured traceability between the initial qualitative contributions and the final items, strengthening the replicability of the Delphi procedure.
2.3.2 Instrument development
Based on the results of the first round, a structured instrument was designed, consisting of 25 items organized according to the eight research questions (RQ1–RQ8). Each item was rated on a 5-point Likert scale (1 = very low, 5 = very high), according to its relevance, impact, and level of agreement.
Additionally, open comment fields were included to collect qualitative insights complementing the quantitative data.
The items were distributed across five conceptual domains identified in Round 1: gamification strategies; technological tools; design elements; student motivation; and academic performance. The instrument was reviewed by three external experts, who assessed its conceptual clarity, internal coherence, and pedagogical relevance, following best practices for content validation in Delphi studies (Ramírez Chávez and Ramírez Torres, 2024; Potter et al., 2025; Hernández-Fernández et al., 2020). The final version of the instrument is provided in Appendix 1.
2.3.3 Round 2: structured phase
In the second round, experts evaluated the 25 items using the same 5-point Likert scale, rating their relevance, clarity, and potential educational impact. The use of structured rating scales is a well-established practice in Delphi studies combining qualitative feedback and quantitative consensus metrics (Lampropoulos and Sidiropoulos, 2024).
The open-ended comments and qualitative observations were analyzed using ATLAS.ti 9, following a three-step coding process (open, axial, and selective). This procedure allowed the identification of interpretive patterns and justifications associated with consensus or disagreement among the panelists.
Items were grouped according to the eight research questions (RQ1–RQ8), facilitating comparative analysis across domains.
To maintain full participation, automated and personalized reminders were sent, an approach supported by Potter et al. (2025) as effective for sustaining engagement in Delphi studies. The response rate reached 100% in both rounds, with no participant attrition. The results confirmed patterns reported in prior research regarding the effectiveness of gamification in higher education, particularly in enhancing motivation, feedback, and active participation (Game et al., 2024).
2.3.4 Research questions
Following the recommendations of Lopez-Catalan and Bañuls (2017), the instrument items were structured around the following research questions, which guided the entire Delphi process:
• RQ1. How can gamification elements be effectively integrated into the ICT curriculum?
• RQ2. Which specific competencies can be strengthened through gamification in ICT teaching?
• RQ3. Which gamification tools are most suitable for ICT education, and what features make them effective?
• RQ4. Which gamification elements produce a positive impact on student motivation and academic performance?
• RQ5. Which gamified strategies are most effective for improving student motivation and achievement in ICT courses?
• RQ6. How does gamification influence student motivation in ICT courses?
• RQ7. How does gamification affect students’ academic performance?
• RQ8. How have students responded to the incorporation of gamification, and how has it influenced their motivation?
All data were collected electronically and anonymously, ensuring independent judgment and confidentiality throughout the process.
2.4 Consensus criteria
Quantitative analysis included the calculation of means, standard deviations, IQR, and CV for each item, to assess the dispersion and stability of expert ratings. Consensus thresholds were established according to recent methodological standards:
• Strong consensus: ≥ 80% of experts rated an item ≥ 4/5.
• Acceptable convergence: IQR ≤ 1 and CV ≤ 0.30.
• Substantial concordance: Kendall’s coefficient of concordance (W) ≥ 0.60.
These thresholds follow the methodological recommendations of Lopez-Catalan and Bañuls (2017) and Diamond et al. (2014), widely adopted in Delphi studies in education.
Kendall’s W was used to quantify the overall degree of agreement among experts for each item group, distinguishing areas of high consensus from those with greater variability.
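As an illustration, the per-item consensus checks described above (share of ratings ≥ 4, IQR ≤ 1, CV ≤ 0.30) can be sketched with Python's standard library; the ratings below are invented for illustration and are not the study's raw data.

```python
from statistics import mean, stdev, quantiles

# Hypothetical ratings of one item by 12 experts on the 1-5 Likert scale
# (illustrative values only, not the study's data)
ratings = [5, 4, 4, 5, 4, 3, 5, 4, 4, 5, 4, 4]

agreement = sum(r >= 4 for r in ratings) / len(ratings)  # share rating the item >= 4/5
q1, _median, q3 = quantiles(ratings, n=4)                # quartiles
iqr = q3 - q1                                            # interquartile range
cv = stdev(ratings) / mean(ratings)                      # coefficient of variation

strong_consensus = agreement >= 0.80 and iqr <= 1 and cv <= 0.30
print(f"agreement={agreement:.2f}, IQR={iqr}, CV={cv:.2f}, consensus={strong_consensus}")
```

Note that Kendall's W is computed across the whole item set rather than per item, so it is not part of this per-item check.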
2.5 Data analysis
2.5.1 Quantitative analysis
Quantitative data were analyzed using SPSS v.25 and Microsoft Excel 2021, applying measures of central tendency and dispersion (mean, standard deviation, IQR, CV), as well as Kendall’s coefficient of concordance (W). The obtained values ranged from W = 0.27 to W = 0.90, allowing the identification of domains with low, moderate, and strong agreement. The combination of these indicators reinforced the internal validity and reliability of the expert consensus.
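For readers replicating the analysis outside SPSS, Kendall's W can be computed directly from its definition, W = 12S / (m²(n³ − n)), where S is the sum of squared deviations of the item rank sums, m is the number of experts, and n the number of items. The following is a minimal pure-Python sketch that assigns mean ranks to ties but omits the tie-correction term, so it slightly understates W when ties are frequent.

```python
def kendalls_w(ratings_matrix):
    """Kendall's coefficient of concordance W (no tie-correction term).

    ratings_matrix: one inner list per expert, each holding that
    expert's ratings of the same n items.
    """
    m = len(ratings_matrix)      # number of experts
    n = len(ratings_matrix[0])   # number of items
    rank_sums = [0.0] * n
    for row in ratings_matrix:
        # Convert this expert's ratings to ranks (1 = lowest), mean rank for ties
        order = sorted(range(n), key=lambda i: row[i])
        ranks = [0.0] * n
        i = 0
        while i < n:
            j = i
            while j + 1 < n and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # mean rank of the tied block
            for k in range(i, j + 1):
                ranks[order[k]] = avg
            i = j + 1
        for idx, r in enumerate(ranks):
            rank_sums[idx] += r
    mean_r = sum(rank_sums) / n
    s = sum((r - mean_r) ** 2 for r in rank_sums)  # S: squared deviations of rank sums
    return 12 * s / (m ** 2 * (n ** 3 - n))
```

Identical rankings from every expert yield W = 1, while rankings that cancel each other out yield W = 0.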
2.5.2 Qualitative analysis
Qualitative data were processed using ATLAS.ti 9, through a three-phase coding procedure (open, axial, and selective), conducted independently by two coders. The Cohen’s kappa coefficient (κ = 0.82) indicated a high level of inter-coder reliability. The resulting categories helped to interpret the underlying reasons behind numerical consensus and to identify areas requiring further empirical validation (Pinchover et al., 2024).
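Cohen's kappa corrects raw inter-coder agreement for the agreement expected by chance, κ = (p_o − p_e) / (1 − p_e). A minimal sketch for two coders' category assignments follows; the category labels and segment counts are hypothetical, not the study's coding data.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders labelling the same units."""
    n = len(codes_a)
    p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n  # observed agreement
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    categories = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes from two coders over four text segments
coder_1 = ["coherence", "coherence", "feedback", "feedback"]
coder_2 = ["coherence", "feedback", "feedback", "feedback"]
kappa = cohens_kappa(coder_1, coder_2)  # 0.5 for this toy example
```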
No participant dropout was recorded; all experts completed both rounds. Content validity was ensured through literature review, instrument piloting, and expert selection with a competence coefficient of K ≥ 0.75. Overall reliability was strengthened through triangulation between quantitative (IQR, CV, W) and qualitative (κ) indicators.
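The study reports the K ≥ 0.75 threshold without detailing how K was computed. In the Delphi literature, K is commonly taken as the average of a self-rated knowledge coefficient (Kc) and an argumentation coefficient (Ka); the sketch below follows that common convention and is an assumption, not the authors' documented procedure.

```python
def competence_k(knowledge_0_to_10, ka_argumentation):
    """Expert competence coefficient K = (Kc + Ka) / 2 (common Delphi convention,
    assumed here; the study does not specify its exact computation).

    knowledge_0_to_10: self-rated knowledge on a 0-10 scale (Kc = value / 10)
    ka_argumentation:  argumentation coefficient Ka on a 0-1 scale, typically
                       derived from a standard table of information sources
    """
    kc = knowledge_0_to_10 / 10
    return (kc + ka_argumentation) / 2

# An expert self-rating 8/10 with Ka = 0.8 obtains K = 0.8 and would qualify (K >= 0.75)
```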
All participants provided written informed consent, ensuring anonymity, voluntary participation, and ethical data use.
2.6 Limitations
Some limitations were identified. The geographical and disciplinary representation of the expert panel may limit the generalization of findings to other contexts. The two-round design, while effective for maintaining full participation, reduced the potential for further iterative refinement. Additionally, the Delphi method reflects expert judgment, which requires subsequent empirical validation in real educational settings.
Nevertheless, the combination of methodological triangulation, the use of specialized analytical software (SPSS v.25 and ATLAS.ti 9), and transparent consensus criteria (IQR, CV, W, κ) reinforce the credibility, validity, and scientific rigor of the study (Malkawi et al., 2023; Sablatzky, 2022).
3 Results
The analysis derived from the two Delphi rounds allowed the identification of the most relevant gamification elements for teaching ICT in higher education. The combination of quantitative and qualitative methods enabled the establishment of solid expert consensus regarding the elements, strategies, and conditions that enhance the pedagogical effectiveness of gamification.
During the first round, experts openly proposed the elements they considered most important for the instructional design of ICT courses. The responses were grouped into six main categories: challenges or quests, points and rewards, badges, leaderboards, progression levels, and identification avatars. Challenges and quests were mentioned by 11 out of 12 experts (91.7%), followed by points and rewards (75%), badges and leaderboards (58.3%), levels (50%), and avatars (25%). This first phase provided the basis for developing the structured instrument used in the second round, which focused on the elements with the highest initial consensus.
In the second round, experts quantitatively assessed the relevance, clarity, and impact of each gamification element using a 5-point Likert scale. Challenges and quests achieved the highest consensus (83% agreement; M = 4.6; SD = 0.5; IQR = 1.0; W = 0.66), followed by points and rewards (67%; M = 4.2; SD = 0.7), while badges and leaderboards reached moderate acceptance levels (58%; M = 4.0–4.1; W = 0.60). Levels (50%) and avatars (25%) showed lower homogeneity, reflecting differences in their applicability across technological contexts. The overall Kendall’s coefficient of concordance (W = 0.74) indicated a high level of agreement among experts, with values ranging from 0.27 to 0.90 depending on the domain. Additionally, IQR ≤ 1 was observed in 85% of the items, and the average CV (0.22) confirmed low dispersion and high stability in the ratings.
The strongest consensus emerged in domains related to transversal competencies, student motivation, and active participation, with agreement levels ranging between 83 and 100%. In contrast, domains associated with technological tools and specific strategies exhibited higher variability (W between 0.27–0.43), reflecting the influence of contextual factors and resource availability in gamification implementation.
Table 1 summarizes the results by research question (RQ1–RQ8), integrating the percentage of agreement and Kendall’s W for each thematic domain. All values have been verified to ensure consistency with the Delphi standards established in this study (≥ 80% agreement, IQR ≤ 1, CV ≤ 0.30, W ≥ 0.60).
The qualitative results, analyzed with ATLAS.ti 9, reinforced the quantitative findings by identifying three cross-cutting themes: curricular coherence, understood as the need to integrate gamification in a planned manner aligned with learning objectives; immediate feedback and autonomy, as key elements to sustain motivation and a sense of achievement; and technological contextualization, referring to the selection of tools suited to institutional resources and student characteristics.
Overall, these findings confirm that gamification is perceived as an effective pedagogical strategy to increase participation and engagement in ICT courses, particularly when applied with formative intent and supported by teacher guidance. However, the results also reveal a lack of consensus regarding the most appropriate technological platforms and specific strategies, underscoring the need for complementary empirical studies to assess their impact in diverse educational contexts.
It is important to clearly distinguish between specific gamification strategies and technological tools, as both domains exhibited low levels of consensus for fundamentally different reasons. In this study, specific strategies (W ≈ 0.28) refer to pedagogical and instructional design decisions, such as the use of progressive challenges, narrative structures, feedback mechanisms, and formative assessment, whereas technological tools (W ≈ 0.27) denote concrete digital platforms and systems (e.g., Kahoot, Genially, Moodle plugins).
Although both domains showed relatively low concordance, the variability observed in specific strategies reflects the pedagogical diversity inherent to ICT courses, which differ substantially in learning objectives, task complexity, and instructional approaches. In contrast, the heterogeneity associated with technological tools is primarily driven by contextual, infrastructural, and institutional factors, as well as by the rapid evolution and obsolescence of digital technologies in the ICT field. This distinction clarifies that low consensus does not indicate limited relevance, but rather highlights the context-dependent nature of gamification in technological education.
Figure 1 presents a visual synthesis of the consensus hierarchy across gamification domains identified in the second Delphi round. The x-axis represents the percentage of expert agreement, while the y-axis indicates Kendall’s coefficient of concordance (W). Bubble size reflects the relative strength of consensus, and color coding differentiates agreement levels: dark blue indicates high consensus (W ≥ 0.60), light blue represents moderate consensus, and green denotes domains with low consensus (W < 0.40). The figure clearly illustrates that transversal competencies, student motivation, and feedback achieved the highest levels of agreement, whereas technological tools and specific strategies exhibit greater variability, reinforcing the need for contextualized and flexible approaches to gamification in ICT education.
The modified Delphi results demonstrate strong expert consensus regarding the potential of gamification to strengthen key cognitive and socio-emotional competencies such as problem-solving, critical thinking, and collaboration. High levels of agreement (≥83%) and concordance (W ≥ 0.66) confirm the robustness of expert perceptions. Nevertheless, variability persists concerning the use of technological platforms and instructional strategies, highlighting the need to establish context-specific selection criteria and empirically validate their effectiveness across different educational environments.
4 Discussion
The results of this modified Delphi study confirm that gamification, when systematically integrated and aligned with curricular objectives, constitutes an effective pedagogical strategy for strengthening ICT education in higher education. The experts prioritized progressive challenges, immediate feedback, rewards, and narrative, elements widely associated in recent literature with increased motivation, persistence, and meaningful learning (Majuri et al., 2018; Di Nardo et al., 2024; Pérez and Muñoz, 2024).
From a theoretical perspective, the gamification elements that achieved the highest levels of consensus can be interpreted through the lenses of SDT and Flow Theory. In particular, progressive challenges and clear rules are directly linked to the psychological need for autonomy, as they allow students to perceive control over their learning process and to make informed decisions within a structured framework. Likewise, immediate feedback and reward systems contribute to satisfying the need for competence by providing continuous performance-related information and reinforcing perceptions of personal efficacy.
Complementarily, the gradual progression of challenges and the incorporation of narrative elements foster conditions associated with Flow, especially the balance between challenge level and student skills, as well as immersion in the learning activity. In the context of ICT education, where learning tasks frequently involve complex problem-solving and iterative processes, these theoretical principles are particularly relevant, as they support sustained concentration, reduce frustration, and promote deeper and more meaningful learning experiences.
The strong agreement regarding progressive challenges and immediate feedback is consistent with Faure-Carvallo et al. (2022), who emphasize their contribution to learner autonomy and self-regulation. Likewise, consensus on the use of rewards and badges aligns with Muñoz and Gasca-Hurtado (2023), highlighting their positive influence on perceived achievement and intrinsic motivation. These results corroborate international reviews supporting the effectiveness of meaningful gamification over superficial gamification, as it promotes emotional connection and the internalization of learning goals (Munblit et al., 2022; Van Roy and Zaman, 2015).
Although elements such as challenges and feedback are recognized as cross-cutting principles of gamification, they take on particular relevance in the specific context of ICT teaching. Unlike other disciplines, ICT learning is characterized by open-ended problem solving, iterative experimentation, and constant error management, especially in tasks related to programming, system design, and the use of digital tools. In this context, gamified challenges not only act as motivational mechanisms, but also as structures that organize cognitive progression and facilitate the transfer of knowledge to practical situations. Similarly, immediate feedback is especially critical in ICT, as it allows students to identify failures, debug processes, and adjust strategies in real time, reducing the frustration associated with technical complexity. These findings suggest that, although the principles of gamification are universal, their effectiveness and pedagogical significance are intensified in technological disciplines where cognitive load and complex problem solving are central. In this sense, the contribution of the study lies not only in confirming general principles of gamification, but also in demonstrating how these principles relate to the cognitive and procedural demands of ICT learning.
The experts’ open comments complement the quantitative interpretation of the results. For example, several participants noted that “challenges should be increased progressively to avoid student frustration,” while others emphasized that “immediate feedback is key in ICT, as students need to identify errors quickly in order to continue advancing.” Likewise, some experts observed that “there is no single platform that works in all contexts,” highlighting the need to adapt technological tools to institutional conditions and the type of training activity. These qualitative contributions reinforce the interpretation of the consensus levels and enrich the analysis of the Delphi process.
Nevertheless, variability in expert opinions regarding academic performance echoes prior studies suggesting that motivational gains do not always translate into higher grades (Lampropoulos and Sidiropoulos, 2024; Díaz et al., 2024). This reinforces the role of instructional design and teacher mediation as determining factors for pedagogical impact (Navarro-Espinosa et al., 2022; Hernández-Fernández et al., 2020). Experts agreed that gamification amplifies engagement and comprehension when framed within a coherent learning design, rather than functioning as an isolated motivational technique.
4.1 Comparison with previous Delphi studies
The consensus patterns observed mirror those reported in other Delphi studies in educational research. Oxley et al. (2025) found that expert panels tend to achieve faster and stronger consensus on pedagogical dimensions than on technological aspects, due to institutional diversity and differences in digital literacy. Similarly, this study exhibited high convergence in pedagogical areas (motivation, feedback, transversal competencies) but greater variability in technological tools (W = 0.27–0.43).
Rüetschi et al. (2022) and Wang et al. (2024) also observed that e-Delphi studies achieve greater stability when evaluating conceptual or pedagogical constructs than when addressing tool-specific or instrumental decisions. The same pattern appears here: experts agreed on design principles but diverged on platform preferences, confirming the need for context-sensitive selection criteria.
The lower level of consensus observed in relation to technology platforms and certain specific strategies (W = 0.27–0.28) should not be interpreted as a weakness of the Delphi process, but rather as an expression of the dynamic and highly contextual nature of the ICT field. Unlike pedagogical principles, which tend to remain stable over time, technological tools are subject to accelerated processes of obsolescence, constant updating, and differential adoption among institutions. In addition, ICT subjects combine conceptual and procedural tasks with different cognitive demands, which influences the assessment of certain platforms or strategies depending on the type of predominant activity (e.g., programming, digital design, or data analysis). In this sense, the heterogeneity captured by the Delphi reflects the real diversity of teaching practices in ICT and reinforces the need to adopt flexible and contextualized criteria for the selection of tools, rather than promoting universal technological solutions. In this way, the study provides evidence not only about which elements generate consensus, but also about those areas where diversity of contexts prevents standardization, which is key information for instructional design in technological disciplines.
In addition, the composition of the expert panel may have contributed to the lower levels of consensus observed in specific research questions related to technological platforms and implementation strategies (RQ3, RQ5 and RQ8). Given that most experts were affiliated with Latin American higher education institutions, their assessments may reflect shared contextual conditions regarding infrastructure, institutional policies, and access to digital tools. This contextual convergence may partially explain the divergence observed in tool-specific domains, reinforcing the idea that technological choices in ICT education are strongly shaped by local conditions rather than universally applicable criteria.
Methodologically, the achievement of strong consensus in two rounds supports the efficiency of the study design, consistent with Malkawi et al. (2023), who note that a two-round Delphi is valid when robust statistical indicators are applied (IQR ≤ 1; CV ≤ 0.30; W ≥ 0.60). The inter-coder reliability (κ = 0.82) exceeds the thresholds reported by Diamond et al. (2014) and Potter et al. (2025), reinforcing the methodological rigor of this research.
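The quantitative consensus indicators invoked here (Kendall's W, IQR, CV) are straightforward to reproduce. The sketch below is a minimal illustration, assuming an m-rater-by-n-item matrix of Likert scores; it uses the classic W formula without the correction term for tied ranks, and all function and variable names are illustrative rather than the study's actual analysis code.

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(ratings):
    """Kendall's coefficient of concordance W for an (m raters x n items)
    score matrix, using W = 12*S / (m^2 * (n^3 - n)) without the tie
    correction term."""
    ratings = np.asarray(ratings, dtype=float)
    m, n = ratings.shape
    # Convert each rater's scores into ranks across the n items
    ranks = np.apply_along_axis(rankdata, 1, ratings)
    rank_sums = ranks.sum(axis=0)
    # S: sum of squared deviations of the rank sums from their mean
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

def consensus_indicators(item_scores):
    """Interquartile range and coefficient of variation for one item's
    Likert scores across the panel."""
    scores = np.asarray(item_scores, dtype=float)
    q1, q3 = np.percentile(scores, [25, 75])
    cv = scores.std(ddof=1) / scores.mean()
    return q3 - q1, cv
```

Under the thresholds cited above, an item would count as consensual when its IQR ≤ 1 and CV ≤ 0.30, with panel-level concordance established by W ≥ 0.60.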
Comparatively, this study aligns with recent Delphi-based analyses (Hong et al., 2024; Seo et al., 2024) identifying challenges, progression, and feedback as the most consistent pillars of educational gamification. Similarly, Game et al. (2024) validated the relevance of these elements in Latin American contexts with technological limitations, supporting the international transferability of the findings.
4.2 Pedagogical implications
The findings offer actionable contributions for instructional design and university teaching practice. First, they provide a validated reference framework for integrating gamification into ICT courses, emphasizing the need to align game elements with learning outcomes and formative assessment. Second, they highlight teacher training in gamified design as a critical factor for ensuring meaningful and sustainable implementation.
The relationship between gamification, motivation, and academic performance requires careful interpretation. Although the study results show a clear consensus on the positive impact of gamification on student motivation and participation, this does not necessarily imply immediate improvements in traditional assessment results. In contexts such as ICT teaching, gamification can promote deep learning processes—such as conceptual understanding, knowledge transfer, and autonomous problem solving—whose effects are not always directly reflected in short-term grades. In addition, assessment practices focused on final products or summative tests may not adequately capture the benefits of greater cognitive and emotional engagement. In this sense, the effects of gamification on academic performance may manifest gradually and depend on both instructional design and the alignment between gamified strategies and the assessment systems used. Therefore, future research should consider longitudinal designs and broader assessment approaches that allow for analyzing the impact of gamification beyond immediate performance indicators.
From a pedagogical standpoint, experts concur that gamification is most effective when conceptualized as an active and meaningful learning model that fosters autonomy, collaboration, and critical thinking. The results support the development of an integrated gamification model in ICT education, comprising three interdependent dimensions: pedagogical (curricular alignment and coherent instructional design), motivational (application of principles from Self-Determination Theory and Flow Theory), and technological (context-based selection of digital tools according to institutional resources and digital competence levels).
4.3 Methodological discussion and limitations
Methodologically, this study exhibits several strengths aligned with current Delphi research standards. The integration of quantitative (W, IQR, CV) and qualitative (κ) indicators enhanced transparency and interpretive validity, following recommendations by Diamond et al. (2014) and Ramírez Chávez and Ramírez Torres (2024). The diversity and expertise of the panel contributed to the robustness of the consensus, in line with Oxley et al. (2025) and Guerrero-Alcedo et al. (2022).
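The qualitative indicator can be illustrated in the same spirit. The sketch below is a generic implementation of Cohen's κ for two coders' parallel category assignments; it is not the study's analysis code, and the category labels used in the example are invented.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders who each assigned one category
    to the same sequence of units. Assumes expected agreement < 1."""
    assert len(codes_a) == len(codes_b) and codes_a
    n = len(codes_a)
    # Observed agreement: proportion of units coded identically
    p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected chance agreement from the coders' marginal frequencies
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)
```

By convention (e.g., Landis and Koch's widely used benchmarks), values above 0.80 such as the κ = 0.82 reported here are read as almost perfect agreement.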
While the framework proposed in this study is described as a validated reference framework, it is important to clarify that this validation is based on expert consensus achieved through the Delphi method and therefore constitutes a form of conceptual validation. This approach ensures theoretical coherence and pedagogical relevance; however, it does not replace empirical validation derived from the application of the framework in real educational contexts. Consequently, future research should focus on empirically testing the proposed framework through experimental or quasi-experimental designs to assess its impact on learning, motivation and academic performance.
However, limitations should be acknowledged. Although two rounds sufficed to achieve stable consensus, additional iterations might improve longitudinal stability in domains with higher conceptual variability (Malkawi et al., 2023; Rüetschi et al., 2022). The regional composition of the panel may influence technological preferences, reflecting infrastructure disparities observed in other studies (Alsadoon et al., 2022; Faure-Carvallo et al., 2022). Moreover, Delphi consensus represents structured expert judgment rather than empirical evidence, requiring subsequent validation through experimental or longitudinal studies (Hernández-Fernández et al., 2020; Navarro-Espinosa et al., 2022).
Finally, a further limitation of the present study concerns the composition of the expert panel. Although the number of participants and the selection criteria applied meet the methodological standards commonly recommended for Delphi studies, most experts were affiliated with higher education institutions in Latin America, with a substantial representation of academics linked to Ecuadorian universities. While this composition ensured strong contextual relevance for ICT education in the region, it also implies that the assessments largely reflect similar institutional, technological and pedagogical realities. Consequently, the findings should be interpreted within this context, and future research could benefit from the inclusion of more geographically and institutionally diverse expert panels in order to enhance the generalizability of the results.
4.4 Contributions and future directions
This study provides a robust theoretical and methodological foundation for integrating gamification into ICT teaching in higher education. It delivers a validated instrument for prioritizing gamification elements and establishes clear design criteria for pedagogical innovation. Future research should pursue: empirical validation of prioritized elements through experimental or quasi-experimental designs; cross-cultural comparisons of gamification practices; and the development of international standards for techno-pedagogical tool selection.
These directions align with recent trends identified by Magpusao (2024) and Faure-Carvallo et al. (2022), reinforcing the relevance of evidence-based innovation in higher education.
5 Conclusion
The results of this modified Delphi study enabled the identification of, and expert consensus on, the most relevant gamification elements for teaching ICT in higher education. The 12-member expert panel agreed that gamification can generate significant pedagogical benefits when it is strategically integrated into the curriculum and aligned with clearly defined learning objectives. A strong consensus was achieved regarding its potential to enhance key competencies such as problem-solving, critical thinking, and collaboration—skills considered essential in contemporary higher education.
Experts prioritized challenges, immediate feedback, and clear rules as central components for fostering motivation and active student engagement. These findings confirm that meaningful gamification, grounded in autonomy and engagement, is more effective than reward-based or superficial approaches. However, variability was observed in perceptions of its direct impact on academic performance, which aligns with previous studies and underscores the need for further empirical research to clarify these effects.
A key finding was the lack of consensus regarding a single technological platform for gamified implementation. The coefficient of concordance (W = 0.27–0.43) revealed considerable heterogeneity in tool selection, indicating that technological suitability depends largely on contextual factors such as institutional infrastructure and the digital competence of both teachers and students. Thus, the study proposes adopting context-sensitive selection criteria rather than pursuing universal technological solutions.
Methodologically, the modified Delphi design proved rigorous and effective for structuring and refining expert opinions, achieving statistically consistent consensus levels (overall W = 0.74; IQR ≤ 1; CV ≤ 0.30) and high inter-coder reliability (κ = 0.82). The triangulation of quantitative and qualitative analyses provided interpretive depth and reinforced the robustness of the findings.
Overall, this study delivers a validated reference framework to guide the design and implementation of gamified learning experiences in higher education, particularly in ICT instruction. Future research should expand the expert sample, compare platform effectiveness, and conduct experimental and longitudinal studies to empirically validate the impact of gamification on motivation, learning, and academic achievement. By integrating motivational theory with instructional design, this study contributes to a more coherent, evidence-based, and practice-oriented understanding of gamification as a transformative pedagogical strategy in higher education.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found at: OSF-DOI: https://doi.org/10.17605/OSF.IO/TCP3Z.
Ethics statement
Ethical approval was not required for this study in accordance with local legislation and institutional requirements, as it involved a non-interventional educational study based on anonymous questionnaires completed voluntarily by adult participants. All participants provided their written informed consent to participate in this study.
Author contributions
LJ-M: Writing – review & editing, Conceptualization, Methodology, Writing – original draft, Supervision, Validation, Data curation, Investigation. AB-A: Investigation, Conceptualization, Validation, Data curation, Writing – review & editing, Supervision, Writing – original draft, Methodology. SC-M: Data curation, Writing – review & editing, Validation, Methodology, Investigation, Writing – original draft, Conceptualization, Supervision. MC-G: Writing – review & editing, Supervision, Writing – original draft, Methodology, Data curation, Investigation, Validation, Conceptualization.
Funding
The author(s) declared that financial support was received for this work and/or its publication. The main source of funding is the Universidad Técnica del Norte (UTN), through an institutional budget specifically allocated for scientific publications.
Acknowledgments
The authors would like to express their sincere appreciation and gratitude to the Technical University of the North for the support provided during the development of this research, which forms an integral part of the doctoral thesis within the PhD Program “Education in the Knowledge Society” at the University of Salamanca (Spain).
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that Generative AI was used in the creation of this manuscript. The author(s) verify and take full responsibility for the use of generative AI in the preparation of this manuscript. Generative AI was used for language editing and improvement of clarity and academic writing style. The authors reviewed, validated, and approved all content, and the use of generative AI did not influence the study design, data collection, analysis, or interpretation of the results.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Supplementary material
The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2026.1781564/full#supplementary-material
References
Aguilera Meza, C. K., Loor, C. P. S., Párraga, B. A. P., and Delgado, J. R. E. (2020). Gamificación: estrategia didáctica motivadora en el proceso de enseñanza aprendizaje del primer grado de educación básica. Rev. Cognosis 5, 51–70. doi: 10.33936/cognosis.v5i3.2083
Alsadoon, E., Alkhawajah, A., and Bin Suhaim, A. (2022). Effects of a gamified learning environment on students’ achievement, motivations, and satisfaction. Heliyon 8:e10249. doi: 10.1016/j.heliyon.2022.e10249
Arias, J., and Mon, F. (2022). Gamified flipped classroom as a pedagogical strategy in higher education: a systematic review. Edutec. 80, 84–98. doi: 10.21556/edutec.2022.80.2435
Azogue-Punina, J. G., and Barrera-Erreyes, H. M. (2020). La motivación intrínseca en el aprendizaje significativo. Polo. Conoc. Rev. Cient. 46, 99–116. Available at: https://polodelconocimiento.com/ojs/index.php/es/article/view/1469
Baldrich, K., Pérez-García, C., Domínguez-Oller, J. C., and Sánchez-Fortún, J. M. A. (2024). Gamified experiences in educational academic contexts: a systematic review. Qual. Res. Educ. 13, 221–242. doi: 10.17583/qre.13552
Beiderbeck, D., Frevel, N., von der Gracht, H., Schmidt, S. L., and Schweitzer, V. M. (2021). Preparing, conducting, and analyzing Delphi surveys: cross-disciplinary practices, new directions, and advancements. MethodsX 8:101401. doi: 10.1016/j.mex.2021.101401
Carbajal, P., Barboza, J. R. R., Garay, J. P., Sánchez, G. A. Á., and Albornoz, V. C. (2022). Gamificación como técnica de motivación en el nivel superior. Rev. Invest. Ciencias. Educ. 6, 484–496. doi: 10.33996/revistahorizontes.v6i23.351
Cardoso, C., Canavarro, M. C., and Pereira, M. (2025). Motivation in doctoral students: development and psychometric validation of the European Portuguese version of the motivation for PHD studies scale. Int. J. Dr. Stud. 20:2. doi: 10.28945/5436
Csikszentmihalyi, M. (1990). Flow: the psychology of optimal experience. New York (NY): Harper & Row.
Deci, E. L., and Ryan, R. M. (1985). The general causality orientations scale: self-determination in personality. J. Res. Pers. 19, 109–134. doi: 10.1016/0092-6566(85)90023-6
Di Nardo, V., Fino, R., Fiore, M., Mignogna, G., Mongiello, M., and Simeone, G. (2024). Usage of gamification techniques in software engineering education and training: a systematic review. Computers 13:196. doi: 10.3390/computers13080196
Diamond, I. R., Grant, R. C., Feldman, B. M., Pencharz, P. B., Ling, S. C., Moore, A. M., et al. (2014). Defining consensus: a systematic review recommends methodologic criteria for reporting of Delphi studies. J. Clin. Epidemiol. 67, 401–409. doi: 10.1016/j.jclinepi.2013.12.002
Díaz, D. M., Díaz, E. J., and Pilco, M. A. (2024). Gamificación como estrategia educativa para incrementar la motivación y rendimiento académico. Inter. Vis. Cult. Rev. 16, 61–69. doi: 10.62161/revvisual.v16.5323
Dichev, C., and Dicheva, D. (2017). Gamifying education: what is known, what is believed and what remains uncertain: a critical review. Int. J. Educ. Technol. High. Educ. 14:9. doi: 10.1186/s41239-017-0042-5
Dreimane, S. (2019). “Gamification for education: review of current publications: smart pedagogy for technology enhanced learning” in Didactics of Smart Pedagogy. ed. L. Daniela (Cham: Springer), 453–464.
Faure-Carvallo, A., Calderón-Garrido, D., and Gustems-Carnicer, J. (2022). Gamificación digital en la educación secundaria: una revisión sistemática. Rev. Latina. Comunic. Soc. 80, 137–154. doi: 10.4185/RLCS-2022-1773
Fitria, T. N. (2022). The impact of gamification on students’ motivation: a systematic literature. LingTera 9, 47–61. doi: 10.21831/lt.v9i2.56616
Game, P. I. Z., Zúñiga, V. J. C., Murrieta, N. P. G., and Gómez, L. X. R. (2024). La gamificación para el mejoramiento del proceso de enseñanza - aprendizaje en educación básica. Uniandes Episteme 11, 32–44. doi: 10.61154/rue.v11i1.3350
García-Martín, J., and García-Sánchez, J.-N. (2020). The effectiveness of four instructional approaches used in a MOOC promoting personal skills for success in life. Revista. Psicodidáctica 25, 36–44. doi: 10.1016/j.psicoe.2019.08.001
Guanoluisa, J., Quichimbo, J., and Muevecela, S. (2022). La gamificación cooperativa como estrategia de enseñanza inclusiva en estudiantes de la unidad educativa “molleturo”. Una revisión de la literatura. Religación. Revista. Cien. Sociales. Humanidades 7:e210939. doi: 10.46652/rgn.v7i34.939
Guerrero-Alcedo, J. M., Espina-Romero, L. C., and Nava-Chirinos, Á. A. (2022). Gamification in the university context: bibliometric review in scopus (2012–2022). Int. J. Learn. Teach. Educ. Res. 21, 309–325. doi: 10.26803/ijlter.21.5.16
Hernández-Fernández, A., Olmedo-Torre, N., and Peña, M. (2020). Is classroom gamification opposed to performance? Sustainability 12:9958. doi: 10.3390/su12239958
Hong, Y., Saab, N., and Admiraal, W. (2024). Approaches and game elements used to tailor digital gamification for learning: a systematic literature review. Computers. Educ. 212:105000. doi: 10.1016/j.compedu.2024.105000
Hsu, C. C., and Sandford, B. A. (2007). The Delphi technique: making sense of consensus. Pract. Assess. Res. Eval. 12:10. doi: 10.7275/pdz9-th90
Koivisto, J., and Hamari, J. (2019). Gamification of physical activity: a systematic literature review of comparison studies. Proceedings of the 2019 GamiFIN conference, (pp. 106–117), New York (NY): ACM
Lampropoulos, G., and Sidiropoulos, A. (2024). Impact of gamification on students’ learning outcomes and academic performance: a longitudinal study comparing online, traditional, and gamified learning. Educ. Sci. 14:367. doi: 10.3390/educsci14040367
Londoño, L. M., and Rojas, M. D. (2020). De los juegos a la gamificación: propuesta de un modelo integrado. Tecnologías. Información. Comunicación. Pedagogía 23, 493–512. doi: 10.5294/edu.2020.23.3.7
Lopez-Catalan, B., and Bañuls, V. A. (2017). A Delphi-based approach for detecting key e-learning trends in postgraduate education: the Spanish case. Educ. Train. 59, 590–604. doi: 10.1108/ET-12-2016-0186
Lorenzo-Lledó, A., Pérez Vázquez, E., Andreu-Cabrera, E., and Lorenzo Lledó, G. (2023). Application of gamification in early childhood education and primary education: thematic analysis. Retos 50, 858–875. doi: 10.47197/retos.v50.97366
Magpusao, J. R. (2024). Gamification and game-based learning in primary education: a bibliometric analysis. Computers. Children 3:em005. doi: 10.29333/cac/14182
Majuri, J., Koivisto, J., and Hamari, J. (2018). “Gamification of education and learning: a review of empirical literature,” in Proceedings of the 2018 GamiFIN conference, 2186, (pp. 11–19). New York (NY): ACM
Malkawi, A. R., Bakar, M. S. A., and Dahlin, Z. M. (2023). Review of the Delphi method in the higher educational research. Available online at: https://www.researchgate.net/profile/Aminah-Malkawi/publication/370962568
Mamekova, A. T., Toxanbayeva, N. K., Naubaeva, K. T., Ongarbayeva, S. S., and Akhmediyeva, K. N. (2021). A meta-analysis on the impact of gamification over students’ motivation. J. Intellect. Disab. Diagnos. Treat. 9, 417–422. doi: 10.6000/2292-2598.2021.09.04.9
Munblit, D., Nicholson, T., Akrami, A., Apfelbacher, C., Chen, J., De Groote, W., et al. (2022). A core outcome set for post-COVID-19 condition in adults for use in clinical practice and research: an international Delphi consensus study. Lancet Respir. Med. 10, 715–724. doi: 10.1016/S2213-2600(22)00169-2
Muñoz, M., and Gasca-Hurtado, G. P. (2023). Gamificación para atender los desafíos de la enseñanza ingeniería de software en instituciones de educación superior. Rev. Ibérica. Sistemas. Tecno. Inform. 49, 5–21. doi: 10.17013/risti.49.5-21
Navarro-Espinosa, J. A., Vaquero-Abellán, M., Perea-Moreno, A. J., Pedrós-Pérez, G., Martínez-Jiménez, M. D. P., and Aparicio-Martínez, P. (2022). Gamification as a promoting tool of motivation for creating sustainable higher education institutions. Int. J. Environ. Res. Public Health 19:2599. doi: 10.3390/ijerph19052599
Navarro-Mateos, C., Pérez-López, I. J., and Trigueros Cervantes, C. (2024). Analysis of the teaching role in a gamification proposal in the teacher’s master’s degree. Revista. Educac. 405, 275–301. doi: 10.4438/1988-592X-RE-2024-405-635
Okoli, C., and Pawlowski, S. D. (2004). The Delphi method as a research tool: an example, design considerations and applications. Information & Management 42, 15–29. doi: 10.1016/j.im.2003.11.002
Oxley, E., Nash, H. M., and Weighall, A. R. (2025). Consensus building using the Delphi method in educational research: a case study with educational professionals. Int. J. Res. Method. Educ. 48, 29–43. doi: 10.1080/1743727X.2024.2317851
Pérez, L., and Muñoz, L. (2024). La gamificación en el ámbito educativo: desafíos, potencialidades y perspectivas para su implementación. Revista. Educac. 405, 249–274. doi: 10.4438/1988-592X-RE-2024-405-634
Pinchover, S., Berger Raanan, R., Gadassi, H., Shalev, A., Dahari, D., Gutentag, T., et al. (2024). Pediatricians at the forefront of child mental health? A Delphi method exploration. Isr. J. Health Policy Res. 13:73. doi: 10.1186/s13584-024-00661-5
Potter, A., Munsch, C., Watson, E., Hopkins, E., Kitromili, S., O’Neill, I. C., et al. (2025). Identifying research priorities in digital education for health care: umbrella review and modified Delphi method study. J. Med. Internet Res. 27:e66157. doi: 10.2196/66157
Ramírez Chávez, M. A., and Ramírez Torres, T. Z. (2024). El método DELPHI como herramienta de investigación. Una revisión: the DELPHI method as a research tool. A review. LATAM Rev. Latinoamericana. Cienc. Social. Human. 5, 3368–3383. doi: 10.56712/latam.v5i1.1842
Rüetschi, U., Furnstahl, P., Favre, J., Eid, K., and Schmid, T. (2022). Methodology of an e-Delphi study to explore future trends in orthopedics education and the role of technology in orthopedic surgeon learning. J. Musculoskelet. Surgery. Res. 6, 121–129. doi: 10.25259/JMSR_134_2021
Ryan, R. M., and Deci, E. L. (2020). Intrinsic and extrinsic motivation from a self-determination theory perspective: definitions, theory, practices, and future directions. Contemp. Educ. Psychol. 61:101860. doi: 10.1016/j.cedpsych.2020.101860
Sánchez, M. E. (2022). Tendencias y estado del arte de la gamificación para materias de programación en educación superior, vol. 18. Juárez: Theorema Revista Científica.
Sangucho, M., Janeth, A., Aillón, F., and Milena, T. (2020). Gamificación como técnica didáctica en el aprendizaje de las ciencias naturales. INNOVA Res. J. 5, 164–181. doi: 10.33890/innova.v5.n3.2020.1391
Seo, Y. K., Kang, C. M., Kim, K. H., and Jeong, I. S. (2024). Effects of gamification on academic motivation and confidence of undergraduate nursing students: a systematic review and meta-analysis. Nurse Educ. Today 143:106388. doi: 10.1016/j.nedt.2024.106388
Van Roy, R., and Zaman, B. (2015). The inclusion or exclusion of teaching staff in a gamified system: an example of the need to personalize. In Proceedings of the CHI play ‘15 workshop’ personalization in serious and persuasive games and gamified interactions. Leuven: KU Leuven
Wang, Y. F., Hsu, Y. F., and Fang, K. (2022). The key elements of gamification in corporate training – the Delphi method. Entertainment Computing 40:100463. doi: 10.1016/j.entcom.2021.100463
Wang, Y. F., Hsu, Y. F., Fang, K. T., and Kuo, L. T. (2024). Gamification in medical education: identifying and prioritizing key elements through Delphi method. Med. Educ. Online 29:2302231. doi: 10.1080/10872981.2024.2302231
Keywords: active learning, Delphi method, expert consensus, gamification, higher education, ICT, instructional design, motivation
Citation: Jaramillo-Mediavilla L, Basantes-Andrade A, Casillas-Martín S and Cabezas-González M (2026) Expert consensus on gamification strategies for ICT courses in higher education: a modified Delphi approach. Front. Educ. 11:1781564. doi: 10.3389/feduc.2026.1781564
Edited by:
Chinaza Solomon Ironsi, University of Mediterranean Karpasia, Cyprus
Reviewed by:
Federica Pelizzari, Università Cattolica del Sacro Cuore, Italy
Juncheng Guo, UCSI University, Malaysia
Copyright © 2026 Jaramillo-Mediavilla, Basantes-Andrade, Casillas-Martín and Cabezas-González. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Lorena Jaramillo-Mediavilla, ljaramillo@utn.edu.ec