- 1Coordinación de Matemáticas y Estadística de la Facultad de Ciencias Básicas de la Universidad Técnica de Manabí, Manabí, Ecuador
- 2Facultad de Posgrado de la Universidad Estatal de Milagro, Milagro, Ecuador
- 3Centro de Investigación en Ciencias Humanas y de la Educación-CICHE, Facultad de Ciencias de la Educación, Universidad Tecnológica Indoamérica, Ambato, Ecuador
- 4Pontificia Universidad Católica del Ecuador, Santo Domingo de los Colorados, Ecuador
- 5Unidad Educativa Nuevo Ecuador, Quito, Ecuador
- 6Universidad Autónoma de Sinaloa, Mazatlán, México
Introduction: In the context of university education in Ecuador, the application of Artificial Intelligence (AI) for the assessment and adaptation of teaching models marks significant progress toward enhancing educational quality. The integration of AI into pedagogical processes is increasingly recognized as a strategic component for fostering innovation and improving instructional outcomes in higher education.
Methods: This study focused on the validation of an AI-based instrument, specifically designed for the evaluation and adaptation of pedagogical strategies in the Ecuadorian university environment. A quantitative methodology was adopted, employing multivariate statistical analyses and structural equation modeling (SEM) to examine the internal consistency, construct validity, and interrelations among various didactic dimensions. The instrument was applied to a statistically representative sample of university professors across both undergraduate and graduate levels.
Results: The statistical analysis demonstrated high levels of internal consistency and discriminative validity among the constructs representing different teaching models. The confirmatory factor analysis and SEM procedures verified the adequacy of the theoretical structure and the robustness of the proposed measurement model. Coefficients obtained for reliability and model fit met or exceeded established thresholds in educational research.
Discussion: The findings confirm the empirical soundness of the AI-based instrument and support the feasibility of using such tools to assess and enhance teaching models in higher education. These results underscore the importance of adopting innovative, data-driven methodologies that respond to the demands of contemporary educational environments. Furthermore, the use of AI in the validation process enables a more precise interpretation of educational information, reinforcing the relevance of AI-supported models in optimizing teaching and learning processes.
1 Introduction
In the current context of rapid digital transformation and the proliferation of emerging technologies, the educational sector, particularly at the university level, encounters a multifaceted landscape marked by both challenges and opportunities (Apata, 2024; George and Wooden, 2023; Moreira-Choez et al., 2024c). Within this framework, the integration of Artificial Intelligence (AI) is increasingly recognized as a pivotal factor in meeting and adapting to contemporary educational demands. According to Lameras and Arnab (2021), AI supports the development of personalized and efficient teaching strategies while also transforming pedagogical interactions at various levels, thereby redefining the dynamics of teaching and learning.
The evolution of teaching models reflects a transition from traditional, teacher-centered approaches to interactive, student-focused methodologies. This shift has been influenced by both pedagogical imperatives and technological developments (Bakar, 2021; Kanwar et al., 2019). Constructivist, collaborative, and other innovative frameworks have replaced rote memorization, emphasizing critical thinking, problem-solving, and learner autonomy (Einum, 2019; Murphy et al., 2021). Despite these advancements, a gap remains: the absence of validated tools capable of evaluating and adapting teaching methodologies to specific contexts, which limits the effective implementation of these models.
AI emerges as a viable solution to this issue, offering capabilities that enable the processing of large datasets, identification of patterns, and provision of adaptive recommendations (Dwivedi et al., 2021). In the context of this study, AI is employed for the validation of an instrument designed to evaluate teaching models in higher education. By utilizing advanced analytical techniques, AI ensures the reliability, internal consistency, and discriminative capacity of the instrument, making it a robust tool for application in diverse educational environments (Cowls et al., 2023).
The relevance of this research is underscored by its potential to address critical deficiencies in university didactics. The integration of AI in the validation process not only contributes to the development of more effective and personalized teaching processes but also aligns with broader goals of improving educational quality (Naseer et al., 2024). This alignment is particularly pertinent in Ecuador, where the adaptation of teaching models to meet the needs of students represents an essential objective (Ingavelez-Guerra et al., 2022; Ruiz-Rojas et al., 2023). The instrument validated in this study is designed to enable educators to assess and implement innovative teaching methodologies, addressing contemporary educational challenges and supporting the evolution of quality education in the face of technological advancements.
In response to this problem, the research question is formulated: How can a teaching model instrument for university education in Ecuador be validated using an artificial intelligence algorithm? To address this question, the following general objective is established: to validate a teaching model instrument for university education in Ecuador through an artificial intelligence algorithm. The formulation of this question and objective seeks to address the specific needs of evaluating and adapting teaching models within the context of Ecuadorian university education, utilizing advanced technological tools to ensure precision and efficiency in the results.
To fulfill both the problem statement and the study objective, the following hypotheses are proposed, serving as the foundation for the scientific validation of the proposed instrument.
• H1: The factor loadings of the regressions between the items and each teaching model are acceptable in the questionnaire for higher education teaching validated through artificial intelligence.
• H2: The factors are significantly related to the teaching model, with parameters obtained from the best model fit.
• H3: The variance coefficients are statistically significant for the observed variables and the teaching models.
• H4: The teaching models in higher education are distinguishable from one another through discriminant analysis, convergent analysis, and the Heterotrait-Monotrait (HTMT) ratio.
2 Theoretical framework
2.1 Traditional didactic model
The traditional didactic model is defined by a teacher-centered approach, where the unidirectional transmission of knowledge predominates (Hoidn and Reusser, 2020; Yang, 2008). In this paradigm, learning is conceived as a passive process of information reception, evaluated primarily through memorization and repetition of data. The assessment used in this model tends to be summative, focusing on final outcomes while neglecting a comprehensive evaluation of the learning process. Although this approach has been widely employed, critics such as Paul (1989) highlight that it fails to foster the development of critical skills and independent thinking, which are essential in contemporary education.
The evaluation of this model involves analyzing key attributes, such as reliance on teacher authority, the hierarchical structure of the learning process, and the emphasis on outcomes over procedures (Stufflebeam, 2001). These attributes are crucial for understanding how the model impacts the development of student competencies (Gamage et al., 2023; Hu et al., 2023). Specifically, measuring the predominance of unidirectional transmission and limited interaction helps identify its influence on students' ability to apply knowledge critically and autonomously.
The importance of measuring these attributes lies in the need to assess the relevance of the model in current educational contexts, which demand transversal competencies such as problem-solving and adaptability. The literature provides evidence that validates these measurements as relevant elements of the construct (Sarstedt et al., 2019; Yang et al., 2004). For instance, studies have shown that teacher-centered approaches correlate with limited performance in tasks requiring analysis and creativity (Oyelana et al., 2022; Wagner et al., 2020). Furthermore, criticisms of the model suggest that its lack of emphasis on the educational process can perpetuate superficial and fragmented learning.
2.2 Collaborative didactic model
The collaborative didactic model, in contrast to the traditional model, is based on the importance of social interaction within the educational process (Kaasila and Lauriala, 2010). This approach fosters collaboration among students, creating an environment conducive to the exchange of ideas and joint problem-solving. Beyond improving social skills, this model enriches learning by providing it with greater depth and meaning. According to Mora et al. (2020), it is particularly effective in developing competencies such as critical thinking, problem-solving, and teamwork.
The evaluation of this model involves analyzing key attributes such as active peer interaction, the ability to construct knowledge collectively, and the inclusion of diverse perspectives in learning (Lombardi et al., 2021). These elements are crucial to understanding how the model promotes essential competencies that go beyond academic content and translate into skills applicable in various contexts. Active social interaction and structured collaboration are measurable indicators that reflect the model's ability to facilitate meaningful and transferable learning experiences (de Freitas and Neumann, 2009; Patel et al., 2012; Qin and Yu, 2024).
The importance of measuring these attributes lies in the need to assess the effectiveness of this approach in meeting the demands of contemporary educational environments, which require transversal skills and social competencies. The literature supports the validity of these measurements, as studies have shown that collaborative settings enhance deep learning and improve performance in tasks requiring creativity and critical thinking (Chen et al., 2018; Graesser et al., 2018). Moreover, collaborative dynamics allow students to develop negotiation, leadership, and conflict resolution skills, which are fundamental in professional and social contexts.
2.3 Spontaneist didactic model
The spontaneist didactic model emphasizes the significance of direct and spontaneous student experiences, framing learning as a natural and organic process that should be facilitated rather than imposed (Green, 2015; Reigeluth, 2013). Within this paradigm, students' curiosity and personal interests serve as primary drivers of their educational journey, positioning the teacher as a facilitator who supports exploration and discovery rather than a source of unidirectional knowledge transmission. According to Alkhawalde and Khasawneh (2024), this approach proves particularly effective in fostering creativity and intrinsic motivation, as it aligns closely with the learner's internal inclinations and interests.
The evaluation of this model requires examining attributes such as the degree of autonomy afforded to students, the role of curiosity in guiding learning activities, and the extent to which the learning environment supports spontaneous exploration (Ten et al., 2021). These attributes are critical for understanding how the model influences student engagement and promotes competencies like creative problem-solving and self-directed learning (Loyens et al., 2008). Measuring these elements allows for the identification of how effectively the model facilitates adaptive and meaningful learning experiences.
The importance of assessing these attributes lies in their potential to provide insights into how well the spontaneist model aligns with the demands of modern education, where adaptability and lifelong learning are increasingly valued (Kergel, 2023). Research evidence supports the validity of these attributes as relevant components of the construct. For instance, studies have shown that environments promoting student autonomy and curiosity are associated with higher levels of engagement and deeper learning (Arnone et al., 2011; Tas, 2016; Tu and Lee, 2024). Furthermore, such settings foster resilience and the ability to navigate complex, real-world problems, outcomes often linked to the development of intrinsic motivation and creativity.
2.4 Constructivist didactic model
The constructivist didactic model posits that learning is an active process through which individuals construct new knowledge by engaging with their experiences and interacting with their environment (Loyens and Gijbels, 2008; Zajda, 2021). This perspective shifts the role of the educator from a transmitter of information to a facilitator who designs diverse and meaningful contexts that enable students to integrate new knowledge with their prior understanding. According to Tsui (2002), this model is particularly effective in promoting a deeper and more lasting comprehension of the subject matter, as it encourages learners to internalize concepts through meaningful connections.
Evaluating the constructivist model involves examining attributes such as the degree to which students actively participate in their learning process, the richness of the contexts provided, and the strategies employed to encourage reflection and critical thinking (Honebein et al., 1993; Le and Nguyen, 2024; Lee and Hannafin, 2016). These attributes are essential for understanding how this model supports the development of higher-order cognitive skills, such as analysis, synthesis, and evaluation (Kwangmuang et al., 2021; Richland and Simms, 2015). Measuring these elements helps to determine how effectively the constructivist approach facilitates the application and retention of knowledge in diverse and complex situations.
The importance of measuring these attributes lies in their alignment with contemporary educational demands, which prioritize lifelong learning, adaptability, and the ability to transfer knowledge to real-world problems (Aithal and Mishra, 2024; Zamiri and Esmaeili, 2024). Empirical evidence supports the validity of these measurements, as studies have consistently demonstrated that constructivist environments foster active engagement and critical inquiry, leading to improved problem-solving abilities and long-term retention of knowledge (Huang et al., 2010; Kwan and Wong, 2015). For instance, student-centered activities that require reflection and application of concepts to new scenarios have been shown to enhance comprehension and foster intellectual independence (Klemenčič, 2017; Peters, 2010).
2.5 Technological didactic model
The technological didactic model emphasizes the integration of information and communication technologies (ICT) into the teaching-learning process, responding to the demands of contemporary society and leveraging digital tools to enrich the educational experience (Didmanidze et al., 2023; Okoye et al., 2023). This model recognizes technology as a transformative agent in education, providing diverse advantages, including access to extensive digital resources, opportunities for personalized learning, and the development of essential digital competencies. According to Kirkwood (2014), the model has the potential to revolutionize educational methodologies by facilitating more flexible, interactive, and accessible approaches to teaching and learning.
The evaluation of this model involves analyzing critical attributes, such as the extent of ICT integration in instructional design, the promotion of digital literacy, and the adaptability of learning processes to individual student needs (Mohammadyari and Singh, 2015; Valverde-Berrocoso et al., 2021). These attributes are essential to understanding how the technological model enhances learning outcomes by fostering engagement, interactivity, and autonomy. For example, measuring the use of adaptive learning systems and digital tools to support diverse learning styles provides insights into the model's effectiveness in personalizing education (Moreira-Choez et al., 2024b; Sajja et al., 2024; Truong, 2016).
The importance of assessing these attributes lies in the necessity to evaluate the model's relevance and impact within modern educational environments (Angeli and Valanides, 2009; Schunk, 2003). The increasing ubiquity of technology in all spheres of life necessitates a focus on developing students' digital fluency and their ability to navigate, evaluate, and utilize technological resources effectively. Empirical studies underscore the validity of these attributes, with research demonstrating that technology-rich environments can enhance student engagement, improve access to education, and support the acquisition of transferable skills (Aljehani, 2024; Lajoie et al., 2020). Furthermore, ICT-based approaches have been shown to facilitate collaborative learning, critical thinking, and problem-solving, all of which align with the broader goals of 21st-century education (Moreira-Choez et al., 2024a; Peña-Ayala, 2021).
3 Materials and methods
The methodology adopted in this study was framed within the positivist paradigm, employing a quantitative approach that allowed for objective and systematic data analysis. The research design was non-experimental, with a descriptive-correlational level, which facilitated the characterization of the participating faculty and the exploration of significant relationships between relevant variables for instrument validation. A deductive method was applied, starting from the theoretical analysis of conceptual frameworks related to artificial intelligence and educational innovation, and arriving at specific conclusions regarding the relevance of the instrument in university contexts.
The study population consisted of active university professors during the 2023 academic year at two higher education institutions in Ecuador: The Technical University of Manabí (UTM) and the State University of Milagro (UNEMI). According to institutional records, the total population included 843 professors: 276 at UTM and 567 at UNEMI. A representative sample of 413 professors was determined using the statistical formula for finite populations, with a 95% confidence level and a 4% margin of error.
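For transparency, the standard finite-population formula consistent with these parameters is reproduced below. The exact variant and the expected proportion used by the authors are not reported, so the conventional values z = 1.96 (95% confidence) and p = 0.5 should be read as assumptions of this illustration.

```latex
n = \frac{N \, z^{2} \, p(1-p)}{e^{2}(N-1) + z^{2} \, p(1-p)}
```

Here N = 843 is the population size, z the standard normal quantile for the chosen confidence level, p the expected proportion, and e = 0.04 the margin of error.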
The sampling technique was non-probabilistic by convenience, due to the voluntary nature of participation and logistical constraints. However, considering that the study employed inferential statistics, specifically Structural Equation Modeling (SEM), a normality test was conducted prior to model application. Kolmogorov-Smirnov and Shapiro-Wilk tests, as well as skewness and kurtosis coefficients, indicated an acceptable normal distribution for most variables, justifying the use of SEM for exploratory and validation purposes.
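As an illustration of this screening step, a minimal sketch in Python follows. The file name and the use of scipy are assumptions of the example; the study itself conducted these tests in SPSS.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("teaching_models_survey.csv")  # hypothetical export of the item responses

for col in df.columns:
    x = df[col].dropna()
    sw_stat, sw_p = stats.shapiro(x)                       # Shapiro-Wilk test
    ks_stat, ks_p = stats.kstest(stats.zscore(x), "norm")  # Kolmogorov-Smirnov on standardized scores
    skew = stats.skew(x)
    kurt = stats.kurtosis(x)                               # excess kurtosis
    # Skewness and kurtosis within roughly +/-2 are commonly treated as
    # acceptable for maximum likelihood estimation in SEM.
    print(f"{col}: SW p={sw_p:.3f}, KS p={ks_p:.3f}, skew={skew:.2f}, kurtosis={kurt:.2f}")
```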
The information presented in Table 1 reveals a heterogeneous distribution of the sample based on university, gender, and academic level, which enhances the representativeness of the study. Most participants belong to the State University of Milagro (67.3%), while 32.7% are from the Technical University of Manabí. This difference may be attributed to the larger faculty size at UNEMI or a greater willingness among its professors to participate in research related to educational innovation. Additionally, a slightly higher female participation (56.9%) is observed, reflecting a growing trend toward gender parity in the Ecuadorian academic field. This gender diversity strengthens the analysis of results by allowing the identification of possible differences in the perceptions of the validated instrument.
Regarding academic level, 55.9% of participants are involved in postgraduate programs, while 44.1% teach at the undergraduate level. This overrepresentation of postgraduate faculty may be linked to their greater familiarity with research processes and topics such as artificial intelligence in educational environments. Specifically, postgraduate faculty from UNEMI constitute the largest individual subgroup in the sample (20.8%). The combination of these variables demonstrates a solid and diverse sample composition, which supports the external validity of the study. Nevertheless, it is advisable to conduct additional analyses to determine whether the observed differences significantly influence responses to the instrument, which would enable contextual adjustments and enhance its applicability.
This integrated table provides a detailed view of the sample's composition based on key sociodemographic variables, facilitating a more analytical understanding of the study participants. The inclusion of faculty members of both genders, various academic levels, and from two institutions contributes to the diversity of the sample and strengthens the external validity of the validated instrument. It is recommended to conduct comparative statistical analyses to determine whether sociodemographic differences significantly influence perceptions and evaluations of the instrument.
3.1 Statistical analysis through artificial intelligence
Figure 1 illustrates the structure of relationships between different teaching models applied in the university context and how artificial intelligence contributes to their development and efficacy. This schema is presented as a structural equation model, where different latent variables representing specific teaching models, such as the Traditional, Technological, Constructivist, Spontaneist, and Collaborative models, can be observed. Each of these models is associated with various indicators reflected by the observed variables, denoted with the letter “P” followed by a number.
In this research, a quantitative approach supported by artificial intelligence tools is employed. For this purpose, the Statistical Package for the Social Sciences (SPSS), version 25, and the structural equation modeling software AMOS, version 24, were used. These programs operate in an integrated manner to validate the coefficients of the instrument designed to evaluate teaching models in Higher Education.
Multivariate statistics are utilized, specifically exploratory factor analysis with Principal Component Analysis (PCA) extraction and confirmatory factor analysis, to examine the underlying structure of the observed variables. To assess the reliability of the content and the construct, an internal consistency analysis is conducted using Cronbach's alpha coefficient, which measures the homogeneity of the items, and McDonald's omega, through additional extensions of the software (Omega, Alpha, and All Subsets Reliability Procedure).
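To make the internal consistency computation concrete, the following sketch implements Cronbach's alpha directly from its definition; the column names are hypothetical placeholders for the instrument's items.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical usage for the items of one dimension:
# alpha = cronbach_alpha(df[["P9", "P10", "P11", "P12"]])
```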
Additionally, a plugin in AMOS called Model Fit Measure is incorporated, used to evaluate the goodness of fit of the structural model. The criteria for excellence are set according to the Comparative Fit Index (CFI) with a threshold above 0.95 and the Root Mean Square Residual (RMR) lower than 0.08. To further strengthen the model, through the use of artificial intelligence, the Root Mean Square Error of Approximation (RMSEA) is also considered with an optimal value lower than 0.06, following the recommendations of Schubert et al. (2017) and McNeish and Wolf (2022).
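These indices can be recomputed from the chi-square statistics that AMOS reports; a minimal sketch of the standard formulas follows, with the study's thresholds noted in the comments.

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root Mean Square Error of Approximation; values below 0.06 are treated as optimal here."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2: float, df: int, chi2_null: float, df_null: int) -> float:
    """Comparative Fit Index; values above 0.95 are the excellence threshold used here."""
    numerator = max(chi2 - df, 0.0)
    denominator = max(chi2_null - df_null, chi2 - df, 0.0)
    return 1.0 - numerator / denominator

# n = 413 professors in this study; the chi-square values for the target and
# baseline (null) models are taken from the AMOS output.
```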
The selection and calibration of the models are based on the Akaike Information Criterion (AIC), according to Portet (2020) and Asadi and Seyfe (2024). Regarding the functionality of artificial intelligence, the extension for validity and reliability tests is used, which facilitates discriminant analysis, the Average Variance Extracted (AVE), the Maximum Shared Squared Variance (MSV), and the correlation between each dimension of the teaching model, based on Wang and Wang (2022).
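The indicators produced by this extension follow standard psychometric formulas; the sketch below reproduces them from standardized loadings and inter-factor correlations (the inputs shown are placeholders, not the study's data).

```python
import numpy as np

def composite_reliability(loadings) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings, dtype=float)
    error_var = 1.0 - lam**2  # error variance of each standardized item
    return lam.sum()**2 / (lam.sum()**2 + error_var.sum())

def average_variance_extracted(loadings) -> float:
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam**2))

def maximum_shared_variance(correlations_with_other_factors) -> float:
    """MSV = largest squared correlation of a construct with any other construct."""
    r = np.asarray(correlations_with_other_factors, dtype=float)
    return float(np.max(r**2))

# Illustrative values for one dimension:
# composite_reliability([0.62, 0.71, 0.84])
# average_variance_extracted([0.62, 0.71, 0.84])
# maximum_shared_variance([0.45, 0.61, 0.52])
```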
After establishing a neural network for each dimension with its corresponding observed variables, the extension for naming unobserved variables (Name Unobserved Variables) is implemented. To measure the correlation between dimensions, the Draw Covariances tool is used. Finally, in AMOS, the analysis properties are activated to apply Maximum Likelihood estimation, and various outputs are selected for the interpretation of results, including standardized estimates, squared multiple correlations, sample and implied moments, residual moments, modification indices, factor score weights, covariances and correlations of estimates, critical ratios for differences, and tests for normality and outlier detection.
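For readers without access to AMOS, an approximately equivalent specification can be written with the open-source semopy package; this is a sketch under assumed item-to-dimension assignments, not the study's exact model.

```python
# pip install semopy
import pandas as pd
from semopy import Model, calc_stats

# Measurement model: each latent dimension reflected by its "P" items
# (the assignments below are illustrative, not the instrument's exact mapping).
desc = """
Traditional    =~ P1 + P2 + P3 + P4
Collaborative  =~ P9 + P10 + P11 + P12
Spontaneist    =~ P20 + P21 + P22
Constructivist =~ P24 + P25 + P26
Technological  =~ P30 + P31 + P32
"""

df = pd.read_csv("teaching_models_survey.csv")  # hypothetical data file
model = Model(desc)              # covariances between latent factors are estimated by default
model.fit(df, obj="MLW")         # maximum likelihood (Wishart) estimation
print(model.inspect())           # loadings, variances, and covariances with significance tests
print(calc_stats(model).T)       # chi-square, CFI, TLI, RMSEA, AIC, among others
```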
4 Results and discussion
Table 2 provides a quantitative evaluation of the internal consistency and discriminative ability of five teaching models applied in university education. The reliability analysis is performed using Cronbach's alpha coefficient, while Composite Reliability (CR) and the Average Variance Extracted (AVE) are measures of the consistency and convergence of the evaluated constructs. Lastly, the correlation (R²) offers a perspective on the relationship between the observed variables and the theoretical construct they represent.
Table 2 compiles the results of the reliability and validity analysis of the constructs of the university teaching models. The internal consistency of the scales is determined through Cronbach's alpha coefficient and the Composite Reliability (CR) derived from the factor loadings. The Average Variance Extracted (AVE) and the Pearson correlation among the dimensions of the teaching models provide measures of convergent and discriminant validity, respectively.
Regarding the reliability of the dimensions, the results indicate a reliability above the generally accepted threshold of 0.70, suggesting excellent internal consistency for the measured constructs. According to Taber (2018), a Cronbach's alpha above 0.70 is indicative of good internal reliability, corroborating the accuracy of the scales in the context of higher education.
In parallel, the Composite Reliability for each teaching model reveals values exceeding the recommended minimum standard of 0.70, indicating strong consistency and reliability of the items within each construct. Authors such as Sujati et al. (2020) assert that CR values above 0.70 denote adequate composite reliability, which strengthens the legitimacy of the construct measurements.
The Average Variance Extracted, surpassing the parameter of 0.30 suggested in the relevant literature (Dos Santos and Cirillo, 2023), reflects the amount of variance that a factor captures relative to the variance due to measurement error. The values obtained in this research demonstrate that the constructs possess acceptable convergent validity, as they capture a significant proportion of the variance in the observed variables.
Moreover, the Pearson correlation for each teaching model exceeds the coefficient of 0.50, indicating positive and strong relationships between the variables. This is consistent with the findings of Diamantopoulos et al. (2012), who maintain that substantial correlations between the items and the underlying construct are indicative of high construct validity.
Next, Figure 2 presents a detailed analysis of a structural equation model applied to university teaching models, where the relationships between theoretical constructs and their corresponding items are evaluated. The values in the schema reflect the factor loadings, indicating the magnitude of the relationships between the items (observed variables) and the constructs of each teaching model, as well as the metrics of the overall model fit, providing evidence of the quality of the model's fit to the collected data.
The confirmatory factor analysis (CFA) carried out on the teaching models in Higher Education, for which artificial intelligence tools were used, is reflected in Figure 2. A Chi-square fit index over degrees of freedom (CMIN/DF) of 3.681 is observed, which, despite exceeding the ideal value of 3 suggested by Pasamonk (2004), is considered acceptable within the tolerance range in Social Sciences. A significance value (p) of 0.000 confirms the statistical relevance of the model, resulting from ten computational iterations.
Regarding the model fit, the Root Mean Square Error of Approximation (RMSEA) reaches a value of 0.081, which is close to the excellence threshold established at 0.065, as proposed by O'Loughlin and Coenders (2004), implying a satisfactory fit of the model to the data. The Comparative Fit Index (CFI) and Tucker-Lewis Index (TLI) yield values of 0.827 and 0.812 respectively, indicating an acceptable level according to the recommendations of Yildiz and Güngörmüş (2016). In turn, the Parsimony Normed Fit Index (PNFI) of 0.715 and the elevated Akaike Information Criterion (AIC) of 2003.301, although not optimal, reflect manageable complexity and an adequate specification of the model respectively, in line with the contributions of Zacharia et al. (2011).
The robustness of the instrument is attributed to the high factor loadings of the items in each dimension, as detailed in Figure 2. In the collaborative model, factor loadings range from 0.62 to 0.84, exceeding the 0.50 standard and indicating a significant association with the underlying construct, in line with what was reported by Shrestha (2021). The spontaneist model presents loadings ranging from 0.66 to 0.84, reflecting a strong relationship with the established questions. The items of the constructivist model exhibit loadings from 0.51 to 0.81, while the technological model shows values from 0.64 to 0.83, both denoting a substantial contribution to their respective constructs.
Conversely, the traditional model displays the lowest factor loadings in some items, below the established coefficient of 0.50, which could indicate lower internal consistency or relevance in these indicators, according to Fayers (1997). However, other items within the same model show loadings from 0.54 to 0.76, suggesting that, for the most part, the questions are suitable for assessing the proposed construct. The correlation between dimensions reveals the highest covariance between the spontaneist model and other constructs, suggesting a possible conceptual overlap or shared didactic approach, as might be inferred from the observations of Høgheim et al. (2023). The results allow for the acceptance of the alternative hypothesis H2, which asserts that the factors are significantly related to the teaching model under the best-fitting parameters (p < 0.001).
Regarding the lowest correlation observed in the traditional model, this could suggest, according to Raykov et al. (2016), that certain observed variables have lesser congruence with the construct. This could be interpreted as an indication that revising or eliminating certain items could enhance the correlation of the traditional model with other constructs.
Table 3 presents the results of the regression analysis applied to the items grouped according to five teaching models: Traditional, Collaborative, Spontaneist, Constructivist, and Technological. For each item, the estimated coefficient, standard error (S.E.), critical ratio (C.R.), and p-value are reported. The table also indicates which items were used as reference indicators (with a fixed regression weight of 1) to identify each latent construct.
The use of artificial intelligence (AI) has enabled the computation of estimators, together with their standard errors (S.E.), critical ratios (C.R.), and statistical significance (p), which are essential in the evaluation of teaching models. These indicators are consolidated in Table 3 for each observed variable, facilitating a detailed understanding of the effectiveness of various pedagogical approaches.
In particular, the analysis revealed that the traditional model, when examining responses to eight specific questions, generated estimators significantly different from zero, showing high critical ratios and notable statistical significance (indicated with three asterisks ***). This finding suggests robustness in predicting educational outcomes when employing this model, reaffirming its validity in specific didactic contexts.
Similarly, the collaborative model, which incorporates eleven observed variables, yielded estimated values greater than one, accompanied by critical ratios exceeding 10, indicating outstanding statistical significance. This result not only emphasizes the effectiveness of the collaborative approach in teaching but also reinforces the importance of interaction and cooperation in learning.
The spontaneist, constructivist, and technological models showed similar patterns, with estimators significantly different from zero, high critical ratios, and statistical significance for each evaluated question. These findings corroborate the hypothesis that the pedagogical approaches examined have a measurable and significant impact on the teaching-learning process, allowing for the validation of the alternative hypothesis H1. This posits that the estimators generated through linear regression, under the auspices of artificial intelligence, are significant and, therefore, of great value for educational research.
The significance of these results lies not only in the validation of the investigated teaching models but also in the potential of AI to enrich teaching methodologies. According to Chen et al. (2020) the application of advanced technologies in education facilitates a more precise and personalized analysis of learning needs, allowing for the development of more effective and tailored didactic strategies.
Table 4 presents the estimated variances, standard errors (S.E.), critical ratios (C.R.), and significance levels (p-values) for each of the five teaching models (Traditional, Collaborative, Spontaneist, Constructivist, and Technological) as well as for all 33 items that compose the measurement instrument. All variances were statistically significant at the 0.001 level, suggesting strong internal consistency and robust construct identification.
Table 4 details the variance coefficients corresponding to the teaching models, along with data from the associated questions. This analysis reveals that the five examined teaching models present elevated estimators, reduced standard errors, high critical ratios, and notable levels of statistical significance (indicated by three asterisks ***). This uniform pattern, observed across all analyzed variables, underscores the robustness of the results and the reliability of the methods employed to evaluate the variances associated with the dimensions and questions of the teaching model instrument.
The presence of high estimators suggests a strong influence of the teaching models on the variables of interest, while the minimal standard errors indicate precision in the estimations made. The high critical ratios reinforce the consistency of these findings, and the significant p-values confirm the statistical relevance of the observed variances. Such a conjunction of factors strongly supports the acceptance of hypothesis H3, which posits the significance of the variances for the dimensions and questions included in the analysis of the teaching models.
The importance of these results lies in their ability to validate the teaching models from a statistical perspective, thereby providing empirical evidence of their effectiveness. The significance of the variances, in particular, highlights the relevance of the differences between the models, pointing toward a clear differentiation in their impact on the teaching and learning processes. According to Lawless and Pellegrino (2007), variance analysis is crucial for understanding how different didactic strategies can be adapted to specific educational needs, thereby improving the quality and effectiveness of education.
Table 5 reports the values for Composite Reliability (CR), Average Variance Extracted (AVE), Maximum Shared Variance (MSV), and Maximum Reliability (MaxR(H)) for each of the five teaching models: Traditional, Collaborative, Spontaneist, Constructivist, and Technological. In addition, it includes the inter-construct correlation coefficients. The results provide the necessary indicators to confirm that each model is statistically distinct from the others, based on established thresholds for discriminant validity.
Table 5 sheds light on the coefficients of discriminant validity; these values demonstrate the establishment of different levels of acceptance for the evaluated teaching models. This approach emphasizes the precision with which the construct validity reflects each dimension of the instrument through its observed variables. Specifically, it is observed that the dimensions associated with the teaching model in higher education achieve acceptable reliability, exceeding the 0.70 threshold for composite reliability, as indicated by Pérez Rave and Muñoz Giraldo (2016). This measure of composite reliability suggests robust internal consistency within the evaluated dimensions.
Regarding discriminant and convergent validity, indicators such as the square root of the Average Variance Extracted (AVE) and the Maximum Shared Variance (MSV) are crucial. For the technological teaching model, an AVE of 0.559 and an MSV of 0.472 are reported, indicating satisfactory discriminant and convergent validity, reflected through a correlation of 0.748. These results demonstrate that the technological dimension maintains a clear distinction from other dimensions while showing internal consistency in its variables.
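Making the comparison behind this judgment explicit, following the Fornell-Larcker logic, discriminant validity requires the square root of the AVE to exceed the square root of the construct's maximum shared variance, which holds for the technological model's reported values:

```latex
\sqrt{\mathrm{AVE}} = \sqrt{0.559} \approx 0.748 > \sqrt{\mathrm{MSV}} = \sqrt{0.472} \approx 0.687
```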
When applying the contrast technique, it is found that the collaborative and spontaneist models present AVEs greater than 0.50, which meets the criterion for convergent validity. However, the high MSV of 0.698 in both dimensions indicates a limitation in their ability to be distinctly differentiated from each other, as established by Blustein et al. (1989). This situation raises questions about the precise delimitation between similar constructs within these models.
On the other hand, the traditional and constructivist models meet the standards for neither discriminant nor convergent validity, falling short of the established parameters. However, a high correlation is noted between the variables of these models, extending to all the teaching models included in the study. This universal correlation underscores the interconnection among the different pedagogical approaches evaluated and provides substantial evidence to accept hypothesis H4. This acceptance implies that the instrument used demonstrates reliability, discriminative capacity, convergence, and significant correlation across the various teaching models examined.
The instrument's ability to reflect these crucial aspects suggests a robust and versatile assessment tool, capable of capturing the complexity and interrelationship of the teaching models in higher education. In turn, these results emphasize the importance of discriminant and convergent validity as essential criteria for evaluating constructs in educational research, as supported by previous studies in the field by Cheung et al. (2023). The identification of strengths and limitations in the discrimination and convergence of the teaching models provides a solid basis for future research, aimed at optimizing pedagogical strategies and fostering effective and differentiated learning.
Table 6 displays the HTMT ratio values calculated among the five teaching models: Traditional, Collaborative, Spontaneist, Constructivist, and Technological. Each value represents the degree of correlation between constructs. Lower HTMT values indicate greater discriminant validity, suggesting that each model captures a distinct pedagogical approach within the framework of higher education.
Table 6 presents the Heterotrait-Monotrait (HTMT) ratios, crucial for determining the correlation between different traits, derived from the discriminant analysis (as shown in Table 5). This analysis focuses on the correlation between traits of teaching models, providing a critical measure of discriminant validity between constructs, as highlighted by Touron et al. (2018). The results indicate a weak correlation of the traditional model in relation to other models, with scores below 0.50. This finding suggests that the traditional model possesses significant distinctive characteristics compared to the other models evaluated.
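For completeness, the HTMT ratio reported in Table 6 can be computed directly from the item correlation matrix, as sketched below; the index groupings of items per construct are illustrative placeholders.

```python
import numpy as np

def htmt(corr: np.ndarray, items_a: list, items_b: list) -> float:
    """HTMT = mean absolute between-construct item correlation divided by the
    geometric mean of the mean within-construct item correlations."""
    heterotrait = np.abs(corr[np.ix_(items_a, items_b)]).mean()

    def monotrait_mean(items):
        block = np.abs(corr[np.ix_(items, items)])
        off_diagonal = block[np.triu_indices(len(items), k=1)]  # exclude the diagonal
        return off_diagonal.mean()

    return heterotrait / np.sqrt(monotrait_mean(items_a) * monotrait_mean(items_b))

# Hypothetical usage: positions of the spontaneist and constructivist items
# htmt(item_corr, items_a=[19, 20, 21, 22], items_b=[23, 24, 25, 26])
```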
On the other hand, the collaborative model shows coefficients close to 0.850, which is considered acceptable according to criteria established by contemporary researchers such as Tarkkonen and Vehkalahti (2005). This level of correlation implies proximity in characteristics between the analyzed models, although it remains within limits that allow for adequate discrimination between them.
More specifically, it is observed that the spontaneist model and the constructivist model present a statistical indistinction, with an HTMT index of 0.859. This result, interpreted through artificial intelligence, finds support in the research of Henseler et al. (2015) and Hamid et al. (2017), who argue about the difficulty of statistically distinguishing between constructs when HTMT coefficients are high. This phenomenon highlights the conceptual and operational similarity between the spontaneist and constructivist models, suggesting that, although different, they share common elements that make them statistically indistinguishable in certain respects.
Overall, these coefficients provide empirical evidence in support of hypothesis H4, in line with Ibrahim and Nat (2019), which anticipated that the teaching models employed in higher education can be significantly discriminated from one another. The data suggest that the teaching models are empirically distinguishable through this instrument, establishing the discriminant validity of the evaluated dimensions, as described by Salessi and Omar (2019). This finding is crucial as it confirms the instrument's ability to effectively differentiate between pedagogical approaches, providing a valuable tool for educational research and the improvement of teaching practice in Higher Education.
The identification of discriminant validity among the teaching models underscores the importance of developing and employing rigorous assessment instruments in educational research. These instruments should not only be capable of capturing the subtleties of the different pedagogical approaches but also effectively distinguish between them, to facilitate a deeper understanding of their impacts and relative efficiencies. Consequently, these findings pave the way for future research aimed at exploring and optimizing teaching methods in Higher Education, with the goal of improving educational outcomes and adapting to the changing needs of students and society.
After confirming the factorial validity of the theoretical constructs, the structural model's hypotheses were tested. This stage allowed for the statistical verification of the proposed relationships between the teaching models and the observed variables, employing structural equation modeling (SEM). The analysis was conducted using the maximum likelihood estimation method, complemented by standardized coefficients, critical ratios (CR), and significance values (p-values), which collectively provided empirical support for the proposed theoretical model. The specific results of the hypothesis testing, including the direction, strength, and statistical significance of each relationship, are detailed in Table 7.
The results of the structural equation modeling analysis confirmed the statistical validity of the four hypotheses initially proposed in the study. Each hypothesis demonstrated a highly significant relationship (p < 0.001), with standardized coefficients and critical ratios (CR) exceeding accepted thresholds, thereby providing robust empirical support for the theoretical model of teaching practices in higher education mediated by artificial intelligence tools.
Hypothesis H1, which assessed the factorial validity of the items within each teaching model, revealed regression weights ranging from 0.211 to 2.223 and CR values between 3.753 and 15.572. These findings are aligned with psychometric standards, indicating satisfactory item representativeness within each latent construct. The significance of these results underscores the structural coherence of the questionnaire and its utility for evaluating pedagogical strategies in university settings. This is consistent with prior research that validates structural models through confirmatory factor analysis, demonstrating strong item reliability when factor loadings exceed 0.40 (Sukkamart et al., 2023).
Hypothesis H2 examined the predictive associations between latent factors and the overall model, reporting standardized coefficients from 0.320 to 0.790 and CR values from 0.759 to 0.927. These values suggest that the factors integrated into the model are statistically capable of anticipating the behaviors associated with each teaching modality. Such results reinforce the idea that well-structured instructional models can predict teaching performance and educational innovation outcomes. In line with findings from educational contexts focused on sustainability and digital readiness, properly identified causal constructs show predictive power when embedded in higher-order structural models (Pimdee, 2020).
For H3, the results showed statistically significant variance estimators across the observed variables and teaching models, with coefficients ranging from 0.146 to 0.916 and CR values between 5.118 and 7.778. These findings confirm that the teaching models are consistently measured and that the variability explained by each item is not due to random error but rather to latent factors grounded in empirical evidence. This coincides with previous studies that emphasize the importance of robust variance structures for interpreting complex educational phenomena (Chuenban et al., 2021).
Finally, H4 confirms that teaching models in higher education are statistically distinguishable through discriminant analysis, convergent validity, and the Heterotrait-Monotrait (HTMT) ratio. The observed coefficient range (0.463–0.859) and CR values (0.759–0.927) meet the criteria for adequate discriminant validity. According to Yusoff et al. (2020), HTMT values below 0.90 indicate a strong distinction between related yet conceptually different constructs. Therefore, the acceptance of H4 supports the instrument's ability to differentiate between teaching models within university contexts.
5 Conclusions
This study has demonstrated, through a rigorous methodology and the application of advanced tools such as artificial intelligence, the ability of different higher education teaching models to distinguish themselves from each other in terms of internal consistency, discriminative capacity, and their relationship with the observed variables. The reliability analyses, using Cronbach's alpha coefficient along with Composite Reliability (CR) and Average Variance Extracted (AVE), have corroborated the consistency and convergence of the evaluated constructs, surpassing thresholds established in the literature as indicative of excellent internal consistency and convergent validity.
The integration of these models into a detailed analysis, using a structural equation model, has effectively assessed the relationships between the theoretical constructs and the observed variables, reflecting the depth of the association through factor loadings and confirming the quality of the model's fit to the collected data. The results obtained, such as the fit indices and factor loadings, have provided a solid basis for asserting the reliability and validity of the constructs within the context of higher education.
Crucially, the empirical validation of the model was substantiated by the acceptance of the four hypotheses formulated (H1, H2, H3, and H4), which further reinforces the robustness and relevance of the instrument. Hypothesis H1 confirmed the factorial validity of the items, with statistically significant factor loadings well above recommended benchmarks, ensuring the representativeness of each indicator within its respective latent construct. Hypothesis H2 identified strong predictive relationships between latent factors, supported by standardized coefficients and critical ratios (CR) exceeding conventional thresholds, validating the model's explanatory capacity in capturing the dynamics of innovative teaching practices. Hypothesis H3 verified the significance of the variance coefficients across dimensions and indicators, thereby strengthening the instrument's internal reliability. Lastly, Hypothesis H4 confirmed discriminant validity among the five teaching models evaluated (traditional, collaborative, spontaneist, constructivist, and technological) through cross-loading analysis, Heterotrait-Monotrait (HTMT) ratios, and shared variance measures (MSV and AVE), ensuring the conceptual distinctiveness of each construct.
The discrimination between the teaching models, as demonstrated through measures of discriminant and convergent validity, and HTMT ratios, reflects a clear and significant differentiation in their approaches and methodologies. This distinction has been further reinforced by the correlation between the dimensions of the models, revealing the conceptual coherence and uniqueness of each model in its contribution to the educational process.
The confirmed empirical differentiation between teaching models demonstrates their unique methodological orientations and contributions to the educational process. This distinction is not only statistically significant but pedagogically meaningful, highlighting how different instructional paradigms shape the delivery and outcomes of higher education. Furthermore, the integration of artificial intelligence facilitated the processing and interpretation of complex datasets, enhancing the precision of the validation process and enabling a deeper understanding of the latent structures that underpin teaching effectiveness.
This study contributes significantly to the body of knowledge in the field of Didactics and Pedagogy, offering valuable insights into how different pedagogical approaches impact the teaching-learning process. The results underscore the importance of adopting adaptive and evidence-based teaching methods to meet contemporary educational needs and prepare students for future challenges.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The study received formal approval from the Graduate School of the State University of Milagro and was conducted in strict accordance with applicable institutional guidelines and national ethical standards. All participants provided their prior informed written consent through a digital form incorporated into the initial section of the instrument. It is important to note that the research did not include or process any human images or biometric data. The study's purpose was solely to identify, evaluate, and validate latent dimensions associated with university teaching models through statistical and artificial intelligence-based analysis. The exclusive use of self-reported survey data ensured the anonymity and protection of participants, thus complying with ethical principles of confidentiality, integrity, and voluntary participation.
Author contributions
JM-C: Writing – original draft, Writing – review & editing, Conceptualization, Investigation, Supervision, Visualization. TR: Conceptualization, Data curation, Methodology, Visualization, Writing – original draft, Writing – review & editing. AN-N: Conceptualization, Investigation, Supervision, Visualization, Writing – original draft, Writing – review & editing. ÁS-G: Data curation, Formal analysis, Software, Supervision, Validation, Writing – original draft, Writing – review & editing. MR-Á: Conceptualization, Formal analysis, Supervision, Visualization, Writing – original draft, Writing – review & editing. CO-M: Conceptualization, Data curation, Investigation, Writing – original draft, Writing – review & editing. DN-L: Conceptualization, Data curation, Investigation, Writing – original draft, Writing – review & editing. JS-E: Conceptualization, Data curation, Investigation, Writing – original draft, Writing – review & editing.
Funding
The author(s) declare that no financial support was received for the research and/or publication of this article.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Keywords: teaching, artificial intelligence, assessment, educational sciences, algorithm, educational model, pedagogical innovation
Citation: Moreira-Choez JS, Lamus de Rodríguez TM, Núñez-Naranjo AF, Sabando-García ÁR, Reinoso-Ávalos MB, Olguín-Martínez CM, Nieves-Lizárraga DO and Salazar-Echeagaray JE (2025) Validation of a teaching model instrument for university education in Ecuador through an artificial intelligence algorithm. Front. Educ. 10:1473524. doi: 10.3389/feduc.2025.1473524
Received: 31 July 2024; Accepted: 31 March 2025; Published: 30 April 2025.
Edited by: Xinyue Ren, Old Dominion University, United States
Reviewed by: Paitoon Pimdee, King Mongkut's Institute of Technology Ladkrabang, Thailand; Ali Ateeq, Gulf University, Bahrain
Copyright © 2025 Moreira-Choez, Lamus de Rodríguez, Núñez-Naranjo, Sabando-García, Reinoso-Ávalos, Olguín-Martínez, Nieves-Lizárraga and Salazar-Echeagaray. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Jenniffer Sobeida Moreira-Choez, jenniffer.moreira@utm.edu.ec