- 1 Escuela de Posgrado, Universidad Peruana Unión, Lima, Peru
- 2 Universidad Nacional José Faustino Sánchez Carrión, Huacho, Peru
The aim of this study was to adapt and validate a questionnaire to measure the use of artificial intelligence (AI) applications in mathematics teaching by Peruvian secondary school teachers. A previously validated instrument from Jordan was translated, culturally adapted, and psychometrically validated in the Peruvian context. The study followed a quantitative, cross-sectional, and psychometric design with non-probabilistic purposive sampling. The sample included 150 teachers for exploratory analysis and 266 for confirmatory analysis. The validation process involved expert judgment, reliability analysis, exploratory factor analysis, and confirmatory factor analysis. The instrument showed excellent internal consistency (Cronbach's α = 0.96) and sampling adequacy (KMO = 0.95). EFA identified a three-factor structure explaining 66% of the variance, later reorganized into two dimensions: didactic use of AI and strategic and formative use of AI. CFA supported a second-order hierarchical model, showing strong factor loadings. Despite moderate fit indices, the questionnaire demonstrates conceptual validity and practical relevance. It is a reliable tool for assessing AI-related teaching practices and supporting teacher training and educational innovation in Peru and Latin America, with potential to inform inclusive and equitable approaches to artificial intelligence integration in secondary mathematics education.
1 Introduction
This study addresses the measurement of artificial intelligence (AI) use in education. Its purpose is to adapt and validate a questionnaire on the use of artificial intelligence in mathematics teaching among Peruvian secondary school teachers. To this end, an exhaustive literature review was conducted, which determined that no validated instrument of this kind exists for the Peruvian context. An instrument validated in Jordan was therefore translated and adapted, and a validation process was carried out for the Peruvian case. The sample was selected in a non-probabilistic, purposive manner. For the exploratory analysis it consisted of 150 teachers; in a second stage, for the confirmatory factor analysis, it consisted of 266 Peruvian mathematics teachers, who were surveyed from November 2024 to March 2025 using a Google Forms questionnaire. The results show that a questionnaire with adequate psychometric properties has been adapted and validated to evaluate the use of artificial intelligence applications in the teaching practice of secondary school mathematics teachers in the Peruvian context. The instrument allows for the identification of patterns of AI integration in different areas of educational practice (didactic and strategic-formative) based on self-reported practices, without attempting to classify teachers into pre-established normative levels or to measure attitudes, perceptions, or affective dispositions toward technology. Aiken's V coefficients were moderate to high (between 0.515 and 0.660) for all but three items, and the Cronbach's alpha internal consistency coefficient was 0.96. Likewise, an exploratory analysis was carried out, yielding a KMO index of 0.95 and indicating that the instrument can be treated as two-dimensional for the Peruvian case. These results complement previous studies conducted in Jordan and provide a validated instrument for measuring mathematics teachers' use of artificial intelligence applications in the educational process in Peru. In the confirmatory analysis, a model fitted with maximum likelihood estimation was obtained, although the Chi-square statistic (χ2 = 926.594, p < 0.001) indicates an imperfect fit. This result should be interpreted with caution, because the comparative fit indices CFI = 0.808 and TLI = 0.783 fall below the optimal threshold of 0.90, suggesting a moderate fit. The RMSEA was 0.148 (90% CI: 0.140–0.158), with a p-value < 0.001 for H0: RMSEA ≤ 0.05, indicating a notable discrepancy between the model and the data. The SRMR reached 0.394, also exceeding the desired value (<0.08).
Currently in Latin America, students use generative artificial intelligence (AI) relatively frequently (Ríos Hernández et al., 2024), and Peruvian students are no exception (Gálvez Marquina et al., 2024; Sevilla and Barrios, 2024). AI has transformed virtually every area of knowledge, and in education it has been implemented at all levels with favorable results (Gallent-Torres et al., 2024; Esteves Fajardo et al., 2024). Integrating AI into mathematics education enables more personalized learning experiences, more precise assessments, and real-time feedback, and fosters greater academic engagement of students with their learning (Khan and Ali, 2025). The use of artificial intelligence in mathematics teaching and learning should be understood as the incorporation of algorithmic systems and intelligent applications that support problem solving, personalized learning, and instant feedback in educational contexts. Using AI is not just about automating tasks, but about integrating tools that simulate human cognitive processes to facilitate the understanding of mathematical concepts, optimize teaching practice, and enhance school performance.
In recent years, several instruments have been developed to measure students' attitudes toward AI (Grassini, 2023). Alissa and Hamadneh (2023), in Jordan, developed a 22-item questionnaire to measure the level of AI use in science and mathematics teaching; their results show a moderate level of use of artificial intelligence applications by science and mathematics teachers. Ng et al. (2024) developed and validated an AI literacy questionnaire (AILQ) in Hong Kong, a self-report instrument that measures how secondary school students develop and perceive their learning outcomes; the results indicated a four-factor structure for the AILQ with good reliability and validity. Montoya Asprilla (2024), in Colombia, conducted research on perceptions and attitudes toward AI integration and concluded that teachers have moderate knowledge of AI. Grassini (2023) developed and validated the AI Attitude Scale (AIAS) in Norway, a brief four-item self-report instrument intended for researchers and professionals working in AI development. Marango et al. (2024) developed and validated a scale in Italy to assess university students' attitudes toward the use of AI tools such as ChatGPT in educational settings. Similarly, a TPACK questionnaire validated in Spain to assess teachers' technological, pedagogical, and disciplinary knowledge showed an optimal factorial structure and satisfactory levels of reliability and validity (Saz-Pérez et al., 2024). Finally, Rodríguez-Gutiérrez et al. (2024) validated a questionnaire in Mexico with 12 factors and 48 items to measure the acceptance of smart technologies among Generation Z students.
Some authors, such as Alissa and Hamadneh (2023), Opesemowo and Adewuyi (2024), Silva et al. (2024), Wang et al. (2025), and Panqueban and Huincahue (2024), call for further research into the uses of AI and the validation of instruments that assess them. The use of AI in teaching and learning is proving successful (Vankúš, 2024). At the same time, in our environment and throughout Latin America there is little literature on validated instruments for evaluating the use of technological tools based on artificial intelligence (Nemt-allah et al., 2024; Ng et al., 2024), particularly in the development of mathematical skills (Bejarano Cordoba and Guerrero Godoy, 2021). Consequently, this work contributes to filling that gap by validating a culturally relevant questionnaire on the use of artificial intelligence in secondary school mathematics teaching. This provides a tool for quantitatively measuring the benefits of AI in mathematics teaching, which in turn supports the development of mathematical skills; the beneficiaries are students and the educational community in general. However, this study has its own limitations: the sampling was non-probabilistic, the sample size was a modest 266, and geographically only teachers from the Lima region were considered. We therefore recommend that future research be conducted on a larger scale and in other educational contexts.
In this regard, the overall objective of the study is to adapt and validate a questionnaire that measures the use of artificial intelligence applications in mathematics teaching based on practices self-reported by Peruvian secondary school teachers. To achieve this purpose, the following specific objectives are established: to translate and culturally adapt the original questionnaire by Alissa and Hamadneh (2023) to the Peruvian context, ensuring its semantic, idiomatic, and conceptual equivalence; to evaluate the content validity of the instrument through expert judgment and Aiken's V index; to determine its internal reliability using Cronbach's alpha coefficient; to analyze its factorial structure using exploratory factor analysis (EFA) and confirmatory factor analysis (CFA); and, finally, to propose a two-dimensional model representing the didactic use and the strategic and formative use of artificial intelligence in mathematics teaching. The article is organized into six sections: Section 1 presents the introduction; Section 2 describes the literature review; Section 3 describes the methodology used for the adaptation and validation of the questionnaire; Section 4 presents the results of the exploratory and confirmatory analyses; Section 5 discusses the findings in relation to previous studies; and Section 6 concludes the article and proposes future lines of research.
2 Literature review
In secondary education, AI is directly linked to innovative teaching practices, as it allows teachers to personalize teaching and learning according to each student's pace and level; to promote active methodologies such as problem-based and project-based learning by offering relevant simulations and interactive environments; to support formative assessment by providing real-time data on student performance and difficulties (Panqueban and Huincahue, 2024); and to strengthen their digital competence by integrating AI into activity design and classroom management (Núñez De Luca et al., 2024). In this way, the use of AI becomes a powerful strategy for connecting mathematical theory with its practical application, helping teachers transform their pedagogical practice into more learning-centered models. Artificial intelligence is thus a key teaching tool for enriching mathematics education by offering personalized environments, immediate feedback, and interactive resources that promote active knowledge construction.
Artificial intelligence is a technological tool developed by private companies that capture and analyze data using algorithms that simulate human intelligence (Gálvez Marquina et al., 2024; Quiroz Rosas, 2023). Mathematics education is conceived as an interactive process using methodologies that incorporate AI to develop mathematical activities connecting theories with an understanding of the real world (Davis, 2024), to select what is relevant from the explanations given by AI, to create designs, diagrams, or graphs when solving problems in real or fictional mathematical contexts, according to the needs and resources of both schools and students (Quiroz Rosas, 2023).
The theory of machine learning is one of the foundations of AI. It refers to the ability of computers to learn from data analysis in order to improve their performance in specific activities such as mathematics (Núñez De Luca et al., 2024). In this regard, this theory is essential in the development of algorithms not only to solve complex mathematical problems but also to provide guidance to students so that they understand the underlying concepts (Cordero Monzón, 2024). The cognitive theory of learning developed by Bandura, Bruner, and others focuses on imitation and the guidance provided to students in problem solving; AI also performs this function through the use of intelligent systems. Likewise, the constructivism proposed by Vygotsky and Piaget maintains that students construct their learning by interacting with their environment. In this regard, AI and its educational platforms promote interactivity in solving mathematical problems according to each student's learning level and pace (Esteves Fajardo et al., 2024), personalizing instruction and providing immediate feedback that promotes the self-construction of knowledge (Núñez De Luca et al., 2024).
Traditionally, education has focused on teaching, although in recent decades there has been an attempt to center it on learning (Seivane and Brenlla, 2021). One learning strategy is to measure explicit methods, procedures, and behaviors (Wang et al., 2022). To achieve this, AI is being used because it allows for the accurate measurement of student performance with the aim of continuously improving learning (Sandoval and López, 2023). AI should contribute to the implementation of curriculum plans and new teaching-learning methodologies in mathematics (Alissa and Hamadneh, 2023). Personalized teaching and learning is a hallmark of AI, as are systems that identify learning characteristics and patterns and that support the development of collaborative activities (Gallent-Torres et al., 2024).
AI is emerging as a key tool for developing academic skills and abilities (Barros et al., 2023). In mathematics it still faces certain limitations and challenges in problem solving (Quiroz Rosas, 2023; Davis, 2024), but it improves retention and accelerates the autonomous resolution of complex problems (Núñez De Luca et al., 2024). When artificial intelligence is combined with active methodologies, there are significant improvements in the development of activities in mathematics education (Silva et al., 2024; Saz-Pérez et al., 2024). AI transforms mathematics education and promises future innovations (Coy García et al., 2024), and it has a positive impact on student performance in mathematics through personalization and instant feedback (Val-Fernández, 2023; León Naranjo et al., 2025). For all this to happen, mathematics teacher training institutions must include the integration of AI in their curricula and provide training to in-service teachers through continuing education courses (Saz-Pérez et al., 2024). The goal is to enable teachers to incorporate into their practice a model that emphasizes the interrelation and connection of the pedagogical, disciplinary, and technological components (TPACK model; Saz-Pérez et al., 2024). It should be remembered that AI has emerged to improve students' academic performance and to enhance teaching in the classroom (Marango et al., 2024).
2.1 Conceptual framework of the study
In this study, the use of AI in mathematics teaching and learning is conceived as a construct composed of two dimensions: didactic use and strategic and formative use. Didactic use of AI refers to the integration of intelligent tools in the design of activities, classroom management, and the assessment of mathematical learning; it is closely related to teachers' beliefs about teaching and learning, since the willingness to transform pedagogical practices depends on conceiving of AI as a resource with a real impact on education. Strategic and formative use of AI refers to the way teachers use intelligent tools for their own professional development, task organization, information search, and disciplinary updating; it is directly related to teachers' digital competence, that is, the ability to integrate technologies into professional practice in an efficient, critical, and reflective manner.
The relationship between these two dimensions is based on theories of pedagogical innovation and the TPACK (Technological Pedagogical Content Knowledge) model, which explains the interrelationship between disciplinary, pedagogical, and technological knowledge, complemented by the SAMR (Substitution, Augmentation, Modification, and Redefinition) model (Scorzo and Ocampo, 2025). In line with this thinking, the strategic and formative use of AI strengthens teachers' digital competence and self-confidence, while didactic use reflects the ability to transfer these skills to the classroom, promoting innovation and educational transformation. For the purposes of this study, the aim is to evaluate the use of AI applications in teaching practice, based on self-reported practices.
3 Methodology
This research is psychometric in nature, with a quantitative, cross-sectional, and instrumental design, and develops the procedures leading to the validation of a questionnaire on the use of generative artificial intelligence in the teaching of mathematics by Peruvian secondary school teachers. The initial strategy consisted of reviewing the related literature and previous studies, and of translating and adapting the questionnaire “Level of science and mathematics teachers in the use of Artificial Intelligence Applications in the Educational Process,” developed and validated in Jordan by Alissa and Hamadneh (2023). That instrument is unidimensional and consists of 22 items that measure teachers' level of AI use in three categories: Low, Medium, and High. Its authors validated the questionnaire through expert judgment, consulting 13 specialists in AI and education, and used the test–retest method to estimate reliability, obtaining a coefficient of 0.91. The study also presents descriptive statistical methods, analysis of variance (ANOVA), and Cronbach's alpha (see Tables 1–6).
Table 2. Reliability and validity of the scales for the use of artificial intelligence in mathematics teaching.
3.1 Translation and adaptation process
To ensure the cultural and linguistic validity of the instrument, a systematic translation and adaptation process was followed in several stages:
(1) Direct translation. Two bilingual translators (Spanish–English) with experience in education and psychometrics independently translated the original 22-item questionnaire. Both versions were then compared and a preliminary Spanish version was agreed upon.
(2) Back translation. A third bilingual translator (Spanish–English), who was not involved in the initial process, translated the questionnaire back into English. This version was compared with the original instrument to verify semantic equivalence and detect possible misrepresentations of meaning.
(3) Expert review. A panel of five professionals in mathematics education and artificial intelligence was convened. The review criteria were: semantic clarity (the items were understandable to Peruvian secondary school teachers); idiomatic equivalence (the expressions were consistent with the usual use of Spanish in the Peruvian educational context); conceptual relevance (each item adequately reflected the construct of “use of AI in mathematics teaching”); and cultural relevance (the terms used were applicable to the Peruvian school context).
(4) Pilot test and feedback. The preliminary version was administered to a small group of teachers (n = 20). Comments were collected on the comprehensibility and relevance of the items, which allowed the wording to be adjusted, ambiguities to be eliminated (see Table 1), and the number of items to be reduced from 22 to 19.
3.2 The instrument
Using the final 19-item instrument and the experts' ratings, the data were processed to obtain the corresponding Aiken's V index for each item; the reliability of the instrument was then determined by applying Cronbach's alpha coefficient in a pilot test.
The questionnaire assesses the use of artificial intelligence applications based on self-reported teaching practices and is not intended to measure attitudes, beliefs, or normative levels of technological competence. This instrument does not seek to evaluate agreement or disagreement in terms of attitude, but rather the frequency and intensity of specific usage practices. For this reason, the items are formulated affirmatively to facilitate the identification of usage levels. In instrumental studies of technology use, positive wording is consistent with the measurement of self-reported practices and avoids semantic confusion in diverse educational contexts.
3.3 Rationale for item reduction
During the questionnaire adaptation and validation process, the original 22-item version developed by Alissa and Hamadneh (2023) was used as a starting point. However, after expert review, it was decided to reduce the instrument to 19 items. This decision was not arbitrary; it was based on the lack of contextual clarity of the removed items and their limited applicability in the Peruvian context owing to existing technological gaps, as detailed in Table 2.
This reduction from 22 to 19 items allowed us to improve the internal consistency of the instrument (α = 0.96) and obtain a more parsimonious factorial structure consistent with the proposed two-dimensional model. In addition, this refinement ensures that each item provides relevant and differentiated information, avoiding redundancies and strengthening content and construct validity.
The final version of the instrument was administered to the study sample through a Google Forms questionnaire disseminated via social media, with appropriate precautions taken to safeguard data integrity and participant confidentiality. In parallel, semi-structured interviews were conducted with selected teachers and students to complement the quantitative data. Data analysis proceeded in two phases: first, an exploratory factor analysis (EFA) was carried out using responses from a sample of 150 teachers; subsequently, after a 2-month interval, a confirmatory factor analysis (CFA) was performed with a larger sample of 266 mathematics teachers from basic education. Inclusion and exclusion criteria were based on current employment as a mathematics teacher in an educational institution and prior experience with the use of artificial intelligence in teaching. Sampling was consistently non-probabilistic and purposive. Regarding ethical considerations, all participants were fully informed about the objectives, procedures, and potential benefits of the study, and the research team committed to protecting participants' identities and personal data in accordance with established ethical standards, ensuring that no harm or disadvantage resulted from participation.
4 Results of the exploratory analysis
4.1 Description of the sample
The sample was selected using a non-probabilistic, purposive sampling strategy. Initially, it comprised 150 Peruvian secondary-level mathematics teachers who responded to the 19 items of the instrument. Among them, 66% were between 30 and 60 years of age, 41% identified as female and 59% as male. With respect to geographic distribution, 23% were employed in rural schools and 76% in urban institutions, with location data unavailable for the remaining 1%. In terms of academic qualifications, 44% (n = 66) held a professional degree along with a bachelor's degree, 37.33% (n = 56) had completed a master's degree, and 6% (n = 9) reported holding a doctorate. Additionally, 12.67% (n = 19) had obtained a professional degree but had not yet received a bachelor's degree. These figures indicate a relatively high academic profile within the sample, with a considerable proportion possessing postgraduate training, a factor that may influence both teaching practices and receptivity toward continuous professional development and the integration of artificial intelligence into pedagogy. An examination of the descriptive statistics for the subsequent sample of 266 participants revealed a consistent demographic and academic profile.
4.2 Description of the measurement instrument
The questionnaire employed in this study was adapted from the instrument titled “Level of Science and Mathematics Teachers in the Use of Artificial Intelligence Applications in the Educational Process,” originally developed and validated in Jordan by Alissa and Hamadneh (2023). The adapted instrument comprises 19 items and, following the original, was initially treated as a unidimensional scale. The initial section collects demographic information about the participants, while the remaining items are designed to assess the extent to which mathematics teachers in Peru integrate artificial intelligence into their educational practice. The primary objective of the instrument is to evaluate the current use of AI tools in the teaching of mathematics at the secondary level within the Peruvian context.
4.3 Validation process
4.3.1 Reliability of the instrument
Cronbach's alpha coefficient was used, as shown in Table 3.
These results were highly satisfactory. The Cronbach's alpha coefficient (raw alpha) was 0.96, which reflects an excellent level of internal consistency according to widely accepted psychometric standards. Likewise, the standardized alpha (std. alpha) also reached a value of 0.96, further confirming the internal coherence of the items that compose the instrument.
The average inter-item correlation (average r = 0.57) indicates adequate homogeneity among the items, suggesting that all contribute meaningfully to the measurement of the underlying construct. The signal-to-noise ratio (S/N = 25) was also high, supporting the instrument's capacity to capture meaningful variance relative to random error. Additionally, the conditional reliability analysis revealed that the removal of any individual item did not result in a substantial increase in the overall reliability index. All items presented corrected item–total correlations (r.drop) above 0.55, indicating that each item is strongly associated with the total scale score, while avoiding redundancy.
Taken together, these results support the conclusion that the instrument demonstrates robust internal consistency, and that its items exhibit adequate psychometric performance. Accordingly, the questionnaire can be considered a reliable tool for assessing the use of artificial intelligence applications by mathematics teachers in secondary education in Peru. Furthermore, its psychometric properties make it suitable for subsequent analyses, including correlational studies, structural equation modeling, and factorial analysis.
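To illustrate how these reliability indices can be reproduced from the raw responses, the following Python sketch computes the raw and standardized Cronbach's alpha, the average inter-item correlation, the signal-to-noise ratio, and the corrected item–total correlations. The file name responses.csv and the item columns P1–P19 are hypothetical placeholders for the actual dataset (one row per teacher, one Likert-scored column per item).

```python
import numpy as np
import pandas as pd

# Hypothetical dataset: one row per teacher, Likert-scored columns P1..P19
items = pd.read_csv("responses.csv")[[f"P{i}" for i in range(1, 20)]]

k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)        # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)    # variance of the total score

# Raw Cronbach's alpha
alpha_raw = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Standardized alpha from the average inter-item correlation (Spearman-Brown form)
corr = items.corr().values
mean_r = corr[np.triu_indices(k, k=1)].mean()
alpha_std = (k * mean_r) / (1 + (k - 1) * mean_r)
signal_to_noise = (k * mean_r) / (1 - mean_r)

# Corrected item-total correlations (r.drop): each item vs. the sum of the remaining items
r_drop = {col: items[col].corr(items.drop(columns=col).sum(axis=1)) for col in items.columns}

print(f"raw alpha = {alpha_raw:.2f}, std alpha = {alpha_std:.2f}, "
      f"average r = {mean_r:.2f}, S/N = {signal_to_noise:.1f}")
print(pd.Series(r_drop).round(2))
```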
4.3.2 Content validity
With regard to content validity, Aiken's V index was employed, yielding values ranging from 0.515 to 0.660. Overall, several items exceeded the commonly accepted threshold of 0.60, which is considered indicative of acceptable content validity in educational and social research contexts. Specifically, items P1 (V = 0.629), P2 (0.618), P9 (0.660), and P13 (0.606) received favorable ratings from the expert judges, suggesting that these items are perceived as relevant and representative of the construct being assessed. However, some items obtained lower values, such as P14 (0.523), P16 (0.515), and P15 (0.551), potentially reflecting weaknesses in their wording or their alignment with the underlying concept of the instrument. These items should be reviewed in terms of clarity, relevance, and contextual appropriateness and, if necessary, reformulated or replaced in future versions of the instrument. Overall, the results allow us to affirm that the instrument has acceptable content validity in most of its components for the conceptual domain associated with the educational use of artificial intelligence-based applications.
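For reference, Aiken's V for each item is computed from the expert ratings as V = S / (n(c − 1)), where S is the sum of the differences between each judge's rating and the lowest possible rating, n is the number of judges, and c is the number of rating categories. A minimal sketch is shown below, assuming a hypothetical 1–4 relevance scale and five judges; the actual number of judges and scale width should match the rating form used.

```python
import numpy as np

def aiken_v(ratings, lo=1, hi=4):
    """Aiken's V = S / (n * (c - 1)), with S the summed distances from the lowest category."""
    ratings = np.asarray(ratings, dtype=float)
    n = ratings.size          # number of expert judges
    c = hi - lo + 1           # number of rating categories
    s = np.sum(ratings - lo)  # summed deviations from the lowest possible rating
    return s / (n * (c - 1))

# Hypothetical ratings of one item by five judges on a 1-4 scale
print(round(aiken_v([3, 4, 3, 4, 3]), 3))  # -> 0.8
```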
Table 2 shows the items grouped by dimension in the validated instrument, a structure that is later confirmed by the factor analysis.
4.3.3 Exploratory factor analysis (EFA)
The validated instrument provides empirical support for understanding how teachers integrate AI tools into mathematics teaching, offering practical implications for professional development and promoting the development of mathematical skills in secondary education.
4.3.4 Sample adequacy analysis: KMO test
In order to assess the suitability of the data for exploratory factor analysis, the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy was applied. This index estimates the proportion of variance among variables that may be attributed to common underlying latent factors. The overall KMO value obtained was 0.95, indicating an excellent level of sample adequacy. According to Kaiser (1974), values above 0.90 are classified as “marvelous,” suggesting that the dataset is highly appropriate for factor analysis. In addition, the individual Measure of Sampling Adequacy (MSA) was examined for each item, all of which yielded values above 0.90, ranging from 0.92 (P2) to 0.97 (P1 and P13). These results confirm that each item exhibits sufficiently low partial correlations with the others and contributes meaningfully to the latent factor structure. In summary, both the overall and item-level KMO values provide strong empirical support for proceeding with exploratory factor analysis, as they reflect optimal statistical conditions for uncovering latent dimensions of the construct.
4.3.5 Bartlett's sphericity test
Complementing the KMO index, Bartlett's test of sphericity was conducted to determine whether the correlations among the instrument's items were sufficiently significant to warrant factor analysis. This test evaluates the null hypothesis that the correlation matrix is an identity matrix, implying no intercorrelation among variables and, consequently, the absence of common underlying factors. The test yielded a highly significant result: χ2 = 2401.538, p < 0.001. This indicates that the null hypothesis can be rejected with a high degree of confidence, supporting the presence of meaningful correlations among the variables. In conjunction with the high KMO value (0.95), these findings provide strong statistical justification for applying exploratory factor analysis. Accordingly, both the sphericity test and the sampling adequacy index confirm the methodological relevance of employing factorial techniques in the present study.
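Both adequacy checks can be reproduced in Python with the factor_analyzer package, again assuming the hypothetical response file and item columns used in the sketches above:

```python
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

items = pd.read_csv("responses.csv")[[f"P{i}" for i in range(1, 20)]]  # hypothetical file

chi_square, p_value = calculate_bartlett_sphericity(items)  # Bartlett's test of sphericity
kmo_per_item, kmo_total = calculate_kmo(items)              # per-item MSA and overall KMO

print(f"Bartlett: chi2 = {chi_square:.3f}, p = {p_value:.4g}")
print(f"Overall KMO = {kmo_total:.2f}; item-level MSA from {kmo_per_item.min():.2f} to {kmo_per_item.max():.2f}")
```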
4.3.6 Exploratory factor analysis
In order to explore the latent structure of the instrument designed to assess the use of artificial intelligence applications by mathematics teachers, an exploratory factor analysis (EFA) was conducted using the minimum residual (minres) estimation method. Three factors were extracted and subjected to Varimax orthogonal rotation, which facilitated the identification of item clusters with distinct factor loadings and a theoretically interpretable structure. The analysis yielded factor loadings above 0.60 for several items, including P2 (0.80 on factor MR2), P11 (0.75 on MR1), and P6 (0.75 on MR3), indicating a strong association between these items and their respective latent dimensions. Item communalities (h2) were generally high, ranging from 0.38 to 0.79, suggesting that a substantial proportion of each item's variance was accounted for by the extracted factors. Additionally, the average complexity across items was 2.0, indicating that most items loaded meaningfully on more than one factor.
In terms of explained variance, the three extracted factors accounted for a substantial proportion of the total variance: the first factor explained 35%, the second 17%, and the third 13%, yielding a cumulative total of 66%, which is considered adequate for studies in the social sciences. The model's goodness-of-fit indices further supported the adequacy of the three-factor solution. The Tucker-Lewis Index (TLI) was 0.917, indicating an excellent fit of the factorial model to the data. The Root Mean Square Error of Approximation (RMSEA) was 0.084, with a 90% confidence interval ranging from 0.069 to 0.099, suggesting a reasonable level of fit. Additionally, the Root Mean Square Residual (RMSR) was 0.03, also reflecting a satisfactory model fit. As a complementary step, a factor analysis using Promax oblique rotation was conducted to assess the potential correlation among the factors. This analysis revealed moderately high inter-factor correlations, ranging from 0.67 to 0.73, thereby justifying the use of oblique rotation in subsequent confirmatory analyses.
The identified factors demonstrated adequate suitability for factor score estimation, with correlations between the computed factor scores and the corresponding latent variables exceeding 0.90 in all cases. The multiple R-squared values were 0.96 for MR1, and 0.88 for both MR2 and MR3. These results indicate that the proposed factorial model is not only statistically robust but also conceptually meaningful, providing strong support for its interpretive validity in measuring the targeted construct.
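An EFA along these lines can be sketched with the factor_analyzer package, using minimum residual extraction with Varimax rotation and then a Promax solution to inspect the inter-factor correlations. The data file, column names, and the MR1–MR3 labels are hypothetical placeholders, and factor ordering may differ from the solution reported above.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("responses.csv")[[f"P{i}" for i in range(1, 20)]]  # hypothetical file

# Three-factor solution: minimum residual (minres) extraction, Varimax orthogonal rotation
efa = FactorAnalyzer(n_factors=3, method="minres", rotation="varimax")
efa.fit(items)

loadings = pd.DataFrame(efa.loadings_, index=items.columns, columns=["MR1", "MR2", "MR3"])
communalities = pd.Series(efa.get_communalities(), index=items.columns, name="h2")
_, proportion, cumulative = efa.get_factor_variance()

print(loadings.round(2))
print(communalities.round(2))
print("Proportion of variance:", proportion.round(2), "| cumulative:", cumulative.round(2))

# Oblique (Promax) solution to examine correlations among the factors
efa_promax = FactorAnalyzer(n_factors=3, method="minres", rotation="promax")
efa_promax.fit(items)
print(pd.DataFrame(efa_promax.phi_).round(2))  # inter-factor correlation matrix
```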
4.3.7 Summary of principal component analysis (PCA) results
Principal Component Analysis (PCA) was conducted on the standardized dataset. The results indicate that the first principal component (PC1) accounts for approximately 60% of the total variance, capturing the majority of the information contained in the original variables. Moreover, the first two components (PC1 and PC2) together explain 66.7% of the cumulative variance, suggesting that a two-component solution retains a substantial proportion of the dataset's variability.
Four principal components (PC1–PC4) are required to explain approximately 75% of the total variance. The cumulative variance increases gradually with each additional component, ultimately reaching 100% when all 19 components are included. These results suggest that a relatively small number of components—such as four to six—are sufficient to capture a substantial proportion of the total variability in the dataset. This finding is particularly relevant for purposes of dimensionality reduction and interpretability, as it enables the retention of essential information while simplifying the data structure.
The importance of the components is shown in Table 4.
As part of the exploratory factor analysis, the importance of the components was assessed by examining the standard deviations, the proportion of variance explained, and the cumulative proportion of variance. The results indicate that the first principal component (PC1) has a standard deviation of 3.3757, reflecting a substantial contribution to the overall factor structure. This component alone accounts for 59.98% of the total variance in the dataset. The second component (PC2), with a standard deviation of 1.1338, explains an additional 6.77% of the variance, bringing the cumulative variance explained by the first two components to 66.74%. From the third component (PC3) onward, the proportion of variance explained by each successive component drops below 5%, suggesting reduced individual relevance and supporting the decision to focus on the first few components for interpretation and dimensionality reduction.
Taken together, the first five components account for approximately 77.97% of the total variance, while the tenth component (PC10) marks a saturation point at which 89.95% of the cumulative variance has been explained. From PC11 onward, the incremental gains in cumulative variance are marginal. As expected in a principal component analysis applied to a set of 19 variables, all components cumulatively account for 100% of the variance. This pattern supports the retention of a limited number of components, particularly the first two, which demonstrate the most substantial explanatory power regarding the phenomenon under study. Such an interpretation aligns with established statistical criteria for component retention and contributes to a parsimonious, statistically robust, and conceptually interpretable model.
The following scree plot is presented to visually support the identification of the optimal number of components to retain. The plot clearly illustrates the point at which the curve begins to level off, reflecting the diminishing explanatory power of subsequent components. In addition, the factor loadings indicate how each original variable is projected onto the principal components, thereby facilitating the interpretation of the underlying latent dimensions represented in the reduced data structure.
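The component-level variance decomposition and the Kaiser criterion described above can be reproduced with a standard PCA on the standardized items, for example with scikit-learn; file and column names remain hypothetical, and the scree plot corresponds to plotting the eigenvalues against the component index.

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

items = pd.read_csv("responses.csv")[[f"P{i}" for i in range(1, 20)]]  # hypothetical file
X = StandardScaler().fit_transform(items)   # standardize the 19 items

pca = PCA().fit(X)                           # keep all 19 components
labels = [f"PC{i}" for i in range(1, 20)]

summary = pd.DataFrame({
    "std_dev": pca.explained_variance_ ** 0.5,             # component standard deviations
    "proportion": pca.explained_variance_ratio_,            # proportion of variance explained
    "cumulative": pca.explained_variance_ratio_.cumsum(),   # cumulative proportion
}, index=labels)
print(summary.round(4))

# Kaiser's criterion: number of components with eigenvalues greater than 1
print("Components with eigenvalue > 1:", int((pca.explained_variance_ > 1).sum()))
```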
4.3.8 Selection of the number of factorial components
The decision regarding the optimal number of components to retain was based on a combined analysis of the scree plot and Kaiser's criterion. The scree plot revealed a clear inflection point (or “elbow”) after the first component, indicating that the majority of the common variance was captured by the first two components. Specifically, the first component accounted for approximately 60% of the total variance, while the second explained an additional 6.8%. From the third component onward, the proportion of variance explained dropped below 5%, suggesting a marked decline in the explanatory power of subsequent components.
Kaiser's criterion, which recommends retaining only components with eigenvalues greater than 1—based on the premise that each retained component should account for at least as much variance as an original variable—was also applied. In the present analysis, only the first two components met this threshold, reinforcing the decision to retain two components. The convergence of both criteria supports a two-component solution, which is further justified by the theoretical coherence of the construct under study and the principle of interpretive parsimony. This solution captures a substantial proportion of total variance while yielding a conceptually meaningful and empirically useful factorial model.
Figure 1 displays a clear inflection point (“elbow”) following the first component, with the first two components cumulatively accounting for approximately 69.6% of the total variance (60% and 9.6%, respectively). This pattern, in conjunction with Kaiser's criterion (eigenvalues > 1), provides empirical justification for the retention of two principal components. Although the exploratory factor analysis (EFA) initially extracted three factors, further examination of the factor loadings and inter-factor correlations revealed that two of the extracted factors were highly interrelated and could be conceptually integrated as subdimensions of the same pedagogical domain. These findings support the adoption of a two-dimensional model comprising: (1) Didactic Use of AI, encompassing instructional design, assessment practices, and classroom interaction; and (2) Strategic and Formative Use of AI.
4.3.9 Confirmatory factor analysis (CFA)
The confirmatory factor analysis (CFA) model was estimated using maximum likelihood estimation on a sample of 266 observations. Although the Chi-square statistic was significant (χ2 = 926.594, p < 0.001), this result should be interpreted with caution due to the test's well-documented sensitivity to sample size. The comparative fit indices—CFI = 0.808 and TLI = 0.783—fell below the commonly accepted threshold of 0.90, indicating a moderate model fit. The Root Mean Square Error of Approximation (RMSEA) was 0.148, with a 90% confidence interval ranging from 0.140 to 0.158, and a p-value < 0.001 for the null hypothesis H0: RMSEA ≤ 0.05, suggesting a substantial degree of model misfit. Additionally, the Standardized Root Mean Square Residual (SRMR) was 0.394, well above the recommended cutoff of 0.08, further highlighting the need for model respecification and refinement (see Table 5).
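For reference, the comparative and absolute fit indices reported here are standard functions of the model chi-square (with its degrees of freedom), the chi-square of the baseline independence model, and the sample size, as summarized in the following expressions (some software uses N in place of N − 1 in the RMSEA denominator):

```latex
\mathrm{CFI} = 1 - \frac{\max(\chi^2_M - df_M,\, 0)}{\max(\chi^2_B - df_B,\; \chi^2_M - df_M,\; 0)}, \qquad
\mathrm{TLI} = \frac{\chi^2_B/df_B - \chi^2_M/df_M}{\chi^2_B/df_B - 1}, \qquad
\mathrm{RMSEA} = \sqrt{\frac{\max(\chi^2_M - df_M,\, 0)}{df_M\,(N - 1)}}
```

where the subscripts M and B denote the tested model and the baseline (independence) model, respectively.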
Despite the limitations observed in the overall fit indices, the standardized factor loadings indicate a strong relationship between the items and their corresponding latent factors. All loadings were statistically significant (p < 0.001), ranging from 0.612 to 0.972, thereby supporting good convergent validity within each dimension. Moreover, the residual variances were low in most cases, further supporting the internal consistency of the measurement scales.
Taken together, these findings suggest that, although the overall model fit could be improved, the proposed factorial structure is empirically supported and theoretically grounded, highlighting its potential applicability in real educational settings. It is recommended to proceed with targeted model modifications, guided by the modification indices, in order to enhance statistical fit and achieve a more parsimonious representation of the construct. Table 6 presents the standardized loadings that provide evidence for the validity of the current model.
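A first-order CFA of this type can be specified in lavaan-style syntax and estimated, for example, with the semopy package in Python. The sketch below assumes a hypothetical data file with the 19 item columns and assigns items to the two dimensions described in the conceptual framework; it is illustrative only and not the exact estimation script used in this study.

```python
import pandas as pd
import semopy

data = pd.read_csv("responses_cfa.csv")  # hypothetical file with columns P1..P19 (n = 266)

# Two correlated first-order factors: didactic use (P1-P10) and strategic-formative use (P11-P19)
model_desc = """
Didactic  =~ P1 + P2 + P3 + P4 + P5 + P6 + P7 + P8 + P9 + P10
Strategic =~ P11 + P12 + P13 + P14 + P15 + P16 + P17 + P18 + P19
Didactic ~~ Strategic
"""

model = semopy.Model(model_desc)
model.fit(data)                        # maximum likelihood estimation by default
print(model.inspect(std_est=True))     # standardized loadings and residual variances
print(semopy.calc_stats(model).T)      # chi-square, CFI, TLI, RMSEA, and related indices
```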
To improve model fit and arrive at a more statistically appropriate solution, a second-order hierarchical model was tested. This structure included two first-order factors (Factor 1 and Factor 2) subordinated to a higher-order general factor. The overall model fit was acceptable, with fit indices supporting the presence of a well-defined latent structure. Results indicated that the general factor significantly accounted for the shared variance between the two first-order factors, providing empirical support for the existence of an overarching global dimension underlying the measurement. The corresponding results are presented in Table 6.
For better visualization, see Figure 2.
Figure 2. Second-order hierarchical model obtained through confirmatory factor analysis. The model includes two first-order factors (Factor 1 and Factor 2) that group 19 observed items (P1–P19), and a general second-order factor that explains the covariance between both factors. The arrows indicate standardized factor loadings; the curved arrows represent variances and errors. All factor loadings are significant and show an adequate contribution of the items to their respective factors, as well as a strong influence of the general factor on the first-order factors.
Figure 2 presents a second-order hierarchical model in which two first-order factors—Factor 1 and Factor 2—account for the variance in the 19 observed items (P1–P19), while a general second-order factor explains the covariance between these two primary dimensions. The standardized factor loadings of the items on their respective factors range from 0.65 to 0.90, indicating satisfactory internal consistency. Additionally, the general factor demonstrates very strong loadings on Factor 1 and Factor 2 (both 0.98), supporting the existence of a higher-order latent dimension that integrates both constructs. Most residual variances are moderate or low, which reinforces the accuracy and reliability of the measurement. The resulting second-order model can be summarized as follows: General Factor (Use of AI in the Teaching Context) = Factor 1 (Didactic Use of AI) + Factor 2 (Strategic and formative Use of AI).
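In the same lavaan-style syntax, the second-order hierarchical structure corresponds to a general factor that loads on the two first-order factors. A possible semopy sketch is shown below, with hypothetical file and factor names; note that with only two first-order factors the higher-order part usually requires an identification constraint, such as equal second-order loadings.

```python
import pandas as pd
import semopy

data = pd.read_csv("responses_cfa.csv")  # hypothetical file with columns P1..P19

# Second-order model: a general "Use of AI" factor above the two first-order factors.
# With only two first-order indicators, an identification constraint (e.g., equal
# second-order loadings) is typically needed for the higher-order part.
model_desc = """
Didactic  =~ P1 + P2 + P3 + P4 + P5 + P6 + P7 + P8 + P9 + P10
Strategic =~ P11 + P12 + P13 + P14 + P15 + P16 + P17 + P18 + P19
UseOfAI   =~ Didactic + Strategic
"""

model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect(std_est=True))  # includes the second-order loadings UseOfAI -> Didactic/Strategic
print(semopy.calc_stats(model).T)   # global fit indices for the hierarchical model
```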
4.3.9.1 Confirmation of the two-dimensional model in the CFA
The CFA path diagram clearly illustrates a hierarchical model consisting of two latent factors—Factor 1 and Factor 2—influenced by a higher-order latent variable, reflecting a second-order factorial structure. Factor 1 comprises items P1–P10, with standardized loadings ranging from 0.71 to 0.89, and represents didactic use of AI. Factor 2 includes items P11–P19, with loadings between 0.57 and 0.94, and captures strategic and formative use of AI. Both first-order factors are strongly associated with the higher-order factor, with loadings of 0.96 and 0.98, respectively. These results indicate that, while the two dimensions share a common underlying construct, they are conceptually distinct. This structure empirically supports the two-dimensional model proposed in the theoretical framework and validates the decision to consolidate the three exploratory factors into two broader latent dimensions: Factor 1: Didactic Use of AI, and Factor 2: Strategic and formative Use of AI.
4.3.9.2 Theoretical basis of the second-order hierarchical model
The adoption of a second-order hierarchical model in the confirmatory factor analysis is supported not only by statistical evidence, but also by theoretical foundations that justify the conceptual structure of the construct “use of AI in the teaching context.” From an educational standpoint, teachers' use of AI is not a unidimensional phenomenon; rather, it manifests across two interrelated domains: (1) Strategic and formative Use of AI, which involves digital competence, technological appropriation, and individual professional development; and (2) Didactic Use of AI, which refers to the integration of intelligent tools in instructional planning, classroom teaching, assessment, and real-time feedback. These two domains, while conceptually distinct, are strongly connected, reinforcing the appropriateness of a higher-order factor model to capture the shared variance while preserving the interpretive clarity of each dimension.
Both elements can be considered first-order factors, which are themselves influenced by a higher-order construct: “use of AI in the teaching context,” understood as the teacher's overall disposition toward integrating artificial intelligence into pedagogical practice. This conceptualization aligns with theoretical frameworks such as TPACK (Technological Pedagogical Content Knowledge) and SAMR (Substitution, Augmentation, Modification, Redefinition), which describe progressive levels of technological integration and its implications for pedagogical and professional development. The hierarchical model provides a coherent representation of this structure, acknowledging that teachers may exhibit varying levels of competence across the two dimensions, while both factors reflect a unified underlying attitude toward the educational use of AI.
Furthermore, this modeling approach facilitates the interpretation of results in comparative studies, professional development interventions, and educational policy design by offering an integrated and multidimensional view of the construct. In the CFA, this structure was empirically supported by strong standardized loadings between the first-order factors and the second-order latent variable, confirming the validity of the hierarchical model from both statistical and conceptual perspectives.
5 Discussion of results
The results obtained in this study provide strong empirical evidence for the validation of a measurement instrument in the educational field. The questionnaire titled “Level of Science and Mathematics Teachers in the Use of Artificial Intelligence Applications in the Educational Process,” originally developed and validated in Jordan by Alissa and Hamadneh (2023), was successfully adapted and validated for use with mathematics teachers in the Peruvian context. The adapted version achieved an Aiken's V coefficient greater than or equal to 0.515 for nearly all items, and a Cronbach's alpha coefficient of 0.96, indicating excellent internal consistency. Although such a high alpha may raise concerns about item redundancy, it is justified by the conceptual homogeneity of the construct and the consistency of the teaching practices assessed by the instrument.
These findings reinforce the importance of developing new tools to explore the use of AI in mathematics education. The present study aligns with similar research efforts in various international contexts. For instance, Ng et al. (2024) developed and validated a reliable instrument in China to evaluate secondary students' AI literacy and its impact on learning outcomes. Similarly, Marango et al. (2024) validated a scale in Turkey to measure user attitudes toward generative AI in education, reporting strong reliability indicators (Cronbach's α = 0.84; test–retest reliability = 0.90). Grassini (2023) developed a concise instrument in Norway aimed at researchers and professionals working in AI development, which has been useful for analyzing factors associated with AI acceptance and adoption. Finally, the findings are consistent with Montoya Asprilla (2024), whose study in Colombia offers a robust foundation for understanding how AI can be effectively integrated into teaching practices.
Compared with the work of Alissa and Hamadneh (2023), who obtained a Cronbach's alpha internal consistency coefficient of 0.91 in Jordan, treated the instrument as unidimensional, and did not report an item-level validity analysis, this study offers a more in-depth analysis aimed at greater accuracy: it validates the structure of the instrument and performs an exploratory analysis that yields two dimensions, each represented by its corresponding items. Furthermore, these results are noteworthy because they provide an instrument for measuring mathematics teachers' use of artificial intelligence applications in the educational process. The instrument validated in this study focuses on self-reported teaching practices, allowing us to describe how teachers integrate AI into different areas of their pedagogical work. This approach is particularly relevant in educational contexts where access, training, and institutional conditions directly influence the use of technology. In this sense, the results should not be interpreted as a classification of teachers into normative levels of technological competence, but rather as the identification of patterns of pedagogical integration of artificial intelligence.
The exploratory and confirmatory analyses yield the following model: General Factor (Use of AI) = Factor 1 (Didactic Use) + Factor 2 (Strategic and Formative Use), with the general factor showing very high loadings on Factor 1 (0.98) and Factor 2 (0.98). The two-dimensional model adopted in the CFA is supported by the scree (“elbow”) criterion and the Kaiser criterion (eigenvalues > 1), and the second-order hierarchical two-factor model is empirically confirmed by the high loadings between the first-order factors and the second-order factor, which supports the validity of the model in both statistical and conceptual terms.
We consider this instrument to be a valuable tool that teachers can use to apply frameworks such as TPACK or SAMR in teaching and learning processes, enabling the integration of artificial intelligence into pedagogy and across various educational disciplines. From a teacher training perspective, the availability of a validated instrument such as the one presented in this study enables the diagnosis of teachers' preparedness levels regarding AI, which is essential for the design of targeted and relevant professional development programs. Training institutions can employ this questionnaire to identify gaps in digital competence and to align instructional planning with the development of technological skills specifically applied to mathematics teaching. Furthermore, the results suggest that the pedagogical use of AI is closely associated with teachers' beliefs and innovative practices, reinforcing the importance of incorporating reflective and critical perspectives on educational technology into both initial teacher education and continuing professional development.
In the field of educational policy, this study provides important empirical evidence to support decision-making regarding the integration of artificial intelligence into school curricula. The validation of a culturally and contextually adapted instrument for Peru and the broader Latin American region enables the assessment of technological appropriation across diverse educational settings and facilitates diagnostic evaluations. These are essential for designing inclusive and sustainable strategies for teacher training and educational innovation. The instrument can also function as a monitoring tool, allowing the evaluation of training outcomes and tracking progress in the integration of AI into pedagogical practice. Furthermore, the proposed hierarchical model supports longitudinal assessments of AI's impact on teaching practices, offering a valuable framework for monitoring and supporting the implementation of innovation-oriented education policies. This type of measurement is critical for informing public policy, guiding the allocation of training resources, and ensuring that the incorporation of intelligent technologies in the classroom is carried out in an effective, equitable, and context-sensitive manner across Peru and Latin America.
However, it is important to acknowledge certain limitations of the present study, such as the sample size, which was limited to 266 participants, and the geographic scope, as all surveyed teachers were from the Lima region. While these limitations do not undermine the significance of the findings, they do highlight the need for further complementary research to address these and other related aspects in broader and more diverse educational contexts.
6 Conclusion
This study successfully adapted and validated a questionnaire designed to evaluate the use of artificial intelligence (AI) applications in mathematics teaching among secondary school teachers in Peru. The instrument demonstrated strong psychometric properties, with solid evidence of reliability (Cronbach's α = 0.96) and validity (Aiken's V ≥ 0.515; KMO index = 0.95). Through exploratory and confirmatory factor analyses, a two-dimensional model was established—comprising Didactic Use of AI and Strategic and Formative Use of AI—which provides a parsimonious and conceptually coherent representation of the construct.
6.1 Theoretical contribution
The study provides a conceptual framework that distinguishes between individual appropriation of AI and its pedagogical integration in the classroom, aligned with educational innovation models such as TPACK and SAMR. This differentiation allows us to understand how teachers' digital competencies and pedagogical beliefs are articulated in the adoption of intelligent technologies, providing a theoretical basis for future research in mathematics education and other disciplines.
6.2 Practical contribution
The validated questionnaire is a useful tool for diagnosing teachers' use of AI applications and their technological preparedness, monitoring their digital competencies, and guiding continuing education programs. It can also be used by educational institutions and public policy makers to assess teachers' readiness for AI integration, facilitating evidence-based pedagogical and curricular decision-making.
It is important to note that the validated instrument does not assess attitudes, beliefs, or normative levels of technological competence, but is based on self-reports of specific teaching practices related to the use of artificial intelligence applications in the educational context. This orientation strengthens its usefulness as a diagnostic tool and differentiates it from other instruments focused on the acceptance or perception of technology.
6.3 Limitations
The study has limitations that should be acknowledged: the sampling was non-probabilistic and purposive, and it was limited to teachers in the Lima region, which reduces the generalizability of the results. This does not diminish the relevance of the study, but it does underscore the importance of complementary research that can address these and other related aspects.
6.4 Areas for future research
It is recommended that the study be replicated in other regions of Peru and in different Latin American contexts, with larger and more representative samples, to strengthen the external validity of the instrument. Future research could also explore the relationship between the use of AI and variables such as student performance, teacher attitudes toward innovation, and the effectiveness of training programs. Finally, it is suggested that the application of the questionnaire be extended to other disciplinary areas in order to evaluate the cross-cutting nature of AI use in education.
Data availability statement
Publicly available datasets were analyzed in this study. This data can be found at: https://drive.google.com/drive/folders/1KzXR0ci6RqJ6clztPPeplBp4NYINWyDh?usp=drive_link.
Ethics statement
The studies involving humans were approved by Comité de ética de la Escuela de Posgrado, Universidad Peruana Unión. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.
Author contributions
ES: Conceptualization, Data curation, Investigation, Methodology, Project administration, Validation, Visualization, Writing – original draft, Writing – review & editing. AT: Investigation, Writing – original draft, Writing – review & editing, Funding acquisition, Resources, Visualization. RT: Investigation, Project administration, Validation, Writing – original draft, Writing – review & editing, Software. EA: Funding acquisition, Resources, Writing – original draft, Writing – review & editing, Methodology, Project administration, Supervision. ET-C: Investigation, Project administration, Validation, Writing – original draft, Writing – review & editing, Conceptualization, Formal analysis, Methodology. JL-G: Funding acquisition, Methodology, Project administration, Resources, Supervision, Writing – original draft, Writing – review & editing, Conceptualization, Data curation, Formal analysis, Investigation, Software, Validation, Visualization.
Funding
The author(s) declared that financial support was not received for this work and/or its publication.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that generative AI was not used in the creation of this manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Alissa, R. A. S., and Hamadneh, M. A. (2023). The level of science and mathematics teachers' employment of artificial intelligence applications in the educational process. Int. J. Educ. Math. Sci. Technol. 11, 1597–1608. doi: 10.46328/ijemst.3806
Barros, A., Prasad, A., and Sliwa, M. (2023). Generative artificial intelligence and academia: implication for research, teaching and service. Manage. Learn. 54, 597–604. doi: 10.1177/13505076231201445
Bejarano Cordoba, A. S., and Guerrero Godoy, R. S. (2021). Uso de herramientas tecnológicas para la resolución de problemas en el área de las matemáticas. Rev. Educare UPEL IPB Segunda Nueva Etapa 2.0 25, 7–27. doi: 10.46498/reduipb.v25i3.1522
Cordero Monzón, M. Á. (2024). Inteligencia Artificial en el aula: Oportunidades y desafíos para la didáctica de la matemática y física universitaria. Rev. Int. Pedagogía Innovación Educativa 4, 193–207. doi: 10.51660/ripie.v4i1.154
Coy García, G. G. C., Bermeo, A. M. F., Pardo, V. H. D., and Añazco, J. P. C. (2024). La Inteligencia Artificial aplicada a la enseñanza de la matemática. Conocimiento Global 9, 234–242. doi: 10.70165/cglobal.v9i1.357
Davis, E. (2024). Mathematics, word problems, common sense, and artificial intelligence. Bull. New Ser. Am. Math. Soc. 61, 287–303. doi: 10.1090/bull/1828
Esteves Fajardo, Z. I., Cevallos Gamboa, M. A., Herrera Valdivieso, M. V., and Muñoz Murillo, J. P. (2024). Cómo impacta la inteligencia artificial en la educación. Reciamuc 8, 62–70. doi: 10.26820/reciamuc/8.(1).ene.2024.62-70
Gallent-Torres, C., Arenas Romero, B., Vallespir Adillón, M., and Foltýnek, T. (2024). Inteligencia Artificial en educación: Entre riesgos y potencialidades. Praxis Educativa 19, 1–29. doi: 10.5212/PraxEduc.v.19.23760.083
Gálvez Marquina, M. C., Pinto-Villar, Y. M., Mendoza Aranzamendi, J. A., and Anyosa Gutiérrez, B. J. (2024). Adaptación y validación de un instrumento para medir las actitudes de los universitarios hacia la inteligencia artificial. Rev. Comunicación 23, 125–142. doi: 10.26441/RC23.2-2024-3493
Grassini, S. (2023). Development and validation of the AI attitude scale (AIAS-4): a brief measure of general attitude toward artificial intelligence. Front. Psychol. 14:1191628. doi: 10.3389/fpsyg.2023.1191628
Kaiser, H. F. (1974). An index of factorial simplicity. Psychometrika 39, 31–36. doi: 10.1007/BF02291575
Khan, Z., and Ali, K. (2025). Revolutionizing Mathematics Education: The Role of Concept Mapping, AI, and Multimodal Semiotic Reasoning. Unpublished.
León Naranjo, J. R., Vargas San Lucas, G. J., and García Vásquez, H. R. (2025). El modelo TPACK como marco para la Integración Pedagógica de la Tecnología en el Aula. Aula Virtual 6:e427. doi: 10.5281/zenodo.15126677
Marango, A., Yilmas, F. G. K., Ceylan, M., and Soomro, K. A. (2024). Development and validation of generative artificial intelligence attitude scale for students. Front. Comput. Sci. 16:4952232. doi: 10.2139/ssrn.4952232
Montoya Asprilla, J. Y. (2024). Percepciones y Actitudes hacia la Integración de la Inteligencia Artificial en la Enseñanza de las Ciencias Sociales en la Universidad Tecnológica del Chocó. Technol. Rain J. 3:26. doi: 10.55204/trj.v3i2.e41
Nemt-allah, M., Khalifa, W., Badawy, M., Elbably, Y., and Ibrahim, A. (2024). Validating the ChatGPT Usage Scale: psychometric properties and factor structures among postgraduate students. BMC Psychol. 12:497. doi: 10.1186/s40359-024-01983-4
Ng, D. T. K., Wu, W., Leung, J. K. L., Chiu, T. K. F., and Chu, S. K. W. (2024). Design and validation of the AI literacy questionnaire: the affective, behavioural, cognitive and ethical approach. Br. J. Educ. Technol. 55, 1082–1104. doi: 10.1111/bjet.13411
Núñez De Luca, J. M., Avila Valdez, J. L., Ávila Guamán, L. O., and Cuecuecha Sánchez, L. Á. (2024). Empleo de la inteligencia artificial para resolver problemas matemáticos en el ámbito de la educación superior. Reincisology 3, 3415–3433. doi: 10.59282/reincisol.V3(6)3415-3433
Opesemowo, O. A. G., and Adewuyi, H. O. (2024). A systematic review of artificial intelligence in mathematics education: the emergence of 4IR. Eurasia J. Math. Sci. Technol. Educ. 20:em2478. doi: 10.29333/ejmste/14762
Panqueban, D., and Huincahue, J. (2024). Inteligencia Artificial en educación matemática: Una revisión sistemática. Uniciencia 38, 1–17. doi: 10.15359/ru.38-1.20
Quiroz Rosas, V. (2023). Aplicaciones de Inteligencia Artificial Aliadas en la Enseñanza de las Matemáticas. Ciencia Latina Revista Científica Multidisciplinar 7, 10547–10560. doi: 10.37811/cl_rcm.v7i4.8070
Ríos Hernández, I. N., Mateus, J.-C., Rivera-Rogel, D., and Ávila Meléndez, L. R. (2024). Percepciones de estudiantes latinoamericanos sobre el uso de la inteligencia artificial en la educación superior. Austral. Comunicación 13, 2–25. doi: 10.26422/aucom.2024.1301.rio
Rodríguez-Gutiérrez, S. A., Vidrio-Barón, S. B., and Vásquez Sánchez, J. R. (2024). Modelo para evaluar la aceptación de la herramienta ChatGPT en la generación Z. Vinculatégica EFAN 10, 138–154. doi: 10.29105/vtga10.5-1069
Saz-Pérez, F., Pizá-Mir, B., and Lizana Carrió, A. (2024). Validación y estructura factorial de un cuestionario TPACK en el contexto de Inteligencia Artificial Generativa (IAG). Hachetetepé Rev. científica educación comunicación 28, 1–14. doi: 10.25267/Hachetetepe.2024.i28.1101
Scorzo, R., and Ocampo, G. (2025). Clasificación y justificación de estrategias de Microlearning siguiendo los modelos ADDIE y SAMR en la enseñanza de matemáticas para promover el autoaprendizaje en aspirantes a carreras de ingeniería. Jornadas Argentinas Informática 11, 60–73. Available online at: https://revistas.unlp.edu.ar/JAIIO/article/view/19933
Seivane, M., and Brenlla, M. (2021). Evaluación de la Calidad Docente Universitaria desde la Perspectiva de los Estudiantes. Rev. Iberoamericana Evaluación Educativa 14, 35–46. doi: 10.15366/riee2021.14.1.002
Sevilla, T. C. S., and Barrios, M. B. (2024). Actitudes de los estudiantes de educación básica hacia la inteligencia artificial: Una adaptación. Rev. Invecom 4:e040228. doi: 10.5281/zenodo.10612162
Silva, M., Correa, R., and Mc-Guire, P. (2024). Metodologías Activas con Inteligencia Artificial y su relación con la enseñanza de la matemática en la educación superior en Chile. Estado del arte. Rev. Iberoamericana Tecnología Educación Educación Tecnología 37:e2. doi: 10.24215/18509959.37.e2
Sandoval, A. V., and López, G. A. S. (2023). Evaluación de los aprendizajes con inteligencia artificial en educación media superior. Rev. Investigación Transdisciplinar Educación Empresa y Sociedad 10:159.
Val-Fernández, P. (2023). The Symbiosis between Artificial Intelligence and Secondary School Mathematics Teaching = La Simbiosis entre la Inteligencia Artificial y la Enseñanza de Matemáticas en la Escuela Secundaria. Adv. Build. Educ. 7, 23–31. doi: 10.20868/abe.2023.3.5203
Vankúš, P. (2024). Generative artificial intelligence on mobile devices in the university preparation of future teachers of mathematics. Int. J. Interact. Mobile Technol. 18, 19–33. doi: 10.3991/ijim.v18i18.51221
Wang, C., Li, T., Lu, Z., Wang, Z., Alballa, T., Alhabeeb, S. A., et al. (2025). Application of artificial intelligence for feature engineering in education sector and learning science. Alexandria Eng. J. 110, 108–115. doi: 10.1016/j.aej.2024.09.100
Keywords: artificial intelligence (AI), mathematics teaching, secondary schools, teaching, inclusive education
Citation: Susanibar Ramirez ET, Tapia Díaz A, Torres Gallegos RR, Andrade Girón EC, Tocto-Cano E and López-Gonzales JL (2026) Validation of a questionnaire assessing artificial intelligence use in secondary mathematics education in Peru. Front. Educ. 10:1680330. doi: 10.3389/feduc.2025.1680330
Received: 15 October 2025; Revised: 26 December 2025;
Accepted: 29 December 2025; Published: 23 January 2026.
Edited by:
Musa Adekunle Ayanwale, University of Johannesburg, South Africa
Reviewed by:
Akorede Asanre, Tai Solarin University of Education, Nigeria
Habeeb Habeeb, University of Johannesburg, South Africa
I. Putu Ade Andre Payadnya, Universitas Mahasaraswati Denpasar, Indonesia
Niken Utami, PGRI University of Yogyakarta, Indonesia
Copyright © 2026 Susanibar Ramirez, Tapia Díaz, Torres Gallegos, Andrade Girón, Tocto-Cano and López-Gonzales. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Javier Linkolk López-Gonzales, javierlinkolk@gmail.com