ORIGINAL RESEARCH article

Front. Educ., 19 August 2025

Sec. Higher Education

Volume 10 - 2025 | https://doi.org/10.3389/feduc.2025.1649034

This article is part of the Research Topic "Reimagining Higher Education: Responding Proactively to 21st Century Global Shifts."

Sociodemographic predictors and usability perceptions explaining academic use intention of ChatGPT among university students in Ecuador

  • 1Universidad Estatal de Milagro, Carrera de Nutrición y Dietética, Milagro, Ecuador
  • 2Universidad Estatal de Milagro, Carrera de Derecho, Milagro, Ecuador
  • 3Universidad Técnica de Ambato, Dirección de Tecnologías de la Información, Ambato, Ecuador
  • 4Escuela Superior Politécnica de Chimborazo, Carrera de Promoción y Cuidados de la Salud, Riobamba, Ecuador

Background: The rapid integration of artificial intelligence (AI) in higher education has transformed how students interact with academic content. ChatGPT, as a prominent AI-based language model, has been increasingly adopted by students to support learning tasks. However, the factors influencing its academic use intention remain underexplored in Latin American contexts.

Objective: This study aims to identify the sociodemographic and perceptual predictors that explain the academic use intention of ChatGPT among university students in Ecuador.

Methods: A cross-sectional, analytical study was conducted with 210 students from seven Ecuadorian universities. Data were collected through a validated questionnaire encompassing six constructs: compatibility with students’ learning styles, efficiency, perceived ease of use, perceived usefulness, satisfaction, and intention of continued use. Descriptive statistics, exploratory factor analysis, binary logistic regression, and k-means clustering were performed using Python in Google Colab.

Results: The logistic regression model revealed that perceived usefulness (OR = 2.37) and compatibility with learning style (OR = 1.87) were the most significant predictors of high academic use intention. Cluster analysis identified three user profiles: enthusiastic adopters, neutral users, and reluctant adopters. Sociodemographic factors showed limited predictive power.

Conclusion: Students’ perceptions of the academic value and alignment of ChatGPT with their learning preferences are stronger predictors of usage intention than sociodemographic characteristics. These findings highlight the need for pedagogically aligned and inclusive AI integration strategies in higher education.

1 Introduction

Artificial intelligence (AI) tools have gained increasing relevance in higher education, transforming how students access information, complete academic tasks, and interact with digital content. These tools provide new avenues for content generation, personalized learning, and real-time assistance, particularly in the context of self-regulated and autonomous learning environments. Among the various AI platforms, ChatGPT—a large language model developed by OpenAI—has emerged as one of the most widely adopted tools in academic settings due to its ability to produce coherent, context-sensitive, and human-like responses in multiple languages (George and Wooden, 2023; Kuleto et al., 2021).

The integration of generative AI into educational practice raises critical questions related not only to pedagogical value but also to ethical, cognitive, and technological dimensions. Specifically, the widespread use of ChatGPT by students has prompted scholarly debate regarding its impact on academic integrity, learning outcomes, and the development of critical thinking skills (Jo, 2024). At the same time, there is a growing interest in understanding how students perceive these tools in terms of usability, usefulness, and alignment with their learning needs (Chellappa and Luximon, 2024).

To explain the acceptance and use of new technologies in educational contexts, several theoretical frameworks have been proposed. The Technology Acceptance Model (TAM) developed by Davis (1989) emphasizes two key predictors of behavioral intention: perceived usefulness and perceived ease of use (Granić and Marangunić, 2019). The Unified Theory of Acceptance and Use of Technology (UTAUT), proposed later by Venkatesh et al. (2003), expands on this by incorporating variables such as social influence, facilitating conditions, and individual differences (Williams et al., 2015). More recent adaptations, such as the UTAUT2, consider hedonic motivation and habit formation as part of the user’s decision-making process (de Blanes et al., 2025). These models have been widely applied to examine digital learning environments and technology-mediated education.

However, most existing studies on ChatGPT have been conducted in institutions located in North America, Europe, or Asia, where digital literacy, infrastructure, and institutional support tend to be more developed. In contrast, empirical evidence from Latin America—especially in Spanish-speaking countries—is still emerging and remains largely descriptive or exploratory (Ciampa et al., 2023; Lau et al., 2024). There is a pressing need to examine how contextual and structural factors, such as gender, ethnicity, and parental education, may influence students’ intention to adopt AI tools for academic purposes in less studied regions such as Ecuador (Buele et al., 2025).

This study adopts constructs from the Technology Acceptance Model (TAM) and Unified Theory of Acceptance and Use of Technology (UTAUT), focusing on compatibility, satisfaction, and usability. Compatibility refers to how well ChatGPT aligns with students’ preferred learning styles, while satisfaction reflects emotional and cognitive responses to the tool’s performance (Han et al., 2025; Majeed and Rasheed, 2025).

Research questions:

1. What sociodemographic characteristics predict students’ intention to use ChatGPT for academic purposes?

2. How do usability perceptions—such as compatibility, satisfaction, and ease of use—affect students’ behavioral intention?

According to UTAUT2, compatibility can be conceptually linked to “facilitating conditions” and the alignment of the tool with the user’s learning style. Similarly, satisfaction is commonly associated with “performance expectancy” and perceived benefit derived from system usage, reinforcing its conceptual relevance in behavioral intention modeling (Rudhumbu, 2022).

Moreover, few studies to date have integrated both psychosocial perceptions (usability, satisfaction, compatibility) and sociodemographic determinants into a comprehensive model to explain behavioral intention to use ChatGPT. Understanding how these variables interact is crucial to inform institutional policies, curriculum development, and digital inclusion strategies. Such insights can also guide responsible AI implementation in education that is equitable and pedagogically sound (Garcia, 2025).

This study seeks to fill this gap by analyzing how sociodemographic characteristics and perceptions of usability, compatibility, and satisfaction with ChatGPT predict students’ academic use intention in Ecuadorian universities. The findings aim to contribute to the regional literature on educational technology acceptance and offer evidence-based recommendations for higher education institutions planning to integrate AI tools into teaching and learning processes.

The full data collection instrument is provided in the Supplementary material section.

2 Research methods

2.1 Context of the research

In Ecuador, there are currently no formal institutional policies or official regulations governing the academic use of generative artificial intelligence tools such as ChatGPT. Its implementation in higher education settings remains largely informal, with usage patterns monitored primarily through online activity and anecdotal evidence. Despite this regulatory gap, university students have rapidly incorporated AI tools into their academic routines, employing them for tasks such as literature searches, essay composition, test preparation, and practical assignments. Based on direct academic observation, the presence and frequency of such use are increasingly evident in classroom and virtual learning environments. Accordingly, this study aims to explore students’ actual experiences and perceptions regarding the use of ChatGPT, with a particular focus on identifying the sociodemographic and perceptual factors that influence their intention to use it for academic purposes.

2.2 Research participants

The study sample consisted of undergraduate students enrolled in various Ecuadorian universities who voluntarily and anonymously completed an online survey. Data collection was conducted between October 17 and November 2, 2024, resulting in a total of 210 valid responses. The gender distribution was predominantly female (75%), with male students representing 25% of the sample. Four additional participants initiated but did not complete the survey and were therefore excluded from the final analysis. Respondents represented seven different universities and a range of academic disciplines, including medicine, nutrition, systems engineering, and education, among others. The sample composition reflects voluntary participation from students enrolled in faculties with greater female representation, particularly in health sciences. While this limits generalizability, it provides insight into populations more actively adopting AI tools. This study involved anonymous voluntary surveys from students and did not involve sensitive personal data or intervention. As such, it was exempt from full ethical board review but adhered to institutional guidelines for research ethics. Informed consent was obtained electronically prior to survey completion (Table 1).

Table 1. Demographic information of the participants.

2.3 Data collection and analysis

Data were collected using a structured and previously validated survey instrument designed to assess students’ opinions and experiences related to the use of ChatGPT for academic tasks. The questionnaire was adapted from a study developed by Yu et al. (2024), which focused on measuring factors influencing the use of ChatGPT in educational contexts.

The survey instrument was adapted from Yu et al. (2024) and translated into Spanish, with linguistic and contextual adjustments to reflect the Ecuadorian higher education environment. Content validity was confirmed through expert review. Assumptions for factor analysis were met: Kaiser–Meyer–Olkin measure = 0.882; Bartlett's test of sphericity χ² = 1643.2, p < 0.001. Multicollinearity was assessed using variance inflation factors (all VIF < 2), indicating no problematic collinearity among predictors. Internal consistency was assessed via Cronbach's alpha: compatibility (α = 0.81), efficiency (α = 0.84), perceived ease of use (α = 0.79), perceived usefulness (α = 0.85), satisfaction (α = 0.82), and intention (α = 0.88). All factor loadings from the EFA exceeded 0.60.
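As a sketch of how reliability figures like these are obtained (the data below are synthetic, not the study's, and the 3-item scale is hypothetical), Cronbach's alpha can be computed directly from an item-score matrix:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of scale totals
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Synthetic 3-item scale driven by one latent trait (hypothetical data)
rng = np.random.default_rng(0)
trait = rng.normal(size=(210, 1))
items = trait + rng.normal(scale=0.4, size=(210, 3))
alpha = cronbach_alpha(items)  # high, since the items share one strong factor
```

Alpha rises toward 1 as the items covary more strongly relative to their individual noise, which is why well-constructed scales such as those above land in the 0.79–0.88 range.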

The key variables analyzed included six constructs: compatibility, efficiency, perceived ease of use, perceived usefulness, satisfaction, and intention of continued use. Grouping variables used for comparisons included gender, area of residence (urban/rural), and academic program.

The instrument consisted of six key constructs: compatibility (3 items), efficiency (4 items), perceived ease of use (3 items), perceived usefulness (3 items), satisfaction (3 items), and continued use intention (3 items). All items were measured on a five-point Likert scale, ranging from “strongly disagree” (1) to “strongly agree” (5). The survey included an informed consent section at the beginning, ensuring that participation was entirely voluntary and that respondents had the option to discontinue at any point. The dichotomization of the 5-point Likert scale into high vs. low intention (cut-off at >3) was adopted to align with common practice in behavioral intention modeling, where values above the midpoint reflect meaningful agreement with intent to act (Pellegrino et al., 2024). While this reduces variability, it enhances interpretability and logistic modeling efficiency (Table 2).

Table 2. Structure of the questionnaire applied.
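The dichotomization rule described above (cut-off at > 3 on the 5-point scale) amounts to a single comparison. A minimal pandas sketch with hypothetical responses:

```python
import pandas as pd

# Hypothetical responses on the 1-5 Likert intention item
df = pd.DataFrame({"intention": [1, 2, 3, 4, 5, 4, 3]})

# Cut-off at > 3: "agree"/"strongly agree" (4-5) -> high intention (1);
# the midpoint and below (1-3) -> low intention (0)
df["high_intention"] = (df["intention"] > 3).astype(int)
```

Note that the neutral midpoint (3) is grouped with low intention, so only respondents expressing actual agreement count as high-intention cases.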

The statistical analysis was conducted using Python programming language within the Google Colab environment to ensure transparency, reproducibility, and computational efficiency. In addition to descriptive statistics and exploratory factor analysis, independent samples t-tests were conducted to evaluate differences in key variables (e.g., perceived ease of use, usefulness) across demographic groups such as gender and place of residence. Assumptions of normality and homogeneity of variances were assessed using the Shapiro–Wilk test and Levene’s test, respectively. When these assumptions were violated, non-parametric Mann–Whitney U tests were used. Effect sizes were calculated using Cohen’s d for t-tests and Rosenthal’s r for non-parametric comparisons. Core libraries included pandas and numpy for data preprocessing and descriptive statistics, factor_analyzer for exploratory factor analysis (EFA), scikit-learn for clustering, and statsmodels for regression modeling (Kuroki, 2021).
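The assumption-checking pipeline described above (Shapiro–Wilk, then Levene, then either a t-test with Cohen's d or a Mann–Whitney U test with Rosenthal's r) can be sketched as follows. The group means, sizes, and random seed are hypothetical, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical ease-of-use scores for urban vs. rural students
urban = rng.normal(loc=3.5, scale=0.8, size=145)
rural = rng.normal(loc=3.0, scale=0.8, size=65)
n1, n2 = len(urban), len(rural)

# Normality (Shapiro-Wilk) and homogeneity of variances (Levene)
normal = stats.shapiro(urban).pvalue > 0.05 and stats.shapiro(rural).pvalue > 0.05
equal_var = stats.levene(urban, rural).pvalue > 0.05

if normal and equal_var:
    stat, p = stats.ttest_ind(urban, rural)
    # Cohen's d with pooled standard deviation
    sp = np.sqrt(((n1 - 1) * urban.var(ddof=1) +
                  (n2 - 1) * rural.var(ddof=1)) / (n1 + n2 - 2))
    effect = (urban.mean() - rural.mean()) / sp
else:
    stat, p = stats.mannwhitneyu(urban, rural, alternative="two-sided")
    # Rosenthal's r = |Z| / sqrt(N), from the normal approximation
    z = stats.norm.ppf(p / 2)
    effect = abs(z) / np.sqrt(n1 + n2)
```

Branching on the assumption checks, rather than always running the t-test, is what keeps the reported p-values valid when Likert-derived scores depart from normality.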

Descriptive analyses were first performed to summarize the participants’ sociodemographic characteristics and their responses to each item, including measures of central tendency and dispersion. Subsequently, an Exploratory Factor Analysis (EFA) was conducted using principal axis factoring with varimax rotation to evaluate the construct validity of the perception-based items. The Kaiser-Meyer-Olkin (KMO) test and Bartlett’s test of sphericity were used to assess sampling adequacy and the factorability of the data matrix (Luo et al., 2019).
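The study computed these diagnostics with factor_analyzer; the sketch below derives the same two statistics, KMO sampling adequacy and Bartlett's test of sphericity, from first principles with numpy/scipy on synthetic two-factor data (all loadings and values hypothetical):

```python
import numpy as np
from scipy.stats import chi2

def kmo(data: np.ndarray) -> float:
    """Kaiser-Meyer-Olkin measure of sampling adequacy from raw scores."""
    r = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(r)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                      # partial correlations
    np.fill_diagonal(partial, 0.0)
    r_off = r - np.eye(r.shape[0])          # zero out the diagonal
    return (r_off**2).sum() / ((r_off**2).sum() + (partial**2).sum())

def bartlett_sphericity(data: np.ndarray):
    """Bartlett's test that the correlation matrix is an identity."""
    n, p = data.shape
    r = np.corrcoef(data, rowvar=False)
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(r))
    df = p * (p - 1) / 2
    return stat, chi2.sf(stat, df)

# Synthetic data: 6 items loading on 2 latent factors (hypothetical)
rng = np.random.default_rng(5)
f = rng.normal(size=(210, 2))
loadings = np.array([[1, 0], [1, 0], [1, 0], [0, 1], [0, 1], [0, 1]], float)
X = f @ loadings.T + rng.normal(scale=0.5, size=(210, 6))

kmo_value = kmo(X)
chi2_stat, p_value = bartlett_sphericity(X)
```

A KMO well above 0.6 and a significant Bartlett result, as here and as reported in the study (0.882; p < 0.001), indicate the correlation matrix is factorable.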

To predict the academic intention to use ChatGPT, a binary logistic regression model was employed (Mahmud et al., 2024). The dependent variable was defined by dichotomizing the Likert-based intention item into two categories: low intention (≤ 3) and high intention (> 3). Independent variables included factor scores derived from the EFA (e.g., compatibility, perceived usefulness, ease of use, satisfaction) and sociodemographic variables such as gender, residence, and parental education.
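As an illustrative sketch of this modeling step (the study used statsmodels; the data here are simulated, with coefficients loosely echoing the reported effect sizes rather than reproducing them), the same kind of binary model can be fit and its coefficients converted to odds ratios:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 210  # same sample size as the study; the data themselves are simulated
X = rng.normal(size=(n, 4))     # usefulness, compatibility, ease, satisfaction
true_beta = np.array([0.86, 0.63, 0.34, 0.52])
logits = -0.5 + X @ true_beta
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

# Large C approximates unpenalized maximum likelihood
model = LogisticRegression(C=1e6).fit(X, y)
odds_ratios = np.exp(model.coef_[0])   # exp(beta) -> odds ratios
```

Exponentiating the coefficients is what turns the log-odds scale into the odds ratios reported in the Results section.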

In addition, a k-means clustering analysis was performed using the standardized factor scores to explore latent student profiles based on their perception patterns. The optimal number of clusters was determined through the elbow method and silhouette score. A significance level of p < 0.05 was adopted for all inferential procedures (Kanungo et al., 2002).
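The elbow-and-silhouette procedure for choosing k can be sketched as follows, using hypothetical standardized factor scores for three artificial perception profiles (cluster sizes and seed are illustrative, not the study's):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
# Hypothetical factor scores for three perception profiles
scores = np.vstack([
    rng.normal(loc=(1.0, 1.0), scale=0.3, size=(70, 2)),     # enthusiastic
    rng.normal(loc=(0.0, 0.0), scale=0.3, size=(80, 2)),     # neutral
    rng.normal(loc=(-1.0, -1.0), scale=0.3, size=(60, 2)),   # reluctant
])
X = StandardScaler().fit_transform(scores)

inertias, sils = {}, {}
for k in range(2, 6):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    inertias[k] = km.inertia_                  # inspected for the "elbow"
    sils[k] = silhouette_score(X, km.labels_)

best_k = max(sils, key=sils.get)  # k with the highest silhouette
```

The elbow in the inertia curve and the silhouette maximum usually agree on well-separated data; when they disagree, the silhouette is the more interpretable criterion.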

2.4 Findings

A total of 210 valid responses were analyzed from university students in Ecuador. The majority of respondents were female (75%) and resided in urban areas (69%). Participants represented seven universities and various academic disciplines.

2.5 Univariate analysis

Descriptive statistics indicated generally positive student perceptions of ChatGPT. The item “ChatGPT improves the quality of my results” received a mean score of 3.89 (SD = 0.92), while “I plan to continue using ChatGPT” had a mean of 3.73 (SD = 0.98). Conversely, the lowest score was observed in the item “I find it easy to make ChatGPT do what I want” (mean = 3.23), suggesting potential usability challenges. Overall, the Likert-scale responses displayed consistency and internal reliability across constructs such as perceived usefulness, compatibility, ease of use, and satisfaction.

2.6 Bivariate analysis

A correlation matrix revealed strong associations between items related to perceived usefulness, satisfaction, and continued use intention. For instance, “ChatGPT improves the quality of my results” correlated strongly with “I am satisfied with ChatGPT’s performance” (r = 0.68) and “I intend to recommend ChatGPT” (r = 0.71). A moderate positive correlation was observed between ease of use and compatibility constructs.

Independent samples t-tests were conducted to compare perception scores across grouping variables. Urban students reported significantly higher perceived ease of use (t = 2.32, p = 0.021, Cohen’s d = 0.42) compared to rural students. Although female students scored slightly higher in perceived usefulness and satisfaction, these differences were not statistically significant (p > 0.05, d < 0.2). All t-tests were conducted after confirming assumptions of normality and homogeneity of variances; in cases where assumptions were violated, Mann–Whitney U tests were applied.

2.7 Logistic regression analysis

To identify predictors of high academic intention to use ChatGPT (defined as Likert score > 3), a binary logistic regression model was estimated. Four perception variables were included as predictors:

“ChatGPT improves the quality of my results” (β = 0.861, OR = 2.37).

“ChatGPT fits my learning style” (β = 0.627, OR = 1.87).

“I find it easy to use ChatGPT” (β = 0.342, OR = 1.41).

“I am satisfied with ChatGPT’s performance” (β = 0.516, OR = 1.68).

The intercept was −2.013. The model was statistically significant (χ2 = 48.23, df = 4, p < 0.001) and explained 39.5% of the variance (Nagelkerke R2). The two strongest predictors were perceived usefulness and compatibility.

log(P(Y = 1) / (1 - P(Y = 1))) = -2.013 + 0.861·X1 + 0.627·X2 + 0.342·X3 + 0.516·X4

The logistic regression model predicts the probability P(Y = 1), where Y = 1 denotes a high academic use intention of ChatGPT. The predictor variables included are: X1, representing the perception that “ChatGPT improves the quality of my results”; X2, “ChatGPT fits my learning style”; X3, “I find it easy to use ChatGPT”; and X4, “I am satisfied with ChatGPT’s performance.” The model coefficients (β) are expressed in log-odds, and their exponentials correspond to odds ratios. Specifically, a one-unit increase in X1 is associated with a 2.37-fold increase in the odds of high use intention. Similarly, the odds increase by 1.87 for X2, 1.41 for X3, and 1.68 for X4, indicating that perceived usefulness and compatibility are the strongest predictors of continued academic use of ChatGPT.
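Since the reported odds ratios are simply exp(β) of the listed coefficients, the conversion can be checked directly:

```python
import math

# Coefficients as reported for the four predictors (log-odds scale)
betas = {
    "usefulness (X1)": 0.861,
    "compatibility (X2)": 0.627,
    "ease of use (X3)": 0.342,
    "satisfaction (X4)": 0.516,
}
# exp(beta) converts each log-odds coefficient into an odds ratio
odds_ratios = {name: round(math.exp(b), 2) for name, b in betas.items()}
```

Each value matches the odds ratios in the text: exp(0.861) ≈ 2.37, exp(0.627) ≈ 1.87, exp(0.342) ≈ 1.41, and exp(0.516) ≈ 1.68.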

Logistic regression results showed the following: “ChatGPT improves the quality of my results” (β = 0.864, OR = 2.37, CI: 1.45–3.89, p < 0.001), “ChatGPT fits my learning style” (β = 0.627, OR = 1.87, CI: 1.15–3.04, p = 0.011), “I find it easy to use ChatGPT” (β = 0.576, OR = 1.78, CI: 1.02–3.11, p = 0.041), and “I am satisfied with ChatGPT’s performance” (β = 0.499, OR = 1.65, CI: 1.04–2.61, p = 0.036).

Model fit was acceptable: Nagelkerke R² = 0.395; Hosmer–Lemeshow test p = 0.423 (Table 3).

Table 3. Predictors of high academic use intention of ChatGPT: results from logistic regression analysis.

2.8 Cluster analysis

K-means clustering (k = 3) was conducted using standardized scores of all Likert items. A silhouette score of 0.52 indicated moderate cluster separation. The resulting profiles were:

• Cluster 0: Enthusiastic adopters (42%)—high across all dimensions.

• Cluster 1: Neutral users—moderate scores on compatibility and usefulness.

• Cluster 2: Reluctant adopters—low satisfaction and intention scores.

Principal Component Analysis (PCA) was used to visualize the clusters, confirming clear separations based on students’ perceptions of ChatGPT. Despite the moderate silhouette score (0.52), cluster interpretability was supported by significant differences in external variables such as satisfaction and usage frequency. Stability was further tested by rerunning k-means on 80% bootstrap samples, yielding consistent grouping patterns (Figure 1).

Figure 1. PCA visualization of student clusters based on perceptions of ChatGPT (k-means clustering, k = 3).
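The visualization-and-stability procedure above can be sketched as follows on synthetic, well-separated data. The 80% resampling here is implemented as subsampling without replacement, one reasonable reading of the bootstrap check described in the text; the data and dimensionality are hypothetical:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(3)
# Hypothetical 5-dimensional factor scores for three separated profiles
X = np.vstack([rng.normal(m, 0.4, size=(70, 5)) for m in (-1.0, 0.0, 1.0)])

# Two-component PCA projection used for plotting the clusters
coords = PCA(n_components=2).fit_transform(X)
full_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Stability check: refit on 80% subsamples and compare labelings (the
# adjusted Rand index is invariant to label permutation, so matching
# clusters need not share numeric ids across runs)
aris = []
for _ in range(20):
    idx = rng.choice(len(X), size=int(0.8 * len(X)), replace=False)
    sub = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X[idx])
    aris.append(adjusted_rand_score(full_labels[idx], sub))
mean_ari = float(np.mean(aris))
```

A mean adjusted Rand index near 1 across resamples indicates the cluster assignments are stable rather than artifacts of a particular sample.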

3 Discussion and conclusion

Limitations include the cross-sectional nature of the study, reliance on self-reported measures, and dichotomization of the dependent variable. These factors may constrain generalizability and granularity of behavioral insights.

Nonetheless, the findings offer actionable guidance for higher education institutions. We recommend integrating training programs to promote inclusive, critical, and ethical use of AI tools like ChatGPT. Particularly in Latin American contexts with limited digital infrastructure, tailored strategies could foster equitable technology adoption. Additionally, the exclusive reliance on self-report data introduces potential common method variance (CMV). A Harman’s single-factor test was conducted, indicating that no single factor accounted for the majority of variance, reducing concerns about CMV.

This study aimed to explore the predictive value of sociodemographic variables and students’ perceptions of usability in explaining their academic use intention of ChatGPT in Ecuadorian universities. The findings align with existing theoretical frameworks such as the Technology Acceptance Model (TAM) and its extensions, affirming the central role of perceived usefulness and compatibility with learning styles in shaping behavioral intention toward educational technologies (Al-Mamary, 2025; Mahmud et al., 2024).

The logistic regression model showed that perceived usefulness and compatibility were the strongest predictors of high academic use intention. Students who believed that ChatGPT improved the quality of their academic outputs and fit their preferred learning style were significantly more likely to express continued use intention (Ngo et al., 2024). This result supports the findings of Zhang et al. (2023), who reported that perceived usefulness was the most influential factor in students’ adoption of AI writing tools within higher education settings (Zou et al., 2023). Similarly, Yu et al. (2024) demonstrated that both perceived ease of use and compatibility were strong drivers of ChatGPT acceptance among university students when applying an extended TAM framework (Zhao et al., 2024).

In our study, satisfaction and ease of use also emerged as relevant, although less potent, predictors. This suggests that while technical usability contributes to adoption, it is the educational alignment and perceived academic benefit that most strongly motivate students to continue using ChatGPT. These findings echo the conclusions of Mhlanga (2023), who emphasized that students’ perceived academic value of generative AI outweighed concerns about complexity or learning curves (Mhlanga, 2023).

Sociodemographic factors such as gender and residence showed only limited explanatory power. Although urban students reported slightly higher scores in ease of use, these variables were not significant predictors in the regression model. This is consistent with López-Vázquez-Cano et al. (2023), who found that in Latin American contexts, personal attitudes and access to digital environments were more relevant than structural variables like gender or socioeconomic status in determining engagement with ChatGPT (Mena-Guacas et al., 2025). These findings suggest that digital familiarity may be more influential than demographic background in shaping students’ relationships with AI tools.

Cluster analysis provided further insight by segmenting students into three distinct profiles based on their perception scores: enthusiastic adopters, neutral users, and reluctant adopters. This segmentation aligns with the typologies proposed by Cotton et al. (2024), who identified similar categories in their study of students’ use of ChatGPT for academic tasks in the UK (Cotton et al., 2024). Enthusiastic adopters in our study reported high levels of perceived usefulness, satisfaction, and intention, suggesting strong alignment between ChatGPT’s capabilities and their academic needs (Ma et al., 2025).

However, the presence of a reluctant user profile underscores the importance of addressing barriers to adoption. These students may have concerns about ethical implications, information accuracy, or a lack of perceived relevance to their discipline. As pointed out by Mhlanga (2023), fostering responsible and equitable AI use requires institutional support, transparent policies, and training to ensure that students understand both the benefits and limitations of such tools (Mhlanga, 2023).

Our study contributes to a growing body of literature by contextualizing ChatGPT use in a Spanish-speaking, Latin American setting—an area underrepresented in current AI-in-education research. While most prior studies have focused on English-speaking or high-resource contexts, our findings suggest that the motivational drivers of adoption are consistent across cultural boundaries, though implementation strategies may need localization.

This study is not without limitations. The cross-sectional design restricts causal inference, and self-reported data may be influenced by social desirability bias. Moreover, the study focused solely on students, without capturing the perspectives of faculty or institutional policymakers. Future research should incorporate longitudinal designs, triangulate student responses with academic performance data, and explore cross-cultural comparisons in greater depth.

Data availability statement

The data are anonymized and can be released upon request to the lead author.

Ethics statement

Ethical approval was not required for the study involving humans in accordance with the local legislation and institutional requirements. Informed consent was obtained electronically prior to survey completion. Written informed consent to participate in this study was not required from the participants or the participants' legal guardians/next of kin in accordance with the national legislation and the institutional requirements.

Author contributions

EC: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. DD: Data curation, Investigation, Methodology, Software, Writing – original draft. CC: Data curation, Software, Supervision, Validation, Writing – original draft, Writing – review & editing. FC: Resources, Writing – review & editing.

Funding

The author(s) declare that financial support was received for the research and/or publication of this article. This work was supported by Milagro State University for the publication costs.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The authors declare that Gen AI was used in the creation of this manuscript. The author(s) verify and take full responsibility for the use of generative AI in the preparation of this manuscript. Generative AI (specifically, ChatGPT by OpenAI) was used to assist in the organization of ideas, language refinement, and formatting of academic sections such as the abstract, introduction, and discussion. All outputs generated by the AI were critically reviewed, edited, and validated by the authors to ensure accuracy, originality, and compliance with ethical standards in scientific writing.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2025.1649034/full#supplementary-material

Supplementary file 1 | Survey questions.

References

Al-Mamary, Y. H. (2025). A comprehensive model for AI adoption: analysing key characteristics affecting user attitudes, intentions and use of ChatGPT in education. Hum. Syst. Manag. :01672533251340523. doi: 10.1177/01672533251340523

Buele, J., Sabando-García, Á. R., Sabando-García, B. J., and Yánez-Rueda, H. (2025). Ethical use of generative artificial intelligence among Ecuadorian university students. Sustainability 17:4435. doi: 10.3390/su17104435

Chellappa, V., and Luximon, Y. (2024). Understanding the perception of design students towards ChatGPT. Comput. Educ. Artif. Int. 7:100281. doi: 10.1016/j.caeai.2024.100281

Ciampa, K., Wolfe, Z. M., and Bronstein, B. (2023). ChatGPT in education: transforming digital literacy practices. J. Adolesc. Adult. Lit. 67, 186–195. doi: 10.1002/jaal.1310

Cotton, D. R. E., Cotton, P. A., and Shipway, J. R. (2024). Chatting and cheating: ensuring academic integrity in the era of ChatGPT. Innov. Educ. Teach. Int. 61, 228–239. doi: 10.1080/14703297.2023.2190148

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly. 13, 319–340. doi: 10.2307/249008

de Blanes, G., Sebastián, M., Sarmiento Guede, J. R., Azuara Grande, A., and Filipe, A. F. (2025). UTAUT-2 predictors and satisfaction: implications for mobile-learning adoption among university students. Educ. Inf. Technol. 30, 3201–3237. doi: 10.1007/s10639-024-12927-1

Garcia, M. B. (2025). ChatGPT as an academic writing tool: factors influencing researchers’ intention to write manuscripts using generative artificial intelligence. Int. J. Hum.-Comput. Interact. 2, 1–15. doi: 10.1080/10447318.2025.2499158

George, B., and Wooden, O. (2023). Managing the strategic transformation of higher education through artificial intelligence. Admin. Sci. 13:196. doi: 10.3390/admsci13090196

Granić, A., and Marangunić, N. (2019). Technology acceptance model in educational context: a systematic literature review. Br. J. Educ. Technol. 50, 2572–2593. doi: 10.1111/bjet.12864

Han, P., Liu, S., Zhang, D., Li, X., and Li, X. (2025). Research on the factors affecting the adoption of health short videos by the college students in China: unification based on TAM and UTAUT model. Front. Psychol. 16:1547402. doi: 10.3389/fpsyg.2025.1547402

Jo, H. (2024). From concerns to benefits: a comprehensive study of ChatGPT usage in education. Int. J. Educ. Technol. High. Educ. 21:35. doi: 10.1186/s41239-024-00471-4

Kanungo, T., Mount, D. M., Netanyahu, N. S., Piatko, C. D., Silverman, R., and Wu, A. Y. (2002). An efficient k-means clustering algorithm: analysis and implementation. IEEE Trans. Pattern Anal. Mach. Intell. 24, 881–892. doi: 10.1109/TPAMI.2002.1017616

Kuleto, V., Ilić, M., Dumangiu, M., Ranković, M., Martins, O. M., Păun, D., et al. (2021). Exploring opportunities and challenges of artificial intelligence and machine learning in higher education institutions. Sustainability 13:10424. doi: 10.3390/su131810424

Kuroki, M. (2021). Using Python and Google Colab to teach undergraduate microeconomic theory. Int. Rev. Econ. Educ. 38:100225. doi: 10.1016/j.iree.2021.100225

Lau, J., Tammaro, A. M., Miltenoff, P., Begum, D., Zanichelli, F., Orru, D., et al. (2024). Artificial intelligence/ChatGPT-like: perceptions and use among library and information professionals from Italy, Bangladesh, Bulgaria, and Mexico. Int. Inf. Libr. Rev. 56, 404–418. doi: 10.1080/10572317.2024.2413773

Luo, L., Arizmendi, C., and Gates, K. M. (2019). Exploratory factor analysis (EFA) programs in R. Struct. Equ. Model. Multidiscip. J. 26, 819–826. doi: 10.1080/10705511.2019.1615835

Ma, J., Wang, P., Li, B., Wang, T., Pang, X. S., and Wang, D. (2025). Exploring user adoption of ChatGPT: a technology acceptance model perspective. Int. J. Hum.-Comput. Interact. 41, 1431–1445. doi: 10.1080/10447318.2024.2314358

Mahmud, A., Sarower, A. H., Sohel, A., Assaduzzaman, M., and Bhuiyan, T. (2024). Adoption of ChatGPT by university students for academic purposes: partial least square, artificial neural network, deep neural network and classification algorithms approach. Array 21:100339. doi: 10.1016/j.array.2024.100339

Majeed, A., and Rasheed, A. (2025). Environmental responsibility, environmental concerns, and green banking adoption in Pakistan: using the unified theory of acceptance and use of technology. Hum. Behav. Emerg. Technol. 2025:7268813. doi: 10.1155/hbe2/7268813

Mena-Guacas, A. F., López-Catalán, L., Bernal-Bravo, C., and Ballesteros-Regaña, C. (2025). Educational transformation through emerging technologies: critical review of scientific impact on learning. Educ. Sci. 15:368. doi: 10.3390/educsci15030368

Mhlanga, D. (2023). “Open AI in education, the responsible and ethical use of ChatGPT towards lifelong learning” in FinTech and artificial intelligence for sustainable development. ed. D. Mhlanga (Berlin: Springer Nature Switzerland), 387–409.

Ngo, T. T. A., Tran, T. T., An, G. K., and Nguyen, P. T. (2024). ChatGPT for educational purposes: investigating the impact of knowledge management factors on student satisfaction and continuous usage. IEEE Trans. Learn. Technol. 17, 1341–1352. doi: 10.1109/TLT.2024.3383773

Pellegrino, F., Falagario, U. G., Proietti, F., Brasetti, A., Hagman, A., Briganti, A., et al. (2024). PD45-06 accuracy of the MRI 5-point Likert score to predict extra-prostatic extension and seminal vesicle invasion in patients undergoing radical prostatectomy for prostate cancer and development of a new MRI-based nomogram. J. Urol. 211:e969. doi: 10.1097/01.JU.0001008792.09108.b4.06

Rudhumbu, N. (2022). Applying the UTAUT2 to predict the acceptance of blended learning by university students. Asian Assoc. Open Univ. J. 17, 15–36. doi: 10.1108/AAOUJ-08-2021-0084

Vázquez-Cano, E., Ramírez-Hurtado, J. M., Sáez-López, J. M., and López-Meneses, E. (2023). ChatGPT: the brightest student in the class. Think. Skills Creat. 49:101380. doi: 10.1016/j.tsc.2023.101380

Venkatesh, V., Morris, M. G., Davis, G. B., and Davis, F. D. (2003). User acceptance of information technology: toward a unified view. MIS Q. 27, 425–478. doi: 10.2307/30036540

Williams, M. D., Rana, N. P., and Dwivedi, Y. K. (2015). The unified theory of acceptance and use of technology (UTAUT): a literature review. J. Enterp. Inf. Manag. 28, 443–488. doi: 10.1108/JEIM-09-2014-0088

Yu, C., Yan, J., and Cai, N. (2024). ChatGPT in higher education: factors influencing ChatGPT user satisfaction and continued use intention. Front. Educ. 9:1354929. doi: 10.3389/feduc.2024.1354929

Zhang, H., Liu, X., and Zhang, J. (2023). Extractive summarization via ChatGPT for faithful summary generation. arXiv [Preprint]. arXiv:2304.04193. doi: 10.48550/arXiv.2304.04193

Zhao, Y., Li, Y., Xiao, Y., Chang, H., and Liu, B. (2024). Factors influencing the acceptance of ChatGPT in high education: an integrated model with PLS-SEM and fsQCA approach. SAGE Open 14:21582440241289835. doi: 10.1177/21582440241289835

Zou, B., Lyu, Q., Han, Y., Li, Z., and Zhang, W. (2023). Exploring students' acceptance of an artificial intelligence speech evaluation program for EFL speaking practice: an application of the integrated model of technology acceptance. Comput. Assist. Lang. Learn. 22, 1–26. doi: 10.1080/09588221.2023.2278608

Keywords: ChatGPT, higher education, technology acceptance, usability, university students

Citation: Caluña ERM, Diaz DJC, Caluña CIM and Capelo FXA (2025) Sociodemographic predictors and usability perceptions explaining academic use intention of ChatGPT among university students in Ecuador. Front. Educ. 10:1649034. doi: 10.3389/feduc.2025.1649034

Received: 18 June 2025; Accepted: 21 July 2025;
Published: 19 August 2025.

Edited by:

Ramon Ventura Roque Hernández, Universidad Autónoma de Tamaulipas, Mexico

Reviewed by:

Ahmed-Nor Mohamed Abdi, SIMAD University, Somalia
Sukirman Sukirman, Muhammadiyah University of Surakarta, Indonesia

Copyright © 2025 Caluña, Diaz, Caluña and Capelo. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Edgar Rolando Morales Caluña, emoralesc4@unemi.edu.ec

ORCID: Edgar Rolando Morales Caluña, orcid.org/0000-0001-9545-1282
Dario Javier Cervantes Diaz, orcid.org/0009-0003-2791-4005
Cristian Ismael Morales Caluña, orcid.org/0000-0002-1480-5892
Fernando Xavier Altamirano Capelo, orcid.org/0000-0001-5138-6538

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.