- Faculty of Education, University of Málaga, Málaga, Spain
Technological development is reshaping teaching and learning environments, particularly through the integration of artificial intelligence (AI) tools such as ChatGPT in higher education. The objective of this study was to analyze university students’ intention to use ChatGPT, considering the gender variable. A quantitative methodology was employed, using an ad hoc questionnaire administered to a sample of 368 students from the Faculty of Education Sciences at the University of Málaga. Data analysis was conducted using the Jamovi software v2.3.26.0, applying techniques such as descriptive statistics, hypothesis testing, exploratory and confirmatory factor analysis, and ANOVA. The results showed widespread acceptance of ChatGPT, with no significant gender differences, suggesting a homogeneous adoption of this technology. These findings contrast with previous research indicating a gender gap in technology use. The importance of integrating AI into educational processes in a critical and pedagogically sound manner is emphasized, as well as the need to ensure equitable access. This study contributes to the understanding of how ChatGPT is perceived and used in academic contexts at the university level. It is recommended that the study be replicated in other contexts to validate the findings and broaden their applicability.
1 Introduction
The digital revolution has permeated every aspect of contemporary society, transforming the way we communicate, work, and learn. In the educational sphere, emerging technologies have significantly changed how students interact with information and generate knowledge. Among these innovations, natural language processing (NLP) systems have taken on a prominent role. In particular, the development and popularization of language models such as GPT-3.5 have opened new possibilities in the academic field (Chiu, 2023; Javaid et al., 2023). This article contributes knowledge about the use of ChatGPT by university students in their academic work, analyzing the intention behind its use and its implications for the learning process and knowledge production. Specifically, we focus on students’ intention to use ChatGPT for university assignments and coursework-related academic tasks.
The adoption of ChatGPT in higher education contexts is not a passing trend, but rather a reflection of how teaching and learning are adapting to the demands of the digital age (Pradana et al., 2023). The ability of natural language systems to generate coherent and contextually relevant text has sparked interest in their application across various educational and training contexts (Caldarini et al., 2022; Rospigliosi, 2023). Moreover, the flexibility of these tools to adapt to different disciplines and fields of knowledge has encouraged their use in a wide range of areas (Kang et al., 2020).
The study of the intention behind the use of ChatGPT in the university context is essential to understand the motives and goals that drive students to incorporate this technology into their academic production processes. Chassignol et al. (2018) argue that students find NLP systems a valuable tool to improve the clarity, coherence, and fluency of their writing. They also have the opportunity to receive suggestions and immediate guidance during the drafting process. The impact of using ChatGPT on the academic work of university students goes beyond simply improving the quality of their writing. By integrating these tools into their workflow, students become immersed in a learning environment that fosters critical reflection and the development of advanced writing skills (Zhu et al., 2023). Moreover, interaction with NLP systems can enhance creativity and originality in the formulation of ideas and arguments, giving students the opportunity to explore new forms of expression (Bozkurt et al., 2023).
Despite its potential benefits, the use of ChatGPT in academic contexts raises a series of challenges and ethical considerations that should not be overlooked (Cooper, 2023). Excessive reliance on these tools could compromise the development of students’ independent writing skills and their capacity for critical thinking (Strzelecki, 2023). The introduction of ChatGPT into education not only transforms the production of academic work but also raises important questions about ethics in academic writing. Some students, encouraged by the convenience this tool offers, may develop a dependency on it, potentially compromising the authenticity and originality of their contributions (Susnjak, 2022). This concern is amplified by the possibility that the improper use of ChatGPT could lead to unethical practices such as plagiarism or misattribution of authorship, thereby challenging the very foundations of academic integrity (Cotton et al., 2023).
To address and mitigate these risks, it is essential to understand students’ perspectives on the use of such tools. This understanding can help establish preventive measures and promote a culture of academic integrity that encourages responsible use of these technologies (Chaudhry et al., 2023). However, various studies have shown that individuals’ perspectives on technology use may differ depending on personal variables (Joo et al., 2018; Lin and Yu, 2023; Razmak and Bélanger, 2018).
Our research examines the intention to use ChatGPT among university students based on the personal variable of gender. In a similar line of inquiry, Romero-Rodríguez et al. (2023) found no evidence of a correlation between gender and students’ use of ChatGPT. Nevertheless, several studies highlight the importance of analyzing how gender identities influence the way students interact with and use technological tools in their learning processes and knowledge production (Aesaert and Van Braak, 2015; Siddiq and Scherer, 2019; Vela-Acero and Jiménez-Cortés, 2022). Recent research has increasingly examined the implications of ChatGPT for university assignments, particularly regarding academic writing, assessment integrity, and learning outcomes. Meishar-Tal (2024) highlights the pedagogical tensions introduced by generative AI in writing tasks, emphasizing both its potential to support learning and the challenges it poses for assessment design and authorship. Shahsavar et al. (2024) analyze the use of ChatGPT as a writing assistant in medical education, showing that while AI tools can enhance writing efficiency and confidence, their benefits are unevenly distributed depending on students’ prior skills and access to guidance. Together, these studies reinforce the need to examine not only students’ acceptance of ChatGPT, but also their intentions for using it in university assignments, as well as the pedagogical and ethical implications of its integration into academic work.
Considering the gender variable in the use of ChatGPT is essential to address the diversity of perspectives and experiences that university students bring to their educational process. Gender differences can influence how students engage with technology—from the frequency of use to the nature of the queries they make (Jangjarat et al., 2023; Tsai and Tsai, 2010). An approach that takes these differences into account not only enriches the understanding of how students make use of technological tools but also provides a framework for designing more effective educational interventions tailored to individual needs (Tømte and Hatlevik, 2011).
The possibilities and risks that ChatGPT brings to higher education represent both a major challenge and an opportunity to encourage a more thoughtful and ethical use of technology in the learning process. By fostering in students an understanding of the ethical implications and best practices for using ChatGPT, academic integrity is promoted while also cultivating the development of critical and creative skills essential for long-term learning (Crawford et al., 2023).
The implementation of technologies such as ChatGPT in educational contexts opens a crucial dialogue on ethics and responsibility in academic production. By understanding students’ true intentions in using these tools, it becomes more feasible to address dishonest use and promote a culture of integrity. In this way, we can make the most of these tools’ potential to enrich the educational process and prepare students for a digital and ethical future (Perkins, 2023). From a broader theoretical perspective, university students’ intention to use ChatGPT can be framed within well-established technology adoption models, such as the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT). These frameworks emphasize factors such as perceived usefulness, perceived ease of use, and behavioral intention as key determinants of technology adoption. In this sense, the high levels of acceptance observed in this study are consistent with these models and provide a complementary theoretical lens for interpreting students’ use of generative AI tools in higher education. Recent research has also highlighted the pedagogical potential of artificial intelligence in higher education contexts, particularly when combined with active methodologies such as project-based learning, emphasizing its role in personalization, feedback, and teaching–learning processes (Ruiz Viruel et al., 2025).
Based on the theoretical background presented above, the present study addresses the following research question:
RQ: Are there statistically significant differences in university students’ intention to use ChatGPT for academic work and university assignments according to gender?
2 Materials and methods
The study was conducted with a sample of 368 students from the Faculty of Education Sciences at the University of Málaga. The participants were selected from first-year Primary Education programs (both the regular and double-degree tracks) and second-year Pedagogy programs. This selection made it possible to encompass a representative spectrum of future educators and professionals in the educational field.
The choice of the sample was based on several key criteria. First, students in Primary Education and Pedagogy represent a significant group within the educational context, as they are future teachers and educators. Their familiarity with technological tools and their openness to new teaching methodologies make them an ideal group for exploring the adoption and application of emerging technologies such as ChatGPT. Furthermore, by selecting students from both first and second years, the study provides a broader perspective on attitudes and behaviors toward digital technologies during the early stages of university education.
This sample is representative of a young, digitally native population in the process of professional training, whose perceptions and experiences can provide valuable insights into the intention to use and implications of ChatGPT in academic contexts. Moreover, by focusing on a single educational institution, external variables such as curricular differences or variations in technological infrastructure between institutions were controlled by ensuring that all participants were enrolled in comparable degree programs within the same faculty, followed the same institutional curriculum, and had access to identical technological resources, digital platforms, and institutional policies regarding the use of educational technologies and artificial intelligence tools.
For data collection, an ad hoc questionnaire was designed and distributed to participants via Google Forms. The questionnaire included a series of questions aimed at assessing the intention to use ChatGPT in academic production. It contained items related both to the participants’ demographic profile and to three main areas of information: quality of the information obtained, satisfaction and intention to use, and reasons for engaging in plagiarism. The questions within these sections used a seven-point Likert scale ranging from “strongly disagree” (1) to “strongly agree” (7):
• Strongly disagree (1): “I completely reject the idea that using ChatGPT is beneficial or appropriate for my university studies.”
• Disagree (2): “I generally do not agree that ChatGPT contributes positively to my academic work.”
• Slightly disagree (3): “I tend to think that the use of ChatGPT does not clearly improve my academic tasks.”
• Neutral (4): “I do not have a clear opinion about the usefulness of ChatGPT in my university assignments.”
• Slightly agree (5): “I believe that ChatGPT can be useful for some aspects of my academic work.”
• Agree (6): “I largely agree that ChatGPT contributes positively to my academic work.”
• Strongly agree (7): “I am fully convinced that ChatGPT is a valuable tool for my university studies.”
The questionnaire was specifically developed for this study. Due to ethical and data protection considerations, the full instrument is available from the corresponding author upon reasonable request.
The ad hoc questionnaire designed for this study was chosen for its flexibility to adapt to the specific needs of the research. By focusing on questions related to the quality of the information obtained, satisfaction and intention to use, and reasons for engaging in plagiarism, we were able to gather data directly aligned with the objectives of our study.
The use of a seven-point Likert scale allowed for the capture of subtle nuances in students’ responses, providing a more detailed measure of their attitudes and perceptions. This type of scale is widely used in educational and psychosocial research due to its effectiveness in measuring opinions and attitudes.
Regarding the reliability analysis of our questionnaire, given the three-factor approach, McDonald’s omega was employed. Our questionnaire yielded an index of ω = 0.966, indicating excellent reliability. This high level of internal consistency suggests that the items reliably measure the same construct. Furthermore, the item reliability statistics show that removing any of the analyzed items does not significantly alter the omega value, suggesting that each item makes a meaningful contribution to the scale’s reliability and that there are no items that require further evaluation.
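For readers who wish to reproduce the reliability estimate, McDonald’s omega can be computed from a factor model’s standardized loadings as ω = (Σλ)² / ((Σλ)² + Σ(1 − λ²)). The sketch below illustrates the calculation; the loadings shown are hypothetical placeholders, not the questionnaire’s actual estimates.

```python
# Minimal sketch of McDonald's omega from standardized factor loadings.
# Assumes a congeneric (single-factor) model, where each item's
# uniqueness is 1 - loading^2. The example loadings are illustrative.

def mcdonalds_omega(loadings):
    """omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses)."""
    total = sum(loadings)
    uniqueness = sum(1 - l ** 2 for l in loadings)
    return total ** 2 / (total ** 2 + uniqueness)

example_loadings = [0.82, 0.79, 0.85, 0.80, 0.77]  # hypothetical values
print(round(mcdonalds_omega(example_loadings), 3))
```

In practice, software such as Jamovi estimates the loadings from the data before applying this formula; the value of ω = 0.966 reported above corresponds to the full set of questionnaire items.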
The administration of the questionnaire through Google Forms was chosen for its accessibility and ease of use, both for the researchers and the participants. The anonymous and electronic nature of the questionnaire also encouraged greater honesty in the responses by reducing participants’ potential hesitation to share personal opinions or experiences. The questionnaire was administered between March and May 2023. It was distributed electronically via Google Forms. Participants were informed about the anonymity of their responses and the handling of their data solely for research purposes.
For the statistical analysis of the collected data, the software Jamovi v2.3.26.0 was used—a tool based on the R language with a graphical interface. First, a descriptive analysis of all responses was conducted to gain a general understanding of the sample. Subsequently, a hypothesis test was carried out to compare whether there were differences between the responses of participants who had previously used Generative Artificial Intelligence (GenAI) and those who had not.
To identify and confirm the presence of our areas of interest measured by the questionnaire, an Exploratory Factor Analysis (EFA) and a Confirmatory Factor Analysis (CFA) were performed. An Analysis of Variance (ANOVA) was also conducted to examine differences in the intention to use ChatGPT between those who had previously used Artificial Intelligence and those who had not.
Additionally, the reliability and validity of the instrument used in the study were assessed to ensure the consistency and accuracy of the measurements. In all analyses, a significance level (alpha) of 0.05 was applied, consistent with standard practices in educational research.
3 Results
The questionnaire was completed by a total of 368 students from the Faculty of Education Sciences at the University of Málaga. The sample had an average age of 20 years (SD = 4.06) and an average EBAU exam score of 11.0 (SD = 1.62). Among the participants, 290 (78.8%) were women and 78 (21.2%) were men (Table 1). Of the total sample, only 39 (10.6%) had previous experience using some type of GenAI (Table 2).
Regarding the questionnaire response scores, most modes ranged between values 5 and 7, corresponding to agreement with the statements presented—except for the statement concerning the “intention to use GenAI to complete tasks without supervision or modification of its response,” and those referring to reasons for plagiarism, whose mode was 1 (strongly disagree).
Since our data did not meet parametric assumptions, a Mann–Whitney U test was conducted to identify significant differences between responses to the various questions, depending on whether participants had previously used GenAI or not. It was hypothesized that prior experience with GenAI could yield different results compared to those without such experience (Table 3). In the question regarding whether students believed that ChatGPT’s responses included all the requested information (A1), significant differences were observed between groups (U = 5,133, p = 0.034), suggesting that students who had previously used the tool perceived the information provided differently. However, the remaining questions related to the perceived quality of information (A2 and A3) did not show any group differences.
Concerning satisfaction and intention to use, the question related to satisfaction with the text generated by ChatGPT (E4) stood out for showing significant differences between groups (p = 0.027), which was not the case for the other items. For questions related to reasons for use and plagiarism, significant differences were found (U = 4,989, p = 0.02) only in the item stating that “it can be concluded that the text provided by ChatGPT is reliable once it has been reviewed in class” (D2).
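The Mann–Whitney U statistic underlying these comparisons is the rank-sum of one group minus its minimum possible value. As a minimal sketch, the following computes U for two groups of Likert responses, using average ranks to handle ties; the sample data are invented for illustration and are not the study’s responses.

```python
# Hedged sketch of the Mann-Whitney U statistic used to compare Likert-scale
# responses between prior GenAI users and non-users. Data are hypothetical.

def mann_whitney_u(group_a, group_b):
    """Return U for group_a, assigning tied observations their average rank."""
    pooled = sorted(group_a + group_b)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        # ranks i+1 .. j (1-based) share the same value; use their mean
        ranks[pooled[i]] = (i + 1 + j) / 2
        i = j
    rank_sum_a = sum(ranks[x] for x in group_a)
    n_a = len(group_a)
    return rank_sum_a - n_a * (n_a + 1) / 2

users = [6, 7, 5, 6, 7]      # hypothetical scores, prior GenAI users
non_users = [4, 5, 3, 6, 4]  # hypothetical scores, non-users
print(mann_whitney_u(users, non_users))
```

A useful sanity check is that the two groups’ U values always sum to n₁·n₂; statistical packages such as Jamovi additionally derive the p-value from this statistic.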
An Exploratory Factor Analysis (EFA) was conducted to compare the theoretical approach with the data collected through the questionnaire. Three distinct factors were established to identify the variables associated with each, using the maximum likelihood method with Varimax rotation. The first factor showed consistently high loadings on variables (A1) to (E5), suggesting a dimension related to the perception of quality and satisfaction with AI-generated texts. Variables (F1) to (J4) displayed more varied loadings within the same factor, possibly indicating different dimensions within this construct, related to the intention to use and attitudes toward plagiarism (Table 4).
It is important to note the negative correlation of variables (I1) to (I6) with the rest of the scale, which may indicate that these items measure an aspect inversely related to the other variables. This information will be used later for the calculation of the scale’s reliability index. Regarding the fit of this model (Table 5), both the RMSEA (Root Mean Square Error of Approximation) and the TLI (Tucker–Lewis Index) show slight deviations from optimal thresholds. This suggests that the model has an acceptable fit for the questionnaire data but could still be optimized. As for the Chi-square statistics, the model is statistically significant (p < 0.001).
In the Confirmatory Factor Analysis (Table 6), all the variables studied showed significant factor loadings within their respective factors, demonstrating a strong relationship between each item and its corresponding factor. Regarding the factor covariances (Table 7), their statistical significance suggests that the factors are related but distinct from one another. Finally, the Chi-square model fit test (Table 8) was significant (χ2 = 4,762, p < 0.001). This result is common in large samples but reaffirms that the model could be further optimized, as confirmed by the RMSEA index (0.118, well above the ideal threshold of 0.08) and the CFI and TLI indices (both below the desirable threshold of 0.9) (Table 9).
In addition to RMSEA, CFI, and TLI, the Standardized Root Mean Square Residual (SRMR) was examined as a complementary fit index. This index was considered in the overall evaluation of model fit, which suggests that the measurement model presents a reasonable fit for an exploratory instrument, despite the limitations observed in other indices.
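The RMSEA values discussed above follow directly from the chi-square statistic, its degrees of freedom, and the sample size: RMSEA = √(max(χ² − df, 0) / (df · (N − 1))). The sketch below illustrates this formula; the degrees of freedom used in the example are a hypothetical placeholder, since they are not reported in the text.

```python
import math

# Minimal sketch of the RMSEA formula used to judge model fit.
# chi_sq and n can be taken from the article (chi-square ≈ 4762, N = 368);
# the df value in any example call is a hypothetical placeholder.

def rmsea(chi_sq, df, n):
    """RMSEA = sqrt(max(chi_sq - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi_sq - df, 0) / (df * (n - 1)))
```

A model whose chi-square does not exceed its degrees of freedom yields an RMSEA of exactly zero, which is why large chi-square values relative to df, as in this study, push RMSEA above the conventional 0.08 threshold.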
4 Discussion
This study reveals a notable trend of acceptance and adaptability toward the use of ChatGPT and other AI tools among university students. The findings, aligned with current trends in the educational field (Alkhaqani, 2023; Anders, 2023), indicate that students are willing to integrate these technologies into their academic practices. This adaptability suggests a general openness to technological innovation in education, which is essential for designing educational policies and developing pedagogical interventions that effectively incorporate AI into the learning process.
The inclusion of an ordinal logistic regression analysis provided valuable insights into how variables such as age and prior experience with AI influence students’ perceptions (Elareshi et al., 2022; Li and Zhang, 2023). This methodological approach offers a richer understanding of how university students perceive and use ChatGPT in their educational process, consistent with the study by Zhu et al. (2023) on the role of AI in education.
Our findings indicate no significant gender-based differences in the intention to use ChatGPT, which contrasts with previous studies suggesting gender-based disparities in the adoption of digital technologies (Guillén-Gámez et al., 2024; Joo et al., 2018). This result suggests a meaningful shift in technology adoption trends, where gender is becoming a less determining factor. This movement toward greater uniformity in technology adoption across genders has significant implications for future research, particularly in developing theories related to technology adoption and gender diversity in educational contexts. In line with Parsons and Curry (2024), this raises a broader call to action for universities to redesign assessment strategies and promote AI literacy, ensuring that generative tools are used to support learning, critical thinking, and ethical academic behavior rather than undermine them.
Building on this finding, another discussion emerges regarding educational initiatives that aim to incorporate AI tools such as ChatGPT: they may not need to focus as heavily on gender differentiation but rather on broader aspects such as accessibility and technological training. Furthermore, this finding could encourage further research into how other factors—such as cultural context, field of study, or prior experience with technology—may influence the adoption and use of AI tools in higher education.
5 Conclusion
We can affirm that the results of this study not only reflect the growing adaptability and acceptance of AI tools among university students but also challenge some preconceived notions regarding the impact of gender on technology adoption in educational contexts. These findings highlight the importance of integrating ChatGPT and other AI tools in education—not merely as technological supplements but as central elements in innovative pedagogical strategies (Crompton and Burke, 2023).
An important limitation of this study is the unequal gender distribution of the sample, with a clear predominance of female participants. Although this distribution reflects the characteristics of the degrees analyzed, it may have limited the statistical power to detect subtle gender-based differences. Therefore, the absence of significant gender effects should be interpreted with caution.
The evidence of widespread acceptance of these tools presents an opportunity for educational institutions to leverage such technologies to enrich the learning process. Furthermore, it underscores the need to address the digital divide and ensure equitable access to technology for all students, regardless of gender or socioeconomic background, which is crucial for developing inclusive digital strategies in modern pedagogy.
From an applied perspective, it is essential that educational institutions and educators not only integrate ChatGPT into their teaching methods but also foster a critical understanding of its uses and limitations. This includes addressing ethical and academic integrity issues related to its use, such as plagiarism and originality (Daneji et al., 2019).
In addition, these results emphasize the importance of closing the digital gap and ensuring equitable access to technology for all students, regardless of gender or socioeconomic status (Shahzad et al., 2021). Therefore, it is vital that educators and educational institutions develop strategies for digital inclusion, ensuring that all students benefit equally from technological advances in pedagogy.
As for the study’s limitations, it is important to note that the research was conducted with a specific sample of students from a single university, which may limit the generalizability of the results. Moreover, the use of a self-reported questionnaire could introduce potential bias in student responses. Accordingly, the findings should be interpreted as context-specific, and future research should replicate this study in diverse institutional, cultural, and disciplinary contexts to strengthen external validity.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving humans were approved by Ethics Committee (Innoeduca) of the University of Málaga. The studies were conducted in accordance with the local legislation and institutional requirements. The ethics committee/institutional review board waived the requirement of written informed consent for participation from the participants or the participants’ legal guardians/next of kin because the study presented minimal risk and data collection was conducted through an anonymous electronic questionnaire (Google Forms). Consent was implied by the voluntary completion and submission of the survey after participants were fully informed about the study’s purpose and the confidentiality of their responses.
Author contributions
ES-R: Funding acquisition, Supervision, Writing – review & editing, Conceptualization, Resources, Writing – original draft, Project administration, Methodology, Visualization, Data curation, Formal analysis. PF-C: Data curation, Project administration, Investigation, Validation, Methodology, Supervision, Software, Formal analysis, Writing – original draft. JS-R: Writing – review & editing, Writing – original draft. SR-V: Software, Writing – original draft, Writing – review & editing.
Funding
The author(s) declared that financial support was not received for this work and/or its publication.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that generative AI was not used in the creation of this manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Abbreviations
AI, Artificial Intelligence; GenAI, Generative Artificial Intelligence; NLP, Natural Language Processing; EFA, Exploratory Factor Analysis; CFA, Confirmatory Factor Analysis.
References
Aesaert, K., and Van Braak, J. (2015). Gender and socioeconomic related differences in performance based ICT competences. Comput. Educ. 84, 8–25. doi: 10.1016/j.compedu.2014.12.017
Alkhaqani, A. L. (2023). ChatGPT and nursing education: Challenges and opportunities. Al-Rafidain J. Med. Sci. 4, 50–51. doi: 10.54133/ajms.v4i.110
Anders, B. A. (2023). Is using ChatGPT cheating, plagiarism, both, neither, or forward thinking? Patterns 4:100694. doi: 10.1016/j.patter.2023.100694
Bozkurt, A., Xiao, J., Lambert, S., Pazurek, A., Crompton, H., Koseoglu, S., et al. (2023). Speculative futures on ChatGPT and generative Artificial intelligence (AI): A collective reflection from the educational landscape. Asian J. Distance Educ. 18, 1–33.
Caldarini, G., Jaf, S., and McGarry, K. (2022). A literature survey of recent advances in chatbots. Information 13:41. doi: 10.3390/info13010041
Chassignol, M., Khoroshavin, A., Klimova, A., and Bilyatdinova, A. (2018). Artificial Intelligence trends in education: A narrative overview. Proc. Comput. Sci. 136, 16–24. doi: 10.1016/j.procs.2018.08.233
Chaudhry, I. S., Sarwary, S. A. M., El Refae, G. A., and Chabchoub, H. (2023). Time to revisit existing student’s performance evaluation approach in higher education sector in a new era of ChatGPT — A case study. Cogent. Educ. 10:2210461. doi: 10.1080/2331186X.2023.2210461
Chiu, T. K. F. (2023). The impact of Generative AI (GenAI) on practices, policies and research direction in education: A case of ChatGPT and Midjourney. Interactive Learn. Environ. 32, 6187–6203. doi: 10.1080/10494820.2023.2253861
Cooper, G. (2023). Examining science education in ChatGPT: An exploratory study of generative artificial intelligence. J. Sci. Educ. Technol. 32, 444–452. doi: 10.1007/s10956-023-10039-y
Cotton, D. R. E., Cotton, P. A., and Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innov. Educ. Teach. Int. 61, 228–239. doi: 10.1080/14703297.2023.2190148
Crawford, J., Cowling, M., and Allen, K.-A. (2023). Leadership is needed for ethical ChatGPT: Character, assessment, and learning using artificial intelligence (AI). J. University Teach. Learn. Pract. 20, 1–24. doi: 10.53761/1.20.3.02
Crompton, H., and Burke, D. (2023). Artificial intelligence in higher education: The state of the field. Int. J. Educ. Technol. High. Educ. 20:22. doi: 10.1186/s41239-023-00392-8
Daneji, A. A., Ayub, A. F. M., and Khambari, M. N. M. (2019). The effects of perceived usefulness, confirmation and satisfaction on continuance intention in using massive open online course (MOOC). Knowledge Manag. E-Learn. Int. J. 11, 201–214. doi: 10.34105/j.kmel.2019.11.010
Elareshi, M., Habes, M., Youssef, E., Salloum, S. A., Alfaisal, R., and Ziani, A. (2022). SEM-ANN-based approach to understanding students’ academic-performance adoption of YouTube for learning during Covid. Heliyon 8:e09236. doi: 10.1016/j.heliyon.2022.e09236
Guillén-Gámez, F. D., Gómez-García, M., and Ruiz-Palmero, J. (2024). Digital competence in research work: Predictors that have an impact on it according to the type of university and gender of the Higher Education teacher. Pixel-Bit. Rev. Med. Educ. 69, 7–34. doi: 10.12795/pixelbit.99992
Jangjarat, K., Kraiwanit, T., Limna, P., and Sonsuphap, R. (2023). Public perceptions towards ChatGPT as the Robo-Assistant. Online J. Commun. Media Technol. 13:e202337. doi: 10.30935/ojcmt/13366
Javaid, M., Haleem, A., Singh, R. P., Khan, S., and Khan, I. H. (2023). Unlocking the opportunities through ChatGPT Tool towards ameliorating the education system. BenchCouncil Trans. Benchmarks Standards Eval. 3:100115. doi: 10.1016/j.tbench.2023.100115
Joo, Y. J., Park, S., and Lim, E. (2018). Factors influencing preservice teachers’ intention to use technology: TPACK, teacher self-efficacy, and technology acceptance model. J. Educ. Technol. Soc. 21, 48–59.
Kang, Y., Cai, Z., Tan, C.-W., Huang, Q., and Liu, H. (2020). Natural language processing (NLP) in management research: A literature review. J. Manag. Anal. 7, 139–172. doi: 10.1080/23270012.2020.1756939
Li, Y., and Zhang, Y. (2023). “Analysis of factors influencing ChatGPT user’s willingness to use based on principal component analysis,” in Proceedings of the Sixth International Conference on Advanced Electronic Materials, Computers, and Software Engineering (AEMCSE 2023), (Piscataway, NJ: IEEE), doi: 10.1117/12.3004527
Lin, Y., and Yu, Z. (2023). Extending technology acceptance model to higher-education students’ use of digital academic reading tools on computers. Int. J. Educ. Technol. High. Educ. 20:34. doi: 10.1186/s41239-023-00403-8
Meishar-Tal, H. (2024). ChatGPT: The challenges it presents for writing assignments. TechTrends 68, 705–710. doi: 10.1007/s11528-024-00972-z
Parsons, A. J., and Curry, N. (2024). Generative AI and the future of assessment in higher education. J. Univ. Teach. Learn. Pract. 21.
Perkins, M. (2023). Academic integrity considerations of AI large language models in the post-pandemic era: ChatGPT and beyond. J. Univ. Teach. Learn. Pract. 20, 1–14. doi: 10.53761/1.20.02.07
Pradana, M., Elisa, H. P., and Syarifuddin, S. (2023). Discussing ChatGPT in education: A literature review and bibliometric analysis. Cogent Educ. 10:2243134. doi: 10.1080/2331186X.2023.2243134
Razmak, J., and Bélanger, C. (2018). Using the technology acceptance model to predict patient attitude toward personal health records in regional communities. Inf. Technol. People 31, 306–326. doi: 10.1108/ITP-07-2016-0160
Romero-Rodríguez, J.-M., Ramírez-Montoya, M.-S., Buenestado-Fernández, M., and Lara-Lara, F. (2023). Use of ChatGPT at university as a tool for complex thinking: Students’ perceived usefulness. J. New Approaches Educ. Res. 12, 323–339. doi: 10.7821/naer.2023.7.1458
Rospigliosi, P. (2023). Artificial intelligence in teaching and learning: What questions should we ask of ChatGPT? Interactive Learn. Environ. 31, 1–3. doi: 10.1080/10494820.2023.2180191
Ruiz Viruel, S., Sánchez Rivas, E., and Ruiz Palmero, J. (2025). The role of artificial intelligence in project-based learning: Teacher perceptions and pedagogical implications. Educ. Sci. 15:150. doi: 10.3390/educsci15020150
Shahsavar, Z., Kafipour, R., Khojasteh, L., and Pakdel, F. (2024). Is artificial intelligence for everyone? Analyzing the role of ChatGPT as a writing assistant for medical students. Front. Educ. 9:1457744. doi: 10.3389/feduc.2024.1457744
Shahzad, A., Hassan, R., Aremu, A. Y., Hussain, A., and Lodhi, R. N. (2021). Effects of COVID-19 in E-learning on higher education institution students: The group comparison between male and female. Quality Quantity 55, 805–826. doi: 10.1007/s11135-020-01028-z
Siddiq, F., and Scherer, R. (2019). Is there a gender gap? A meta-analysis of the gender differences in students’ ICT literacy. Educ. Res. Rev. 27, 205–217. doi: 10.1016/j.edurev.2019.03.007
Strzelecki, A. (2023). To use or not to use ChatGPT in higher education? A study of students’ acceptance and use of technology. Interactive Learn. Environ. 32, 5142–5155. doi: 10.1080/10494820.2023.2209881
Susnjak, T. (2022). ChatGPT: The end of online exam integrity? (arXiv:2212.09292). arXiv [Preprint]. doi: 10.48550/arXiv.2212.09292
Tømte, C., and Hatlevik, O. E. (2011). Gender-differences in self-efficacy ICT related to various ICT-user profiles in Finland and Norway. How do self-efficacy, gender and ICT-user profiles relate to findings from PISA 2006? Comput. Educ. 57, 1416–1424. doi: 10.1016/j.compedu.2010.12.011
Tsai, M.-J., and Tsai, C.-C. (2010). Junior high school students’ Internet usage and self-efficacy: A re-examination of the gender gap. Comput. Educ. 54, 1182–1192. doi: 10.1016/j.compedu.2009.11.004
Vela-Acero, C., and Jiménez-Cortés, R. (2022). Learning experience with digital technologies and its influence on the scientific competence of high school students. Educar 58, 141–156. doi: 10.5565/rev/educar.1319
Keywords: artificial intelligence, educational technology, ethics, gender, higher education
Citation: Sánchez-Rivas E, Franco-Caballero PD, Sánchez-Rodríguez J and Ruiz-Viruel S (2026) The use of ChatGPT in university assignments: an analysis based on gender variable. Front. Educ. 10:1738691. doi: 10.3389/feduc.2025.1738691
Received: 03 November 2025; Revised: 24 December 2025; Accepted: 31 December 2025;
Published: 21 January 2026.
Edited by:
Reza Kafipour, Shiraz University of Medical Sciences, Iran
Reviewed by:
Zahra Shahsavar, Shiraz University of Medical Sciences, Iran
Fareeha Javed, Government of Punjab, Pakistan
Copyright © 2026 Sánchez-Rivas, Franco-Caballero, Sánchez-Rodríguez and Ruiz-Viruel. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Enrique Sánchez-Rivas, enriquesr@uma.es
José Sánchez-Rodríguez