- ¹Departamento de Filología Hispánica y Clásica, Universidad de Castilla-La Mancha, Ciudad Real, Spain
- ²Departamento de Educación, Universidad de Almería, Almería, Spain
- ³Departament de Filologia Catalana, Universitat d’Alacant, Sant Vicent del Raspeig, Spain
- ⁴Departamento de Biblioteconomía y Documentación, Universidad de Salamanca, Salamanca, Spain
Introduction: The aim of this systematic review is to examine the scientific literature published on digital reading and writing in higher education within the field of social sciences, assisted by generative artificial intelligence.
Methods: The PRISMA methodology and the SALSA Framework were applied, based on a bibliographic search conducted in the Scopus and Web of Science databases. Journal articles that explicitly addressed the established topic, published between 1 January 2023 and 7 March 2025, in open access, in Spanish or English, and within the field of Social Sciences, were included. After a rigorous screening and selection process, a final sample of 136 articles was compiled and used as the basis for the study.
Results: The findings indicate that the reviewed research employs a range of methodologies, encompassing quantitative (surveys, experimental studies, psychometric evaluations), qualitative (case studies, semi-structured interviews, thematic analysis), and mixed-method approaches. The results also reveal a clear trend toward the integration of artificial intelligence tools –particularly ChatGPT– into academic writing processes. A significant improvement is observed in the quality of students’ texts, especially regarding coherence, discursive organization, lexical richness, and argumentation. Furthermore, the role of AI in formative feedback, idea generation, paraphrasing, and fostering student autonomy in self-editing their texts is highlighted. The research also identifies key challenges, such as students’ overreliance on AI, diminished metacognitive engagement, and ethical dilemmas related to plagiarism and authorship.
Discussion: The emergence of AI in higher education is transforming teaching and learning processes, creating opportunities for personalization and enhanced support in academic writing. However, the scientific literature also exposes tensions between its potential benefits and associated risks, such as student dependency, loss of critical thinking, and ethical concerns regarding authorship and plagiarism. These findings call for a rethinking of pedagogical, assessment, and institutional practices, as well as the development of critical and digital literacy skills among both teachers and students.
1 Introduction
The rise of generative artificial intelligence (hereinafter GAI) in education has transformed students’ reading and writing practices, giving rise to an emerging field of study that warrants rigorous academic scrutiny. In this context, it is essential to investigate the key theories and concepts underpinning these transformations in higher education, the methodologies and data collection techniques applied in research, the findings obtained, and the challenges generated by the use of GAI.
The implementation of GAI in educational contexts holds the potential to transform classroom dynamics by offering tools that enable personalized learning based on students’ individual needs, automate administrative and assessment tasks, and generate adaptive educational content (UNESCO, 2024, 2025; Fornons del Arco and Bravo, 2024), thereby improving instructional decision-making. In today’s universities –characterized by digitalization and globalization– GAI stands out as a valuable resource for optimizing the teaching–learning process.
Nevertheless, its use also presents considerable challenges, including teachers’ resistance to change, the need to adapt methodologies, the revision of learning outcomes to align with evolving professional competencies, and the adoption of new assessment paradigms that leverage technology. Moreover, given the increasing capacity of GAI to emulate human writing (Aburass and Abu Rumman, 2024), it is necessary to address issues of plagiarism (Baron, 2024; Cotton et al., 2024; Ezeiza, 2023; Fleckenstein et al., 2024; Hassoulas et al., 2023; Waltzer et al., 2024) from ethical and preventive perspectives. This shift also requires ongoing professional development for both students and educators, as well as preliminary investment (Cruz Argudo et al., 2024, pp. 10–12).
From a sociocultural standpoint, Cassany and Morales (2009, p. 120) argue that reading and writing are contextually influenced practices. Therefore, in higher education, they advocate academic literacy aligned with the discursive conventions of each discipline—an approach that goes beyond the application of general literacy skills. Although literacy has traditionally referred to reading and writing abilities, in this context it should also encompass media, digital, and information literacies, alongside the acquisition of knowledge and skills required to understand and use GAI effectively (Kong et al., 2021; Bellas, 2024, p. 36). Accordingly, a systematic review of the scientific literature, as well as an update of existing reviews, is required—forming the central focus of this article.
The main objective of this systematic review is to analyze the role of GAI in reading and writing within higher education. From this primary aim, the following specific objectives are derived:
• To identify the most relevant findings of the empirical research conducted (e.g., number of studies, geographical scope, etc.).
• To understand the key theories, concepts, topics, perceptions, and uses of GAI in academic practice, as well as the principal debates, challenges, and difficulties arising from its use.
• To determine the methodological approaches and data collection techniques employed in the reviewed literature.
2 Methods
Regarding the study design and review protocol, a Systematic Literature Review (SLR) was conducted to examine the state of the art concerning the influence of GAI on the reading and writing practices of higher education students. The methodology was structured around several phases adapted from the SALSA Framework (Search, Appraisal, Synthesis, and Analysis) (Codina, 2018, 2023) and adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol, applicable to the field of Social Sciences (Page et al., 2021; Chapman, 2021; Chiu et al., 2011). In addition, the recommendations proposed in the Methodological Protocol for the Development of Reliable and Valid Artificial Intelligence-Assisted Content Analysis: A Practical Guide with ChatGPT (Goyanes and De-Marcos, 2025) were also taken into account.
A structured approach was adopted for the adapted stages of the SALSA Framework, integrating them with the PRISMA protocol to ensure a clear and consistent review design. The subsequent sections of this article present the synthesis of results, followed by the discussion and conclusions.
2.1 Stage 1: search, sources, and search strategy
The initial step involved selecting information sources. The Web of Science (WoS) and Scopus databases were chosen due to their relevance, scientific impact, and multidisciplinary scope. The following search strategy (Table 1) was applied to peer-reviewed journal articles in Spanish (the researchers’ native language) and English (the primary language of scientific communication), available in open access, and limited to the “Title,” “Abstract,” and “Keywords” fields: (“reading” OR “lectura” OR “writing” OR “escritura”) AND (“Artificial Intelligence”) AND (“higher education” OR “university”). These searches were conducted on March 7, 2025.
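As a sketch only, the Boolean string above can be assembled programmatically. The TITLE-ABS-KEY field code follows Scopus advanced-search syntax for restricting a query to the title, abstract, and keywords fields; its use here is an assumption for illustration, not a detail reported in the article.

```python
# Illustrative reconstruction of the review's Boolean search string.
# The TITLE-ABS-KEY wrapper is Scopus advanced-search syntax (assumed here).
reading_terms = ['"reading"', '"lectura"', '"writing"', '"escritura"']
ai_terms = ['"Artificial Intelligence"']
context_terms = ['"higher education"', '"university"']

def or_group(terms):
    """Join a list of quoted terms into one parenthesized OR group."""
    return "(" + " OR ".join(terms) + ")"

# Combine the three concept groups with AND, as in the reported strategy.
query = " AND ".join(or_group(g) for g in (reading_terms, ai_terms, context_terms))
scopus_query = f"TITLE-ABS-KEY({query})"
print(scopus_query)
```

The same string, minus the field-code wrapper, can be pasted into the WoS topic search, which covers the equivalent title, abstract, and keyword fields.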
In the WoS database, the search was limited to the subject areas of Education, Communication, Linguistics, Psychology, and Information Science, with a result of 138 articles.
In Scopus, the search was restricted to Social Sciences, returning 116 articles.
In total, 254 articles were retrieved. Following this compilation, the publications were downloaded (Phase 1. Identification - PRISMA).
2.2 Stage 2: evaluation (selection of studies and data extraction)
After reading the titles and abstracts, the process of eliminating duplicate and irrelevant studies was carried out (Phase 2. Screening—PRISMA). The total sample was reduced to 215 articles.
Next, to refine the records, the inclusion and exclusion criteria (Table 2) were applied, eliminating false positives (Phase 2. Screening—PRISMA). After this process, the final sample comprised 136 articles.
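The two screening steps described above (deduplication of records indexed in both databases, then removal of irrelevant studies and false positives against the Table 2 criteria) can be sketched as a minimal flow. The record fields, helper names, and sample data below are synthetic illustrations, not the study's data.

```python
# Hypothetical sketch of the PRISMA screening flow: deduplicate records,
# drop irrelevant titles/abstracts, then apply inclusion/exclusion criteria.
# Field names and sample data are illustrative, not from the review itself.
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    doi: str
    relevant: bool        # judged from reading the title and abstract
    meets_criteria: bool  # inclusion/exclusion criteria (Table 2)

def screen(records):
    # Phase 2 (Screening): remove duplicates by DOI, keep relevant studies.
    unique = list({r.doi: r for r in records}.values())
    screened = [r for r in unique if r.relevant]
    # Eliminate false positives against the inclusion/exclusion criteria.
    final = [r for r in screened if r.meets_criteria]
    return screened, final

records = [
    Record("10.1/a", True, True),
    Record("10.1/a", True, True),   # duplicate (indexed in both databases)
    Record("10.1/b", False, False), # off-topic title/abstract
    Record("10.1/c", True, False),  # false positive under the criteria
]
screened, final = screen(records)
print(len(screened), len(final))  # → 2 1
```

In the review itself this flow corresponds to 254 retrieved records reduced to 215 after duplicate and irrelevance removal, and to 136 after the criteria were applied.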
2.3 Stage 3: data analysis
A preliminary thematic content analysis (Phase 3. Eligibility – PRISMA) was carried out on the 136 articles using the AI tool NotebookLM, which relies exclusively on the user-provided documents to minimize the likelihood of hallucination. Data extraction and analysis were conducted using the following variables or codebook (Goyanes and De-Marcos, 2025): methodologies, contributions made, and relevant results (theories developed, key concepts, main debates, and notable problems and results).
The following prompts were defined and applied for this analysis:
• Cite this academic work following the APA 7th edition model.
• Summarize and explain the methodology used, the discipline to which it applies, the tool used, the academic level of the students, and the geographical scope in 200 words.
• Highlight the contributions it makes to the topic “contributions of artificial intelligence to the study of written language in higher education” in no more than 150 words.
• Summarize the theories developed, key concepts, main debates, as well as the most notable problems and relevant results in no more than 100 words.
2.4 Stage 4: synthesis
After the preliminary analysis, the full texts were reviewed for the final selection of studies based on methodological quality criteria (Phase 4. Inclusion—PRISMA). The final corpus consisted of 136 studies used for the qualitative synthesis of results (Figure 1).
2.5 Data validity and reliability
To ensure validity and reliability of the results, a scientific validation protocol was implemented through a parallel analysis involving both human researchers and the AI system. In the first phase, the AI generated an initial classification of emerging themes and subthemes from the corpus of selected articles. Next, the team of expert researchers independently reviewed the same dataset. This review was carried out using a manual categorization process based on the principles of thematic analysis (Braun and Clarke, 2006) and grounded theory (Strauss and Corbin, 2014). The researchers conducted a detailed reading and coding. This approach allowed them to capture interpretive nuances, contextual variations, and conceptual elements that might not be detected by GAI due to its reliance on statistical patterns. Finally, to resolve discrepancies between the GAI results and the human analysis, a general review was conducted to verify the data produced by the GAI, and human judgment was used to correctly categorize the topics and include the different studies in the previously established subject areas.
3 Results
During the 27-month period analyzed, 136 articles on GAI, writing, and reading in higher education within the social sciences were identified in the WoS and Scopus databases—a substantial number that highlights both the academic interest in and the growing significance of this research area. This evidence confirms that scientific production on the subject constitutes a dynamic and rapidly expanding field, reflecting the strong appeal this phenomenon holds within the academic community.
3.1 Geographic distribution of research
Interest in the impact of GAI on higher education is a global phenomenon; however, scientific output reveals notable geographical concentration. Of the 136 articles, 109 specify the country in which the study was conducted (Figure 2; Table 3), and 8 address multiple or comparative geographical regions. The remaining publications do not specify a particular area of application.
Asia emerges as the continent with the highest volume of research, led by China (12 articles) and Saudi Arabia (11). This dominance suggests a particular interest in integrating AI technologies into higher education systems within these regions.
Europe also demonstrates significant activity, with Spain (6) and the United Kingdom (5) leading production. The distribution indicates that while the debate is global, specific academic centers are spearheading empirical research on the uses, perceptions, and challenges of AI in university contexts.
3.2 Methodological approaches and data collection techniques
Research on GAI in reading and writing is characterized by substantial methodological diversity, although with a clear predominance of quantitative approaches.
3.2.1 Quantitative studies
Most of these studies adopt a quantitative methodology, mainly through cross-sectional surveys that explore the perceptions, attitudes, and uses of AI among students and teachers in higher education. These studies demonstrate a rigorous approach to quantifying these variables, using a variety of statistical tools to draw meaningful conclusions from the data collected.
Likert-scale questionnaires are common, varying in their levels: there are 4-point scales (Ortega-Rodríguez and Pericacho-Gómez, 2025), 5-point scales (Alkamel and Alwagieh, 2024; Liang et al., 2024; Gasaymeh et al., 2024; Al-Raimi et al., 2024), and even 7-point scales (Liang et al., 2024). To study the data collected, some studies use descriptive and inferential statistical analyses to understand the use of GAI tools among students (Almassaad et al., 2024; Mosleh et al., 2023; Nemt-allah et al., 2024; Ortega-Rodríguez and Pericacho-Gómez, 2025; Playfoot et al., 2024; Rababah et al., 2024) or to measure the degree of AI literacy (Stojanov et al., 2024).
Sample sizes vary widely, ranging from 21 participants in exploratory research (Torres-Gómez, 2024) to 1,035 participants in large-scale studies (Fiialka et al., 2023). While most of the work focuses on the student perspective, there are also notable studies that include teachers among the respondents, thus offering a more complete view of the phenomenon (Fiialka et al., 2023; Perezchica-Vega et al., 2024; Barrett and Pack, 2023).
Quantitative studies focus on understanding the use of a wide range of AI tools. The most overwhelmingly referenced is ChatGPT (Črček and Patekar, 2023; Ortega-Rodríguez and Pericacho-Gómez, 2025; Alkamel and Alwagieh, 2024; Juanda and Afandi, 2024; Nemt-allah et al., 2024; Rababah et al., 2024; Playfoot et al., 2024; Stojanov et al., 2024; Elhassan et al., 2025; Hesse and Helm, 2025). However, research is not limited to this model and also analyzes image generation tools such as Midjourney, DALL-E (Perezchica-Vega et al., 2024; Torres-Gómez, 2024) and Stable Diffusion (Liang, 2024).
Additionally, several studies examine translation and correction tools that improve the fluency, accuracy, and quality of writing, such as Grammarly or Quillbot (Almassaad et al., 2024), Jasper.ai (Gasaymeh et al., 2024; Malik et al., 2023), Rytr.me (Gasaymeh et al., 2024), Paperpal (Al-Raimi et al., 2024), and Copy.ai (Malik et al., 2023). In other cases, research has been conducted on the use of applications focused on plagiarism detection, such as GPTZero or Turnitin (Almassaad et al., 2024), tools for creating presentations, such as Tome (Almassaad et al., 2024), or applications that promote foreign language learning (Alkamel and Alwagieh, 2024). Finally, virtual assistants such as Gemini (Almassaad et al., 2024), Socratic, or CoPilot are also mentioned.
3.2.2 Quasi-experimental studies
Another significant methodological approach involves quasi-experimental designs. These are characterized by evaluating the impact of an intervention on pre-existing groups, without complete random assignment, which allows cause-and-effect relationships to be established, albeit with certain limitations in terms of generalization. These studies illustrate how GAI is actively reshaping various areas, such as language learning, reading comprehension (Celik et al., 2024), writing and text revision (de Vicente-Yagüe-Jara et al., 2023; Chen et al., 2025), and motivation for learning (Fan et al., 2025). The scope of these studies is broad, ranging from the ethics of assessment and academic integrity to the understanding of cognitive processes and creativity, and even personality analysis through texts (El Bahri et al., 2024).
3.3 Theories, perceptions, uses, challenges, and difficulties of GAI
The reviewed articles reveal a multifaceted and rapidly evolving field of research. The findings can be structured around key thematic axes.
The first is the construction of theoretical and conceptual frameworks designed to understand and guide the integration of GAI in the university setting. Given the novelty of the phenomenon, researchers are developing new models to address issues such as author integrity, perceptions of the academic community, the functionality of tools, and the need for critical literacy in GAI.
The second theme focuses on the perceptions and uses of AI in academic practice. A considerable amount of research is devoted to empirically documenting how students and teachers from diverse geographical and disciplinary contexts are using these tools. These studies explore perceived benefits, such as improvements in the formal quality of texts and efficiency in the writing process, as well as patterns of use of specific applications. Similarly, the role of GAI as a feedback and correction tool is highlighted. In addition, the literature analyzes in depth the potential of GAI to offer formative feedback, comparing its effectiveness with human feedback and exploring hybrid models that combine both approaches. This axis also addresses one of the most significant challenges associated with GAI: the difficulty of detecting AI-generated texts and ensuring integrity in assessment processes.
Finally, the third major axis deals with the challenges posed by GAI in relation to plagiarism, authorship, and academic integrity.
3.3.1 Emerging theoretical and conceptual frameworks
The articles analyzed not only describe phenomena but also offer theoretical contributions and conceptual frameworks that explore how GAI and computer science are transforming higher education, academic writing, and research processes. These contributions seek to understand and guide their ethical, critical, and productive use, as well as to reflect on their implications for the future of the university and scientific writing. The most relevant frameworks identified in the literature are detailed below (Table 4):
• “Writer’s Integrity” Framework: This framework was developed by Aburass and Abu Rumman (2024) as a novel system to verify the authenticity of human authorship in academic texts and consequently offer a solution to the challenges of academic integrity in the age of AI writing. Unlike detection tools that analyze the final product, this approach is based on analyzing the writing process in real time, recording metrics such as typing speed, editing frequency, and the proportion of pasted text.
• Theories of AI Perception: Andersen et al. (2025) identify three key “theories of perception” about how researchers view generative AI in the research process. The first is the perception of GAI as a “workhorse,” a tool for automating routine and tedious tasks. The second is as a “language assistant,” focused on improving writing and editing. The third is as a “research accelerator,” used to generate ideas, analyze literature, and streamline the research process in its early stages. This framework helps to understand the diversity of attitudes and approaches toward GAI in the academic community.
• Functional Taxonomy of AI Tools for Writing: Laborda et al. (2024) make this proposal for teaching writing in a foreign language. This classification organizes tools according to their specific capabilities, such as text summarization (to generate abstracts or conclusions), writing enhancement (rewriting, revision, formatting), and assistance in searching for sources and references. This taxonomy offers a practical guide for educators to understand the potential of each type of tool and to monitor its ethical use in the classroom.
• Critical GAI Literacy (C-GAI-L): Ou et al. (2024a) develop and conceptualize this theory as an essential competency for doctoral students. This multidimensional framework integrates critical thinking and pedagogy as well as self-learning so that students can ethically evaluate and effectively use GAI. The goal is to foster originality and voice in academic writing, avoiding uncritical dependence on technology. This framework responds to the need to train future researchers in the responsible and sophisticated use of GAI.
3.3.2 Perceptions and uses of GAI in academic practice
Recent studies in different geographical and educational contexts show how students perceive and use GAI to support their academic writing processes. Research by Al-Raimi et al. (2024) and Wang Y. (2024) points to significant benefits of these tools in the development of second language writing skills. In both cases, learners express favorable attitudes toward the integration of GAI, highlighting among its most frequent uses translation, grammar and spelling error checking, paragraph writing, and idea generation.
Similarly, Chauke et al. (2024), in a study conducted in South Africa with graduate students, show very positive perceptions regarding the use of ChatGPT, especially in tasks such as refining research topics, rephrasing sentences, and optimizing the time spent on bibliographic research.
Complementarily, de Vicente-Yagüe-Jara et al. (2023) highlight that ChatGPT can contribute significantly to narrative fluency and originality, although they caution that these advantages have limitations in contexts that require advanced literary creativity. Similarly, Alkamel and Alwagieh (2024) document the experience of Yemeni students, who report improvements in their ability to generate ideas, express complex concepts, and acquire new perspectives for structuring their academic work. From a theoretical perspective, Kohnke (2024) concludes that GAI tools are particularly beneficial within the framework of the Zone of Proximal Development, reinforcing learning in contexts where pedagogical support is key.
In South Korea, Lee et al. (2024) explore perceptions of tools such as Google Translate, Naver Papago, and Grammarly. Their quantitative and qualitative findings indicate that these applications are perceived as useful resources for improving writing skills, especially in identifying and correcting errors, structuring ideas, and enriching vocabulary. For his part, Mudawy (2024) points out that AI can improve learning outcomes and alleviate the burden on students with disabilities. Similarly, Rafida et al. (2024) explore perceptions in Indonesia and Taiwan, concluding that AI promotes grammar, sentence formulation, paraphrasing, vocabulary enrichment, and efficiency in topic generation.
In the Saudi context, Alshammari (2024) highlights that ChatGPT is valued as a resource capable of structuring essays, offering personalized feedback, and facilitating the generation and organization of ideas, reducing the anxiety associated with the writing process. Liang et al. (2024) indicate that GAI effectively helps students with grammar correction, word choice, sentence structure, and content organization in English, adding that perceived usefulness depends on the tool’s ease of use. Complementarily, Almassaad et al. (2024) state that among the main uses of ChatGPT by students are the definition of concepts, translation, the generation of writing ideas, and the synthesis of academic literature.
The effectiveness of specific applications has also been studied. Dizon and Gold (2023), using a quasi-experimental design, conclude that Grammarly has a positive impact on grammatical accuracy and lexical richness in writing. Karataş et al. (2024) demonstrate a favorable impact of ChatGPT use on writing improvement, as well as on grammar and vocabulary development in university students. Moussa and Belhiah (2024) report that GAI enhances linguistic competence, creativity, textual organization, and linguistic complexity in Moroccan university students.
Other studies have addressed more complex dynamics of human-AI collaboration. This is the case of Nguyen et al. (2024), who analyze the strategies that doctoral students use in their AI-assisted academic writing processes. From a posthumanist perspective, Ou et al. (2024b) highlight, based on the analysis of more than 1,700 student comments, that AI-assisted writing technologies contribute not only to improved communication but also to the identity and linguistic development of students.
Some studies have also explored the biases and limitations of AI-generated texts. Alvero et al. (2024) warn that automatically created essays tend to reproduce styles associated with male student profiles with higher levels of social privilege. Other approaches, such as that of Celik et al. (2024), highlight specific benefits in reading comprehension and inference, while Juanda and Afandi (2024) conclude that students have a lower level of comprehension compared to ChatGPT, which creates a need to review educational programs and integrate more practical and technological methodologies.
In Indonesia, Malik et al. (2023) identify a positive reception among students toward AI tools in academic essay writing, highlighting their usefulness in grammar correction, plagiarism detection, translation, and outline creation. Complementarily, Marzuki et al. (2023) document the adoption of various applications (Quillbot, WordTune, Jenni, ChatGPT, Paperpal, Copy.ai, Essay Writer) in English as a foreign language classrooms, with notable improvements in the content and organization of the texts produced. At Najran University, Mohammad et al. (2024) highlight the role of QuillBot in improving paraphrasing skills.
Recent experiences have also analyzed the integration of specific tools into pedagogical frameworks. Muslimin et al. (2024) evaluate the impact of Cami, an AI-based application. The results indicate significant improvements in the writing of English as a foreign language students, as well as positive perceptions of its usefulness in learning.
The reviewed literature pays special attention to the potential of GAI as a tool for correction and the provision of formative feedback. Mardini et al. (2024) implement an Automatic Short Answer Grading (ASAG) system to assess reading comprehension in Spanish, using open-ended questions based on aphorisms, in order to measure inference skills. Along the same lines, the exploratory study by McGuire et al. (2024) presents a constructivist-based model using ChatGPT to provide formative, individualized, and simulated peer feedback on graduate students’ writing. The authors emphasize the potential of GAI to strengthen learning by fostering the social construction of knowledge, skill development, critical thinking, and the refinement of research approaches. However, they also point out its limitations, such as the inability to provide feedback on issues of format, grammar, or spelling, and emphasize that it does not replace human experience.
Several studies confirm the positive impact of automatic feedback. Mohammed and Khalid (2025) show that it improves motivation, emotional intelligence, and writing proficiency in English as a second language. Similarly, Chan et al. (2024) present empirical research on the effects of feedback generated by GPT-3.5-turbo on essay writing. Their results demonstrate significant improvements in text quality, as well as increased student engagement and motivation. Abduljawad (2024) concludes that the formative feedback provided by ChatGPT significantly improves English as a second language writing skills compared to traditional methods. In turn, Dai et al. (2024) delve into the ability of GPT-3.5 and GPT-4 to generate feedback on open-ended writing tasks, concluding that both models produce more readable and consistent observations than those of human instructors. The study warns, however, that although GPT models offer opportunities to improve the efficiency and quality of feedback, their reliability limitations need to be addressed and they need to be supplemented with other sources of feedback. Ricart-Vayá (2024) shows that ChatGPT can be useful for detecting grammatical errors—such as the inappropriate use of indefinite articles or relative pronouns—suggesting more accurate vocabulary and improving textual clarity, as well as identifying the absence of argumentation or examples in the content. However, it is emphasized that its effectiveness depends on very precise user instructions and teacher supervision, given its limitations.
The importance of combining automatic and human feedback is confirmed in the study by Banihashem et al. (2024), who compared the quality of responses generated by ChatGPT with those provided by human peers in argumentative essays by graduate students. Their findings indicate that ChatGPT provides more descriptive feedback, while peers excel at identifying problems.
Similarly, Kurt and Kurt (2024) analyze future English teachers who, as part of an academic writing course, received feedback from ChatGPT, peers, and the teacher. The results show that participants valued ChatGPT as a particularly useful feedback tool for second language writing, highlighting its quality, practicality, interactivity, and adaptability. However, they also pointed out limitations associated with the inconsistency of its responses. Students expressed a clear preference for mixed feedback from ChatGPT, peers, and teachers in order to maximize the positive impact on their writing development.
In a convergent sense, Kim et al. (2025) point out that the results do not support the replacement of human tutors or the construction of a GAI with fully human traits. Rather, they emphasize the need for educational GAI design to prioritize the needs, characteristics, and experiences of students and teachers, enhancing active collaboration in human-GAI interactions. Along the same lines, Biju et al. (2024) suggest that AI-assisted assessments can create more favorable learning environments, reduce anxiety, improve attitudes, and increase motivation, providing valuable information. Complementarily, Chen and Gong (2025) indicate that a key factor in improving performance is immediate and specific feedback on grammar and vocabulary, identifying and correcting errors, organizing texts, completing sentences, etc., which contributes to improving the overall quality of their work. For his part, Ho (2024) analyzes the use of tools to improve the English writing skills of university students in Hong Kong. Although students perceived these as convenient and useful, teachers expressed concerns about the general and ambiguous nature of the feedback and examples provided.
On the other hand, teachers’ perceptions are a central factor in the adoption of these tools. Alsalem (2024), in a study of teachers of English as a foreign language in Saudi Arabia, analyzes their assessments of CoGrader, an AI-based AWE tool for grading and providing feedback on essays. The results show that teachers value its usefulness as a complementary resource, capable of saving time and reducing workload, although they do not consider it a full substitute for their role.
Another line of research reflects on the ability of GAI to match human writing. Charpentier-Jiménez (2024) compares texts produced by students of English as a foreign language with those generated by Copy.ai, noting that GAI excels in the use of grammar and vocabulary, while human writing excels in content and organization. Similarly, Revell et al. (2024) point out that essays generated by GAI are comparable in quality to those written by undergraduate students, although the latter have greater contextual richness and nuances that GAI is not yet able to reproduce. Alrajhi (2024) explores English learners’ perceptions of the use of an educational chatbot as a conversational agent, highlighting positive perceptions in terms of intelligibility, practical support, and writing development, as well as its ability to reduce anxiety. However, limitations are evident, such as the lack of extended conversations, poor sensitivity to incorrect linguistic forms, and the generation of sporadically irrelevant responses. Along similar lines, Sarwanti et al. (2024) identify benefits of ChatGPT in writing support, personalized learning, productivity, brainstorming, and access to resources.
3.3.3 Challenges
The integration of GAI into higher education has placed academic integrity and authenticity at the center of an important and urgent debate. The ability of these tools to generate coherent and sophisticated texts raises fundamental questions about the originality and authorship of student work (Nguyen et al., 2024; Cotton et al., 2024; Wang and Ren, 2024). The most pressing concern for teachers lies in the risk that exams and assignments will be completed in whole or in part using GAI (Perezchica-Vega et al., 2024), which has led educators to reevaluate traditional teaching methods and forced them to update the definition of plagiarism and academic malpractice (Hassoulas et al., 2023; Revell et al., 2024).
One of the principal challenges is precisely the difficulty in detecting AI-generated content. Large language models (LLMs) can produce texts that closely resemble human writing (Alvero et al., 2024; Revell et al., 2024), making it extremely difficult to differentiate between student work and work generated by ChatGPT (Hassoulas et al., 2023; Revell et al., 2024). Empirical studies show that both novice and experienced teachers have serious difficulties identifying AI-generated texts (Fleckenstein et al., 2024), and that their ability to do so may be influenced by their previous experience with the technology (Hassoulas et al., 2023; Revell et al., 2024).
Hassoulas et al. (2023) obtained similar results: only 23% and 19% of evaluators correctly identified texts generated by ChatGPT in undergraduate and graduate courses, respectively. Although the writing style was indistinguishable, the content and bibliographic references raised suspicions about authorship. In this context, Baron (2024) warns that it is becoming increasingly difficult to determine what constitutes original work in an era of expanding generative AI.
This situation is exacerbated by the questionable reliability of existing detection tools, which makes it difficult to conclusively determine the origin of texts (Revell et al., 2024). In addition, methods such as “paraphrasing attacks,” which alter small elements of a text, can conceal the authorship of GAI (Revell et al., 2024), and it has been observed that detectors may discriminate against non-native speakers, whose linguistic structures may follow different patterns than those of native speakers (Alvero et al., 2024; Revell et al., 2024).
Given these difficulties, the debate has shifted from prohibition, considered neither advisable nor feasible (Perkins et al., 2024), to transparency (Bozkurt, 2024; Perkins et al., 2024). Perkins (2023) and Perkins et al. (2024) warn that GAI has reached a level at which neither academic staff nor technological tools can determine authorship with certainty, which places the focus on explicit declaration: it is not the use itself that constitutes the infringement, but the absence of a transparent declaration of that use.
Similarly, Aburass and Abu Rumman (2024) propose a “Writer’s Integrity” framework as the basis for a new era in the verification of human-generated texts in the academic, research, and publishing sectors. For their part, Gralha and Pimentel (2024) describe the development of Gotcha GPT, a tool designed to distinguish AI-generated scientific manuscripts in English from those written by humans, using classifiers such as decision trees, random forests, and AdaBoost.
The advent of GAI has also called into question the traditional notion of individual authorship in academia (Wise et al., 2024). Students face a dilemma: choosing between producing a text that “sounds better” thanks to GAI or one that “sounds like themselves” (Wang C., 2024, p. 15). This tension can result in “AI-nized” writing that dilutes the student’s authentic voice, which is the essence of their identity in writing (Wang C., 2024, p. 15). There is also concern about the homogenization of language, as GAI could impose uniformity in expression (Alvero et al., 2024). Finally, legal dilemmas arise regarding copyright, as content created by GAI could be considered ineligible for registration and enter directly into the public domain (Bozkurt, 2024).
The reliability of information generated by GAI, the biases inherent in its algorithms, and the lack of transparency are areas of considerable concern in higher education. Some of these tools have been criticized for “hallucinating” content or providing incorrect information, a phenomenon that refers to the generation of inaccurate responses that nevertheless appear realistic (Kim et al., 2025; Revell et al., 2024). GAI can offer unreliable information, lacking real examples and with false citations or references (Alkamel and Alwagieh, 2024; Fakir et al., 2024; Sweeney, 2023; Thandla et al., 2024; Wang C., 2024; Baldrich and Domínguez-Oller, 2024; de Vicente-Yagüe-Jara et al., 2023; Esmaeil et al., 2023; Al-Zubaidi et al., 2024; Gasaymeh et al., 2024; Wise et al., 2024; Malik et al., 2023; Karataş et al., 2024; Dakakni and Safa, 2023; Almassaad et al., 2024). Studies such as those by Thandla et al. (2024) and Behrens et al. (2024) conclude that in the case of ChatGPT-4o, 46% of the bibliographic references generated do not exist. A recurring problem is that students often lack the skills to compare or verify the information obtained through these tools (Baldrich and Domínguez-Oller, 2024).
The opacity of AI training algorithms and data is a significant concern, as they can reinforce biased or discriminatory views (Kim et al., 2025), which can perpetuate discrimination, reinforce pre-existing inequalities (Wise et al., 2024), and introduce social biases related to race, gender, and other characteristics (Alvero et al., 2024). It has even been labeled an “automated mansplaining machine,” as its capabilities are intrinsically linked to the worldview of its programmers, often described as white men from Silicon Valley (Wise et al., 2024, p. 583). This lack of transparency breeds mistrust and a deep sense of insecurity (Bozkurt, 2024, p. 4).
The digital divide and equity of access are also the subject of important debate. The cost of premium features of AI tools is a concern that could widen the existing digital divide (Kohnke, 2024; Yeadon and Hardy, 2024), and a lack of equitable access can lead to disparities in learning outcomes (Marzuki et al., 2023). To address this problem, UNESCO has proposed a human-centered approach that ensures “AI for all” (de Vicente-Yagüe-Jara et al., 2023, p. 48).
Finally, there are ethical dilemmas related to data privacy and intellectual property of AI-generated content (Kim et al., 2025; Kohnke, 2024; Almassaad et al., 2024; Cordero et al., 2025; Perezchica-Vega et al., 2024; Pierrès et al., 2025), as well as concerns about the possibility of students’ work being used to train future AI models (Kohnke, 2024).
4 Discussion and conclusion
The emergence of AI in higher education has marked a significant milestone, rapidly transforming teaching and learning processes and generating a wide range of attitudes and perceptions (Almassaad et al., 2024; Arbona, 2024; Baron, 2024; Nemt-allah et al., 2024). In particular, GAI, with tools such as ChatGPT, which became popular in late 2022 and early 2023, has demonstrated the capacity to create original content, simulate human conversations, and assist in writing, programming, and problem-solving tasks (Almassaad et al., 2024; Mendoza et al., 2024; Nemt-allah et al., 2024; Perezchica-Vega et al., 2024).
As evidenced in the results, the scientific literature reflects an ambivalent stance toward this phenomenon. On the one hand, significant opportunities are recognized for improving collaboration and student engagement (Cotton et al., 2024; Eager and Brunton, 2023), facilitating remote learning, generating personalized educational resources, and providing immediate and individualized feedback (Cotton et al., 2024; Eager and Brunton, 2023; Perkins et al., 2024; McGuire et al., 2024). Likewise, in the field of advanced research, AI can streamline processes such as identifying sources or reviewing literature (Storey, 2023).
However, this potential is offset by a number of critical challenges and concerns that emerge repeatedly in the studies analyzed (Andersen et al., 2025; Aburass and Abu Rumman, 2024). These debates, ranging from academic integrity to cognitive impact and equity, are at the heart of this discussion. Below, we explore in depth the main issues associated with the use of GAI in reading and writing processes in higher education.
4.1 Impact on cognitive skills and critical thinking
The debate on the use of AI in higher education also covers its impact on the development of students’ cognitive skills and creativity. A central and recurring concern in the literature is the potential for overconfidence and excessive dependence among students, which could hinder the development of critical thinking and other fundamental skills such as self-correction and text revision (Kim et al., 2025; Marzuki et al., 2023; Chen et al., 2025; Chen and Gong, 2025; Sarwanti et al., 2024; Stojanov et al., 2024; Wang Y., 2024; Wang and Ren, 2024). Students may be inclined to rely on the quick solutions offered by GAI, rather than taking the time to understand and learn from their mistakes (Marzuki et al., 2023).
This overdependence can lead to “mental laziness” and demotivation for independent study (Sarwanti et al., 2024, p. 107). Ultimately, GAI could deprive students of authentic and meaningful learning experiences that are necessary for deep intellectual development (Revell et al., 2024).
AI, despite its capabilities, may not be as effective in developing higher-order writing skills, such as argument structure and textual coherence (Marzuki et al., 2023; Özdere, 2025; Baldrich and Domínguez-Oller, 2024; Kim et al., 2025). The work generated with these tools often describes rather than analyzes, lacks depth in evaluation, may fail to recognize key aspects of the content (Revell et al., 2024), lacks a deep contextual understanding, and fails to capture the subtleties and nuances of human language (Santiago et al., 2023; Marzuki et al., 2023; Ricart-Vayá, 2024).
When it comes to creativity, opinions are divided. Some educators fear that AI may reduce students’ creative thinking and originality by generating ideas for them (Marzuki et al., 2023; Santiago et al., 2023). However, one study suggests that human-AI collaboration can improve fluency, flexibility, and originality in creative writing (de Vicente-Yagüe-Jara et al., 2023), although it is argued that AI lacks the human discernment to know when to stop generating ideas (de Vicente-Yagüe-Jara et al., 2023).
4.2 Pedagogical and evaluative implications
GAI acts as a new lens through which educational practices are examined and transformed, becoming a kind of “multifaceted explorer” that maps new territories and challenges traditional perceptions in various fields of study. The rapid evolution of AI has highlighted the need to adapt pedagogical practices in higher education. Its emergence requires a thorough reassessment of traditional teaching strategies and assessment mechanisms (Perezchica-Vega et al., 2024; Sevnarayan and Potter, 2024; Cotton et al., 2024; Bozkurt, 2024). It is necessary to design tasks that require deep reflection and critical analysis, as these are less likely to be performed entirely with this technology (Cordero et al., 2025). Despite these considerations, most teachers have not yet made significant adjustments to their assessment mechanisms (Perezchica-Vega et al., 2024).
Teachers require ongoing training to effectively integrate GAI into their classrooms and to develop critical thinking skills in their students (Kim et al., 2025; Cordero et al., 2025; Baldrich and Domínguez-Oller, 2024). This training should cover both technical skills and ethical-philosophical debates about its use (de Vicente-Yagüe-Jara et al., 2023).
In this context, GAI literacy is essential. Students must cultivate communication skills to generate appropriate prompts and develop critical thinking to evaluate the content generated (Kim et al., 2025). Emphasis is placed on the need for digital literacy activities and the practice of guidelines for formulating accurate prompts (Ortega-Rodríguez and Pericacho-Gómez, 2025); indeed, the design of instructions for AI (prompting) is set to become one of the essential competencies (Eager and Brunton, 2023; Bozkurt, 2024).
Teachers must assume the role of guide and supervisor. Their function is necessary to resolve doubts, complement the learning process, and verify the quality of student interactions (Ricart-Vayá, 2024). GAI should be seen as a tool that amplifies human capabilities, but student performance must remain at the center of the educational process (Wang C., 2024, p. 15).
A final point of discussion focuses on the possible reduction of human interaction and collaborative learning (Almassaad et al., 2024; Pierrès et al., 2025; Alshammari, 2024). Education involves the development of social and emotional skills that could be affected by excessive technological mediation (Baldrich and Domínguez-Oller, 2024, p. 3).
The scientific literature examined in this systematic review shows that the application of AI in higher education has driven a significant pedagogical and epistemological transformation, forcing a rethinking of teaching, assessment, and training practices across the academic spectrum. The picture that emerges is complex and ambivalent. On the one hand, AI offers unprecedented opportunities for the personalization of learning, the automation of teaching tasks, and the generation of adaptive content that can support pedagogical decision-making. The results of this review confirm this potential: there is a significant improvement in the quality of student texts, especially in aspects such as coherence, discursive organization, lexical richness, and argumentation, as well as a strengthening of student autonomy in the self-editing of their texts.
However, this potential clashes with significant practical and ethical challenges that permeate the entire academic debate. Recent studies indicate that low teacher adoption is largely explained by concerns about academic integrity, the possibility of plagiarism, and the fear of being replaced by AI. At the institutional level, new competency frameworks and assessment methods are also required to integrate it without compromising the equity or originality of learning.
The evidence reviewed shows that, alongside the benefits, challenges such as student dependency, the possible reduction of metacognitive thinking, and profound ethical dilemmas linked to plagiarism and the redefinition of authorship are emerging strongly.
In this scenario of tension between opportunity and risk, the emerging theoretical frameworks identified in this review not only conceptualize the problem but also offer complementary and pragmatic solutions that allow for the detection of risks of impersonation and address concerns about AI’s ability to emulate human writing and protect academic authenticity.
Likewise, the three theories of perceptions of GAI identified in the research help explain why the reception of these tools is so heterogeneous among academics and why institutional implementation strategies must be differentiated and context-sensitive. Understanding these perceptions is essential for designing effective policies and training programs.
This analysis highlights the imperative need to cultivate the expanded and critical literacies necessary for students, especially at the graduate level, to integrate AI without losing their own voice or epistemological rigor. This modern and specialized approach connects directly with the sociocultural vision.
In summary, the successful and responsible integration of GAI into higher education is not merely a technological issue, but fundamentally a pedagogical and ethical one. While it offers tangible educational benefits in terms of personalization, efficiency, and new learning opportunities, it also poses risks to academic integrity and equity.
Ultimately, it is essential that educational institutions take a proactive role. Training in AI skills, both for students and teachers, and the redefinition of assessment methods are key conditions for successful integration.
The transformation of universities through AI cannot be left to chance; it demands robust institutional policies, sustained investment in professional development, and ongoing, open dialogue on ethics. Only then can this process be guided to balance innovation with equity and ensure that technology serves to enhance, rather than undermine, students’ intellectual and critical development.
5 Limitations of the research and prospective of the study
One of the most notable limitations of the study is the emerging nature of the phenomenon. GAI in higher education is recent (2022–2023) and is constantly evolving and being updated. In this regard, this study has reviewed the literature comprising empirical studies and theoretical reviews published between January 1, 2023, and March 7, 2025. Therefore, much of the literature is still in its infancy, which limits the possibility of establishing consolidated, long-term trends. For this reason, it will be necessary to conduct a new review of the literature published in the next 2 or 3 years.
On the other hand, it should be noted that although the two most comprehensive databases available were used for the search, relevant articles may have been omitted and will need to be located in discipline-specific databases such as ERIC for Education or PsycINFO for Psychology. However, given the degree of overlap between the aforementioned databases, the sample may be sufficiently representative. There is also a linguistic limitation, as the restriction to English and Spanish may have excluded relevant research published in other major languages, which could introduce geographical or cultural bias.
One of the future lines of research concerns the longitudinal impact of GAI on student learning. The aim is to design and implement empirical studies that can measure, in the medium and long term, how GAI influences critical thinking, creativity, autonomy, and the improvement of students’ academic writing.
On the other hand, we propose evaluating innovative teaching practices that inform the design of teaching models and assessment systems which integrate GAI in an ethical manner, guaranteeing academic authenticity and integrity as well as the improvement and development of students’ metacognitive skills.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the author, without undue reservation.
Author contributions
AS-T: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. JD-O: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. JB-E: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. RG-D: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. AG-R: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing.
Funding
The author(s) declared that financial support was received for this work and/or its publication. This research has been derived from the R+D+I Project entitled “Educational transformation: Exploring the impact of Artificial Intelligence on the reading and writing skills of university students” (ref. PID2023-151419OB-I00), funded by: Ministry of Science, Innovation and University (MCIU)/State Research Agency (AEI)/10.13039/501100011033 and, as appropriate, by “ERDF A way of making Europe,” by the “European Union” or by the “European Union NextGenerationEU/PRTR.” The authors also thank the Vice-Rectorate for Degrees and Teaching Innovation at the University of Almería for the award of the Teaching Innovation Projects “Teaching cartographies for critical academic literacy: integrative approaches to generative artificial intelligence at the university” (25_26_1_23C) and “PLE-ALAI. The personal learning environment for academic literacy and artificial intelligence” (24_25_1_61C) in the Call for Teaching Innovation Projects for the 2024-2025 and 2025-2026 biennia.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The handling editor GL declared a past co-authorship with the author JB.
Generative AI statement
The author(s) declared that generative AI was used in the creation of this manuscript.
A preliminary thematic content analysis of the articles was carried out using the AI tool NotebookLM.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Abduljawad, S. A. (2024). Investigating the impact of ChatGPT as an AI tool on ESL writing: Prospects and challenges in Saudi Arabian higher education. Int. J. Comput. Assist. Lang. Learn. Teaching. 14:19. doi: 10.4018/IJCALLT.367276
Aburass, S., and Abu Rumman, M. (2024). Authenticity in authorship: The Writer’s Integrity framework for verifying human-generated text. Ethics Information Technol. 26:62. doi: 10.1007/s10676-024-09797-z
Alkamel, M. A. A., and Alwagieh, N. A. S. (2024). Utilizing an adaptable artificial intelligence writing tool (ChatGPT) to enhance academic writing skills among Yemeni university EFL students. Soc. Sci. Humanities Open 10:101095. doi: 10.1016/j.ssaho.2024.101095
Almassaad, A., Alajlan, H., and Alebaikan, R. (2024). Student perceptions of generative artificial intelligence: Investigating utilization, benefits, and challenges in higher education. Systems 12:385. doi: 10.3390/systems12100385
Al-Raimi, M., Mudhsh, B. A., Al-Yafaei, Y., and Al-Maashani, S. (2024). Utilizing artificial intelligence tools for improving writing skills: Exploring Omani EFL learners’ perspectives. Forum Linguistic Stud. 6:1177. doi: 10.59400/fls.v6i2.1177
Alrajhi, A. S. (2024). Artificial intelligence pedagogical chatbots as L2 conversational agents. Cogent Educ. 11:2327789. doi: 10.1080/2331186X.2024.2327789
Alsalem, M. S. (2024). EFL teachers’ perceptions of the use of an AI grading tool (CoGrader) in English writing assessment at Saudi universities: An activity theory perspective. Cogent Educ. 11:2430865. doi: 10.1080/2331186X.2024.2430865
Alshammari, J. (2024). Revolutionizing EFL learning through ChatGPT: A qualitative study. Amazonia Invest. 13, 208–221. doi: 10.34069/AI/2024.82.10.17
Alvero, A. J., Lee, J., Regla-Vargas, A., Kizilcec, R. F., Joachims, T., and Antonio, A. L. (2024). Large language models, social demography, and hegemony: Comparing authorship in human and synthetic text. J. Big Data 11:138. doi: 10.1186/s40537-024-00986-7
Al-Zubaidi, K., Jaafari, M., and Touzani, F. Z. (2024). Impact of ChatGPT on academic writing at Moroccan universities. Arab World English J. 1, 4–25. doi: 10.24093/awej/ChatGPT.1
Andersen, J. P., Degn, L., Fishberg, R., Graversen, E. K., Horbach, S. P. J. M., Kalpazidou Schmidt, E., et al. (2025). Generative Artificial Intelligence (GenAI) in the research process – a survey of researchers’ practices and perceptions. Technol. Soc. 81:102813. doi: 10.1016/j.techsoc.2025.102813
Arbona, G. (2024). La escritura creativa y el estímulo de la voz en la universidad. El Máster en escritura creativa de la Universidad Complutense de Madrid [Creative writing and voice development at university. The Master’s Degree in Creative Writing at the Complutense University of Madrid]. RILCE. Rev. Filología Hispánica. 40, 206–233. Spanish. doi: 10.15581/rilce.40.1.206-233
Baldrich, K., and Domínguez-Oller, J. C. (2024). El uso de ChatGPT en la escritura académica: Un estudio de caso en educación [The use of ChatGPT in academic writing: A case study in education]. Pixel-Bit. Rev. Med. Educ. 71:10. Spanish. doi: 10.12795/pixelbit.103527
Banihashem, S. K., Taghizadeh Kerman, N., Noroozi, O., Moon, J., and Drachsler, H. (2024). Feedback sources in essay writing: Peer-generated or AI-generated feedback? Int. J. Educ. Technol. High. Educ. 21:23. doi: 10.1186/s41239-024-00455-4
Baron, P. (2024). Are AI detection and plagiarism similarity scores worthwhile in the age of ChatGPT and other Generative AI? SOTL South 8, 151–179. doi: 10.36615/sotls.v8i2.411
Barrett, A., and Pack, A. (2023). Not quite eye to A.I: Student and teacher perspectives on the use of generative artificial intelligence in the writing process. Int. J. Educ. Technol. High. Educ. 20, 1–24. doi: 10.1186/s41239-023-00427-0
Behrens, K. A., Marbach-Ad, G., and Kocher, T. D. (2024). AI in the Genetics Classroom: A useful tool but not a replacement for creative writing. J. Sci. Educ. Technol. 34, 621–625. doi: 10.1007/s10956-024-10160-6
Bellas, F. (2024). “Educar para la inteligencia artificial: Un enfoque en perspectiva [Educating for Artificial Intelligence: A Perspective Approach],” in Educación e inteligencia artificial: Horizontes de transformación [Education and artificial intelligence: Horizons of transformation], eds O. Flores-Alarcia and L. Fornons Casol (Madrid: Dykinson), 29–48. Spanish.
Biju, N., Abdelrasheed, N. S. G., Bakiyeva, K., Prasad, K. D. V., and Jember, B. (2024). Which one? AI-assisted language assessment or paper format: An exploration of the impacts on foreign language anxiety, learning attitudes, motivation, and writing performance. Language Testing Asia 45, 1–24. doi: 10.1186/s40468-024-00322-z
Bozkurt, A. (2024). GenAI et al.: Cocreation, authorship, ownership, academic ethics and integrity in a time of generative AI. Open Praxis 16, 1–10. doi: 10.55982/openpraxis.16.1.654
Braun, V., and Clarke, V. (2006). Using thematic analysis in psychology. Qual. Res. Psychol. 3, 77–101. doi: 10.1191/1478088706qp063oa
Cassany, D., and Morales, O. A. (2009). “Leer y escribir en la universidad: Los géneros científicos [Reading and writing at university: Scientific genres],” in Para ser letrados: Voces y miradas sobre la lectura [To Be Literate: Voices and Perspectives on Reading], ed. D. Cassany (Barcelona: Paidós), 109–128. Spanish.
Celik, F., Yangın Ersanlı, C., and Arslanbay, G. (2024). Does AI simplification of authentic blog texts improve reading comprehension, inferencing, and anxiety? A one-shot intervention in Turkish EFL context. Int. Rev. Res. Open Distributed Learn. 25, 288–299. doi: 10.19173/irrodl.v25i3.7779
Chan, S. T. S., Lo, N. P. K., and Wong, A. M. H. (2024). Enhancing university level English proficiency with generative AI: Empirical insights into automated feedback and learning outcomes. Contemp. Educ. Technol. 16, 1–17. doi: 10.30935/cedtech/15607
Chapman, K. (2021). Characteristics of systematic reviews in the social sciences. J. Acad. Librarianship. 47, 1–9. doi: 10.1016/j.acalib.2021.102396
Charpentier-Jiménez, W. (2024). Assessing artificial intelligence and professors’ calibration in English as a foreign language writing courses at a Costa Rican public university [Evaluación de la inteligencia artificial y de la calibración de docentes en los cursos de escritura de inglés como lengua extranjera en una universidad pública costarricense]. Rev. Actualidades Invest. Educ. 24, 1–25. Spanish. doi: 10.15517/aie.v24i1.55612
Chauke, T. A., Mkhize, T. R., Methi, L., and Dlamini, N. (2024). Postgraduate students’ perceptions on the benefits associated with artificial intelligence tools for academic success: The use of the ChatGPT AI tool. J. Curriculum Stud. Res. 6, 44–59. doi: 10.46303/jcsr.2024.41
Chen, A., Xiang, M., Zhou, J., Jia, J., Shang, J., Li, X., et al. (2025). Unpacking help-seeking process through multimodal learning analytics: A comparative study of ChatGPT vs Human expert. Comput. Educ. 226:105198. doi: 10.1016/j.compedu.2024.105198
Chen, C., and Gong, Y. (2025). The role of AI-assisted learning in academic writing: A mixed-methods study on Chinese as a second language students. Educ. Sci. 15:141. doi: 10.3390/educsci15020141
Chiu, S., Birch, D. W., Shi, X., Sharma, A. M., and Karmali, S. (2011). Effect of sleeve gastrectomy on gastroesophageal reflux disease: A systematic review. Surg. Obesity Related Dis. 7, 510–515. doi: 10.1016/j.soard.2010.09.011
Codina, L. (2018). Revisiones bibliográficas sistematizadas: Procedimientos generales y Framework para Ciencias Humanas y Sociales [Systematic literature reviews: General procedures and framework for the humanities and social sciences]. Barcelona: Universitat Pompeu Fabra. Spanish.
Codina, L. (2023). Revisiones de la literatura con aproximación sistemática: scoping reviews [slides-presentation]. Spanish. Available online at: https://www.lluiscodina.com/revisiones-sistematicas-literatura-2023/ (Accessed September 10, 2025).
Cordero, J., Torres-Zambrano, J., and Cordero-Castillo, A. (2025). Integration of generative artificial intelligence in higher education: Best practices. Educ. Sci. 15:32. doi: 10.3390/educsci15010032
Cotton, D. R. E., Cotton, P. A., and Shipway, J. R. (2024). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innov. Educ. Teach. Int. 61, 228–239. doi: 10.1080/14703297.2023.2190148
Črček, N., and Patekar, J. (2023). Writing with AI: University students’ use of ChatGPT. J. Lang. Educ. 9, 128–138. doi: 10.17323/jle.2023.17379
Cruz Argudo, F., García Varea, I., Martínez Carrascal, J. A., Ruiz Martínez, A., Ruiz Martínez, P. M., Sánchez Campos, A., et al. (2024). La inteligencia artificial generativa en la docencia universitaria: Oportunidades, desafíos y recomendaciones [Generative artificial intelligence in university teaching: Opportunities, challenges and recommendations]. Spanish. Available online at: https://www.crue.org/wp-content/uploads/2024/03/Crue-Digitalizacion_IA-Generativa.pdf (Accessed September 5, 2025).
Dai, W., Tsai, Y.-S., Lin, J., Aldino, A., Jin, H., Li, T., et al. (2024). Assessing the proficiency of large language models in automatic feedback generation: An evaluation study. Comput. Educ. Artificial Intell. 7:100299. doi: 10.1016/j.caeai.2024.100299
Dakakni, D., and Safa, N. (2023). Artificial intelligence in the L2 classroom: Implications and challenges on ethics and equity in higher education: A 21st century Pandora’s box. Comput. Educ. Artificial Intell. 5:100179. doi: 10.1016/j.caeai.2023.100179
de Vicente-Yagüe-Jara, M.-I., López-Martínez, O., Navarro-Navarro, V., and Cuéllar-Santiago, F. (2023). Escritura, creatividad e inteligencia artificial. ChatGPT en el contexto universitario [Writing, creativity, and artificial intelligence. ChatGPT in the university context]. Comunicar 31, 47–57. Spanish. doi: 10.3916/C77-2023-04
Dizon, G., and Gold, J. (2023). Exploring the effects of Grammarly on EFL students’ foreign language anxiety and learner autonomy. Jalt Call J. 19, 299–316. doi: 10.29140/jaltcall.v19n3.1049
Eager, B., and Brunton, R. (2023). Prompting higher education towards AI-augmented teaching and learning practice. J. Univer. Teach. Learn. Pract. 20:2. doi: 10.53761/1.20.5.02
El Bahri, N., Itahriouan, Z., Abtoy, A., and Ouazzani Jamil, M. (2024). Personality analysis of students’ writing in social media-based learning environments. IEEE Access. 12:3491934. doi: 10.1109/ACCESS.2024.3491934
Elhassan, S. E., Sajid, M. R., Syed, A. M., Fathima, S. A., Khan, B. S., and Tamim, H. (2025). Assessing familiarity, usage patterns, and attitudes of medical students toward ChatGPT and other chat-based AI apps in medical education: Cross-sectional questionnaire study. JMIR Med. Educ. 11:e63065. doi: 10.2196/63065
Esmaeil, A. A. A., Kiflee@Dzulkifli, D. N. A., Maakip, I., Matanluk, O. O., and Marshall, S. (2023). Understanding student perception regarding the use of ChatGPT in their argumentative writing: A qualitative inquiry. Malaysian J. Commun. 39, 150–165. doi: 10.17576/JKMJC-2023-3904-08
Ezeiza, A. (2023). Retomar el foro virtual como contexto sociodiscursivo para el desarrollo de la escritura académica universitaria [Reinstating the virtual forum as a socio-discursive context for the development of university academic writing]. Perspect. Educ. Formación Profesores. 62, 140–164. Spanish. doi: 10.4151/07189729-Vol.62-Iss.2-Art.140
Fakir, S. A., Marnaoui, S., and Al Anqodi, H. A. (2024). Written assignments and generative artificial intelligence: Challenges and considerations for English education major students at A'Sharqiyah University, Oman. Arab World English J. 15, 22–38. doi: 10.24093/awej/vol15no4.2
Fan, Y., Tang, L., Le, H., Shen, K., Tan, S., Zhao, Y., et al. (2025). Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance. Br. J. Educ. Technol. 56, 489–530. doi: 10.1111/bjet.13544
Fiialka, S., Kornieva, Z., and Honcharuk, T. (2023). ChatGPT in Ukrainian education: Problems and prospects. Int. J. Emerg. Technol. Learn. 18, 236–250. doi: 10.3991/ijet.v18i17.42215
Fleckenstein, J., Meyer, J., Jansen, T., Keller, S. D., Köller, O., and Möller, J. (2024). Do teachers spot AI? Evaluating the detectability of AI-generated texts among student essays. Comput. Educ. Artificial Intell. 6:100209. doi: 10.1016/j.caeai.2024.100209
Fornons del Arco, C. L., and Bravo, I. (2024). "Irrupción de la IA en educación superior [The emergence of AI in higher education]," in Educación e Inteligencia Artificial: Horizontes de transformación [Education and Artificial Intelligence: Horizons of Transformation], eds O. Flores Alarcia and L. Fornons Casol (Madrid: Dykinson), 109–124. Spanish.
Gasaymeh, A.-M. M., Beirat, M. A., and Abu Qbeita, A. A. (2024). University students’ insights of generative artificial intelligence (AI) writing tools. Educ Sci. 14:1062. doi: 10.3390/educsci14101062
Goyanes, M., and De-Marcos, L. (2025). Protocolo metodológico para el desarrollo de análisis de contenido asistido por inteligencia artificial fiable y válido: Guía práctica con ChatGPT [Methodological protocol for the development of reliable and valid AI-assisted content analysis: A practical guide with ChatGPT]. Anuario ThinkEPI 19:e19a07. Spanish. doi: 10.3145/thinkepi.2025.e19a07
Gralha, J. G., and Pimentel, A. S. (2024). Gotcha GPT: Ensuring the integrity in academic writing. J. Chem. Inf. Model. 64, 8091–8097. doi: 10.1021/acs.jcim.4c01203
Hassoulas, A., Powell, N., Roberts, L., Umla-Runge, K., Gray, L., and Coffey, M. J. (2023). Investigating marker accuracy in differentiating between university scripts written by students and those produced using ChatGPT. J. Appl. Learn. Teach. 6. doi: 10.37074/jalt.2023.6.2.13
Hesse, F., and Helm, G. (2025). Writing with AI in and beyond teacher education: Exploring subjective training needs of student teachers across five subjects. J. Digit. Learn. Teacher Educ. 41, 21–36. doi: 10.1080/21532974.2024.2431747
Ho, C. C. (2024). Using AI-generative tools in tertiary education: Reflections on their effectiveness in improving tertiary students' English writing abilities. Online Learn. 28, 33–54. doi: 10.24059/olj.v28i3.4632
Juanda, J., and Afandi, I. (2024). Assessing text comprehension proficiency: Indonesian higher education students vs ChatGPT. XLinguae 17, 49–69. doi: 10.18355/XL.2024.17.01.04
Karataş, F., Yaşar Abedi, F., Ozek Gunyel, F., Karadeniz, D., and Kuzgun, Y. (2024). Incorporating AI in foreign language education: An investigation into ChatGPT’s effect on foreign language learners. Educ. Information Technol. 29, 19343–19366. doi: 10.1007/s10639-024-12574-6
Kim, J., Yu, S., Detrick, R., and Li, N. (2025). Exploring students’ perspectives on Generative AI-assisted academic writing. Educ. Information Technol. 30, 1265–1300. doi: 10.1007/s10639-024-12878-7
Kohnke, L. (2024). Exploring EAP students’ perceptions of GenAI and traditional grammar-checking tools for language learning. Comput. Educ. Artificial Intell. 7, 100279. doi: 10.1016/j.caeai.2024.100279
Kong, S. C., Cheung, W. M. Y., and Zhang, G. (2021). Evaluation of an artificial intelligence literacy course for university students with diverse study backgrounds. Comput. Educ. Artificial Intell. 2:100026. doi: 10.1016/j.caeai.2021.100026
Kurt, G., and Kurt, Y. (2024). Enhancing L2 writing skills: ChatGPT as an automated feedback tool. J. Information Technol. Educ. Res. 23, 623–650. doi: 10.28945/5370
Laborda, J. G., Royo, T. M., and Madarova, S. (2024). Towards a taxonomy of artificial intelligence in teaching writing in a foreign language. S. Afr. J. Educ. 44, 1–8. doi: 10.15700/saje.v44n4a2540
Lee, Y.-J., Davis, R. O., and Lee, S. O. (2024). University students’ perceptions of artificial intelligence-based tools for English writing courses. Online J. Commun. Media Technol. 14:e202412. doi: 10.30935/ojcmt/14195
Liang, J., Huang, F., and Teo, T. (2024). Understanding Chinese university EFL learners’ perceptions of AI in English writing. Educ. Information Technol. 29, 19343–19366. doi: 10.4018/IJCALLT.358918
Malik, A. R., Pratiwi, Y., Andajani, K., Numertayasa, I. W., Suharti, S., Darwis, A., et al. (2023). Exploring artificial intelligence in academic essay: Higher education student’s perspective. Int. J. Comput. Assist. Lang. Learn. Teach. 14, 1–16. doi: 10.4018/IJCALLT.358918
Mardini, I. D., Quintero, M. C. G., Viloria, N., Percybrooks, B., Robles, N., and Villalba, R. (2024). A deep-learning-based grading system (ASAG) for reading comprehension assessment by using aphorisms as open-answer-questions. Educ. Information Technol. 29, 4565–4590. doi: 10.1007/s10639-023-11890-7
Marzuki, Widiati, U., Rusdin, D., Darwin, and Indrawati, I. (2023). The impact of AI writing tools on the content and organization of students' writing: EFL teachers' perspective. Cogent Educ. 10:2236469. doi: 10.1080/2331186X.2023.2236469
McGuire, A., Qureshi, W., and Saad, M. (2024). A constructivist model for leveraging GenAI tools for individualized, peer-simulated feedback on student writing. Int. J. Technol. Educ. 7, 326–352. doi: 10.46328/ijte.639
Mendoza, K. K. R., Pedroza Zúñiga, L. H., and López García, A. Y. (2024). Criação e julgamento de itens: ChatGPT como designer e juiz [Item creation and judging: ChatGPT as designer and judge]. Texto Livre 17:e51222. Portuguese. doi: 10.1590/1983-3652.2024.51222
Mohammad, T., Nazim, M., Alzubi, A. A. F., and Khan, S. I. (2024). Examining EFL students' motivation level in using QuillBot to improve paraphrasing skills. World J. English Lang. 14, 501–513. doi: 10.5430/wjel.v14n1p501
Mohammed, S. J., and Khalid, M. W. (2025). Under the world of AI-generated feedback on writing: Mirroring motivation, foreign language peace of mind, trait emotional intelligence, and writing development. Lang. Testing Asia 15:7. doi: 10.1186/s40468-025-00343-2
Mosleh, R., Jarrar, Q., Jarrar, Y., Tazkarji, M., and Hawash, M. (2023). Medicine and pharmacy students’ knowledge, attitudes, and practice regarding artificial intelligence programs: Jordan and West Bank of Palestine. Adv. Med. Educ. Pract. 14, 1391–1400. doi: 10.2147/AMEP.S433255
Moussa, A., and Belhiah, H. (2024). Beyond syntax: Exploring Moroccan undergraduate EFL learners' engagement with AI-assisted writing. Arab World Engl. J. 2024, 138–155. doi: 10.24093/awej/ChatGPT.9
Mudawy, A. M. A. (2024). Investigating EFL faculty members’ perceptions of integrating artificial intelligence applications to improve the research writing process: A case study at Majmaah university. Arab World Engl. J. 2024, 169–183. doi: 10.24093/awej/ChatGPT.11
Muslimin, A. I., Mukminatien, N., and Ivone, F. M. (2024). Evaluating Cami AI across SAMR stages: Students’ achievement and perceptions in EFL writing instruction. Online Learn. 28, 1–19. doi: 10.24059/olj.v28i2.4246
Nemt-allah, M., Khalifa, W., Badawy, M., Elbably, Y., and Ibrahim, A. (2024). Validating the ChatGPT usage scale: Psychometric properties and factor structures among postgraduate students. BMC Psychol. 12:497. doi: 10.1186/s40359-024-01983-4
Nguyen, A., Hong, Y., Dang, B., and Huang, X. (2024). Human-AI collaboration patterns in AI-assisted academic writing. Stud. High. Educ. 49, 847–864. doi: 10.1080/03075079.2024.2323593
Ortega-Rodríguez, P. J., and Pericacho-Gómez, F. J. (2025). La utilidad didáctica percibida del ChatGPT por parte del alumnado universitario [The perceived educational usefulness of ChatGPT by university students]. Pixel-Bit. Rev. Med. Educ. 72, 159–178. doi: 10.12795/pixelbit.109778
Ou, A. W., Khuder, B., Franzetti, S., and Negretti, R. (2024a). Conceptualising and cultivating Critical GAI Literacy in doctoral academic writing. J. Sec. Lang. Writ. 66:101156. doi: 10.1016/j.jslw.2024.101156
Ou, A. W., Stöhr, C., and Malmström, H. (2024b). Academic communication with AI-powered language tools in higher education: From a post-humanist perspective. System 121:103225. doi: 10.1016/j.system.2024.103225
Özdere, M. (2025). AI in academic writing: Assessing the effectiveness, grading consistency, and student perspectives of ChatGPT for EFL students. Int. J. Technol. Educ. 8, 123–154. doi: 10.46328/ijte.1001
Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., et al. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 372:n71. doi: 10.1136/bmj.n71
Perezchica-Vega, J. E., Sepúlveda-Rodríguez, J. A., and Román-Méndez, A. D. (2024). Inteligencia artificial generativa en la educación superior: Usos y opiniones de los profesores [Generative artificial intelligence in higher education: Uses and opinions of teachers]. Eur. Public Soc. Innov. Rev. 9, 1–20. Spanish. doi: 10.31637/epsir-2024-593
Perkins, M. (2023). Academic Integrity considerations of AI Large Language Models in the post-pandemic era: ChatGPT and beyond. J. Univer. Teach. Learn. Pract. 20:7. doi: 10.53761/1.20.02.07
Perkins, M., Roe, J., Postma, D., McGaughran, J., and Hickerson, D. (2024). Detection of GPT-4 generated text in higher education: Combining academic judgement and software to identify generative AI tool misuse. J. Acad. Ethics 22, 89–113. doi: 10.1007/s10805-023-09492-6
Pierrès, O., Darvishy, A., and Christen, M. (2025). Exploring the role of generative AI in higher education: Semi-structured interviews with students with disabilities. Educ. Inf. Technol. 30, 8923–8952. doi: 10.1007/s10639-024-13134-8
Playfoot, D., Quigley, M., and Thomas, A. G. (2024). Hey ChatGPT, give me a title for a paper about degree apathy and student use of AI for assignment writing. Internet High. Educ. 62:100950. doi: 10.1016/j.iheduc.2024.100950
Rababah, L. M., Rababah, M. A., and Al-Khawaldeh, N. N. (2024). Graduate students' ChatGPT experience and perspectives during thesis writing. Int. J. Eng. Pedagogy 14, 22–35. doi: 10.3991/ijep.v14i3.48395
Rafida, T., Suwandi, S., and Ananda, R. (2024). EFL students’ perception in Indonesia and Taiwan on using artificial intelligence to enhance writing skills. J. Ilmiah Peuradeun 12, 987–1016. doi: 10.26811/peuradeun.v12i3.1520
Revell, T., Yeadon, W., Cahilly-Bretzin, G., Clarke, I., Manning, G., Jones, J., et al. (2024). ChatGPT versus human essayists: An exploration of the impact of artificial intelligence for authorship and academic integrity in the humanities. Int. J. Educ. Integrity 20:18. doi: 10.1007/s40979-024-00161-8
Ricart-Vayá, A. (2024). ChatGPT como herramienta para mejorar la expresión escrita en inglés como lengua extranjera [ChatGPT as a tool to improve written expression in English as a foreign language]. Íkala Rev. Lenguaje Cult. 29, 1–16. Spanish. doi: 10.17533/udea.ikala.354584
Santiago, C. S., Embang, S. I., Acanto, R. B., Ambojia, K. W. P., Aperocho, M. D. B., Balilo, B. B., et al. (2023). Utilization of writing assistance tools in research in selected higher learning institutions in the Philippines: A text mining analysis. Int. J. Learn. Teach. Educ. Res. 22, 259–284. doi: 10.26803/ijlter.22.11.14
Sarwanti, S., Sariasih, Y., Rahmatika, L., Islam, M. M., and Riantina, E. M. (2024). Are they literate on ChatGPT? University language students’ perceptions, benefits and challenges in higher education learning. Online Learn. J. 28, 105–130. doi: 10.24059/olj.v28i3.4599
Sevnarayan, K., and Potter, M.-A. (2024). Generative Artificial Intelligence in distance education: Transformations, challenges, and impact on academic integrity and student voice. J. Appl. Learn. Teach. 7:41. doi: 10.37074/jalt.2024.7.1.41
Stojanov, A., Liu, Q., and Koh, J. H. L. (2024). University students’ self-reported reliance on ChatGPT for learning: A latent profile analysis. Comput. Educ. Artificial Intell. 6:100243. doi: 10.1016/j.caeai.2024.100243
Storey, V. A. (2023). AI technology and academic writing: Knowing and mastering the “craft skills”. Int. J. Adult Educ. Technol. 14, 1–15. doi: 10.4018/IJAET.325795
Strauss, A., and Corbin, J. (2014). Basics of qualitative research: Techniques and procedures for developing grounded theory, 4th Edn. Thousand Oaks, CA: Sage Publications.
Sweeney, S. (2023). Who wrote this? Essay mills and assessment — considerations regarding contract cheating and AI in higher education. Int. J. Manage. Educ. 21:100818. doi: 10.1016/j.ijme.2023.100818
Thandla, S. R., Armstrong, G. Q., Menon, A., Shah, A., Gueye, D. L., Harb, C., et al. (2024). Comparing new tools of artificial intelligence to the authentic intelligence of our global health students. BioData Min. 17:58. doi: 10.1186/s13040-024-00408-7
Torres-Gómez, A. (2024). Necesidades de información y percepción sobre las herramientas de inteligencia artificial en estudiantes de doctorado en investigación educativa en Tlaxcala, México [Information and perception needs regarding artificial intelligence tools among doctoral students in educational research in Tlaxcala, Mexico]. Invest. Bibliotecol. Archivonomía Bibliotecol. Información 38, 79–98. Spanish. doi: 10.22201/iibi.24488321xe.2024.98.58852
UNESCO. (2024). Guía para el uso de IA generativa en educación e investigación [Guide to the use of generative AI in education and research]. Paris: United Nations Educational, Scientific and Cultural Organization. Spanish.
UNESCO. (2025). The challenges of AI in higher education and institutional responses: Is there room for competency frameworks? Paris: United Nations Educational, Scientific and Cultural Organization.
Waltzer, T., Pilegard, C., and Heyman, G. D. (2024). Can you spot the bot? Identifying AI-generated writing in college essays. Int. J. Educ. Integrity 20:11. doi: 10.1007/s40979-024-00158-3
Wang, C. (2024). Exploring students' generative AI assisted writing processes: Perceptions and experiences from native and nonnative English speakers. Technol. Knowledge Learn. 30, 1825–1846. doi: 10.1007/s10758-024-09744-3
Wang, L., and Ren, B. (2024). Enhancing academic writing in a linguistics course with generative AI: An empirical study in a higher education institution in Hong Kong. Educ. Sci. 14:1329. doi: 10.3390/educsci14121329
Wang, Y. (2024). Cognitive and sociocultural dynamics of self-regulated use of machine translation and generative AI tools in academic EFL writing. System 126:103505. doi: 10.1016/j.system.2024.103505
Wise, B., Emerson, L., Van Luyn, A., Dyson, B., Bjork, C., and Thomas, S. E. (2024). A scholarly dialogue: Writing scholarship, authorship, academic integrity and the challenges of AI. High. Educ. Res. Dev. 43, 578–590. doi: 10.1080/07294360.2023.2280195
Keywords: reading, writing, artificial intelligence, higher education, students
Citation: Sanz-Tejeda A, Domínguez-Oller JC, Baldaquí-Escandell JM, Gómez-Díaz R and García-Rodríguez A (2026) The impact of generative AI on academic reading and writing: a synthesis of recent evidence (2023–2025). Front. Educ. 10:1711718. doi: 10.3389/feduc.2025.1711718
Received: 23 September 2025; Revised: 25 November 2025; Accepted: 30 November 2025;
Published: 06 January 2026.
Edited by:
Gemma Lluch, University of Valencia, Spain
Reviewed by:
M.ª Pilar Núñez Delgado, University of Granada, Spain
Hugo Heredia Ponce, University of Cádiz, Spain
Ricardo Pereira, Universidade Federal de Santa Catarina, Brazil
Copyright © 2026 Sanz-Tejeda, Domínguez-Oller, Baldaquí-Escandell, Gómez-Díaz and García-Rodríguez. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Aránzazu Sanz-Tejeda, aranzazu.sanz@uclm.es