REVIEW article

Front. Educ., 18 November 2025

Sec. Digital Education

Volume 10 - 2025 | https://doi.org/10.3389/feduc.2025.1701238

This article is part of the Research Topic: The Role of AI in Transforming Literacy: Insights into Reading and Writing Processes.

Artificial intelligence in academic literacy: empirical evidence on reading and writing practices in higher education

  • 1Department of Didactics of Language and Literature, Faculty of Education, University of Almería, Almería, Spain
  • 2Department of Didactics of Language and Literature, Faculty of Education, University of Granada, Granada, Spain

Academic reading and writing constitute fundamental competences in higher education that enable access to disciplinary knowledge and participation in academic communities. The incorporation of Artificial Intelligence has transformed these practices, yet empirical research examining validated instruments to measure students' beliefs, perceptions, and practices is still emerging. This systematic review followed PRISMA 2020 guidelines to identify empirical studies that examine the use of AI tools in university students' reading and writing processes, specifically locating surveys, questionnaires, and methodologies with clear information on their design, validation, and application adaptable for future studies in academic literacy. The search in Scopus and Web of Science between January 2023 and April 2025 yielded 4,650 initial results, with 55 studies meeting inclusion criteria after rigorous screening based on empirical content with validated instruments and minimum sample requirements. The corpus reveals a predominance of quantitative methodologies with instruments developed and subjected to rigorous statistical validation, including the ChatGPT Usage Scale, the AILS-CCS scale measuring four AI literacy dimensions, the SIUAIT index for institutional integration, and adaptations of technology acceptance models (TAM, TPB, UTAUT). Students employ AI primarily for idea generation, text structuring, and revision, driven by perceived usefulness, intrinsic motivation, and facilitating conditions. The findings point to five critical themes: the impact of AI on academic skills development, ethical concerns regarding integrity and authorship, the need for critical digital literacy, disparities between perceptions and preparedness, and the necessity of redefining assessment methods.
These validated instruments provide a methodological foundation for future research, as institutions require empirical tools to guide pedagogical integration of AI as a complementary resource within academic literacy frameworks.

Introduction

Academic reading and writing in higher education

Academic reading and writing are fundamental competences in the context of higher education, as they enable access to disciplinary knowledge, the development of complex thinking, and participation in academic communities (Carlino, 2013; Parodi, 2010). These skills go beyond text comprehension and written production, since they also involve higher-order cognitive processes such as argumentation, synthesis, and the effective communication of ideas, among others (Cassany, 2006).

A considerable body of research has highlighted the diverse difficulties that university students face in relation to academic literacy practices, due to factors such as the variety of discourse genres, the cognitive demands of written tasks, and the lack of specific training in these competences (Carlino, 2013; Navarro, 2020). In this regard, academic literacy is conceived as a situated, gradual, and discipline-mediated process, which requires specific pedagogical strategies developed within the university setting.

In recent years, the increasing integration of digital technologies in university settings has opened new avenues for teaching and learning. Among these innovations, Artificial Intelligence has become particularly prominent, not only for its capacity to process and generate text, but also for its potential to transform how students read, interpret, and produce academic knowledge.

Artificial Intelligence (AI) in academic contexts

The incorporation of Artificial Intelligence (hereafter, AI) in higher education has radically transformed teaching and learning practices, as well as the ways of accessing, producing, and evaluating knowledge. Tools such as intelligent tutoring systems, writing assistants, and language models such as GPT-4 have broadened the possibilities of student support by offering suggestions, corrections, and content generation (Luckin et al., 2016; Holmes et al., 2022).

Although these technologies offer significant opportunities, they also pose ethical, pedagogical, and epistemological challenges. Among them are concerns about authorship, the assessment of students' own learning, and technological dependence (Selwyn, 2023; Habibi et al., 2024). Likewise, the spread of AI generates tensions with traditional academic practices, particularly in writing, where originality, critical reflection, and disciplinary expertise are highly valued.

Academic reading and writing with AI

The use of AI to support academic reading and writing has been consolidated through platforms and applications that provide automatic summaries, grammar checking, style suggestions, and text generation. Tools such as Grammarly, ChatGPT, or Elicit can act as mediators between students and knowledge, facilitating the comprehension of complex texts and improving written production (Zawacki-Richter et al., 2019).

However, the use of AI in these tasks requires a critical digital literacy that enables students to understand the scope and limitations of these technologies, as well as their ethical implications. At this point, it is important to emphasize that AI should not replace, in the words of Akgun and Greenhow (2022), the formative processes of critical reading or reflective writing, but rather complement pedagogical strategies within the framework of teaching that prioritizes meaningful learning (Blanco Fontao et al., 2024).

Furthermore, while academic literacy encompasses both reading and writing, existing research shows a greater focus on writing-related practices than on reading processes. However, academic reading plays a crucial role in developing comprehension, synthesis, and argumentation, which in turn condition writing quality. As Parodi (2010) highlights, the act of reading in academic contexts involves constructing meaning across multiple discourse levels and is inseparable from the processes of writing and reasoning. Therefore, this study adopts an integrated perspective on academic literacy that considers reading and writing as interdependent practices within higher education, both of which are now being reshaped by AI-mediated environments (Polakova and Ivenz, 2024; Rahayu et al., 2024; Wale and Kassahun, 2024).

Research on AI for academic reading and writing

Despite the increasing use of AI tools in education, empirical research on their impact on academic reading and writing is still in its early stages. Specifically, there is a shortage of studies that explore students' and teachers' beliefs, perceptions, and practices regarding the use of AI in academic literacy contexts (Holmes et al., 2022; Selwyn, 2023).

From a methodological perspective, there is an increasing need for studies that combine qualitative and quantitative approaches to better capture the complexity of the phenomenon. The purpose of the present work is precisely to examine research employing surveys, in-depth interviews, focus groups, and case studies, which can offer a richer understanding of how students use, value, and re-signify these tools in their learning processes (Luckin et al., 2016; Akgun and Greenhow, 2022; Mizumoto et al., 2024).

Previous empirical evidence provides valuable insights into these dimensions. Survey-based studies tend to capture large-scale perceptions of AI use in writing and reading tasks, while in-depth interviews and focus groups reveal students' beliefs, motivations, and ethical stances regarding AI tools. Case studies, in turn, illustrate the pedagogical integration of AI in specific contexts and disciplines. Expanding this synthesis enables a more comprehensive understanding of how AI influences students' literacy practices through both individual and collective experiences, thus reinforcing the multidimensional scope of this review (Musyaffi et al., 2024; Ngo et al., 2024; Nguyen et al., 2024).

In sum, it is essential to investigate how higher education institutions are integrating AI into their teaching and assessment policies, as well as the effects these decisions have on students' academic and professional training.

Methodology

This systematic review was conducted following the PRISMA 2020 guidelines (Page et al., 2021; Mateo, 2020) to ensure methodological quality and transparency in the literature review. Accordingly, the aim of this review is to identify empirical studies that, regardless of the discipline, examine the use of artificial intelligence tools in university students' reading and writing processes, with the purpose of locating surveys, questionnaires, and methodologies that include clear information on their design, validation, and application, and which may be adapted for future studies in the field of academic literacy in higher education. Particular attention is given to research addressing students' beliefs, attitudes, and knowledge, due to their key role in the critical integration of emerging technologies.

To this end, studies published in open access, in English or Spanish, between January 2023 and April 2025 were included, provided they contained empirical content based on validated instruments and participants in higher education. Quantitative, mixed, and qualitative designs were considered, as long as the latter included structured and well-documented data collection procedures. The studies had to explicitly address dimensions of academic literacy (reading, writing, metacognitive, or critical) in connection with AI tools. Excluded from the review were studies focused on technical applications of AI without an educational connection, research not involving university students, articles lacking clear data collection instruments, studies with quantitative samples smaller than 100 participants (except for mixed-methods research with sufficient qualitative depth), and work without empirical validation or a replicable methodology.

The search was carried out in Scopus and Web of Science, selected for their breadth, up-to-date coverage, and relevance to education. A strategy was developed based on five blocks of key terms in the TITLE-ABS-KEY field: artificial intelligence, empirical methodology, educational level, academic literacy, and cognitive and attitudinal aspects. The search equation employed is presented in Table 1.


Table 1. Search strategy and keywords used in the systematic review.

The decision to set January 2023 as the starting point of the review responds to the accelerated emergence of generative AI tools in academic contexts during late 2022, which marked a turning point in research on AI-mediated literacy. Scopus and Web of Science were selected due to their comprehensive coverage, methodological transparency, and rigorous peer-review processes, ensuring the inclusion of high-quality and validated studies.

Exclusion filters were applied using EXCLUDE (EXACTKEYWORD, “Systematic Review”) OR EXCLUDE (EXACTKEYWORD, “Thematic Analysis”) to screen out non-empirical works such as other reviews. The initial search yielded 4,650 results (4,156 in Scopus and 494 in WoS). After applying inclusion filters, 969 studies were excluded, resulting in 3,589 records from Scopus and 92 from WoS. Duplicates were then identified and removed using Mendeley Reference Manager (v2.93), with 97 duplicated records detected. Once duplicates were excluded, the first screening, based on title and abstract reading, was conducted independently by two researchers, with discrepancies resolved through joint review. This phase resulted in 1,591 articles selected for more detailed screening.

The second screening phase was conducted using the Rayyan platform (Baldrich et al., 2024), involving full abstract reading and, when necessary, full-text assessment. Studies were labeled according to their relevance, and additional exclusion criteria were applied: lack of description of the data collection instrument, insufficient methodological clarity, or absence of empirical validation. At the beginning of this phase, 250 studies were analyzed. After successive exclusion procedures (176 excluded), 74 articles remained, of which 19 were removed in a final review for not meeting specific criteria (fewer than 100 respondents, absence of validated questionnaires, or lack of open access). The final corpus therefore consisted of 55 valid studies included in the analysis (Figure 1).
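As a consistency check, the screening figures reported above can be re-derived with simple arithmetic; the following sketch (variable names are ours, all counts taken from the text) reproduces each stage:

```python
# Re-derivation of the screening counts stated in the Methodology.
records = {"scopus": 4_156, "wos": 494}

identified = sum(records.values())        # 4,650 initial results
after_filters = identified - 969          # inclusion filters applied
after_dedup = after_filters - 97          # Mendeley duplicate removal
second_phase = 250                        # analyzed in depth via Rayyan
after_exclusions = second_phase - 176     # 74 articles remain
final_corpus = after_exclusions - 19      # 55 studies in the final corpus

print(identified, after_dedup, final_corpus)  # → 4650 3584 55
```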


Figure 1. PRISMA flow diagram of the systematic review process (source: authors' elaboration based on Page et al., 2021).

Data analysis

The analysis of the selected studies followed an inductive–deductive coding approach based on a structured extraction matrix designed to synthesize nine analytical categories: reference citation in APA 7, country, disciplinary field, educational level of the sample, methodological design, instrument characteristics, theoretical dimensions addressed (reading self-concept, critical literacy, use of generative AI), type of artificial intelligence employed, and main findings. Each study was independently examined by three researchers, who subsequently met to reach consensus and ensure consistency in classification. Open codes emerging during the process were refined through axial categorization. Reliability and validity were reinforced through triangulation between quantitative indicators, qualitative descriptors, and cross-referencing with Supplementary Table 2. Particular attention was given to studies that introduced new or adapted scales, as well as to those proposing emerging indicators of academic literacy mediated by AI technologies.

This procedure ensured the traceability and comprehensiveness of the screening and analysis process, establishing a solid methodological foundation for the interpretation of results. The complete process of identification, evaluation, and inclusion of studies is presented in Figure 1, while Supplementary Table 2 provides details of all selected articles classified according to the criteria mentioned above.

Results

Methodological approaches and research designs

The analysis of the corpus reveals a clear preference for quantitative methodologies, which constitute the majority of the reviewed studies. These works mainly employ non-experimental designs and use statistical techniques such as exploratory and confirmatory factor analysis, structural equation modeling, and correlation tests. Mixed-methods studies are also identified, integrating questionnaires with semi-structured interviews, content analysis, or records of interaction in digital environments. Qualitative research is less frequent, although some studies include well-defined protocols for thematic analysis or textual analysis of academic productions generated with AI.

Instruments and validation processes

A considerable number of studies developed their own data collection instruments, in some cases subjected to rigorous statistical validation. Among the most notable questionnaires are the ChatGPT Usage Scale, focused on the functional classification of the academic use of this tool; the AILS-CCS scale, which measures four dimensions of AI literacy (awareness, use, evaluation, and ethics); and the SIUAIT index, designed to assess levels of institutional integration of AI tools. Other studies adapted existing scales from the fields of digital literacy or technology acceptance, adjusting them to the context of generative AI through expert review processes and factor analysis.

Focus on literacy dimensions: reading, writing, and critical aspects

Regarding the dimensions analyzed, most studies focus on practices linked to academic writing, such as idea generation, text structuring, spelling correction, and style improvement. Some works also include references to reading, particularly in relation to the evaluation of AI-generated content or the comprehension of texts in second languages. In addition, a significant proportion of the corpus addresses aspects of critical literacy, such as authorship identification, evaluation of content reliability, or positioning in relation to ethical dilemmas associated with the use of these technologies. To a lesser extent, metacognitive dimensions such as self-regulation or writing self-efficacy are also addressed.

Technological tools and conceptual frameworks

The technological tools examined correspond mainly to generative AI models. ChatGPT is the most frequently mentioned, either as the central object of study or as part of the set of tools used by students. Other applications include Grammarly, QuillBot, or Bing Chat, as well as ad hoc academic chatbots, automated feedback systems, and AI-assisted virtual tutoring platforms. These technologies are associated with academic tasks related to writing, revision, consultation, and the preparation of written work, both individual and collaborative.

The conceptual frameworks supporting the studies are diverse. A substantial part of the corpus relies on classical technology acceptance models such as the Technology Acceptance Model (TAM), the Theory of Planned Behavior (TPB), the Technology Readiness Index (TRI), or UTAUT. These frameworks allow the exploration of variables such as perceived usefulness, ease of use, behavioral intention, enjoyment, or social pressure. Other works adopt approaches centered on digital literacy, academic ethics, or teacher education, integrating constructs such as critical thinking, risk perception, or multimodal literacy.

Participants, objectives, and main findings

Participants are mostly undergraduate students, particularly in education programmes, although studies have also been identified in social sciences, applied linguistics, educational technologies, health, or business administration. Some research brings together interdisciplinary samples or combines different academic levels, including master's and doctoral studies. The number of participants varies, but in all cases meets the established criteria to ensure data robustness.

The objectives formulated in the studies include the exploration of attitudes and beliefs toward the use of AI, the analysis of academic use of generative tools, the validation of measurement instruments, and the study of factors influencing the adoption or rejection of these technologies. Some works also incorporate training components, intervention proposals, or reflections on the role of teachers in the pedagogical integration of AI.

A significant part of the corpus documents frequent uses of AI tools for tasks related to academic writing, both in the initial stages and in text revision or improvement. Several studies also mention consulting these tools to resolve doubts, expand content, or generate alternative explanations. Some works report perceptions related to ethics, authorship, dependence, or the reliability of generated content, while others include references to training initiatives or institutional frameworks regulating the use of AI.

Discussion and conclusions

This section presents the main analytical insights derived from the review that were not developed in the Results section, as they correspond to a subsequent phase of comparative interpretation of the included studies.

The findings show that the incorporation of artificial intelligence (AI) tools into university academic literacy has mainly focused on the impact of AI on academic skills, ethical concerns regarding academic integrity and plagiarism, the need for critical digital literacy on AI, the disparity between students' and teachers' perceptions and preparedness, and the need for a redefinition of teaching and assessment.

When contrasting the results of the reviewed articles, continuity can be observed with research on digital literacy in which the use of emerging technologies is linked to the development of new competences. At the same time, the use of AI introduces dilemmas related to authorship, student autonomy, and the need to critically evaluate algorithmically generated content.

The studies analyzed reveal a conceptual shift in the way academic literacy is understood. In this regard, its conception should include not only reading and writing skills but also the ability to interact critically with digital tools such as AI. Consequently, pedagogical approaches in higher education must be reconsidered in order to integrate AI not as a substitute, but as a complementary resource that demands new forms of teacher guidance.

This technology, capable of simulating human cognitive processes such as learning, adaptation, synthesis, and self-correction (Lozano and Blanco-Fontao, 2023), and of generating coherent, human-like texts (Segovia-García, 2023; Zou and Huang, 2023), offers significant opportunities to support learning and transform educational environments (Bautista et al., 2024).

Theoretical models of AI acceptance and use

Most studies that developed questionnaires as instruments of analysis are based on well-known and validated technology acceptance models, such as the Technology Acceptance Model (TAM) (Falebita and Kok, 2025; Liu et al., 2024; Lai et al., 2023; Zou and Huang, 2023; Liang et al., 2024), the Theory of Planned Behavior (TPB) (Anani et al., 2025), the Technology Readiness Index (TRI) (Cui, 2025), the Unified Theory of Acceptance and Use of Technology Models 1 and 2 (UTAUT and UTAUT2) (Jdaitawi et al., 2024; Habibi et al., 2023; Romero-Rodríguez et al., 2023), the Extended Technology Acceptance Model (ETAM), the Information Systems Success Model (ISSM) (Kanwal, 2025), the Value-based Adoption Model (VAM) (Al-Abdullatif, 2023), and the TPACK framework (Technology, Pedagogy, and Content Knowledge) (Bautista et al., 2024).

Some studies have developed their own instruments adapted to the context of AI use, such as the ChatGPT Usage Scale (Nemt-allah et al., 2024), the AILS-CCS scale measuring four dimensions of AI literacy (awareness, use, evaluation, and ethics) (Ma and Chen, 2024), the SIUAIT index assessing institutional integration levels of AI tools (Grájeda et al., 2024), the KAP-CQ39 instrument evaluating knowledge, attitudes, and practices regarding ChatGPT (Robledo et al., 2023), and the Meta AI Literacy Scale (MAILS) (Mansoor et al., 2024). These models and instruments have been employed to understand the factors that drive or hinder the use of artificial intelligence in education, among which the following stand out:

• Perceived Usefulness (PU): Considered a significant predictor of the intention to use AI. In general, students adopt AI if they believe it will improve their performance and the efficiency of their academic tasks (Habibi et al., 2023; Jdaitawi et al., 2024; Lai et al., 2023; Liang et al., 2024; Zou and Huang, 2023; Yu et al., 2024; Helmiatin et al., 2024).

• Perceived Ease of Use (PEOU): Some studies suggest that ease of use influences perceived usefulness and attitudes toward AI (Jdaitawi et al., 2024; Liang et al., 2024; Zou and Huang, 2023). However, others do not find a significant relationship with the intention to use (Habibi et al., 2023; Lai et al., 2023; Liang et al., 2024).

• Intrinsic Motivation (IM): An important factor in the intention to use AI. Enjoyment derived from using this technology motivates students to explore and employ it (Habibi et al., 2023; Lai et al., 2023; Zou and Huang, 2023; Liang et al., 2024). Curiosity also emerges as a key component of intrinsic motivation (Lai et al., 2023; Breese et al., 2024; Wang and Ren, 2024).

• Facilitating Conditions (FC): The availability of resources and technological support (internet speed, computers, training) significantly influences the intention to use AI (Habibi et al., 2023; Jdaitawi et al., 2024; Romero-Rodríguez et al., 2023).

• Social Influence (SI): The influence of peers, teachers, and other referents can affect students' intention to use AI (Habibi et al., 2023; Jdaitawi et al., 2024).

• Perceived Risks (PR): Risks related to time and psychological factors may negatively affect students' willingness to use AI (Jdaitawi et al., 2024). Concerns about privacy and security are also important considerations (Lai et al., 2023).

• Knowledge and perception: A higher level of knowledge about AI tools is associated with a more favorable perception of them (Lamrabet et al., 2024).

Impact of AI on academic skills

The discussion on the impact of AI on academic skills can be structured into four interrelated dimensions that align with the study's purpose: (1) effects on reading and writing competences, (2) ethical and integrity concerns, (3) the development of critical AI literacy, and (4) disparities between perception and preparedness.

Artificial intelligence tools, particularly generative AI such as ChatGPT, are used for a range of academic tasks, including summarisation, paraphrasing, and language support (Hellmich et al., 2024; Jdaitawi et al., 2024; Romero-Rodríguez et al., 2023; Segovia-García, 2023; Lamrabet et al., 2024; Barrett and Pack, 2023; Robledo et al., 2023; Parveen and Mohammed Alkudsi, 2024; Abdulah et al., 2024). Some of the studies reviewed show that AI tools help students develop grammar, accuracy, and overall writing quality (Chan and Hu, 2023; Malik et al., 2023; Zou and Huang, 2023; Anani et al., 2025), facilitate idea generation (Barrett and Pack, 2023; Capinding, 2024; Chan and Hu, 2023), and enable access to academic information in different languages (Malik et al., 2023; Kalnina et al., 2024). Smerdon (2024), for instance, found that students who used generative AI (ChatGPT) tended to have higher prior academic performance, but that its use had no statistically significant effect on their final academic outcomes.

While most studies focus on writing, a smaller set, such as Çelik et al. (2024) or Liu et al. (2024), demonstrates that AI can also mediate reading comprehension processes by simplifying input or providing adaptive feedback. This dual function of AI highlights the need to study academic literacy as an integrated process: reading critically to interpret algorithmic outputs and writing reflectively to preserve human intentionality. Therefore, future pedagogical designs should link AI-assisted reading and writing rather than treat them as separate domains.

From the students' perspective, AI is regarded as a tool for idea generation and writing support (Alkamel and Alwagieh, 2024). In this regard, AI is used for brainstorming, outlining, drafting, revising, and generating ideas for assignments and projects (Barrett and Pack, 2023; Capinding, 2024; Jdaitawi et al., 2024; Lozano and Blanco-Fontao, 2023; Robledo et al., 2023; Zou and Huang, 2023; Grájeda et al., 2024). Students report that it improves the quality of their writing, clarity, and coherence (Capinding, 2024; Liang et al., 2024). Furthermore, these tools are valued for saving time and increasing efficiency, as content generation, grammar checking, and paraphrasing functions are considered highly effective (Barrett and Pack, 2023; Capinding, 2024; Lai et al., 2023; Lozano and Blanco-Fontao, 2023; Robledo et al., 2023; Segovia-García, 2023).

Factors influencing the use of AI for academic writing include intrinsic and extrinsic motivation, technological optimism, and perceived usefulness. Thus, willingness to use AI results from the interaction between cognitive factors, such as usefulness or ease of use, and emotional factors, such as optimism and enjoyment (Cui, 2025). Another factor contributing to a positive evaluation of AI is its ability to personalize learning and provide instant feedback (Slimi et al., 2025; Saklaki and Gardikiotis, 2024).

However, its use also presents risks for the development of fundamental academic competences such as critical thinking, originality, and autonomous learning (Barrett and Pack, 2023; Segovia-García, 2023; Lozano and Blanco-Fontao, 2023; Robledo et al., 2023). In this sense, growing concern has been expressed regarding the excessive dependence that AI use may generate and the corresponding decline in skills such as deep comprehension (Segovia-García, 2023; Lozano and Blanco-Fontao, 2023; Milton et al., 2024; Klimova and Luz de Campos, 2024) and problem-solving (Capinding, 2024; Segovia-García, 2023).

Interpreting these findings critically, AI appears to reconfigure academic literacy as a continuum where reading and writing processes are increasingly mediated by algorithms. Rather than replacing human agency, AI should be approached as a cognitive partner whose affordances require pedagogical framing. The challenge for higher education is to design learning contexts that cultivate reflective and autonomous engagement with AI—linking reading comprehension, critical evaluation, and ethical writing within a single, integrated model of academic literacy.

Ethical issues and academic integrity

Several studies highlight academic integrity and plagiarism as central themes (Barrett and Pack, 2023; Hellmich et al., 2024; Lozano and Blanco-Fontao, 2023; Malik et al., 2023; Romero-Rodríguez et al., 2023; Segovia-García, 2023; Espinoza Vidaurre et al., 2024; Lund et al., 2025). Some research reveals that a large proportion of students are willing to use AI unethically for their assignments, especially if punitive measures are removed (Hellmich et al., 2024). AI-generated content is difficult to detect through plagiarism checkers and often lacks proper references, raising questions about intellectual authorship (Hellmich et al., 2024; Romero-Rodríguez et al., 2023). For this reason, teachers generally distrust AI and express willingness to receive training that would help them detect and supervise students' misuse.

The study by Johnston et al. (2024) showed that although most participants had heard of generative AI and more than half of the respondents (2,555) reported having used it or considered doing so, 70.4% believed that using ChatGPT to write complete essays was inappropriate. It is also noteworthy in this study that students with higher confidence in academic writing were less likely to use generative AI.

Similarly, Yavich and Davidovitch (2024) found that academic dishonesty increased with low self-efficacy and remote class attendance. Along these lines, Chan's (2025) research revealed that students understood the unethical use of AI, but their perceptions of more nuanced uses were complex, since they struggled to grasp concepts such as acknowledgment and self-plagiarism.

The need for critical AI literacy

Research emphasizes the importance of both students and teachers developing strong AI literacy (Mansoor et al., 2024; Bautista et al., 2024; Lozano and Blanco-Fontao, 2023; Segovia-García, 2023; Parveen and Mohammed Alkudsi, 2024; Kalnina et al., 2024; Al-Abdullatif, 2023; Acosta-Enriquez et al., 2024). This includes understanding how AI works, its capacities, its limitations, and, fundamentally, its ethical dimensions (Bautista et al., 2024; Lamrabet et al., 2024; Parveen and Mohammed Alkudsi, 2024; Segovia-García, 2023).

Likewise, educational institutions should incorporate ethical considerations into the curriculum in order to prepare students for the responsible use of AI (Bautista et al., 2024; Parveen and Mohammed Alkudsi, 2024; Jdaitawi et al., 2024; Segovia-García, 2023; Abou-Hashish and Alnajjar, 2024). In several studies, future teachers acknowledge the need to acquire knowledge about AI in order to understand how students can use it appropriately (Bautista et al., 2024; Lamrabet et al., 2024; Jdaitawi et al., 2024; Kalnina et al., 2024; Lozano and Blanco-Fontao, 2023; Barrett and Pack, 2023; Segovia-García, 2023).

Another issue raised in the research concerns algorithmic bias and equity (Villarino, 2024). In this sense, AI systems may be prone to bias, which could exacerbate inequalities among different socio-economic communities or minorities (Mansoor et al., 2024). Moreover, risks associated with AI use include privacy and security, as it may entail a loss of control over students' personal information (Chan and Hu, 2023; Lai et al., 2023). In the study by Ardelean and Veres (2023), 85% of students expressed concern about job losses, which could be another potential risk.

Nevertheless, the study by Kalnina et al. (2024) stresses that 61% of experienced teachers agreed that AI allows the inclusion of students with different needs, which constitutes a noteworthy benefit.

Differences between perceptions and preparedness

Several studies state that students' technological self-efficacy determines their use of AI tools and their perception of ease of use (Kanwal, 2025; Habibi et al., 2023). In this regard, preparedness influences perceived usefulness (PU), perceived ease of use (PEOU), and self-efficacy (Falebita and Kok, 2025).

Although most studies claim that a large proportion of students and teachers hold generally positive attitudes toward the potential of AI in education (Al-Abdullatif, 2023; Barrett and Pack, 2023; Jdaitawi et al., 2024; Lai et al., 2023; Lamrabet et al., 2024; Lozano and Blanco-Fontao, 2023; Parveen and Mohammed Alkudsi, 2024; Segovia-García, 2023; Zou and Huang, 2023), there remains a significant gap between these positive perceptions and the effective and real implementation of AI in classrooms (Lamrabet et al., 2024).

Many institutions and teachers are unprepared for the rapid integration of generative AI, as they lack clear guidelines and professional development opportunities (Barrett and Pack, 2023; Lamrabet et al., 2024; Jdaitawi et al., 2024; Lozano and Blanco-Fontao, 2023; Segovia-García, 2023), as well as specific training for classroom implementation (Hesse and Helm, 2025; Ma and Chen, 2024). This situation generates confusion and inconsistent responses within the educational field (Segovia-García, 2023). Studies such as that of Deschenes and McMahon (2024) highlight that students seek guidance on how to use these tools critically and effectively.

Overall, these findings underscore that the integration of AI into academic literacy must be guided by pedagogical principles that preserve interpretative reading and critical writing as central to higher education, while leveraging AI's potential to scaffold these processes rather than replace them.

Redefining teaching and assessment in the AI era

The emergence of AI necessitates a revision and reformulation of educational systems, including curriculum design, learning objectives, and assessment methods (Barrett and Pack, 2023; Jdaitawi et al., 2024; Lozano and Blanco-Fontao, 2023; Parveen and Mohammed Alkudsi, 2024; Romero-Rodríguez et al., 2023; Segovia-García, 2023; Qi et al., 2025; Kelder et al., 2025; Ventura and Lopez, 2024).

The focus must shift toward “AI-proof tasks” that foster critical evaluation and construction from AI-generated content rather than mere content generation (Lozano and Blanco-Fontao, 2023; Romero-Rodríguez et al., 2023; Segovia-García, 2023; Malik et al., 2023; Aldreabi et al., 2025). From this perspective, the human component of teaching —including critical thinking, creativity, and affective interaction— remains indispensable and cannot be entirely replaced by AI (Segovia-García, 2023; Lozano and Blanco-Fontao, 2023; Malik et al., 2023).

Ultimately, the arrival of AI in education requires proactive and ethical adaptation by all stakeholders in the educational system to ensure that students develop the skills necessary for a future increasingly driven by technology. Effective integration of AI demands a holistic approach that addresses students' and teachers' perceptions, ethical implications, practical challenges, and the need for clear institutional training and policies.

Despite the contributions of this review, certain limitations must be acknowledged. Although a comprehensive search was carried out in the selected databases, a linguistic bias was detected, as most of the available studies are in English. These limitations call for cautious interpretation of the results and underscore the need for more rigorous research.

As a direction for future research, it would be valuable to conduct studies that systematically compare students' self-perceptions of AI use with their actual competence in applying AI tools for academic reading and writing. Such analyses could help distinguish between perceived and demonstrated literacy, providing more precise indicators for pedagogical intervention.

Finally, several directions for future research are identified. It is necessary to promote longitudinal studies that examine the impact of AI on the development of academic competences over time. It is also pertinent to broaden the analysis to different contexts and disciplines, given that this review has focused on the field of education. Furthermore, research is required that explicitly addresses the ethical implications and training in critical thinking in relation to AI-generated content. These directions will not only enrich the academic understanding of the phenomenon but also help to guide more responsible and effective educational practices.

Author contributions

KB: Writing – original draft, Writing – review & editing. CP-G: Writing – original draft, Writing – review & editing. MS-S: Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that financial support was received for the research and/or publication of this article. This research was funded by the R&D&I Project “Educational Transformation: Exploring the Impact of Artificial Intelligence on the Reading and Writing Development of University Students” (PID2023-151419OB-I00), within the call for R&D&I Projects “Knowledge Generation”, of the State Programme to Promote Scientific and Technical Research and its Transfer, under the framework of the State Plan for Scientific, Technical, and Innovation Research 2021–2023, Ministry of Science, Innovation and Universities, and Spanish State Research Agency, 2024–2027. This work was also supported by a postdoctoral fellowship (FPU) from the Spanish Ministry of Universities.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that no Gen AI was used in the creation of this manuscript.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2025.1701238/full#supplementary-material

References

Abdulah, D. M., Zaman, B. A., Mustafa, Z. R., and Hassan, L. H. (2024). Artificial intelligence integration in academic writing: insights from the University of Duhok. ARO- Scient. J. Koya Univer. 12:11794. doi: 10.14500/aro.11794

Abou-Hashish, E. A., and Alnajjar, H. (2024). Digital proficiency: Assessing knowledge, attitudes, and skills in digital transformation, health literacy, and artificial intelligence among university nursing students. BMC Med. Educ. 24:508. doi: 10.1186/s12909-024-05482-3

Acosta-Enriquez, B. G., Arbulú Ballesteros, M. A., Huamaní Jordan, O., López Roca, C., and Saavedra Tirado, K. (2024). Analysis of college students' attitudes toward the use of ChatGPT in their academic activities: effect of intent to use, verification of information and responsible use. BMC Psychol. 12:255. doi: 10.1186/s40359-024-01764-z

Akgun, S., and Greenhow, C. (2022). Artificial intelligence in education: addressing ethical challenges in K-12 settings. AI Ethics 2, 431–440. doi: 10.1007/s43681-021-00096-7

Al-Abdullatif, A. M. (2023). Modeling students' perceptions of chatbots in learning: integrating technology acceptance with the value-based adoption model. Educ. Sci. 13:1151. doi: 10.3390/educsci13111151

Aldreabi, H., Dahdoul, N. K. S., Alhur, M., Alzboun, N., and Alsalhi, N. R. (2025). Determinants of student adoption of generative AI in higher education. Electron. J. e-Learn. 23, 15–33. doi: 10.34190/ejel.23.1.3599

Alkamel, M. A. A., and Alwagieh, N. A. S. (2024). Utilizing an adaptable artificial intelligence writing tool (ChatGPT) to enhance academic writing skills among Yemeni university EFL students. Soc. Sci. Humanit. Open 10:101095. doi: 10.1016/j.ssaho.2024.101095

Anani, G. E., Nyamekye, E., and Bafour-Koduah, D. (2025). Using artificial intelligence for academic writing in higher education: the perspectives of university students in Ghana. Discover Educ. 4:46. doi: 10.1007/s44217-025-00434-5

Ardelean, T-K., and Veres, E. (2023). “Students' perceptions of artificial intelligence in higher education,” in Proceedings of 10th SWS International Scientific Conference on Social Sciences - ISCSS 2023, 10. doi: 10.35603/sws.iscss.2023/s08.38

Baldrich, K., Domínguez-Oller, J. C., and García-Roca, A. (2024). La Inteligencia artificial y su impacto en la alfabetización académica: una revisión sistemática. Educatio Siglo XXI 42, 53–74. doi: 10.6018/educatio.609591

Barrett, A., and Pack, A. (2023). Not quite eye to AI: student and teacher perspectives on the use of generative artificial intelligence in the writing process. Int. J. Educ. Technol. Higher Educ. 20:59. doi: 10.1186/s41239-023-00427-0

Bautista, A., Estrada, C., Jaravata, A. M., Mangaser, L. M., Narag, F., Soquila, R., et al. (2024). Preservice teachers' readiness towards integrating AI-based tools in education: a TPACK approach. Educ. Process: Int. J. 13, 40–68. doi: 10.22521/edupij.2024.133.3

Blanco Fontao, C., López Santos, M., and Lozano, A. (2024). ChatGPT's role in the education system: Insights from the future secondary teachers. Int. J. Inform. Educ. Technol. 14, 1035–1043. doi: 10.18178/ijiet.2024.14.8.2131

Breese, J. L., Rebman, C. M., and Levkoff, S. (2024). State of student perceptions of AI (circa 2024) in the United States. Issues Inform. Syst. 25, 311–321. doi: 10.48009/4_iis_2024_125

Capinding, A. T. (2024). Students' AI dependency in 3R's: Questionnaire construction and validation. Int. J. Inform. Educ. Technol. 14, 1532–1543. doi: 10.18178/ijiet.2024.14.11.2184

Carlino, P. (2013). Alfabetización académica diez años después. Revista Mexicana de Investigación Educativa 18, 355–381.

Cassany, D. (2006). Tras las líneas. Sobre la lectura contemporánea. Barcelona: Anagrama.

Çelik, F., Yangin Ersanli, C., and Arslanbay, G. (2024). Does AI simplification of authentic blog texts improve reading comprehension, inferencing, and anxiety? A one-shot intervention in Turkish EFL context. Int. Rev. Res. Open Distrib. Learn. 25, 287–303. doi: 10.19173/irrodl.v25i3.7779

Chan, C. K. Y. (2025). Students' perceptions of ‘AI-giarism': investigating changes in understandings of academic misconduct. Educ. Inform. Technol. 30, 8087–8108. doi: 10.1007/s10639-024-13151-7

Chan, C. K. Y., and Hu, W. (2023). Students' voices on generative AI: perceptions, benefits, and challenges in higher education. Int. J. Educ. Technol. Higher Educ. 20:43. doi: 10.1186/s41239-023-00411-8

Cui, Y. (2025). What influences college students using AI for academic writing? A quantitative analysis based on HISAM and TRI theory. Comp. Educ.: Artif. Intellig. 8:100391. doi: 10.1016/j.caeai.2025.100391

Deschenes, A., and McMahon, M. (2024). A survey on student use of generative AI chatbots for academic research. Evid. Based Libr. Inf. Pract. 19, 2–22. doi: 10.18438/eblip30512

Espinoza Vidaurre, S. M., Velásquez Rodríguez, N. C., Gambetta Quelopana, R. L., Martinez Valdivia, A. N., Leo Rossi, E. A., and Nolasco-Mamani, M. A. (2024). Perceptions of artificial intelligence and its impact on academic integrity among university students in Peru and Chile: an approach to sustainable education. Sustainability 16:9005. doi: 10.3390/su16209005

Falebita, O. S., and Kok, P. J. (2025). Artificial intelligence tools usage: a structural equation modeling of undergraduates' technological readiness, self-efficacy and attitudes. J. STEM Educ. Res. 8, 257–282. doi: 10.1007/s41979-024-00132-1

Grájeda, A., Burgos, J., Córdova, P., and Sanjinés, A. (2024). Assessing student-perceived impact of using artificial intelligence tools: construction of a synthetic index of application in higher education. Cogent Educ. 11:2287917. doi: 10.1080/2331186X.2023.2287917

Habibi, A., Muhaimin, M., Danibao, B. K., Wibowo, Y. G., Wahyuni, S., and Octavia, A. (2023). ChatGPT in higher education learning: acceptance and use. Comp. Educ.: Artif. Intellig. 5:100190. doi: 10.1016/j.caeai.2023.100190

Habibi, A., Mukminin, A., Octavia, A., Wahyuni, S., Danibao, B. K., and Wibowo, Y. G. (2024). ChatGPT acceptance and use through UTAUT and TPB: a big survey in five Indonesian universities. Soc. Sci. Humanit. Open 14:101136. doi: 10.1016/j.ssaho.2024.101136

Hellmich, E. A., Vinall, K., Brandt, Z. M., Chen, S., and Sparks, M. M. (2024). ChatGPT in language education: centering learner voices. Technol. Lang. Teach. Learn. 6, 17–41. doi: 10.29140/tltl.v6n3.1741

Helmiatin, H., Hidayat, A., and Kahar, M. R. (2024). Investigating the adoption of AI in higher education: a study of public universities in Indonesia. Cogent Educ. 11:2380175. doi: 10.1080/2331186X.2024.2380175

Hesse, F., and Helm, G. (2025). Writing with AI in and beyond teacher education: Exploring subjective training needs of student teachers across five subjects. J. Digit. Learn. Teacher Educ. 41, 21–36. doi: 10.1080/21532974.2024.2431747

Holmes, W., Bialik, M., and Fadel, C. (2022). Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. Boston: Center for Curriculum Redesign.

Jdaitawi, M., Hamadneh, B., Kan'an, A., Al-Mawadieh, R., Torki, M., Hamoudah, N., et al. (2024). Factors affecting students' willingness to use artificial intelligence in university settings. Int. J. Inform. Educ. Technol. 14, 1763–1769. doi: 10.18178/ijiet.2024.14.12.2207

Johnston, H., Wells, R. F., Shanks, E. M., Boey, T., and Parsons, B. N. (2024). Student perspectives on the use of generative artificial intelligence technologies in higher education. Int. J. Educ. Integrity 20:2. doi: 10.1007/s40979-024-00149-4

Kalnina, D., Nimante, D., and Baranova, S. (2024). Artificial intelligence for higher education: Benefits and challenges for pre-service teachers. Front. Educ. 9:1501819. doi: 10.3389/feduc.2024.1501819

Kanwal, A. (2025). Exploring the impact of ChatGPT on psychological factors in learning English writing among undergraduate students. World J. English Lang. 15, 404–420. doi: 10.5430/wjel.v15n3p404

Kelder, J., Crawford, J., Al Naabi, I., and To, L. (2025). Enhancing digital productivity and capability in higher education through authentic leader behaviors: a cross-cultural structural equation model. Educ. Inform. Technol. doi: 10.1007/s10639-025-13422-x

Klimova, B., and Luz de Campos, V. P. (2024). University undergraduates' perceptions on the use of ChatGPT for academic purposes: evidence from a university in Czech Republic. Cogent Educ. 11:2373512. doi: 10.1080/2331186X.2024.2373512

Lai, C. Y., Cheung, K. Y., and Chan, C. S. (2023). Exploring the role of intrinsic motivation in ChatGPT adoption to support active learning: An extension of the technology acceptance model. Comp. Educ.: Artif. Intellig. 5:100178. doi: 10.1016/j.caeai.2023.100178

Lamrabet, M., Fakhar, H., Echantoufi, N., Khattabi, K., et al. (2024). AI-based tools: exploring the perceptions and knowledge of Moroccan future teachers regarding AI in the initial training—a case study. Int. J. Inform. Educ. Technol. 14, 1493–1505. doi: 10.18178/ijiet.2024.14.11.2180

Liang, J., Huang, F., and Teo, T. (2024). Understanding Chinese University efl learners' perceptions of AI in English writing. Int. J. Comp.-Assisted Lang. Learn. Teach. (IJCALLT) 14, 1–16. doi: 10.4018/IJCALLT.358918

Liu, G. L., Darvin, R., and Ma, C. (2024). Exploring AI-mediated informal digital learning of English (AI-IDLE): a mixed-method investigation of Chinese EFL learners' AI adoption and experiences. Comput. Assist. Lang. Learn. 38, 1632–1660. doi: 10.1080/09588221.2024.2310288

Lozano, A., and Blanco-Fontao, C. (2023). Is the education system prepared for the irruption of artificial intelligence? A study on the perceptions of students of primary education degree from a dual perspective: Current pupils and future teachers. Educ. Sci. 13:733. doi: 10.3390/educsci13070733

Luckin, R., Holmes, W., Griffiths, M., and Forcier, L. B. (2016). Intelligence Unleashed. An argument for AI in Education. London: Pearson.

Lund, B., Mannuru, N. R., Teel, Z. A., Lee, T. H., Ortega, N. J., Simmons, S., et al. (2025). Student perceptions of AI-assisted writing and academic integrity: ethical concerns, academic misconduct, and use of generative AI in higher education. AI Educ. 1:2. doi: 10.3390/aieduc1010002

Ma, S., and Chen, Z. (2024). The development and validation of the artificial intelligence literacy scale for Chinese college students (AILS-CCS). IEEE Access 12, 146419–146429. doi: 10.1109/ACCESS.2024.3468378

Malik, A. R., Pratiwi, Y., Andajani, K., Numertayasa, I. W., Suharti, S., Darwis, A., et al. (2023). Exploring artificial intelligence in academic essay: higher education students' perspective. Int. J. Educ. Res. Open 5:100296. doi: 10.1016/j.ijedro.2023.100296

Mansoor, H. M. H., Bawazir, A., Alsabri, M. A., Alharbi, A., and Okela, A. H. (2024). Artificial intelligence literacy among university students—a comparative transnational survey. Front. Commun. 9:1478476. doi: 10.3389/fcomm.2024.1478476

Mateo, S. (2020). Procédure pour conduire avec succès une revue de littérature selon la méthode PRISMA. Kinésithérapie, la Revue 20, 29–37. doi: 10.1016/j.kine.2020.05.019

Milton, C., Vidhya, L., and Thiruvengadam, G. (2024). Examining the impact of AI-powered writing tools on independent writing skills of health science graduates. Adv. Educ. 25:315068. doi: 10.20535/2410-8286.315068

Mizumoto, A., Yasuda, S., and Tamura, Y. (2024). Identifying ChatGPT-generated texts in EFL students' writing: through comparative analysis of linguistic fingerprints. Appl. Corpus Linguist. 4:100106. doi: 10.1016/j.acorp.2024.100106

Musyaffi, A. M., Adha, M. A., Mukhibad, H., and Oli, M. C. (2024). Improving students' openness to artificial intelligence through risk awareness and digital literacy: evidence from a developing country. Soc. Sci. Humanit. Open 10:101168. doi: 10.1016/j.ssaho.2024.101168

Navarro, F. (2020). La escritura en la universidad: Entre el aprendizaje disciplinar y la participación en comunidades académicas. Lectura y Vida 41, 20–31.

Nemt-allah, M., Khalifa, W., Badawy, M., Elbably, Y., and Ibrahim, A. (2024). Validating the ChatGPT usage scale: psychometric properties and factor structures among postgraduate students. BMC Psychol. 12:497. doi: 10.1186/s40359-024-01983-4

Ngo, T. T. A., An, G. K., Nguyen, P. T., and Tran, T. T. (2024). Unlocking educational potential: Exploring students' satisfaction and sustainable engagement with ChatGPT using the ECM model. J. Inform. Technol. Educ.: Res. 23:5344. doi: 10.28945/5344

Nguyen, T. N. T., Lai, N. V., and Nguyen, Q. T. (2024). Artificial intelligence (AI) in education: A case study on ChatGPT's influence on student learning behaviors. Educ. Process: Int. J. 13, 105–121. doi: 10.22521/edupij.2024.132.7

Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., et al. (2021). The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 372:71. doi: 10.1136/bmj.n71

Parodi, G. (2010). Alfabetización académica y profesional en el siglo XXI. Valparaíso: Pontificia Universidad Católica de Valparaíso.

Parveen, M., and Mohammed Alkudsi, Y. (2024). Perspectivas de los graduados sobre la integración de la IA: implicaciones para el desarrollo de habilidades y la preparación profesional. Int. J. Educ. Res. Innov. doi: 10.46661/ijeri.10651

Polakova, P., and Ivenz, P. (2024). The impact of ChatGPT feedback on the development of EFL students' writing skills. Cogent Educ. 11:2410101. doi: 10.1080/2331186X.2024.2410101

Qi, J., Liu, J., and Xu, Y. (2025). The role of individual capabilities in maximizing the benefits for students using GenAI tools in higher education. Behav. Sci. 15:328. doi: 10.3390/bs15030328

Rahayu, S. W., Weda, S., and Muliati, De Vega, N. (2024). Artificial intelligence in writing instruction: a self-determination theory perspective. XLinguae 17, 231–245. doi: 10.18355/XL.2024.17.01.16

Robledo, D. A. R., Zara, C. G., Montalbo, S. M., Gayeta, N. E., Gonzales, A. L., Escarez, M. G. A., et al. (2023). Development and validation of a survey instrument on knowledge, attitude, and practices (KAP) regarding the educational use of ChatGPT among preservice teachers in the Philippines. Int. J. Inform. Educ. Technol. 13, 1582–1590. doi: 10.18178/ijiet.2023.13.10.1965

Romero-Rodríguez, J. M., Ramírez-Montoya, M. S., Buenestado-Fernández, M., and Lara-Lara, F. (2023). Use of ChatGPT at university as a tool for complex thinking: students' perceived usefulness. J. New Approach. Educ. Res. 12, 323–339. doi: 10.7821/naer.2023.7.1458

Saklaki, A., and Gardikiotis, A. (2024). Exploring Greek students' attitudes toward artificial intelligence: Relationships with AI ethics, media, and digital literacy. Societies 14:248. doi: 10.3390/soc14120248

Segovia-García, N. (2023). Percepción y uso de los chatbots entre estudiantes de posgrado online: un estudio exploratorio. Revista de Investigación en Educación 21, 335–349. doi: 10.35869/reined.v21i3.4974

Selwyn, N. (2023). Should Robots Replace Teachers? AI and the Future of Education. Cambridge: Polity Press.

Slimi, Z., Benayoune, A., and Alemu, A. E. (2025). Students' perceptions of artificial intelligence integration in higher education. Eur. J. Educ. Res. 14, 471–484. doi: 10.12973/eu-jer.14.2.471

Smerdon, D. (2024). AI in essay-based assessment: student adoption, usage, and performance. Comp. Educ.: Artif. Intellig. 7:100288. doi: 10.1016/j.caeai.2024.100288

Ventura, A. M. C., and Lopez, L. S. (2024). Unlocking the future of learning: assessing students' awareness and usage of AI tools. Int. J. Inform. Educ. Technol. 14, 645–651. doi: 10.18178/ijiet.2024.14.8.2142

Villarino, R. T. H. (2024). Integración de la inteligencia artificial (IA) en la educación superior rural filipina: Perspectivas, desafíos y consideraciones éticas. Int. J. Educ. Res. Innovat. 23, 1–25. doi: 10.46661/ijeri.10909

Wale, B. D., and Kassahun, Y. F. (2024). The transformative power of AI writing technologies: enhancing EFL writing instruction through the integrative use of Writerly and Google Docs. Hum. Behav. Emerg. Technol. 2024:9221377. doi: 10.1155/2024/9221377

Wang, L., and Ren, B. (2024). Enhancing academic writing in a linguistics course with generative AI: An empirical study in a higher education institution in Hong Kong. Educ. Sci. 14:1329. doi: 10.3390/educsci14121329

Yavich, R., and Davidovitch, N. (2024). Plagiarism among higher education students. Educ. Sci. 14:908. doi: 10.3390/educsci14080908

Yu, C., Yan, J., and Cai, N. (2024). ChatGPT in higher education: factors influencing ChatGPT user satisfaction and continued use intention. Front. Educ. 9:1354929. doi: 10.3389/feduc.2024.1354929

Zawacki-Richter, O., Marín, V. I., Bond, M., and Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education. Int. J. Educ. Technol. Higher Educ. 16, 1–27. doi: 10.1186/s41239-019-0171-0

Zou, M., and Huang, L. (2023). To use or not to use? Understanding doctoral students' acceptance of ChatGPT in writing through technology acceptance model. Front. Psychol. 14:1259531. doi: 10.3389/fpsyg.2023.1259531

Keywords: academic literacy, artificial intelligence, reading practice, writing practice, higher education

Citation: Baldrich K, Pérez-García C and Santamarina-Sancho M (2025) Artificial intelligence in academic literacy: empirical evidence on reading and writing practices in higher education. Front. Educ. 10:1701238. doi: 10.3389/feduc.2025.1701238

Received: 08 September 2025; Accepted: 03 November 2025;
Published: 18 November 2025.

Edited by:

Gemma Lluch, University of Valencia, Spain

Reviewed by:

Andry Sophocleous, University of Nicosia, Cyprus
Daniel Laliena, University of Zaragoza, Spain

Copyright © 2025 Baldrich, Pérez-García and Santamarina-Sancho. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Kevin Baldrich, a2JyOTU1QHVhbC5lcw==