- 1Department of Education and Sports Sciences, Telematic University of Pegaso, Naples, Italy
- 2Department of Humanities, Literature, Cultural Heritage, Education Sciences, University of Foggia, Foggia, Italy
- 3Department of Humanities and Social Sciences, University of Bergamo, Bergamo, Italy
Writing is a complex, learned skill that requires continuous practice and support. Digital technologies, and more recently AI-assisted writing tools, have been introduced to enhance students’ writing and self-assessment, with potential benefits for motivation, accuracy, and collaboration. Their effectiveness, however, depends on how they are integrated into educational contexts and on teachers’ perceptions. This exploratory study, conducted between February and April 2025 with 1,072 pre-service teachers enrolled in qualification programs in Italy, examines perceptions and practices related to AI-assisted writing tools for student self-assessment. In this study, the construct was operationalised through a restricted set of widely used AI-based generative writing tools available at the time of data collection. Adopting a mixed embedded design that combines quantitative and qualitative analyses, the research found that most teachers expressed cautious or sceptical views about the tools’ general usefulness, yet more positive attitudes emerged toward specific tools (e.g., ChatGPT) and sub-skills, especially spelling and text composition. Qualitative data indicated that teachers who experimented with such tools perceived benefits for student motivation, engagement, and skill development, though concerns persisted regarding critical thinking and over-reliance on algorithms. The findings underscore the need for targeted teacher training and further research to support the effective pedagogical integration of AI-assisted tools in writing instruction.
1 Introduction: writing and learning writing skills supported by technology
Writing is not an innate ability but a skill to be learned. As a human ability, writing is a complex process involving components at different levels: foundational components such as mechanical skills, vocabulary, sentence construction, text structure and idea generation, and higher-order ones such as reflection and elaboration, organization, clarity and accuracy (Cornoldi et al., 2018; Hayes and Flower, 1980; Graham, 2019). As an acquired skill, it requires continuous development through constant exercise and practice, a supportive context that provides appropriate models, the formalization of thinking, and the progressive symbolization of meanings, which together foster clearer ideas and easier identification of themes (Bazerman, 2016; Guo et al., 2025). The specific ability of text composition is closely linked to its communicative function, which presupposes the ability to elaborate concepts and to manage the cultural conventions that make a text comprehensible (Philippek et al., 2025; Portanova et al., 2017).
Research on learning to write highlights the importance of understanding and applying text structures, using authentic activities and tools such as graphic organizers, and integrating writing with reading and technology (Klein and Boscolo, 2016). Effective approaches often include explicit instruction on text structures, allowing students to actively engage in comprehension and writing activities, and the use of technologies such as keylogging software to study writing processes. Writing also serves as a self-regulated learning activity dependent on the writer’s goals and strategies, promoting critical thinking and problem-solving ability.
Research indicates that technology, particularly digital tools, supports writing development by improving motivation, writing quality, and collaboration, with a strong effect on the quantity and mechanics of writing (Li et al., 2023; Engeness, 2025; Cortiana, 2017). Digital technologies assist students, especially those with writing difficulties, by providing tools such as grammar checkers and AI assistants that improve structure and readability (Wu et al., 2018; Gaggioli, 2025). Research also explores digital multimodal composition (DMC) and the influence of prior beliefs on text comprehension (Kessler, 2024; Park, 2021; Yu et al., 2024).
However, the effectiveness of technology in writing learning depends on how it is integrated into the learning environment and setting (Ghavifekr and Rosdy, 2015; Williams and Beam, 2019; Bauer et al., 2025). Furthermore, while digital tools facilitate collaborative and multimodal writing, handwriting may still be superior for fundamental skills such as letter knowledge and brain connectivity, especially in early childhood (Askvik et al., 2020; Petrigna et al., 2022). Williams and Beam’s (2019) review confirms that technology-mediated writing instruction produces improvements in students’ composition processes and writing skills, increased motivation, engagement, and participation in writing tasks, and improved social interaction and peer collaboration. Challenges remain for teachers, however, in integrating technology into the writing curriculum.
2 AI-assisted self-assessment tools for writing skills
Artificial intelligence-assisted tools are slowly transforming the way we perform daily activities, increasingly impacting various areas of social and working life (Chakraborty et al., 2022).
For the purposes of this study, a conceptual distinction is drawn between AI-assisted writing and AI-generated writing. The former refers to tools that support and enhance the writing and self-assessment process through feedback, correction, or guidance functions, while the latter involves the autonomous production of texts by artificial systems with minimal human intervention. Although this conceptual distinction guided the theoretical framework, the empirical investigation focused on a limited number of widely accessible AI-based generative tools commonly perceived as supporting writing and self-assessment processes in educational contexts; the research thus focuses exclusively on AI-assisted tools used to facilitate students’ writing development and self-assessment.
Within the educational context in particular, these tools are fostering a significant evolution in teaching and learning processes and strategies (Markauskaite et al., 2022) and, more specifically, in assessment practices (Swiecki et al., 2022). AI-assisted tools are increasingly being used to support learning and assessment, thanks also to the implementation of technologies and practical strategies that allow for their relatively safe use—such as shared labeling systems (European Commission, 2022; Gašević et al., 2023) and watermarked, measurable short texts (Kamaruddin et al., 2018)—which, however, require appropriate national policies to regulate their use. Architectural innovations and pre-trained models—sometimes specialized—have supported the development of Natural Language Processing (NLP) systems and the implementation of increasingly large language models, which are now employed in tools for learning and written language assessment (Bender et al., 2021; Wang et al., 2023).
AI-assisted writing tools are also increasingly used to assess the subskills that contribute to students’ writing proficiency, as well as to process and synthesize data generated by writing tasks (Quratulain and Bilan, 2025).
The advantages of using AI-assisted writing tools would lie in the real-time modeling of students during self-assessment (Papamitsiou and Economides, 2017), as well as in supporting the understanding and solving of complex problems (Greiff et al., 2015). The use of automated writing assessment and feedback tools has also improved grammatical accuracy and the consistency of academic achievement, making students feel like more independent writers (Ranalli, 2021). The condition, however, is that students receive rapid feedback and easy-to-understand suggestions (Li et al., 2023). Tool-focused studies confirm that Grammarly helps non-native English learners improve their grammar, sentence structure, and academic tone (McCarthy et al., 2022), and that ChatGPT supports students in generating ideas, formulating arguments, and structuring text (Zhai, 2023).
In contrast, research on users’ perceptions, particularly those of school teachers, has identified both challenges and opportunities: on the one hand, a potential decline in critical thinking and authenticity in writing; on the other, greater efficiency and better idea generation (Agrati and Beri, 2025).
Educational research on teachers’ perceptions of and dispositions toward AI-assisted tools for language learning support found that the main challenges expected by 343 communication teachers were reduced critical thinking skills and a lack of authenticity in students’ writing; conversely, the main benefits they envisioned were reduced assessment time for teachers and improved idea generation in the writing process for students (Cardon et al., 2023). Furthermore, one hundred second language (L2) teachers expressed “mixed feelings” about using ChatGPT in their teaching and assessment practices—they were simultaneously enthusiastic about the pedagogical support offered by the tool and concerned about potential academic dishonesty (Zimotti et al., 2024). This study, in particular, found that teachers’ levels of enthusiasm or apprehension were related not so much to age or teaching experience as to their personal experience with the tool: the greater their familiarity with it, the more favorable, even enthusiastic, they were about using it in the classroom.
The qualitative study carried out by Marzuki et al. (2023), based on semi-structured interviews, identified the AI-assisted writing tools most commonly used in academic contexts—such as Quillbot, WordTune, Jenni, ChatGPT, Paperpal, Copy.ai, and Essay Writer—and examined language teachers’ perceptions of their impact on writing skills. The teachers interviewed unanimously agreed that these tools have a strong impact on writing skills, particularly on students’ ability to develop content and to organize texts structurally.
Considering the growing presence of AI-assisted writing tools in educational contexts, this study aims to explore how pre-service teachers perceive and use such tools in relation to students’ self-assessment of writing. Specifically, it seeks to address three main research questions:
1. What prevailing perceptions do teachers express toward AI-assisted writing tools?
2. What practices do teachers report regarding the use of these tools for student self-assessment?
3. Which factors influence teachers’ perceptions of students’ writing skills?
These questions guided the design of the study and the subsequent analysis of quantitative and qualitative data.
3 Purpose and method of the study
This section presents an exploratory study conducted between February and April 2025, involving students enrolled in the 60-credit teacher qualification programme, which constitutes part of the initial training for prospective secondary pre-service teachers1 (as per Art. 23 of the Italian Ministerial Decree of August 4, 2023). While the sample was one of convenience, it nonetheless offers meaningful insights into the forthcoming generation of teachers, which can inform the development of teacher education.
These students were attending cross-disciplinary courses at Pegaso Online University (n = 548) and Foggia University (n = 524) in Italy.
The aim of the research was to reflect on the potential of AI-assisted writing tools in school-based assessment practices. Specifically, the study sought to explore the perceptions of pre-service teachers, as well as to describe the practices they adopt in relation to the use of AI-assisted tools—both for assessing students’ learning and for students’ self-assessment of their own learning.
The sample consisted of 1,072 pre-service teachers. The majority identified as female (70.6%), followed by male (29.1%) and non-binary (0.3%) participants. In terms of age, the largest group of participants was over 45 years old (36.8%), with the remainder distributed across the other age groups. In terms of educational attainment, 57.5% of participants held a master’s degree, 19.9% held a postgraduate qualification, 3.1% held a PhD and 19.5% reported holding other qualifications.
Teaching experience refers to the number of years spent in professional teaching practice. Data on teaching experience were missing for 215 participants. Of those who provided this information, 37.7% had 0–3 years’ experience, 31.7% had 4–7 years’ experience, 14.9% had 8–11 years’ experience, 5.0% had 12–15 years’ experience, and 10.3% had over 16 years’ experience (Table 1).
A mixed open- and closed-ended ‘ad hoc’ questionnaire was used as the quantitative (perceptions of use) and qualitative (practices of use) data collection tool. It is divided into four sections: 1. socio-professional data; 2. general knowledge of AI-assisted tools; 3. use of AI-assisted tools in one’s own teaching practice; 4. perceptions and practices related to the use of AI-assisted tools in the self-assessment of students’ writing skills.
This paper reports on the analysis of responses to the questions in the fourth section:
• How useful are AI-assisted writing tools for students’ self-assessment? (Qa);
• Which AI-assisted writing tool is useful for students’ self-assessment? (Qb);
• For which specific writing skills (graphomotor2, orthographic, text composition) are AI-assisted tools useful for student self-assessment? (Qc);
• Describe an example of an AI-assisted writing tool used in your practice for student self-assessment (Qd).
Regarding the operationalisation of AI-assisted writing tools, respondents were presented with a predefined list of widely known AI-based applications (ChatGPT, Copilot, DeepL Write), selected for their visibility and accessibility at the time of data collection. The questionnaire did not allow participants to add or specify other tools (see Limitations).
This exploratory investigation employed a mixed embedded design (see Figure 1) that combined simultaneously collected quantitative and qualitative data (Creswell, 2013; Teddlie and Tashakkori, 2009).
Figure 1. Mixed embedded design adapted from Teddlie and Tashakkori (2009).
The authors selected this method because of the characteristics of the available data. A mixed embedded design allows for the use of one primary approach (quantitative, in our case) while embedding another approach (qualitative) to obtain further, more in-depth information.
Prior to analysis, the quantitative dataset was examined for completeness and internal consistency. Due to missing responses in some socio-professional variables, the effective sample size varied across analyses. Specifically, cases with missing gender data (n = 19) and incomplete teaching grade information (n = 244) were excluded from analyses involving those variables. As a result, the valid sample size was n = 1,053 for most covariates and n = 828 for analyses including teaching grade. No imputation procedures were applied, and all analyses were conducted using available-case data. Accordingly, variations in sample size across statistical analyses reflect differences in data completeness rather than inconsistencies in data processing.
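As an illustration of the available-case approach described above, the following sketch shows how such filtering could be performed in Python with pandas; the file name and column names (survey.csv, gender, teaching_grade) are hypothetical and do not come from the study.

```python
# Minimal sketch of available-case (non-imputed) data handling, assuming
# hypothetical column names; not the authors' actual analysis code.
import pandas as pd

df = pd.read_csv("survey.csv")  # hypothetical file: one row per respondent

# Full sample: 1,072 questionnaires
print(f"total respondents: {len(df)}")

# Analyses involving gender keep only cases with gender recorded
# (19 missing -> valid n = 1,053)
gender_cases = df.dropna(subset=["gender"])

# Analyses involving teaching grade require that variable to be complete
# (244 incomplete -> valid n = 828); no imputation is applied
grade_cases = df.dropna(subset=["teaching_grade"])
print(f"gender analyses: {len(gender_cases)}, grade analyses: {len(grade_cases)}")
```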
4 Results
4.1 QR1—teachers’ perceptions toward AI-assisted writing tools
A descriptive statistical analysis of the quantitative data was carried out (Table 2).
With regard to the perceived usefulness of AI-assisted tools for student self-assessment, the largest group of teachers (n = 444; 41.4%) expressed a cautious view of their usefulness, followed by a substantial proportion (n = 401; 36.5%) who perceived them as having low usefulness.
In terms of perceptions of the AI-assisted tools used by students for self-assessment—ChatGPT, Copilot, and DeepL Write—a substantial portion of teachers (n = 410; 38.2%) reported that they did not consider any of the tools listed to be useful. Conversely, around one third of respondents (n = 314; 29.3%) expressed a positive view of the usefulness of ChatGPT, followed by DeepL Write (n = 119; 11.1%).
In relation to the perceived usefulness of AI-assisted tools for students’ self-assessment of specific writing skills (graphomotor, orthographic, text composition), the largest share of teachers perceived them as useful for the orthographic sub-skill (n = 492; 45.9%), followed by the text composition sub-skill (n = 286; 26.7%).
4.2 QR2—teachers’ practices in using AI-assisted writing tools for student self-assessment
The qualitative data3 from the open-ended responses (Qd) were analyzed using a three-phase coding process (Glaser et al., 1968): [a] open coding, involving the conceptualization of significant textual units and the assignment of labels; [b] axial coding, focusing on the identification of frequent macro-categories based on the number of occurrences; [c] selective coding, entailing the hierarchical and analytical organization of the macro-categories, leading to the emergence of the main thematic categories. Out of 1,072 respondents, only 43 (approximately 4%) provided answers to the open-ended question (Qd).
Although the qualitative sample was small, the analysis focused on the depth and richness of the data rather than statistical representativeness.
The coding process was conducted by a single researcher across the three phases of open, axial and selective coding. Themes were consolidated by iteratively comparing segments and checking that recurring patterns were captured consistently. Although data saturation could not be formally assessed due to the low response rate, the emergent themes were validated by repeatedly examining all textual units to ensure that the identification of main categories and sub-categories was reliable and consistent. The small size of the qualitative sample should be considered when generalising the findings, but it provides valuable insights into teachers’ practices and perceptions of AI-assisted writing tools.
The unit of analysis consisted of individual text segments rather than participants, resulting in a corpus of 73 segments derived from 43 respondents (Table 3).
The analysis identified three main groups of categories:
• categories concerning the negative effects of AI-assisted tools on students (n = 8 segments);
• categories relating to the contexts and conditions of use of such tools by teachers (n = 26 segments);
• categories addressing the positive effects of these tools on students’ motivation and skills development (n = 39 segments).
The first group—‘negative effects on students’—comprises two sub-categories, linked respectively to psychological aspects (n = 3), such as a sense of loneliness, and cognitive aspects (n = 5), such as the delegation of work to algorithmic processes and the risk of failing to understand.
The second group—‘conditions of use’—includes three subtypes: [a] common practices (n = 12), linked to evaluation procedures; [b] cases of exceptional use (n = 4), such as during school closures due to COVID-19; [c] teaching experiences that extend the use of these tools (n = 10), for example through their integration with other interactive digital tools or within more complex and structured projects.
The categories of the third group—‘positive effects on students’—are likewise divided into three main areas: [a] motivational components (n = 19), such as the interest shown in using these tools, associated with involvement, stimulation, a positive disposition and a sense of play; [b] the possibility of enhancing multiple levels of learning (n = 9), such as writing and content skills; and [c] transversal skills (n = 16), such as critical thinking and self-control in the process of checking and reflecting on one’s own work.
Although the number of open responses was relatively low, the practices described are predominantly positive in character. Notably, teachers tended to adopt the student’s perspective in their descriptions, focusing on motivation and learning enhancement as well as actual skill development. Even the few negative aspects were expressed in reference to the students’ experience.
4.3 QR3—associations between socio-professional variables and teachers’ perceptions of students’ writing skills
As for the perceived usefulness of AI-assisted tools for supporting students in the self-assessment of specific writing sub-skills (graphomotor, orthographic, text composition), the largest proportion of teachers considered them useful for the orthographic sub-skill (n = 492; 45.9%), followed by text composition (n = 286; 26.7%).
An inferential analysis was conducted to examine the relationship between Qc (usefulness of AI for specific writing skills) and the socio-professional variables (gender, employment status, age, educational qualification, years of teaching experience and teaching grade) (Tables 4, 5).
Table 5. Chi-square tests of association between demographic variables and perceived usefulness of AI for writing skills.
To examine the influence of socio-professional covariates on the perceived usefulness of AI-assisted writing tools for specific skills (Qc), a multinomial logistic regression was conducted (Table 6).
No significant associations were found for age (χ2(15) = 18.87, p = 0.220), educational qualification (χ2(9) = 13.87, p = 0.127), years of experience (χ2(15) = 16.32, p = 0.361) or teaching grade (χ2(9) = 10.35, p = 0.323).
The association between gender and the writing skill for which AI was considered useful, however, was statistically significant (χ2(6) = 19.56, p = 0.003), although the effect size was very small (Cramér’s V = 0.096), suggesting that gender’s practical influence on perceptions is likely minimal. Women were slightly more likely to identify orthography as a writing skill suitable for AI support, whereas men were somewhat more likely not to identify any writing skill as such.
Similarly, the association between employment status and the writing skill for which AI tools were considered useful was statistically significant (χ2(9) = 30.19, p < 0.001); however, the effect size was again weak (Cramér’s V = 0.098). This suggests that, although employment status is associated with perceptions, it has minimal practical explanatory power. Fixed-term (temporary) teachers were marginally more likely to select “orthography” or “text composition” as the skills most in need of AI support.
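To make the reported statistics concrete: Cramér’s V is derived from the chi-square statistic as V = √(χ2 / (n · min(r − 1, c − 1))), where n is the sample size and r and c are the numbers of rows and columns in the contingency table. The sketch below shows how such a test could be run in Python; the counts in the table are hypothetical, and only the procedure mirrors the analyses above.

```python
# Illustrative sketch (hypothetical counts, not the study's data): chi-square
# test of association plus Cramér's V, as reported for gender x Qc above.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: gender categories; columns: Qc choice
# (graphomotor, orthographic, text composition, none of the above)
table = np.array([
    [45, 350, 200, 160],  # female (hypothetical)
    [25, 140,  85, 150],  # male   (hypothetical)
])

chi2, p, dof, _ = chi2_contingency(table)

# Cramér's V = sqrt(chi2 / (n * min(r - 1, c - 1)))
n = table.sum()
r, c = table.shape
v = np.sqrt(chi2 / (n * min(r - 1, c - 1)))

print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}, Cramér's V = {v:.3f}")
```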
With reference to the multinomial logistic regression analysis, the reference category for Qc was set as ‘none of the above’. Although the pseudo R2 was low (Nagelkerke = 0.041), the overall model fit was better than the null model (χ2(18) = 30.147, p = 0.036). Among the covariates, only participants’ gender had a statistically significant effect. Specifically, compared to men, women were less likely to indicate ‘written expression skill’ (Exp(B) = 0.570, p = 0.015) and ‘orthographic skill’ (Exp(B) = 0.494, p = 0.001) relative to the reference category. No other socio-demographic factors (age, educational qualification, employment status, teaching grade or years of teaching experience) had a significant effect on the Qc categories. Given the small effect sizes in both the chi-square and regression analyses, these results suggest that the practical influence of gender and employment status on perceptions of the usefulness of AI for specific writing skills is likely to be limited.
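A sketch of how such a model could be specified with statsmodels is given below; the dataset and variable names are hypothetical, and the reference category ‘none of the above’ is encoded as the first level so that the fitted coefficients can be exponentiated into odds ratios comparable to the Exp(B) values above.

```python
# Minimal sketch (assumed variable names, not the authors' code) of a
# multinomial logistic regression of Qc on socio-professional covariates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical dataset

covariates = ["gender", "age_band", "qualification",
              "employment_status", "teaching_grade", "experience_band"]
df = df.dropna(subset=["qc"] + covariates)  # available-case analysis

# Encode Qc with 'none of the above' first, so it becomes the reference (code 0)
levels = ["none of the above", "graphomotor", "orthographic", "text composition"]
df["qc_code"] = pd.Categorical(df["qc"], categories=levels).codes

model = smf.mnlogit(
    "qc_code ~ C(gender) + C(age_band) + C(qualification) "
    "+ C(employment_status) + C(teaching_grade) + C(experience_band)",
    data=df,
).fit()

print(model.summary())       # one coefficient set per non-reference outcome
print(np.exp(model.params))  # odds ratios, comparable to Exp(B) in Table 6
```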
Although some associations reached statistical significance, the observed effect sizes were very small and the overall explanatory power of the model was low.
5 Discussion
Analysing both quantitative and qualitative data provides a nuanced picture of pre-service teachers’ perceptions and practices regarding AI-assisted writing tools for student self-assessment. Overall, a discrepancy emerges between teachers’ expressed perceptions and their actual practices: while many report cautious or sceptical attitudes toward the general usefulness of AI-assisted tools, those with direct experience of using them express strongly positive perceptions. This finding is consistent with the results of recent studies (Zimotti et al., 2024; Marzuki et al., 2023), which have shown that hands-on experience with AI tools increases teachers’ appreciation of their pedagogical value, whereas non-use is associated with more critical or ambivalent attitudes.
Quantitative data suggest that teachers’ perceptions of usefulness increase when they focus on specific tools, such as ChatGPT, or particular writing sub-skills, such as spelling and text composition, indicating that the evaluation of AI tools is closely linked to concrete classroom applications. Gender and employment status showed statistically significant associations with perceived usefulness: female teachers were slightly more likely to recognise orthography as a skill suitable for AI support, while temporary teachers were marginally more likely to select text composition. However, the corresponding effect sizes were very small and the explanatory power of the regression model was limited, so these variables should be read as weakly associated factors within an exploratory framework rather than as strong predictors; perceptions appear to be shaped predominantly by experience with the tools rather than by demographic or professional characteristics.
However, concerns were also reported, particularly regarding cognitive and psychological aspects. These include the risk of over-reliance on algorithmic suggestions, misinterpreting or uncritically accepting feedback, and students potentially feeling isolated. These concerns are consistent with previous research on the challenges of AI integration in educational contexts (Agrati and Beri, 2025; Cardon et al., 2023).
The study also corroborates prior evidence that teachers’ enthusiasm or apprehension is closely linked to their familiarity with the tools: greater experience tends to produce more favorable perceptions, as observed by Zimotti et al. (2024). This suggests that experience mediates between perception and practice, reinforcing the importance of providing pre-service teachers with opportunities to actively engage with AI-assisted writing tools during their training.
The pedagogical implications are clear: teacher education programs should integrate training on responsible and reflective use of AI, strategies for fostering critical assessment and self-assessment in students, awareness of ethical issues, and approaches to combine AI tools with traditional writing instruction to avoid over-reliance on technology. In summary, the analysis highlights that while perceptions and practices do not always align, teachers who have experimented with AI-assisted tools report positive outcomes for both motivation and skill development, whereas teachers without hands-on experience tend to maintain cautious or critical perspectives.
6 Limitations of the study
This study has several limitations that should be taken into account when interpreting the findings. Firstly, the exploratory and descriptive nature of the study means that it captures perceptions and practices without establishing causal relationships or predictive models, which limits the generalisability of the findings beyond the context in which the study was conducted.
Secondly, although the ad hoc questionnaire was carefully designed to align with the research objectives, it did not undergo any formal validation procedures. Specifically, no systematic expert review, pilot testing or content validity assessment was conducted prior to data collection. Consequently, some items may not fully capture the complexity of teachers’ attitudes, perceptions or practices relating to the use of AI, which could affect the precision of the measurements and the internal consistency of the results. For this reason, the results should primarily be interpreted as exploratory indicators rather than as robust measurements of stable constructs.
Thirdly, another limitation concerns the operationalisation of the construct under investigation. While the study refers to AI-assisted writing tools, the empirical focus was limited to a few AI-based generative applications (ChatGPT, Copilot and DeepL Write). Participants were not given the opportunity to report on other commonly used tools, such as Grammarly and QuillBot. This restriction may have affected content validity and influenced the reported levels of familiarity and use, thus limiting the interpretability of the findings to this specific subset of tools.
Fourthly, only around 4% of participants provided qualitative comments, which limits representativeness and precludes confirmatory qualitative interpretations. Accordingly, the qualitative findings should be understood as illustrative insights that offer depth and contextualisation to the quantitative results rather than as evidence of saturation or general patterns.
Fifthly, the sample was drawn exclusively from teacher qualification programs in Italy between February and April 2025, reducing the possibility of extending the findings to other countries, school levels or professional groups. Moreover, certain socio-professional characteristics, such as employment contract type or years of teaching experience, were over- or under-represented, which may have constrained data variability. Finally, the rapid evolution of AI technologies means that the perceptions captured represent a snapshot in time, and the adoption and acceptability of tools such as ChatGPT or Grammarly are subject to continuous change.
7 Conclusion
This study indicates that AI-assisted writing tools for student self-assessment are perceived ambivalently by pre-service teachers in Italy. Rather than identifying predictive or influential factors, the study highlights tentative associations and emerging patterns that should be interpreted within an exploratory framework, particularly in light of the small effect sizes and the limited explanatory power of the statistical models.
Overall, perceptions are more favorable among teachers with direct experience using these tools, particularly when focusing on specific skills such as spelling and text composition. Qualitative data and open-ended comments suggest that AI-assisted tools can enhance student motivation, engagement, and the development of both writing-specific and transversal skills when used in a guided and reflective manner. The findings underscore the importance of teacher training that emphasizes practical engagement with AI, critical assessment strategies, ethical awareness, and the integration of AI with traditional writing instruction. Institutional support and educational leadership are crucial to ensure that technology is both available and used responsibly in classrooms. Future research should explore the actual impact of AI-assisted self-assessment on student writing skills across different school levels, investigate the relationship between perceptions, familiarity, and classroom use of AI tools, and employ longitudinal and comparative approaches to monitor evolving practices and perceptions. Additionally, research should consider adaptation of AI-assisted tools to diverse cultural and institutional contexts to better understand their potential and limitations.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.
Ethics statement
Ethical approval was not required for the studies involving humans because the study was conducted in Italy, where institutional guidelines and national legislation, aligned with the European Union’s General Data Protection Regulation (GDPR), provide a clear framework for when formal ethics board review is required. According to Italian law (Legislative Decree No. 196/2003, as amended, and in line with EU Regulation 2016/679), formal IRB approval is not mandated for research that uses exclusively anonymised or pseudonymised data and does not involve “sensitive personal data.” This category of data is strictly defined to include information that could reveal racial or ethnic origin, political opinions, religious or philosophical beliefs, or data concerning health, sexual life, or genetic and biometric data. The questionnaire for this study collected only non-sensitive personal data, such as age, gender, educational background, and prior teaching experience. This type of information is not considered sensitive under the aforementioned Italian and EU regulations. Furthermore, all data were collected and processed in an anonymised format, ensuring that no individual participant could be identified. Consistent with all ethical research practices, participants were fully informed about the study’s purpose, the nature of the data collected, and the voluntary nature of their participation. Informed consent was obtained from all participants before data collection began. This rigorous adherence to national and EU data protection regulations, which govern the ethical use of non-sensitive, anonymised data, ensured the study was conducted to the highest ethical standards applicable in our jurisdiction. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.
Author contributions
LA: Writing – original draft, Writing – review & editing. VV: Writing – original draft, Writing – review & editing. AB: Writing – original draft, Writing – review & editing. PB: Writing – original draft, Writing – review & editing.
Funding
The author(s) declared that financial support was not received for this work and/or its publication.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that Generative AI was not used in the creation of this manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Footnotes
1. ^Note that the participants in the study, although referred to as pre-service teachers, already had work experience. This is because it is very common in Italy for teachers to have worked before completing their degree. On the one hand, the law allows those who possess the required university credits to be included in ranking lists for teaching assignments even before completing their university studies. On the other hand, many pre-service teachers have already earned a degree and, after gaining experience in schools, decide to obtain teaching certification in order to move beyond precarious employment.
2. ^As outlined in the theoretical framework, graphomotor skills form part of writing ability (Cornoldi et al., 2018). For this reason, they were included in the administered questionnaire. However, it is evident that AI tools have little influence on these skills.
3. ^Given the limited number of open-ended responses, the qualitative findings are presented as illustrative examples of teachers’ experiences and practices rather than as representative or confirmatory evidence.
References
Agrati, L. S., and Beri, A. (2025). AI-assisted writing tools for student self-assessment. Investigation on teachers’ perceptions and practices. G. Ital. Educ. Salute Sport Didatt. Inclusiva 9, 1–16. doi: 10.32043/gsd.v9i1.1351
Askvik, E. O., van der Weel, F. R., and van der Meer, A. L. H. (2020). The importance of cursive handwriting over typewriting for learning in the classroom: a high-density EEG study of 12-year-old children and young adults. Front. Psychol. 11:1810. doi: 10.3389/fpsyg.2020.01810
Bauer, E., Greiff, S., Graesser, A. C., Scheiter, K., and Sailer, M. (2025). Looking beyond the hype: understanding the effects of AI on learning. Educ. Psychol. Rev. 37:45. doi: 10.1007/s10648-025-10020-8
Bazerman, C. (2016). “What do sociocultural studies of writing tell us about learning to write” in Handbook of writing research. 2nd Edn. eds. C. A. MacArthur, S. Graham, and J. Fitzgerald (New York, NY: The Guilford Press), 11–23.
Bender, E. M., Gebru, T., McMillan-Major, A., and Shmitchell, S. (2021). “On the dangers of stochastic parrots: can language models be too big?” in Proceedings of the 2021 ACM conference on fairness, accountability, and transparency (FAccT ‘21) (Association for Computing Machinery), 610–623.
Cardon, P., Fleischmann, C., Aritz, J., Logemann, M., and Heidewald, J. (2023). The challenges and opportunities of AI-assisted writing: developing AI literacy for the AI age. Bus. Prof. Commun. Q. 86, 257–295. doi: 10.1177/23294906231176517
Chakraborty, U., Banerjee, A., Saha, J. K., Sarkar, N., and Chakraborty, C. (2022). Artificial intelligence and the fourth industrial revolution. Singapore: Jenny Stanford Publishing.
Cornoldi, C., Meneghetti, C., Moè, A., and Zamperlin, C. (2018). Processi cognitivi, motivazione e apprendimento. Bologna: il Mulino.
Cortiana, P. (2017). Support writing through new technologies. Formazione & Insegnamento 15, 153–164.
Creswell, J. W. (2013). Research design: qualitative, quantitative and mixed methods approaches. Thousand Oaks, CA: Sage.
Engeness, I. (2025). Cultural-historical perspective to design pedagogical AI for enhancing student writing. Technol. Knowl. Learn. doi: 10.1007/s10758-025-09876-0
European Commission (2022). Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators. Luxembourg: Publications Office of the European Union.
Gaggioli, C. (2025). Reading and writing in the age of AI: opportunities and challenges for students with dyslexia. Ital. J. Spec. Educ. Incl. 13, 107–119. doi: 10.7346/sipes-01-2025-8
Gašević, D., Siemens, G., and Sadiq, S. (2023). Empowering learners for the age of artificial intelligence. Comput. Educ.: Artif. Intell. 4:100130. doi: 10.1016/j.caeai.2023.100130
Ghavifekr, S., and Rosdy, W. A. W. (2015). Teaching and learning with technology: effectiveness of ICT integration in schools. Int. J. Res. Educ. Sci. 1, 175–191. doi: 10.21890/ijres.23596
Glaser, B. G., Strauss, A. L., and Strutzel, E. (1968). The discovery of grounded theory; strategies for qualitative research. Nurs. Res. 17:361. doi: 10.1097/00006199-196807000-00014
Graham, S. (2019). Changing how writing is taught. Rev. Res. Educ. 43, 277–303. doi: 10.3102/0091732X18821125
Greiff, S., Stadler, M., Sonnleitner, P., Wolff, C., and Martin, R. (2015). Sometimes less is more: comparing the validity of complex problem-solving measures. Intelligence 50, 100–113. doi: 10.1016/j.intell.2015.02.007
Guo, Y., Puranik, C., Xie, Y., and Zhao, A. (2025). Typical writing instruction and practice: contributions to writing skills in kindergarten. Read. Writ. doi: 10.1007/s11145-025-10654-8
Hayes, J. R., and Flower, L. S. (1980). “Identifying the organization of the writing process” in Cognitive processes in writing. eds. L. W. Gregg and E. R. Steinberg (Hillsdale, NJ: Lawrence Erlbaum Associates), 3–30.
Kamaruddin, N. S., Kamsin, A., Por, L. H., and Rahman, H. (2018). A review of text watermarking: theory, methods, and applications. IEEE Access 6, 8011–8028. doi: 10.1109/ACCESS.2018.279658
Kessler, M. (2024). Digital multimodal composing: connecting theory, research, and practice in second language acquisition. Bristol, UK: Multilingual Matters.
Klein, P. D., and Boscolo, P. (2016). Trends in research on writing as a learning activity. J. Writ. Res. 7, 311–350. doi: 10.17239/jowr-2016.07.03.01
Li, X., Li, B., and Cho, S.-J. (2023). Empowering Chinese language learners from low-income families to improve their Chinese writing with ChatGPT’s assistance afterschool. Languages 8:238. doi: 10.3390/languages8040238
Markauskaite, L., Marrone, R., Poquet, O., Knight, S., Martinez-Maldonado, R., Howard, S., et al. (2022). Rethinking the entwinement between artificial intelligence and human learning: what capabilities do learners need for a world with AI? Comput. Educ.: Artif. Intell. 3:100056. doi: 10.1016/j.caeai.2022.100056
Marzuki, Widiati, U., Rusdin, D., Darwin, and Indrawati, I. (2023). The impact of AI writing tools on the content and organization of students’ writing: EFL teachers’ perspective. Cogent Educ. 10:2236469. doi: 10.1080/2331186X.2023.2236469
McCarthy, K. S., Roscoe, R. D., Allen, L. K., Likens, A. D., and McNamara, D. S. (2022). Automated writing evaluation: Does spelling and grammar feedback support high-quality writing and revision? Assess. Writ. 52:100608.
Papamitsiou, Z., and Economides, A. A. (2017). “Student modeling in real-time during self-assessment using stream mining techniques” in Proceedings of the 17th IEEE international conference on advanced learning technologies (Timisoara, Romania: IEEE), 286–290.
Park, J. H. (2021). ““Dear future me”: connecting college L2 writers’ literacy paths to an envisioned future self through a multimodal project” in Multimodal composing in K-16 ESL and EFL education: multilingual perspectives. eds. D. Shin, T. Cimasko, and Y. Yi (Singapore: Springer), 73–86.
Petrigna, L., Thomas, E., Brusa, J., Rizzo, F., Scardina, A., Gallassi, C., et al. (2022). Does learning through movement improve academic performance in primary schoolchildren? A systematic review. Front. Pediatr. 10:841582. doi: 10.3389/fped.2022.841582
Philippek, J., Kreutz, R. M., Hennes, A. K., Schmidt, B. M., and Schabmann, A. (2025). The contributions of executive functions, transcription skills and text-specific skills to text quality in narratives. Read. Writ. 38, 651–670. doi: 10.1007/s11145-024-10528-5
Portanova, P., Rifenburg, M., and Roen, D. (2017). Contemporary perspectives on cognition and writing. Fort Collins, CO: The WAC Clearinghouse.
Quratulain, S., and Bilan, S. (2025). The effectiveness of AI-powered writing assistants in enhancing essay writing skills at undergraduate level. J. Soc. Sci. Arch. 3, 845–855. doi: 10.59075/jssa.v3i1.166
Ranalli, J. (2021). Automated writing evaluation and feedback: examining students’ perceptions and engagement. Comput. Assist. Lang. Learn. 34, 343–365. doi: 10.1080/09588221.2019.1640109
Swiecki, Z., Khosravi, H., Chen, G., Martinez-Maldonado, R., Lodge, J. M., Milligan, S., et al. (2022). Assessment in the age of artificial intelligence. Comput. Educ. Artif. Intell. 3:100075. doi: 10.1016/j.caeai.2022.100075
Teddlie, C., and Tashakkori, A. (2009). Foundations of mixed methods research: Integrating quantitative and qualitative approaches in the social and behavioral sciences. Thousand Oaks, California: Sage.
Wang, H., Li, J., Wu, H., Hovy, E., and Sun, Y. (2023). Pre-trained language models and their applications. Engineering 25, 51–65. doi: 10.1016/j.eng.2022.04.024
Williams, C., and Beam, S. (2019). Technology and writing: review of research. Comput. Educ. 128, 227–242. doi: 10.1016/j.compedu.2018.09.024
Wu, T. F., Chen, C. M., Lo, H. S., Yeh, Y. M., and Chen, M. C. (2018). Factors related to ICT competencies for students with learning disabilities. J. Educ. Technol. Soc. 21, 76–88.
Yu, S., Zhang, E. D., and Liu, C. (2024). Research into practice: digital multimodal composition in second language writing. Lang. Teach., 1–17. doi: 10.1017/S0261444824000375
Keywords: AI-assisted, questionnaire, self-assessment, teachers, writing skills
Citation: Agrati LS, Vinci V, Beri A and Berardi P (2026) Use of AI-assisted writing tools for student self-assessment at school: survey on teachers’ perceptions and practices. Front. Educ. 11:1701597. doi: 10.3389/feduc.2026.1701597
Edited by:
José Manuel de Amo Sánchez-Fortún, University of Almeria, Spain
Reviewed by:
Kevin Baldrich, University of Almeria, Spain
Asih Zunaidah, BINUS University Fakultas Humaniora, Indonesia
Zhi Wang, Civil Aviation Management Institute of China, China
Copyright © 2026 Agrati, Vinci, Beri and Berardi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Laura Sara Agrati, laurasara.agrati@unipegaso.it