ORIGINAL RESEARCH article
Front. Artif. Intell.
Sec. AI for Human Learning and Behavior Change
More Polished, Not Necessarily More Learned: LLMs and Perceived Text Quality in Higher Education
Provisionally accepted
Lund University, Lund, Sweden
The use of Large Language Models (LLMs) such as ChatGPT is a prominent topic in higher education, prompting debate over their educational impact. Studies on the effect of LLMs on learning in higher education often rely on self-reported data, leaving an opening for complementary methodologies. This study contributes by analysing actual course grades as well as ratings by fellow students to investigate how LLMs can affect academic outcomes. We investigated whether using LLMs affected students’ learning by allowing them to choose one of three options for a written assignment: 1) composing the text without LLM assistance; 2) writing a first draft and using an LLM for revisions; or 3) generating a first draft with an LLM and then revising it themselves. Students’ learning was measured by their scores on a mid-course exam and their final course grades. Additionally, we assessed how students rated the quality of fellow students' texts for each of the three conditions. Finally, we examined how accurately fellow students could identify which LLM option (1–3) had been used for a given text. Our results indicate only a weak effect of LLM use. However, writing a first draft and using an LLM for revisions compared favourably to the 'no LLM' baseline in terms of final grades. Ratings of fellow students' texts were higher for texts created using option 3, specifically regarding how well-written they were judged to be. Regarding text classification, students most accurately identified the 'no LLM' baseline, but could not identify texts that were generated by an LLM and then edited by a student at a rate better than chance.
Keywords: LLMs, generative AI, higher education, student learning outcomes, academic writing, text quality, peer assessment
Received: 25 Jun 2025; Accepted: 05 Nov 2025.
Copyright: © 2025 Tärning, Tjøstheim and Wallin. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Betty Tärning, betty.tarning@lucs.lu.se
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.