ORIGINAL RESEARCH article
Front. Artif. Intell.
Sec. Natural Language Processing
Volume 8 - 2025 | doi: 10.3389/frai.2025.1615761
This article is part of the Research Topic: Artificial Intelligence in Educational and Business Ecosystems: Convergent Perspectives on Agency, Ethics, and Transformation
Ethical Implications of ChatGPT and Other Large Language Models in Academia
Provisionally accepted
- 1 Technological University Dublin, Dublin, Ireland
- 2 UNICAF, Larnaca, Cyprus
- 3 Lahore Garrison University, Lahore, Punjab, Pakistan
- 4 INTI International University, Nilai, Negeri Sembilan Darul Khusus, Malaysia
- 5 University of East London, London, United Kingdom
The rapid advancement of technology in the digital age has significantly transformed human communication and knowledge exchange. At the forefront of this transformation are Large Language Models (LLMs), powerful neural networks trained on vast text corpora to perform a wide range of Natural Language Processing (NLP) tasks. While LLMs offer promising benefits such as enhanced productivity and human-like text generation, their integration into academic settings raises pressing ethical concerns. This study investigates the ethical dimensions surrounding the use of LLMs in academia, driven by their increasing prevalence and the need for responsible adoption. A mixed-methods approach was employed, combining surveys, semi-structured interviews, and focus groups with key stakeholders, including students, faculty, administrators, and AI developers. The findings reveal a high level of LLM adoption accompanied by concerns related to plagiarism, bias, authenticity, and academic integrity. In response, the study proposes concrete strategies for ethical integration, including: (1) the establishment of transparent usage policies, (2) the incorporation of LLM literacy training into academic curricula, (3) the development of institutional review frameworks for AI-generated content, and (4) ongoing stakeholder dialogue to adapt policies as the technology evolves. These recommendations aim to support the responsible and informed use of LLMs in scholarly environments.

The widespread influence of technological advancement has notably transformed communication and knowledge sharing, with LLMs playing a central role. These advanced neural networks, trained on extensive text datasets, have become valuable tools for generating human-like text and improving efficiency. However, their growing use in academic contexts raises significant ethical concerns. This study investigates these issues, focusing on the implications of LLM integration in scholarly environments. Using a mixed-methods approach that included surveys, semi-structured interviews, and focus groups, the research gathered insights from students, faculty, administrators, and AI developers. The findings highlight substantial adoption of LLMs alongside concerns about plagiarism, bias, and academic integrity. Based on this input, the study proposes guidelines for their responsible and ethical use in academia.
Keywords: artificial intelligence, ChatGPT, generative AI, NLP, large language models
Received: 21 Apr 2025; Accepted: 28 Jul 2025.
Copyright: © 2025 Arshad, Ahmad, ONN and Elechi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Muhammad Arshad, Technological University Dublin, Dublin, Ireland
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.