ORIGINAL RESEARCH article
Front. Comput. Sci.
Sec. Digital Education
Volume 7 - 2025 | doi: 10.3389/fcomp.2025.1549761
This article is part of the Research Topic: Generative AI Tools in Education and its Governance: Problems and Solutions
Using Pseudo-AI Submissions for Detecting AI-Generated Code
Provisionally accepted
Shariq Bashir, Imam Muhammad ibn Saud Islamic University, Riyadh, Saudi Arabia
Generative AI tools can produce programming code that closely resembles human-written code, which creates challenges in programming education. Students may use these tools inappropriately for their programming assignments, and there are currently no reliable methods for detecting AI-generated code. Addressing this issue is important to protect academic integrity while still allowing the constructive use of AI tools.

Previous studies have explored ways to detect AI-generated text, such as analyzing structural differences, embedding watermarks, examining specific features, or using fine-tuned language models. However, certain techniques, such as prompt engineering, can make AI-generated code harder to identify. To tackle this problem, this article proposes a new approach for instructors to protect the integrity of programming assignments. The idea is for instructors to use generative AI tools themselves to create example AI-generated submissions (pseudo-AI submissions) for each task. These pseudo-AI submissions, shared along with the task instructions, act as reference solutions for students. In the presence of pseudo-AI submissions, students are aware that submissions resembling these examples are easily identifiable and will likely be flagged for lack of originality.

On one side, this transparency removes the perceived advantage of using generative AI tools to complete assignments, since their output would closely match the provided examples and be obvious to instructors. On the other side, the presence of these pseudo-AI submissions reinforces the expectation that students produce unique and personalized work, motivating them to engage more deeply with the material and rely on their own problem-solving skills. A user study indicates that this method can detect AI-generated code with over 96% accuracy.
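The abstract does not specify how resemblance between a student submission and the pseudo-AI submissions is measured. As a minimal sketch only, assuming a simple textual similarity check (Python's difflib.SequenceMatcher after light normalization, with an illustrative 0.8 threshold; the metric, threshold, and helper names are assumptions, not the authors' method), the flagging step could look like this:

import difflib
import re

def normalize(code: str) -> str:
    # Strip line comments and collapse whitespace so superficial edits
    # (re-indentation, comment changes) do not hide similarity.
    code = re.sub(r"#.*", "", code)
    code = re.sub(r"\s+", " ", code)
    return code.strip()

def similarity(a: str, b: str) -> float:
    # Similarity ratio in [0, 1] between two normalized code strings.
    return difflib.SequenceMatcher(None, normalize(a), normalize(b)).ratio()

def flag_submission(student_code: str, pseudo_ai_submissions: list[str],
                    threshold: float = 0.8) -> bool:
    # Flag the submission if it closely resembles any of the
    # instructor-generated pseudo-AI reference submissions.
    return any(similarity(student_code, ref) >= threshold
               for ref in pseudo_ai_submissions)

# Illustrative pseudo-AI reference the instructor would share with the task.
pseudo_ai = ["""
def factorial(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
"""]

# A submission that only renames a variable is still flagged for review.
student = """
def factorial(n):
    res = 1
    for i in range(2, n + 1):
        res *= i
    return res
"""

print(flag_submission(student, pseudo_ai))  # True

In practice, the comparison could equally be performed with any established code-similarity or plagiarism-detection tool; the point of the sketch is only that sharing the pseudo-AI submissions makes the comparison target explicit to both instructors and students.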
Keywords: Programming code plagiarism detection, Detecting AI-generated code, Generative AI tools, Large Language Models (LLMs), Programming code similarity
Received: 21 Dec 2024; Accepted: 29 Apr 2025.
Copyright: © 2025 Bashir. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Shariq Bashir, Imam Muhammad ibn Saud Islamic University, Riyadh, Saudi Arabia
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.