PERSPECTIVE article

Front. Commun., 30 April 2025

Sec. Culture and Communication

Volume 10 - 2025 | https://doi.org/10.3389/fcomm.2025.1598988

This article is part of the Research Topic “Teaching and Assessing with AI: Teaching Ideas, Research, and Reflections.”

[AI-reflection] writing with machines? Reconceptualizing student work in the age of AI

Mark F. Hau*

  • Department of Communication and Arts, Roskilde University, Roskilde, Denmark

The rise of generative artificial intelligence (GenAI) such as ChatGPT fundamentally challenges traditional assumptions about student authorship and assessment in higher education. Drawing on Michel Foucault's notion of the “author function” and Roland Barthes' critique of textual authority, this paper argues that AI chatbots expose critical tensions in how we understand and evaluate student work. After examining why conventional approaches to ensuring assessment integrity have become obsolete, I propose a 'tapas model' of assessment that combines different evaluation types: pure human work, bounded AI use, and full AI integration. This model moves beyond binary notions of AI detection and cheating, instead embracing AI as a co-participant in knowledge production while ensuring students develop both traditional and AI-enhanced competencies. The paper argues for shifting from punitive AI detection to transparent AI declaration, treating AI as a methodological consideration rather than a threat to academic integrity. This approach acknowledges that knowledge creation has always involved complex networks and suggests that education must evolve beyond simplistic notions of individual authorship to embrace more nuanced forms of assessment suited to an AI-augmented world.

Introduction

“Write a ten-page essay on the impact of digital platforms on public discourse. Your response must include references to Habermas, van Dijck, and Stiegler, and demonstrate engagement with at least two sources from the syllabus. The essay should present a clear argument and follow APA citation format.”

Prompts like these are staples of higher education. The wording might change and the requirements vary, but written take-home exams are a common form of assessment because they ostensibly promote higher-order thinking skills and allow time for reflection (Bengtsson, 2019, p. 13). We use essays as instruments for evaluating students' ability to articulate arguments, analyze texts, reflect critically on ideas, and demonstrate subject competence. But the rise of generative artificial intelligence (GenAI) such as ChatGPT unmoors student writing from traditional assumptions about authorship, blurring the lines between original and copy, and even between human and machine.

Since AI chatbots can produce a competent response to essay prompts in minutes (Scarfe et al., 2024), what exactly are we assessing in a take-home written essay? More importantly, what does it mean for students to “write” in an era when a machine can do it for them? These questions are painfully real for educators who must figure out what AI-safe assessment means in their classes, and they are theoretically intriguing beyond practical concerns about plagiarism.

Drawing on Michel Foucault's notion of the “author function” and Roland Barthes' critique of textual authority, I argue that AI chatbots expose fundamental tensions underpinning contemporary assessment in higher education. The goal is not to examine whether students should or should not use AI—I argue this ship has long sailed—but rather to explore how these tools force a fundamental reassessment of how educators understand student work, intellectual labor, and valid authorship.

When essays write themselves

One of the most pressing issues of GenAI in higher education is the potential for academic dishonesty, with students using the tool to complete assignments without understanding the underlying concepts (Kim et al., 2024, p. 389). As Kim et al. argue, preventing the use of ChatGPT or AI writing among students is impossible, given both the obvious benefits of these technologies and the lack of effective AI-writing detection tools (ibid.). Traditional methods for ensuring assessment integrity, such as complex question design, plagiarism detection software, and proctoring, have long been used to deter dishonest practices (Bengtsson, 2019). However, the advent of ChatGPT has rendered many of these measures ineffective almost overnight, forcing educators to reconsider their approach to student work.

One of the strongest historical defenses against take-home exam plagiarism has been designing questions that require deep engagement with course materials (Bredon, 2003). The reasoning is that if answers require genuine understanding rather than mere regurgitation, students cannot easily outsource them. Yet, when properly prompted, AI chatbots can synthesize complex theoretical arguments, identify logical connections, and even mimic human reasoning patterns at a very high level (Hubert et al., 2024). The requirement for direct references to course materials may introduce a minor obstacle, but students can simply feed the chatbot relevant course content to circumvent it.

Strategies aimed at ensuring individual accountability, such as honor codes (Frein, 2011) and grading penalties for unreferenced copying (Freedman, 1968), function only insofar as students perceive a realistic threat of detection or feel an ethical responsibility. However, GenAI introduces a gray area in which students may not perceive their use as cheating but rather as “assistance,” like using a thesaurus.

The few measures that impose direct friction, such as requiring handwritten responses (López et al., 2011) or watermarking printed exams, are among the least scalable and practical solutions, and they do not prevent students from using GenAI to generate answers before transcribing them. Similarly, statistical cohort analysis of similar responses (D'Souza and Siegfeldt, 2017) is unreliable, as chatbots generate slightly varied but fundamentally equivalent answers for different users.

Lastly, traditional plagiarism detection mechanisms (Williams and Wong, 2009) are inadequate because AI-generated content is novel at the moment of creation. Although many AI detectors have been proposed, such as Originality AI or ZeroGPT, these detectors are fundamentally unreliable (Gorichanaz, 2023, p. 184). Neither chatbots nor humans can correctly identify AI-generated text (Rathi et al., 2024); the detectors are easily circumvented through precise prompting or by instructing ChatGPT to rewrite a passage using less predictable language patterns (Sadasivan et al., 2023); and they show considerable bias against non-native English speakers (Liang et al., 2023). An arms race of AI detection between faculty and students is therefore not only futile but damaging, as it may steer institutions away from implementing more collaboration-oriented and transparent approaches to human-AI interaction (Oravec, 2023, p. 214).
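To make this fragility concrete, consider a minimal toy sketch. It is my own illustration, not the method of any detector cited above (real detectors rely on large neural language models rather than word counts), but it captures the shared logic: flag text whose wording is too predictable under a reference model. The corpus, threshold logic, and sentences below are invented for illustration only.

```python
import math
from collections import Counter

# Hypothetical reference corpus standing in for a detector's language model;
# real detectors use large neural models, not unigram frequencies.
REFERENCE = (
    "the essay argues that digital platforms shape public discourse and "
    "that students must engage critically with the course material"
).split()

FREQ = Counter(REFERENCE)
TOTAL = sum(FREQ.values())
VOCAB = len(FREQ)

def predictability(text: str) -> float:
    """Mean log-probability per word under the toy unigram model.
    Values closer to zero mean 'more predictable', which naive
    detectors treat as a signal of machine-generated text."""
    words = text.lower().split()
    # Laplace smoothing: unseen words get a small but nonzero probability.
    return sum(
        math.log((FREQ[w] + 1) / (TOTAL + VOCAB)) for w in words
    ) / len(words)

# Two paraphrases of the same claim: one in common wording,
# one deliberately reworded with rarer vocabulary.
plain = "the essay argues that digital platforms shape public discourse"
reworded = "this piece contends networked media remold civic conversation"

print(f"plain:    {predictability(plain):.2f}")
print(f"reworded: {predictability(reworded):.2f}")
# The reworded version scores as far less predictable and would slip
# under any fixed detection threshold, mirroring the paraphrasing
# attacks described by Sadasivan et al. (2023).
```

Because any fixed threshold can be undercut by rewording, tightening the threshold only inflates false accusations against human writers, including the non-native speakers noted above.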

As such, the capacity of chatbots to generate tailored, plausible, and contextually appropriate writing necessitates a fundamental reconsideration of assessment design. Evaluation must move beyond reactive security measures toward novel forms that either integrate GenAI use transparently or emphasize kinds of knowledge demonstration that chatbots cannot easily replicate, such as oral defenses, in-class problem-solving, or project-based assessments.

However, switching fully to oral exams may marginalize students who process information differently or need more reflection time (Sequeira, 2021), and requires careful implementation to avoid disadvantaging ethnic minorities and foreign-born students (Roberts et al., 2000).

In sum, the written essay is deeply compromised, detection of AI-generated content is unfeasible, and simply reverting to all-oral exams is unsustainable. Locking out GenAI fails to help students engage productively and responsibly with the technology (Liu and Bates, 2025), and as Fernando Juárez and Rudick (2025) argue, discussions surrounding AI in communication education are often too narrowly focused on student cheating and plagiarism, overlooking the significant transformative potential of AI (p. 123). While we scramble to understand the practical uses of AI chatbots in both teaching and assessment, larger, more fundamental questions must be addressed: how should human-AI collaboration be understood, valued, and evaluated? The answer will shape the evolving role of GenAI in higher education.

Author-gods and author-functions

When AI generates text, who—or what—is the author? Is ChatGPT a method, a source, or something else entirely? These questions expose fundamental tensions in how we conceptualize authorship, originality, and textual ownership. Two theoretical frameworks, developed well before the advent of GenAI, offer surprisingly relevant analytical insight: Roland Barthes' dissolution of the author-god and Michel Foucault's “author-function”.

Barthes (1977, p. 148) famously stated that the Author must die so that the reader may live. To Barthes, a text does not reveal a single, “theological” meaning or message, but is “a network of quotations, drawing from countless foci of culture” (Barthes, 1977, p. 146). The writer does not invent the text but reenacts it, combining earlier fragments in juxtaposition to create a palimpsest. This understanding transforms the written work from “book” to “text,” with the reader, and not the writer, acting as the authority (Barthes, 1977, p. 148). Just as Barthes suggests that texts are woven from cultural quotations, GenAI outputs are pastiche, bearing the imprint of countless contributors who provided the data—freely or not—on which the models were trained. AI chatbots do not originate ideas but rather “remix” the vast corpus of human-generated text they have been fed. If the author as an individual creative force is a construct, then ChatGPT as a “source” to be cited is even more so. AI chatbots challenge the idea of authorship by diffusing the notion of creative agency across algorithms and datasets, in turn prompted and molded by users in dialogic creation.

Foucault likewise challenges us to think beyond the simplistic understanding of the author as the creator of a given text, arguing that the author instead fulfills a specific role in organizing knowledge and controlling the text. The so-called “author-function” is tied to “legal and institutional systems that circumscribe, determine and articulate the realm of discourse” (Foucault, 1998, p. 215). The idea of an author, therefore, is a regulatory construct that can both validate and marginalize ideas, acting as a node of power. As Foucault notes, only certain texts have authors: “a private letter may have a signatory, but it does not have an author; a contract can have an underwriter, but not an author; and, similarly, an anonymous poster attached to a wall may have a writer, but he cannot be an author” (Foucault, 1998, p. 211). Authorship is ownership, and connects to themes of transgression, punishment, and property (Wilson, 2004, p. 349).

Critically, Foucault turned the question of the author from a “who” into a “what” (Wilson, 2004, p. 342), opening the “Author” as a site of enquiry. This is immensely valuable for understanding writing under the pressure of AI chatbots, but raises new questions: Is prompting an act of authorship, or a displacement of it?

GenAI tools fragment the author into a series of inputs and collaborative interactions. Chatbots do not simply assist the human author but become part of the authorship itself, challenging the traditional notion of the author as a unified, singular creator. Users interacting with an AI chatbot are also not passive recipients; they may shape the output through their prompts, their feedback, and their modifications. The resulting “voice” is a chorus of absorbed voices, a digital echo of discourses and training data combined with human prompting and dialogue. Generative AI reshapes the very nature of what it means to create, as the boundaries between human and machine, and author and tool blur.

This has significant implications for higher education, where we work with individual assessments and value original authorship. To assess a student, we need to be sure that they, and they alone, can write a text that demonstrates their competences. Concretely, when students use ChatGPT to generate essays or exam responses, they find themselves in a liminal space: neither the sole authors nor bystanders in the creation of the text. This blurring of lines challenges the very notions of ownership and authorship that Barthes and Foucault critique, making questions about plagiarism in academic settings more complex than ever before.

Writing in an AI-mediated world

Traditionally, plagiarism is viewed as the appropriation of someone else's work, thoughts, or ideas without proper attribution, grounded in the belief in a singular author whose rights over a text must be respected. However, if the text is a “multi-dimensional space in which a variety of writings, none of them original, blend and clash,” as Barthes (1977, p. 146) wrote, then perhaps our notions of plagiarism and originality must be reevaluated entirely. Human ideas are an amalgamation of countless influences, shaped by everything from scholarly articles to casual conversations. Some of these influences lend themselves easily to the established academic practice of citation; an article or a book can be cited quite straightforwardly. Yet others—half-remembered conversations, personal experiences, cultural mores, or algorithmic suggestions from a machine like ChatGPT—are far more nebulous and defy conventional forms of attribution.

Just as Foucault questioned who is an author, we are now confronted with a different but related question: what constitutes independent student work? Does using GenAI to refine expression dilute or deepen a student's intellectual agency? Unlike traditional plagiarism, AI-generated content is original at the point of output yet unoriginal in origin. If a student produces an essay with GenAI but actively interrogates, rewrites, and reshapes it, are they not performing a more sophisticated cognitive task than a student who writes a passable but less critical human-only draft?

As the student moves from being a writer to a mediator in a broader circuit of textual production, our prevailing moral economy of authorship based on individual originality is challenged by hybrid forms of composition. Might there be pedagogical value in embracing co-authorship not as a threat to be policed, but as a method to be cultivated? What would it mean to design assessments that acknowledge the entangled realities of writing in the age of generative AI?

The Tapas model: reimagining assessment after ChatGPT

AI in education is often framed either as an ethical hazard that must be policed or as a convenient tool for student support. But it is becoming something more fundamental: a co-participant in meaning-making, reshaping how knowledge is produced, cited, and valued. As Fernando Juárez and Rudick (2025, p. 124) note, a more holistic, forward-thinking approach is necessary. The goal is to make assessments AI-safe without undermining students' learning processes. This challenges our current pedagogical structures, built on fixed curricula and content learning, which is precisely what chatbots excel at.

The first step is to shift from a focus on AI detection to AI declaration. Students should be required to disclose and discuss GenAI use transparently, just as they would any other methodological consideration. A well-crafted student response is not one that is untouched by AI, but one that demonstrates the ability to interrogate, refine, and expand upon machine-generated text with intellectual autonomy.

However, transparency alone does not resolve the issue of students misrepresenting their engagement with GenAI tools. Our assessment models must also account for cases where students falsely claim AI collaboration. One approach is to implement multi-step assessments that require students to document their process, such as submitting evolving drafts or maintaining ongoing portfolios. Another is to incorporate synopses with both written and oral components, ensuring students engage more deeply with the material rather than relying solely on AI output. These strategies provide instructors with concrete evidence of student authorship while still allowing AI to function as a legitimate aid rather than an undetectable shortcut. A third option is to embed reflexive engagement with AI chatbots into the assessment itself, requiring students to produce metacognitive commentaries explaining their use of AI tools, such as specific prompt strategies and decision points in their writing process; a minimal sketch of what such a declaration could look like follows below. Peer feedback structures can also act as an internal accountability system, where students critique each other's use of AI, reinforcing ethical engagement while providing evidence of AI integration.
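As one hypothetical way to operationalize such declarations, an institution could ask students to submit a small structured log alongside the essay. The sketch below is speculative, not an existing standard or the paper's proposal in code form; every class, field name, and value is my own invention for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AIUseEntry:
    """One declared interaction with a GenAI tool during the writing process."""
    tool: str            # e.g., "ChatGPT" (illustrative)
    purpose: str         # e.g., "copy-editing", "preliminary research synthesis"
    prompt_summary: str  # what was asked, in the student's own words
    outcome: str         # how the output was used, revised, or rejected

@dataclass
class AIDeclaration:
    """A student's full AI-use declaration, submitted with the assignment."""
    student_id: str
    assignment: str
    entries: List[AIUseEntry] = field(default_factory=list)

    def summary(self) -> str:
        """Render a short, examiner-readable overview of declared AI use."""
        lines = [f"AI declaration for {self.assignment} "
                 f"({len(self.entries)} interaction(s)):"]
        lines += [f"- {e.tool}: {e.purpose}: {e.outcome}" for e in self.entries]
        return "\n".join(lines)

# Example (hypothetical) declaration a student might submit.
declaration = AIDeclaration(
    student_id="s123456",
    assignment="Take-home essay on platformization",
    entries=[
        AIUseEntry(
            tool="ChatGPT",
            purpose="copy-editing",
            prompt_summary="Asked for grammar corrections on my draft conclusion",
            outcome="Accepted minor fixes; rejected a suggested thesis rewrite",
        )
    ],
)
print(declaration.summary())
```

Even a structure this simple reframes AI use as documented method rather than hidden liability, giving the examiner a concrete artifact to probe in an oral component or portfolio review.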

Long term, I suggest a shift toward a “tapas model” of assessment, encompassing a wide spectrum: exams without AI (such as in-class discussions and oral presentations), exams that permit delineated AI use (for instance, in copy-editing and preliminary research synthesis), and assessments that fully integrate AI into project design and collaborative writing (such as AI-human co-authored analyses and iterative feedback loops).

In a communication theory course, for instance, student learning might be evaluated through multiple small assessments: an in-class debate on media effects theories, a take-home essay analyzing platformization (with declared AI use permitted for editing), a group presentation on audience research methods, a real-time analysis of a news story's framing, and a final project exploring GenAI's impact on journalism that explicitly incorporates AI tools. Each assessment serves a distinct purpose, making it harder to rely solely on AI while creating multiple opportunities for students to demonstrate competence. Some tasks require pure human reasoning and real-time synthesis, others benefit from AI assistance in specific ways, while still others explicitly examine AI's role in communication.

As Kim et al. (2025, p. 106) write, “AI will produce whatever is requested by the user”. Therefore, students' capabilities in communication are paramount. Fostering these requires educators to focus on cultivating uniquely human skills, such as critical thinking and creativity, while emphasizing ethics and responsible AI use (ibid.). The integration of AI into university life represents ongoing negotiation among faculty, staff, students, and administrators, and as Fernando Juárez and Rudick (2025) note, this process is not only technological, but fundamentally communicative (p. 124).

The tapas model, like its namesake dining style, emphasizes variety and combination rather than one single, “AI-safe” approach to assessment. By diversifying assessment methods, we can better evaluate students' ability to work both with and without AI assistance. A tapas model emphasizes choice and flexibility, as opposed to a more binary assessment scale approach “which suggests that one can restrict or control AI use (one cannot), or that there is a linear gradation of AI use (there is not)” (Liu and Bates, 2025).

Perhaps the deeper challenge lies in fostering an academic culture that values authentic intellectual growth and meaningful engagement, while acknowledging that knowledge creation has always been dialogic. We might use this moment to reimagine assessment practices that value critical engagement over illusory originality. This requires developing new forms of assessment that examine students' ability to evaluate, contextualize, and build upon AI-generated content, preparing them for a world of work characterized by human-machine communication. The future will require us to develop collaborative human-AI projects, where students work alongside AI to produce creative and analytical outputs; AI-assisted peer supervision, where AI bots act as critical participants in cluster feedback and provide initial feedback on student work; shared authorship models, where students and AI are recognized as co-creators of knowledge; and a greater focus on critical AI literacy, teaching students not only how to use AI tools but also how to evaluate their outputs and understand their limitations.

Conclusion

This paper has explored how AI chatbots challenge traditional notions of authorship, originality, and assessment in higher education. As GenAI becomes increasingly integrated into academia, educators are forced to reconsider not only how we assess students but also how we define creativity and originality in student work.

The discussion acknowledges what Barthes and Foucault suggested decades ago: authorship is more complex than our traditional academic practices admit. When students use AI to generate, refine, or analyze text, they engage in forms of intellectual labor that blur conventional boundaries between original and derivative work. Rather than fighting this reality through detection tools or blanket prohibitions, we may embrace the opportunity to develop new forms of assessment that value both traditional competencies and emerging AI-enhanced capabilities.

The tapas model proposed here offers a practical framework for reimagining assessment in an age where AI is increasingly integral to knowledge production. By combining different types of evaluation—from pure human work to full AI integration—we can better capture the complex reality of contemporary learning and knowledge creation. This approach transforms the challenge of AI verification from a technical problem of detection to a pedagogical opportunity for developing critical AI literacy. Students learn to document their thinking process, justify their AI usage, and demonstrate meaningful engagement rather than just delegation of tasks.

Traditional academic notions of individual authorship were always a simplification of how knowledge emerges. Now, AI tools make visible and urgent what was already true: ideas arise through complex networks of influence, technology, and human agency.

The question is not whether GenAI will change education and assessment—it already has—but how we will respond.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

Author contributions

MH: Writing – review & editing, Writing – original draft.

Funding

The author(s) declare that no financial support was received for the research and/or publication of this article.

Conflict of interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that Gen AI was used in the creation of this manuscript, specifically for coming up with keywords, drafting the final abstract, checking grammar throughout the article, and formatting the bibliography.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Barthes, R. (1977). Image, Music, Text. London: Fontana Press.


Bengtsson, L. (2019). Take-home exams in higher education: a systematic review. Educ. Sci. 9:267. doi: 10.3390/educsci9040267


Bredon, G. (2003). Take-home tests in economics. Econ. Anal. Policy 33, 52–60. doi: 10.1016/S0313-5926(03)50004-2


D'Souza, K. A., and Siegfeldt, D. V. (2017). A conceptual framework for detecting cheating in online and take-home exams. Decis. Sci. J. Innov. Educ. 15, 370–391. doi: 10.1111/dsji.12140


Fernando Juárez, S., and Rudick, C. K. (2025). Imagining futures for communication education: AI, education, and the coming crises. Commun. Educ. 74, 123–125. doi: 10.1080/03634523.2025.2451391


Foucault, M. (1998). Aesthetics, Method, and Epistemology: Essential Works of Foucault 1954-1984, ed. J. D. Faubion. New York: The New Press.


Freedman, A. S. (1968). The take-home examination. Peabody J. Educ. 45, 343–347. doi: 10.1080/01619566809537566


Frein, S. T. (2011). Comparing in-class and out-of-class computer-based tests to traditional paper-and-pencil tests in introductory psychology courses. Teach. Psychol. 38, 282–287. doi: 10.1177/0098628311421331


Gorichanaz, T. (2023). Accused: how students respond to allegations of using ChatGPT on assessments. Learn.: Res. Pract. 9, 183–196. doi: 10.1080/23735082.2023.2254787


Hubert, K. F., Awa, K. N., and Zabelina, D. L. (2024). The current state of artificial intelligence generative language models is more creative than humans on divergent thinking tasks. Sci. Rep. 14:3440. doi: 10.1038/s41598-024-53303-w


Kim, J., Kelly, S., Colón, A., Spence, P. R., and Lin, X. (2024). Toward thoughtful integration of AI in education: mitigating uncritical positivity and dependence on ChatGPT via classroom discussions. Commun. Educ. 73, 388–404. doi: 10.1080/03634523.2024.2399216


Kim, J., Kelly, S., and Prahl, A. (2025). Navigating AI in education: foundational suggestions for leveraging AI in teaching and learning. Commun. Educ. 74, 104–113. doi: 10.1080/03634523.2024.2447235


Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., and Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns 4:100779. doi: 10.1016/j.patter.2023.100779


Liu, D. Y. T., and Bates, S. (2025). “Generative AI in higher education: Current practices and ways forward (Whitepaper),” in Generative AI in Education: Opportunities, Challenges and Future Directions in Asia and the Pacific. Hong Kong: Association of Pacific Rim Universities.


López, D., Cruz, J.-L., Sánchez, F., and Fernández, A. (2011). “A take-home exam to assess professional skills,” in 2011 Frontiers in Education Conference (FIE) (Rapid City, SD: IEEE), F1C-1–F1C-6.


Oravec, J. A. (2023). Artificial intelligence implications for academic cheating: expanding the dimensions of responsible human-AI collaboration with ChatGPT. J. Interact. Learn. Res. 34, 213–237. Available online at: https://www.learntechlib.org/primary/p/222340/


Rathi, I., Taylor, S., Bergen, B. K., and Jones, C. R. (2024). GPT-4 is judged more human than humans in displaced and inverted Turing tests. arXiv [Preprint]. Available online at: http://arxiv.org/abs/2407.08853 (accessed August 30, 2024).


Roberts, C., Sarangi, S., Southgate, L., Wakeford, R., and Wass, V. (2000). Oral examinations—equal opportunities, ethnicity, and fairness in the MRCGP. Br. Med. J. 320, 370–375. doi: 10.1136/bmj.320.7231.370


Sadasivan, V. S., Kumar, A., Balasubramanian, S., Wang, W., and Feizi, S. (2023). Can AI-Generated text be reliably detected? arXiv [Preprint]. Available online at: http://arxiv.org/abs/2303.11156 (accessed June 16, 2023).


Scarfe, P., Watcham, K., Clarke, A., and Roesch, E. (2024). A real-world test of artificial intelligence infiltration of a university examinations system: a “Turing Test” case study. PLoS ONE 19:e0305354. doi: 10.1371/journal.pone.0305354


Sequeira, L.-A. (2021). “The problem with silent students,” in Meaningful Teaching Interaction at the Internationalised University, eds D. Dippold and M. Heron (London: Routledge), 39–54.


Williams, J. B., and Wong, A. (2009). The efficacy of final examinations: A comparative study of closed-book, invigilated exams and open-book, open-web exams. Br. J. Educ. Technol. 40, 227–236. doi: 10.1111/j.1467-8535.2008.00929.x


Wilson, A. (2004). Foucault on the “question of the author”: a critical exegesis. Modern Lang. Rev. 99, 339–363. doi: 10.1353/mlr.2004.a827409


Keywords: assessment, authorship, digital epistemology, educational technology, higher education, pedagogical innovation, generative AI, knowledge production

Citation: Hau MF (2025) [AI-reflection] writing with machines? Reconceptualizing student work in the age of AI. Front. Commun. 10:1598988. doi: 10.3389/fcomm.2025.1598988

Received: 24 March 2025; Accepted: 14 April 2025;
Published: 30 April 2025.

Edited by:

Davide Girardelli, University of Gothenburg, Sweden

Reviewed by:

Stephanie Kelly, North Carolina Agricultural and Technical State University, United States

Copyright © 2025 Hau. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Mark F. Hau, markfh@ruc.dk
