- 1 Department of Applied IT, University of Gothenburg, Gothenburg, Sweden
- 2 School of Communication, Film, and Media Studies, University of Cincinnati, Cincinnati, OH, United States
- 3 Department of Languages and Literatures, University of Gothenburg, Gothenburg, Sweden
Editorial on the Research Topic
Teaching and assessing with AI: teaching ideas, research, and reflections
Introduction
Few technologies in recent memory have unsettled higher education and sparked intense debate as swiftly and profoundly as generative AI. Built on large language models (LLMs), these systems have rapidly become widely accessible and are already reshaping the landscape of teaching and assessment, offering new possibilities for personalized tutoring, automated feedback, and adaptive learning (Ou et al., 2024).
A growing body of scholarship has begun to examine how AI is integrated into educational contexts and how students communicatively engage with AI instructors and tools. For example, scholars have explored AI teaching assistants, social robots, and other AI-based instructional agents in various contexts and have demonstrated that students apply familiar interpersonal and instructional schemas (e.g., credibility, social presence, role expectations) when interacting with nonhuman teachers (Edwards et al., 2018; Kim et al., 2020, 2022; Spence et al., 2024). This line of research conceptualizes AI in the classroom not just as a technological innovation but as a communicative actor that reshapes instructional relationships, authority, and engagement, while raising broader questions about pedagogy, instructor self-efficacy, and equity in AI-supported learning environments (Edwards and Edwards, 2017; Kim et al., 2025). Yet these innovations also raise pressing questions about learning effects (Fan et al., 2025), the changing dynamics of teacher–student communication, the shifting nature of the educator's role (Jeon and Lee, 2023), emerging ethical concerns in assessment (Sullivan et al., 2023), the influence of generative AI on intercultural understanding in increasingly diverse learning environments (Yusuf et al., 2024), and the role of administrators in leading ethical, strategic AI adoption at the institutional level (Spence et al., 2025), among others.
This Research Topic offers a wide-angle snapshot of how educators and researchers across the world are responding to these opportunities and challenges, moving beyond broad analyses to capture lived pedagogical responses and innovative classroom practices. With contributions from universities on five continents, the Research Topic provides an inclusive arena for a diverse range of voices and approaches to AI in tertiary education.
Whereas existing edited collections and volumes have predominantly examined AI in higher education from conceptual, methodological, and policy-level perspectives (Cerratto-Pargman et al., 2025; Kim, 2021), the editorial team for this Research Topic adopted a short-paper format to capture a pivotal moment in higher education (e.g., practice-based experimentation, context-specific innovation, and opinion pieces), as generative AI transitions from a technological novelty to a daily teaching reality. By doing so, this Research Topic provides an inclusive and flexible venue for educators to share fresh insights, reflections, and micro-level interventions that collectively shed light on how AI is reshaping the culture of teaching and assessment in higher education today. The Research Topic features three types of contributions: Teaching ideas (GIFT-AIs) that offer creative inspiration for classroom practice; Research papers (RESEARCH-AIs) that present empirical findings, theoretically grounded insights, and future directions; and Perspective essays (PERSPECTIVE-AIs) that question, critique, and provoke new thinking about AI in teaching and assessment, including from scholar–activist perspectives.
Major themes
We have identified four interrelated themes from the 20 contributions that are part of the Research Topic.
Theme 1: pedagogical re-orientation with AI. The articles in this theme highlight how AI is prompting educators to rethink teaching, feedback (Fredriksson), and readiness for the job market (LeFebvre and LeFebvre). Across contexts ranging from business writing to organizational communication (Cruz) and pattern recognition training (Kazimova et al.), AI can serve as a catalyst for deeper learning (Sellnow) when teachers are willing to experiment, question established routines, and deepen their understanding of the affordances and limitations of new tools. This shift moves teaching toward a hybrid form of intelligence in which AI and educators collaborate (Reinhold et al.). From conceptual models (Jaakkola) to classroom experiments, these contributions not only rethink teaching but also model pedagogical frameworks grounded in evidence-based practice (MacArthur et al.). Overall, the articles in this theme position AI as a partner in cultivating adaptive, reflective, and career-relevant learning.
Theme 2: critical perspectives on AI. The six contributions in this theme approach AI in education through critical, sociocultural, and sociotechnical lenses, examining how power, language, and representation shape teaching and learning. Together, they extend critical inquiry into AI in higher education by moving beyond a narrow focus on academic integrity and exploring the hidden labor sustaining AI systems (Graham et al.), the visual dimensions of identity and representation (Åkervall), the linguistic dynamics of bilingual education (Rivero and Yin), and the pedagogical implications of algorithmic instruction (Kim). Through practical classroom activities centered on visual AI literacy (Källström) and a revisiting of the Turing Test (Geoghegan), these articles also invite educators to engage with AI reflexively and critically, fostering justice-oriented practices in teaching and assessment.
Theme 3: AI and creativity. “Creativity” emerged as another key area of investigation. For these authors, teaching in the AI era foregrounds questions of innovation, authenticity, and creative practice across educational contexts, and requires careful consideration of how AI and creativity intersect and resonate with the traditions of the creative industries. They examine how generative technologies both expand and unsettle creative practice, and how educators can balance technological innovation with authentic creative expression. By mapping the multiple roles that AI plays in education (Urmeneta and Romero) and tracing its applications across media such as podcasting (Fox) and filmmaking (Monserrat and Srnec), the articles in this theme outline pedagogical designs that use AI to extend human creativity effectively.
Theme 4: emerging tensions surrounding integrity and authorship in the AI classroom. As generative AI unsettles long-standing norms of writing and evaluation, the classroom becomes a space where new meanings of integrity, authorship, and trust are actively negotiated. Across these contributions, tensions surface in teacher–student relations as faculty navigate ethical framings of AI use through rule-based or punitive lenses versus integrity-focused or collaborative approaches (Petricini et al.). Faculty resistance to AI is often rooted in moral and value-based concerns, including fears that it might compromise originality or enable cheating (Shata). These tensions in how educators interpret and evaluate student work unsettle traditional notions of authorship and assessment, leading to calls for reimagining evaluation grounded in transparency rather than prohibition (Hau).
Building on the themes outlined above, we summarize key takeaways from the practical insights shared by our authors, in the hope of offering guidance for instructors and academic leaders seeking to develop thoughtful, forward-looking practices that shape evolving classroom cultures in the AI era.
Practical takeaways for instructors
• Create structured spaces for classroom experimentation with AI. Design classroom opportunities where students explore AI tools in a reflective, ethical manner. Assignments can include designated safe sandboxes that allow creative experimentation with AI, such as rewriting, feedback comparison, or co-drafting, without fear of being penalized for misconduct. These activities should be paired with guided reflection, enabling students to articulate insights about how AI shapes writing, feedback, meaning, and skill development.
• Acknowledge and engage with critical perspectives on AI. Integrate critical inquiry into everyday teaching by embedding discussions of AI into regular coursework. Encourage students to examine the environmental costs of AI, including energy and resource use, and to question the labor, data, and biases behind AI systems. Activities that trace outputs or compare representations help students understand how AI shapes information, authorship, and identity, particularly in language education and creative fields.
• Address AI-assisted plagiarism with transparency and creativity. Foster openness and disclosure around AI use as a foundation for ethical academic practice. Students should document and reflect on how AI affects originality, authorship, and accountability in their work. As human–machine collaboration deepens, educators can introduce accessible metaphors such as “promptwashing,” drawn from greenwashing, and “promptfishing,” inspired by catfishing, to describe inauthentic or superficial engagement with AI that undermines meaningful learning. These concepts build on experiences familiar to many students and offer intuitive entry points for critical reflection on responsible AI-assisted work.
Practical takeaways for academic leaders
• Invest in professional development focused on AI in teaching and learning. Address the urgent need for capacity building in higher education by supporting both technical and pedagogical approaches to AI. Invest in research, innovation grants, and AI literacy for faculty and staff, while encouraging cross-disciplinary partnerships. Creating spaces for open dialogue through workshops, town halls, and learning communities fosters shared reflection, collaboration, and responsible AI use.
• Develop flexible and context-sensitive guidelines for AI use in teaching, research, and assessment. Institutional AI policies should balance innovation and academic integrity while remaining flexible and pedagogically responsive. Adaptable frameworks allow educators to tailor AI use to diverse learning needs. To bridge policy and practice, universities can establish AI policy labs where faculty and students co-design and test guidelines, ensuring governance is participatory, evidence-based, and grounded in local educational contexts.
• Strengthen institutional ethics in AI adoption and procurement. Institutions should commit to ethically sourcing and deploying AI tools that prioritize transparency, data protection, and authorship. Leaders must set clear expectations for responsible vendor practices and the protection of academic values. Establishing periodic ethical audits of AI procurement ensures accountability and signals that institutional integrity extends to the technological infrastructures shaping learning.
From creating safe sandboxes that foster reflective experimentation to developing ethical infrastructures that sustain institutional integrity beyond policy, teaching and assessing with AI is as much about values as it is about tools. The challenge ahead for the educational community at large (from instructors, to students, to academic leaders) is to foster the collaborative spirit needed to ensure that generative AI serves not only innovation but also inclusion, authenticity, and trust in higher education.
Author contributions
DG: Conceptualization, Project administration, Writing – original draft, Writing – review & editing. KM: Conceptualization, Project administration, Writing – original draft, Writing – review & editing. AWO: Conceptualization, Project administration, Writing – original draft, Writing – review & editing.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that generative AI was used in the creation of this manuscript. We used the generative AI tool ChatGPT 5.0 Enterprise for brainstorming and language editing (grammar and stylistic suggestions). All substantive conceptual work, writing, analysis, and interpretation were done by the author(s). The author(s) remain fully responsible for the content.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Cerratto-Pargman, T., McGrath, C., and Milrad, M. (2025). Towards responsible AI in education: challenges and implications for research and practice. Comput. Educ. Artif. Intell. 9:100345. doi: 10.1016/j.caeai.2024.100345
Edwards, A., and Edwards, C. (2017). The machines are coming: future directions in instructional communication research. Commun. Educ. 66, 487–488. doi: 10.1080/03634523.2017.1349915
Edwards, C., Edwards, A., Spence, P. R., and Lin, X. (2018). I, teacher: using artificial intelligence (AI) and social robots in communication and instruction. Commun. Educ. 67, 473–480. doi: 10.1080/03634523.2018.1502459
Fan, Y., Tang, L., Le, H., Shen, K., Tan, S., Zhao, Y., et al. (2025). Beware of metacognitive laziness: effects of generative artificial intelligence on learning motivation, processes, and performance. Br. J. Educ. Technol. 56, 489–530. doi: 10.1111/bjet.13544
Jeon, J., and Lee, S. (2023). Large language models in education: a focus on the complementary relationship between human teachers and ChatGPT. Educ. Inf. Technol. 28, 15873–15892. doi: 10.1007/s10639-023-11834-1
Kim, J. (2021). A new era of education: incorporating machine teachers into education. J. Commun. Pedagogy 4, 121–122. doi: 10.31446/jcp.2021.1.11
Kim, J., Kelly, S., and Prahl, A. (2025). Navigating AI in education: foundational suggestions for leveraging AI in teaching and learning. Commun. Educ. 74, 104–113. doi: 10.1080/03634523.2024.2447235
Kim, J., Merrill Jr, K., Xu, K., and Kelly, S. (2022). Perceived credibility of an AI instructor in online education: the role of social presence and voice features. Comput. Hum. Behav. 136:107383. doi: 10.1016/j.chb.2022.107383
Kim, J., Merrill Jr, K., Xu, K., and Sellnow, D. D. (2020). My teacher is a machine: understanding students' perceptions of AI teaching assistants in online education. Int. J. Hum. Comput. Interact. 36, 1902–1911. doi: 10.1080/10447318.2020.1801227
Ou, A. W., Stöhr, C., and Malmström, H. (2024). Academic communication with AI-powered language tools in higher education: from a post-humanist perspective. System 121:103225. doi: 10.1016/j.system.2024.103225
Spence, P. R., Kaufmann, R., Lachlan, K. A., Lin, X., and Spates, S. A. (2024). Examining perceptions and outcomes of AI versus human course assistant discussions in the online classroom. Commun. Educ. 73, 121–142. doi: 10.1080/03634523.2024.2308832
Spence, P. R., Lin, X., Lachlan, K. A., and Kaufmann, R. (2025). Quickening the shift from disruption to opportunity: administrators and AI in higher education. Commun. Educ. 74, 114–122. doi: 10.1080/03634523.2024.2449047
Sullivan, M., Kelly, A., and McLaughlan, P. (2023). ChatGPT in higher education: considerations for academic integrity and student learning. J. Appl. Learn. Teach. 6, 31–40.
Keywords: AI in education (AIED), AI-driven pedagogy, artificial intelligence (AI), classroom culture, digital education, instructional communication, teaching and assessment
Citation: Girardelli D, Merrill Jr K and Ou AW (2026) Editorial: Teaching and assessing with AI: teaching ideas, research, and reflections. Front. Commun. 10:1769019. doi: 10.3389/fcomm.2025.1769019
Received: 16 December 2025; Accepted: 22 December 2025;
Published: 30 January 2026.
Edited and reviewed by: Anastassia Zabrodskaja, Tallinn University, Estonia
Copyright © 2026 Girardelli, Merrill and Ou. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Davide Girardelli, davide.girardelli@ait.gu.se