- Department of Communication, Clemson University, Clemson, SC, United States
A rigorous debate is underway regarding the use of artificial intelligence (AI) in higher education. Risk perceptions regarding AI range from concerns about students using AI inappropriately and unethically to teachers and teaching assistants being replaced by AI robots. This essay situates the discussion around AI historically to mitigate such fears and to propose workable strategies for integrating it into the educational experience. In other words, I place AI within the frame of other new technologies that have been introduced, scrutinized, studied, and adopted throughout history (e.g., textbooks, calculators, personal computers and word processors, the Internet and online learning). Ultimately, I argue that we ought to embrace the challenges posed as opportunities to again conduct theoretically driven empirical research to inform best practices for integrating AI into teaching and assessment in ways that improve learning and the environment for learning. By incorporating AI with integrity, teachers could be freed to do more deep teaching and to engage in more intellectually stimulating dialogue with students, each of which is designed to foster higher-order critical thinking and analysis skills among students.
Introduction
Shortly after Bill Gates, co-founder of the Microsoft Corporation (arguably one of the world’s largest personal computer software companies), appeared on NBC’s The Tonight Show on February 4, 2025, his remarks about the future of artificial intelligence (AI), particularly in the fields of medicine and education, went viral. He claimed that “advancements in artificial intelligence will significantly reduce humanity’s role” in these two fields because great medical advice and great tutoring will become free and easily accessible (Zilber, 2025). In this essay, I call his projections into question. In fact, AI is not the first technological affordance to cause alarm and to be touted as an innovation that will ultimately replace academic experts, devastate the integrity of educational practices and systems, and destroy our capacity to think critically. I am also certain it will not be the last.
I argue in this essay that—rather than fear that AI will take away jobs by replacing teachers—teacher-scholars ought to embrace it (as we have with other new technologies when they emerged) by focusing on strategies to employ it as a tool that augments pedagogical practices and ultimately improves learning and the environment for learning. When we employ AI and other technological affordances to perform the mundane and time-consuming tasks related to, for example, managing low-stakes assignments and doing low-level assessment and evaluation, teachers will be afforded much needed time to engage with learners in dialogical communication and intellectual exchange that fosters civil discourse, as well as critical thinking, analytical skills, and other higher-order learning outcomes (Edwards et al., 2018). Moreover, if integrated effectively, AI has the potential to transform pedagogical practices in ways that free teachers to employ deep teaching, which could make college classroom experiences more inclusive and equitable for students coming from underrepresented populations, particularly in the STEM fields where exclusionary pedagogy has been linked to attrition (Dewsbury, 2020). To make my case, I begin by tracing the history of several educational technologies that have been introduced, scrutinized, and then adopted in higher education over the years. Then I offer a broad historical account of risk perceptions regarding AI as it has become more sophisticated in recent years. Finally, I propose why and how we as teacher-scholars need to shift our mindsets from fearing the unknown to embracing the ever-changing landscape of higher education regarding AI. In other words, I propose food for thought regarding innovative ways to employ AI to our advantage, not by replacing teachers, but by providing means for improving what teachers do (Sellnow et al., 2022).
Technology adoption in higher education
Prior to the invention of the moveable-type printing press by Johannes Gutenberg in 1440, teachers had relied for centuries on oral communication and lecture methods as the primary mode of instruction (Wakefield, 1998). The teacher was responsible for both disseminating information and explaining material to students. This was due in part to the fact that, until then, books were both costly and time-consuming to produce (Li, 2023). By the end of the 15th century, however, books were being mass produced and made available as supplements to oral lectures.
Throughout the centuries that followed, lively debate ensued about the use of textbooks based on issues of ethics, policies, politics, religion, and accessibility (Altbach, 1983); indeed, some poorer countries still do not have access to mass-produced textbooks today. Although many of these debates about textbooks continue (including questions about the viability and integrity of open access resources and textbook technology supplements), textbooks have become a prominent educational technology for acquiring information (Sellnow et al., 2005). Arguably, when students prepare for class by reading assigned chapters, instructors may expand on that information to “stimulate students’ curiosity and desire to explore knowledge, so that they can actively learn and acquire skills” (Li, 2023, p. 221).
In the 1950s and 1960s, television was introduced as a new technological tool for use in teaching and learning (Buckingham, 1998). Again, academic teacher-scholars debated its utility, fearing it would hinder learning rather than stimulate curiosity (Li, 2023). McLuhan (1975), considered by many to be the “father of media studies,” even coined the phrase “the medium is the message” to account for the pervasive role of television in both reflecting and shaping beliefs and behaviors. Perhaps most critical to acknowledge here is that television can be used effectively when teacher-scholars integrate it mindfully into their pedagogy rather than treating it as a replacement for teaching and learning, a role sometimes described as that of a surrogate parent or babysitter (Gantz, 1982; Hillard, 1958). Teacher-scholars continue to report that television, when used mindfully, can be an effective technological teaching tool, particularly as it supports learning among students in low- and middle-income communities and countries (Watson and McIntyre, 2020).
In the 1980s, personal computers and word processors were introduced as new technologies to replace typewriters (Blissmer, 1985; Flores, 1983). At that time, scholars warned of the inherent biases in computer programs that could, if not managed properly, be passed on to learners under the guise of being neutral tools rather than mediators of culture (Bowers, 1988). Debates also abounded about whether these technologies would hinder analytical and argumentation skills, spelling and grammar capabilities, and the iterative process of writing and revising (e.g., Keefe and Candler, 1989; Owston et al., 1992). Based on a plethora of research examining the relationships between these technologies and learning outcomes, personal computers and word processing software are taken for granted as effective tools for use in higher education today. As Reys and Reys (1987) reported, similar arguments were posed when calculators were introduced into classrooms.
A final historical example (among many) I will mention is the internet. Initial concerns focused on students’ inability to evaluate online information and sources such as Wikipedia (Ayers, 2006), students accessing class notes on websites (Sharma and Mayleyeff, 2003), purchasing papers from online paper mills (Phillips and Horton, 2000), internet plagiarism (Howard, 2007), gamification (Caponetto et al., 2014), and internet addiction (Ambad et al., 2017). As online courses became popular, additional concerns were raised about how this internet environment would reduce student engagement, intellectual curiosity, and learning outcome achievement (Means et al., 2014).
When the COVID-19 lockdown forced colleges and universities around the globe to move to online learning, a plethora of research ensued. Among other things, this body of work revealed that the internet (i.e., technology) is not inherently disruptive to achieving desired learning outcomes (e.g., Kryston et al., 2021). In fact, some positive implications of online learning include its potential to reach non-traditional students, to be accessed anytime and anyplace, and to provide opportunities for guest appearances by notable scholars and industry experts (Sharma and Mayleyeff, 2003). Moreover, studies illustrate how online pedagogical practices can foster a positive classroom climate, student engagement, and learning in myriad ways (e.g., Cole et al., 2021; Kaufmann et al., 2016; Sellnow and Kaufmann, 2017). As Zuin and de Mello (2024) conclude, critical thinking and dialogical communication as proposed by Freire (2018) in Pedagogy of the Oppressed and Pedagogy of Hope (1992) can be cultivated in online classroom environments in ways that overcome structural barriers of exclusion to promote a “pedagogy of freedom” as long as “they are constituted with the students and not for them” (p. 988). Ultimately, as has been the case with other new technologies, the key is to integrate the internet in pedagogically sound ways based on theoretically driven empirical research (Strawser, 2017).
With this foundation in mind, I argue AI can also enhance educational experiences when integrated mindfully. To do so, we must begin by addressing risk perception concerns raised by skeptics through theoretically driven empirical research. Then, as before, we will be equipped to develop a series of adaptive best practices for using AI to improve teaching and learning.
Artificial intelligence (AI) and the future of teaching and assessment
Although the role of artificial intelligence (AI) in higher education is a relatively new phenomenon, the term AI was first proposed in 1956 by John McCarthy, a mathematician and computer scientist at Dartmouth College (Schwarz and Faj, 2024). While no universally agreed-upon definition exists, an expert panel at Stanford University (2016) defined it as a “set of computational technologies that are inspired by—but typically operate quite differently from—the ways people use their nervous systems and bodies to sense, learn, reason, and take action” (p. 4). The European Commission High-Level Expert Group on Artificial Intelligence (2018) extended this definition to claim that “AI systems can also be designed to learn to adapt their behavior by analysing how the environment is affected by their previous actions” (p. 7). Schwarz and Faj (2024) point out that people across the globe “are experiencing high levels of uncertainty, if not fear, regarding the impact of these technologies” and how they are being “embedded and regulated in present and future society” (pp. 504–505). Based on a content analysis of articles published in the New York Times and the Washington Post from 1985 to 2020, Cools et al. (2022) identified 10 topics of interest/concern regarding AI, education being one of them.
Regarding education, many fears about AI stem from the fact that it can be used for good or ill, both intentionally and unintentionally (Brundage et al., 2018). Moreover, fears have grown exponentially since the introduction of ChatGPT in November 2022—an AI-based chatbot “capable of generating cohesive and informative human-like responses to user input” (Lo, 2023, p. 410). In their comprehensive content analysis of AI in education from 2010 to 2020, Zhai et al. (2021) discovered three prominent risk perception themes: the “inappropriate use of AI techniques, changing roles of teachers and students, as well as social and ethical issues” (p. 1).
One major concern stems from the rapid progress being made in speech and image recognition, speech and language generation, and language comprehension. Consequently, educators worry about the spread of misinformation and disinformation and students’ (in)ability to discern fact from fiction, as well as what makes for quality information and quality sources (Bringula et al., 2021; Ojukwu and Saidu, 2025). They also worry that students will use free generative AI tools like Grammarly or ChatGPT unethically to conduct research and construct essays (Lo, 2023). Other concerns revolve around what Bill Gates proclaimed—that AI and robots will take over the jobs of teachers, rendering the role of the instructor obsolete (Okulich-Kazarin et al., 2023). Similarly, some worry that teaching assistants, whose positions often fund their graduate education, will no longer be necessary (Kim et al., 2020).
I argue that we should reimagine AI in higher education not as something to be feared but, rather, as something to be embraced as an opportunity to improve what we do and how we do it. We ought to use these concerns as our foundation for conducting future research that will ultimately inform best practices regarding the role of AI in teaching and assessment. In other words, I agree with Alam’s (2021) conclusions based on a comprehensive review of literature that AI can be employed to perform a range of administrative tasks more quickly and efficiently (e.g., assessment, grading, feedback) and that the benefits of using it with integrity clearly outweigh the disadvantages. I also agree with Louis and ElAzab (2023) that “teachers remain at the helm of major instructional decisions” (p. 9). As we have done with other technologies, we can and should conduct research to determine what methods are best for getting students to achieve the desired learning outcomes using AI. Herein is where teacher-scholars have an opportunity to influence how AI is utilized to enhance teaching and learning experiences. Let us learn from the past to lead us into the future. For example:
• Just as teachers eventually embraced textbooks as a resource to prepare students for class, thereby freeing them up to focus on active experiential learning (Dewey, 1938; Kolb et al., 2014), so too can teacher-scholars conduct research to determine how AI can similarly provide foundational information upon which to build opportunities for deep teaching. For example, teachers and students can engage in dialogical discourse, which has already demonstrated its utility for overcoming structural barriers and inequities as long as all students have equal access to the tools (e.g., Dewsbury, 2020; Dewsbury et al., 2022; Freire, 1992; Zuin and de Mello, 2024).
• Just as television has been shown to improve learning, particularly among low- and middle-income communities and countries (Watson and McIntyre, 2020), so too can teacher-scholars study ways in which an intelligent adaptive AI gamification environment can be employed to motivate students to engage based on diverse personalities, needs, norms, and values (Bennani et al., 2022).
• Just as research was conducted to inform pedagogical practices for using word processing software tools to enhance learning and the environment for learning (Morphy and Graham, 2012), so too can teacher-scholars guide the use of generative AI tools such as Grammarly and ChatGPT to improve the iterative process of composition and communication, particularly when learners must do so in a second language (Gayed et al., 2022).
• Just as instructors learned to embrace internet searches by teaching students how to locate and evaluate quality information and sources they find online, so too can we use generative AI to help teach students to discern quality information from misinformation and disinformation, as well as determine quality sources from bogus or malicious ones (Reddy et al., 2020).
• Just as teacher-scholars developed methods for teaching students to use Wikipedia as a starting point when brainstorming a topic rather than as a primary reference in their research papers, so too can we discover, through research and assessment, ways to teach students to use ChatGPT as a tool for synthesizing a body of work when beginning their research and to evaluate the sources ChatGPT draws from to create its summaries (Ciampa et al., 2023).
• Just as teacher-scholars conducted research to determine how to integrate online tools to make our jobs more efficient, so too can we lead the way in how to use robots and other generative AI tools to answer redundant student questions, as well as to create and/or assess and evaluate low-stakes assignments (Kryston et al., 2021). Programming AI to perform these duties will not replace teachers; however, it could feasibly provide them with more time to engage in meaningful dialogue and intellectual exchange with students to address higher-order learning such as critical thinking and civil discourse (Selwyn, 2019).
Discussion
I am convinced that AI should not be feared by teachers or students. As with any new technology, we have an opportunity and obligation to do research to inform how we will employ it effectively and equitably. Teacher-scholars have been doing assessment research on new technologies for decades. I propose we accept the challenge once again to create theoretically driven research-informed best practices for employing AI to augment, enhance, and improve teaching and learning. New technologies will continue to emerge and evolve, but the fundamentals of teaching and learning remain. I argue that our goal as teacher-scholars is to adapt research-informed best practice pedagogies to operate effectively using new technological affordances as they emerge. If we do so mindfully, we may even be able to employ AI in ways that address potential structural constraints rooted in economic disparities, administrative imperatives, and governmental or corporate control (Pedro et al., 2019). Whether or not we achieve these goals, however, is predicated on accepting the challenge to try. As Apostel (2017) suggests, “collaboration between peers, students, and artificial intelligence” creates “the potential for creative problem solving and innovation at a level we are only beginning to imagine” (p. 177). I will add that we have an ethical responsibility to conduct the research required to integrate AI mindfully into higher education by adapting best practice pedagogies rather than replacing them.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.
Author contributions
DS: Supervision, Data curation, Writing – original draft, Investigation, Software, Conceptualization, Writing – review & editing, Methodology, Resources, Funding acquisition, Visualization, Project administration, Formal analysis, Validation.
Funding
The author(s) declare that no financial support was received for the research and/or publication of this article.
Conflict of interest
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author declares that no Gen AI was used in the creation of this manuscript.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Alam, A. (2021). “Should robots replace teachers? Mobilisation of AI and learning analytics in education,” in 2021 International Conference on Advances in Computing, Communication, and Control (ICAC3), 1–12. IEEE.
Altbach, P. G. (1983). Key issues of textbook provision in the third world. Prospects 13, 313–325. doi: 10.1007/BF02220426
Ambad, S. N. A., Kalimin, K. M., and Yusof, K. M. A. A. K. (2017). The effect of internet addiction on students’ emotional and academic performance. E-Academia J. 6, 86–98.
Apostel, S. (2017). “The next phase: new media and the inevitable transition” in New media and digital pedagogy: Enhancing the 21st century classroom. ed. M. G. Strawser (Lanham, MD: Lexington Books), 169–177.
Ayers, P. (2006). Researching Wikipedia – current approaches and new directions. Proc. Am. Soc. Inf. Sci. Technol. 43, 1–14. doi: 10.1002/meet.14504301252
Bennani, S., Maalel, A., and Ben Ghezala, H. (2022). Adaptive gamification in E-learning: a literature review and future challenges. Computer Applications Eng. Educ. 30, 628–642. doi: 10.1002/cae.22477
Blissmer, R. H. (1985). Computer annual: An introduction to information systems 1985–1986. Hoboken, NJ: John Wiley & Sons.
Bowers, C. A. (1988). The cultural dimensions of educational computing: Understanding the non-neutrality of technology. New York, NY: Teachers College Press.
Bringula, R. P., Catacutan-Bangit, A. E., Garcia, M. B., Gonzales, J. P. S., and Valderama, A. M. C. (2021). “Who is gullible to political disinformation?”: predicting susceptibility of university students to fake news. J. Inform. Tech. Polit. 19, 165–179. doi: 10.1080/19331681.2021.1945988
Brundage, M. S., Clark, A. J., Toner, H., Eckersley, P., Garfinkel, B., and Dafoe, A. (2018). The malicious use of artificial intelligence: forecasting, prevention, and mitigation. doi: 10.48550/arXiv.1802.07228
Buckingham, D. (1998). Media education in the UK: moving beyond protectionism. J. Commun. 48, 33–43. doi: 10.1111/j.1460-2466.1998.tb02735.x
Caponetto, I., Earp, J., and Ott, M. (2014). Gamification and education: a literature review. Proc. Eur. Conf. Game Based Learning 1:50.
Ciampa, K., Wolfe, Z. M., and Bronstein, B. (2023). ChatGPT in education: transforming digital literacy practices. J. Adolesc. Adult Lit. 67, 186–195. doi: 10.1002/jaal.1310
Cole, A. W., Lennon, L., and Weber, N. L. (2021). Student perceptions of online active learning practices and online learning climate predict online course engagement. Inter. Learning Environ. 29, 866–880. doi: 10.1080/10494820.2019.1619593
Cools, H., Gorp, B. V., and Opgenhaffen, M. (2022). Where exactly between utopia and dystopia? A framing analysis of AI and automation in US newspapers. Journalism 10, 3–21. doi: 10.1177/14648849221122647
Dewsbury, B. M. (2020). Deep teaching in a college STEM classroom. Cultural Stu. Sci. Educ. 15, 169–191. doi: 10.1007/s11422-018-9891-z
Dewsbury, B. M., Swanson, H. J., Moseman-Valtierra, S., and Caulkins, J. (2022). Inclusive and active pedagogies reduce academic achievement outcome gaps and improve long-term performance. PLoS One 17:e0268620. doi: 10.1371/journal.pone.0268620
Edwards, C., Edwards, A., Spence, P. R., and Lin, X. (2018). I, teacher: using artificial intelligence (AI) and social robots in communication and instruction. Commun. Educ. 67, 473–480. doi: 10.1080/0363
European Commission High-Level Expert Group on Artificial Intelligence. (2018). A definition of AI: Main capabilities and scientific disciplines. Available online at: https://ec.europa.eu/futurium/en/system/files/ged/ai_hleg_definition_of_ai_18_december_1.pdf (accessed December 18, 2018).
Gantz, W. (1982). Television the surrogate parent: uses and correlates of television as babysitter. (ED218650). New York, NY: ERIC.
Gayed, J. M., Carlon, M. K. J., Oriola, A. M., and Cross, J. S. (2022). Exploring an AI-based writing Assistant's impact on English language learners. Comput. Educ. Artif. Int. 3:100055. doi: 10.1016/j.caeai.2022.100055
Hillard, R. L. (1958). Television and education. J. High. Educ. 29, 431–470. doi: 10.1080/00221546.1958.11779578
Howard, R. M. (2007). Understanding “Internet plagiarism”. Comput. Compos. 24, 3–15. doi: 10.1016/j.compcom.2006.12.005
Kaufmann, R., Sellnow, D. D., and Frisby, B. N. (2016). The development and validation of the online learning climate scale (OLCS). Commun. Educ. 65, 307–321. doi: 10.1080/03634523.2015.1101778
Keefe, C. H., and Candler, A. C. (1989). LD students and word processors: questions and answers. Learn. Disabil. Res. Pract. 4, 78–83. doi: 10.1177/093889828900400203
Kim, J., Merrill, K., Xu, K., and Sellnow, D. D. (2020). My teacher is a machine: understanding students’ perceptions of AI teaching assistants in online education. Int. J. Hum. Computer Int. 36, 1902–1911. doi: 10.1080/10447318.2020.1801227
Kolb, D. A., Boyatzis, R. E., and Mainemelis, C. (2014). “Experiential learning theory: previous research and new directions” in Perspectives thinking, learning, and cognitive styles. eds. R. J. Sternberg and L. Zhang (London: Routledge), 227–247.
Kryston, K., Goble, H., and Eden, A. (2021). Incorporating virtual reality training in an introductory public speaking course. J. Commun. Pedagogy 4, 131–151. doi: 10.31446/JCP.2021.1.13
Li, S. (2023). The historical evolution of educational technology: from the printing press to online education. Innov. Hum. Soc. Sci. Res. 2023, 272–279. doi: 10.31166/VoprosyIstorii202309Statyi27
Lo, C. K. (2023). What is the impact of ChatGPT on education? A rapid review of the literature. Educ. Sci. 13:410. doi: 10.3390/educsci13040410
Louis, M., and ElAzab, M. (2023). Will AI replace teacher? Int. J. Int. Educ. 22, 9–21. doi: 10.21608/IJIE.2023.312491
Means, B., Bakia, M., and Murphy, R. (2014). Learning online: What research tells us about whether, when and how. London: Routledge.
Morphy, P., and Graham, S. (2012). Word processing programs and weaker writers/readers: a meta-analysis of research findings. Read. Writ. 25, 641–678. doi: 10.1007/s11145-010-9292-5
Ojukwu, N. N. C., and Saidu, D. (2025). Strategies for tackling misinformation, disinformation and malinformation for sustainable science communication among undergraduate students at the Federal University Lokoja, Nigeria. South Afr. J. Libraries Inf. Sci. 91, 1–9.
Okulich-Kazarin, V., Artyukhov, A., Skowron, Ł., Artyukhova, N., Dluhopolskyi, O., and Cwynar, W. (2023). Sustainability of higher education: study of student opinions about possibility of replacing teachers with AI technologies. Sustain. For. 16:55. doi: 10.3390/su16010055
Owston, R. D., Murphy, S., and Wideman, H. H. (1992). The effects of word processing on students’ writing quality and revision strategies. Res. Teach. Engl. 26, 249–276. doi: 10.58680/rte199215434
Pedro, F., Subosa, M., Rivas, A., and Valverde, P. (2019). Artificial intelligence in education: challenges and opportunities for sustainable development. Paris: UNESCO.
Phillips, M. R., and Horton, V. (2000). Cybercheating: has morality evaporated in business education. Int. J. Educ. Manage. 14, 150–155. doi: 10.1108/09513540010333003
Reddy, P., Sharma, B., and Chaudhary, K. (2020). Digital literacy: a review of literature. Int. J. Technoethics 11, 65–94. doi: 10.4018/IJT.20200701.oa1
Reys, B. J., and Reys, R. E. (1987). Calculators in the classroom: how can we make it happen? Arithmetic Teacher 34, 12–14. doi: 10.5951/AT.34.6.0012
Schwarz, A., and Faj, T. (2024). “Communicating and perceiving risks of artificial intelligence as an emerging technology” in Communicating risk and safety. eds. T. L. Sellnow and D. D. Sellnow (Berlin: De Gruyter Mouton), 503–526.
Sellnow, D. D., Child, J. T., and Ahlfeldt, S. L. (2005). Textbook technology supplements: what are they good for? Commun. Educ. 54, 243–253. doi: 10.1080/03634520500356360
Sellnow, D. D., and Kaufmann, R. (2017). “Instructional communication and the online learning environment: then, now, and next” in Handbook of instructional communication: Rhetorical and relational perspectives. eds. M. L. Hauser and A. Hosek. 2nd ed (London: Routledge), 195–206.
Sellnow, D. D., Strawser, M. G., and Miller, A. (2022). Navigating the course integrity/compassionate care dialectic in online teaching and learning. Commun. Educ. 71, 158–160. doi: 10.1080/03634523.2021.2022733
Selwyn, N. (2019). Should robots replace teachers?: AI and the future of education. New York, NY: John Wiley & Sons.
Sharma, P., and Mayleyeff, J. (2003). Internet education: potential problems and solutions. Int. J. Educ. Manag. 1, 19–25. doi: 10.1108/09513540310456365
Stanford University (2016). Artificial intelligence and life in 2030: one hundred year study on artificial intelligence. Stanford, CA: Stanford University.
Strawser, M. G. (Ed.) (2017). New media and digital pedagogy: Enhancing the 21st century classroom. Lanham, MD: Lexington Books.
Wakefield, J. F. (1998). A brief history of textbooks where have we been all these years? (ED419246). New York, NY: ERIC.
Watson, J., and McIntyre, N. (2020). Educational television: a rapid evidence review (Rapid evidence review no. 5). EdTechHub.
Zhai, X., Chu, X., Chai, C. S., Jong, M. S. Y., Istenic, A., Spector, M., et al. (2021). A review of artificial intelligence (AI) in education from 2010 to 2020. Complexity 2021:8812542. doi: 10.1155/2021/8812542
Zilber, A. (2025). Bill Gates says AI will replace doctors, teachers within 10 years—and claims humans won't be needed "for most things." New York Post. Available online at: https://nypost.com/2025/03/27/business/bill-gates-said-ai-will-replace-doctors-teachers-within-10-years/ (accessed March 27, 2025).
Keywords: artificial intelligence, new technologies, instructional communication, risk perceptions, teaching and learning, higher education pedagogy
Citation: Sellnow DD (2025) Reflection-AI: exploring the challenges and opportunities of artificial intelligence in higher education. Front. Commun. 10:1615040. doi: 10.3389/fcomm.2025.1615040
Edited by:
Davide Girardelli, University of Gothenburg, Sweden
Reviewed by:
Panagiotis Tsiotakis, University of Peloponnese, Greece
Janice Thorpe, University of Colorado Colorado Springs, United States
Copyright © 2025 Sellnow. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Deanna D. Sellnow, dsellno@clemson.edu