PERSPECTIVE article

Front. Commun., 21 May 2025

Sec. Culture and Communication

Volume 10 - 2025 | https://doi.org/10.3389/fcomm.2025.1598082

This article is part of the Research Topic “Teaching and Assessing with AI: Teaching Ideas, Research, and Reflections.”

Reflection-AI: artificial intelligence or algorithmic instruction problem? Empowering students through situated knowledges-based reflexivity

Do Own (Donna) Kim*

• Department of Communication, University of Illinois Chicago, Chicago, IL, United States

This article argues for a socio-technical rethinking of the contexts of teaching and assessing with artificial intelligence (AI), whether viewed as a threat or an opportunity. Drawing on technology studies and critical reflection on student experiences with English academic writing assignments in pre-AI era Korea, I reposition the “AI problem” as a cultural problem, namely an “algorithmic instruction” problem concerning the structural prioritization of formulaic student work and pedagogical standardization, not a novel technology or individual moral(e) problem. Cultural and structural solutions are therefore desirable. As potential breakthroughs, the critical feminist epistemology of situated knowledges and the qualitative methodological practice of reflexivity are discussed. Four practical mottos inspired by these concepts are introduced: 1. Building from positionality and reflexivity; 2. Memorization to (aided) storytelling; 3. “I” to “beyond-I” scaffolding; and 4. Evaluation to celebration. Examples from personal teaching experiences and implications for AI integration are discussed. Sustainable (re-)imaginations of AI in pedagogy are recommended.

1 Introduction

The broader availability of communicative artificial intelligence (AI) has led to the “AI problem” in classrooms: for those concerned, students’ uncritical over-reliance on AI; for those welcoming it, the practical considerations of integrating AI into pedagogy. This article invites tackling the problem by first evaluating the logics that structure our pedagogical practices. That is, the AI problem may be less an “artificial intelligence” problem than an “algorithmic instruction” problem regarding procedural systematization in pedagogy. Drawing on critical feminist epistemology and qualitative methodology, I position situated knowledges-informed (Haraway, 1991) reflexivity as a key solution and a rich source of innovations. Similar to how the awareness that “the researcher is the instrument” (Tracy, 2013, p. 25)—or even “instrument par excellence” (Hammersley and Atkinson, 2019, p. 18)—grounds qualitative rigor, I argue that fostering a keen self-understanding that the students themselves are the learning instruments is the core task. In short, the AI problem is a cultural problem: the challenge at hand is student empowerment, not better AI per se.

The article is structured as follows. The first section begins with the argument that this is not a novel technology problem. I reflect on my pre-generative AI era academic experience in (South) Korea, where English writing presented an insoluble challenge for some students, making outsourcing an alluring choice. Then, I discuss the underlying “what we ought to do” in algorithmic instruction and identify it as the problem. The second section proposes situated knowledges-based reflexivity as an alternative pedagogical approach, i.e., one that encourages students to take the plunge into contexts with a sharp analytic awareness of their interrelations, starting from who they and their mutually immersed peers are. I explain the key terms and discuss, with examples, the four mottos that have guided my curricular practices. Together, they aim to foster an environment where students can conduct and enliven their learning through a shared sense of ownership.

2 The AI problem and reflexivity

2.1 The AI problem: artificial intelligence or algorithmic instruction problem?

I posit the AI problem as a consequence and a facilitator of algorithmic, automated culture: an algorithmic instruction problem. This is not to surreptitiously remove AI from the discussion, perhaps from the long-refuted belief that technologies are impartial tools and so are their propagators (Napoli and Caplan, 2017; Winner, 1992). I am arguing from the social shaping perspective, which emphasizes the continuous mutual shaping of technology and society (Baym, 2015), that this is not just an AI issue. The AI problem, whether threat or opportunity, is not solely attributable to a novel technology or non-optimal (mis-)uses of it. Rather, it is a complex socio-technical problem that demands human-involved thinking and cultural solutions. This means, for example, that an improved detection tool alone cannot fundamentally solve AI-related academic dishonesty, nor can a more “humanlike” AI tutor guarantee quality learning, especially in the higher-order cognitive dimensions (Krathwohl, 2002). Subtracting AI from these scenarios or replacing it with existing non-AI practices quickly reveals that these are ultimately problems of learning culture and strategy. The specificity of AI should not be discarded, but we must contextualize beyond the immediate AI use cases to identify the undercurrents. This allows sustainable re-imaginations.

2.1.1 Minsoo and English as the academic lingua franca

I think with my undergraduate experiences in early 2010s Korea, where I saw several peers struggle with the “good English” writing expectation. Some ended up resorting to paid services and plagiarism. I distinctly remember that these choices often did not stem from active want (e.g., laziness, moral deficiency), but from feeling powerless and lost. A memory from my freshman year particularly stays with me. I saw “Minsoo” (pseudonym) working on an English essay assignment on the student lounge computer. I am not sure whether it was our catch-up conversation, my glances at the monitor, or his conscience that prompted him to explain: it was for a class I was not taking, and he was copying large chunks from English-language Google results. I was shocked because he had never struck me as a dishonest person. He spoke articulately and intelligently in Korean classes. Possibly responding to my visible disappointment, he elaborated. What still humbles me is how he said, “You would not understand because you are good at English.” The tone was not accusatory. He sounded embarrassed and dejected. He wished to write what was on his mind in his own voice, but he felt obstructed by having to—if not unable to—form it in English. He was not proud, but given the short time and the relative grading system, he felt that this was the only way to pass. He seemed conflicted. I never found out what he ended up doing.

Minsoo’s story gently reminds me that the root of the AI problem may be algorithmic instruction, not corrupt individuals or technologies’ deterministic outcomes. I believe what lay at the heart of his conundrum was the compulsion to fulfill the standardized requirements above all, especially amid the competitive social backdrop (as captured by relative grading). This hijacked the actual goal: his education. I am not condoning plagiarism; rather, I am pointing out what we can learn about the AI problem from the context of Minsoo’s decision-making. An important factor was the English writing requirement, stemming from English’s dominant position as the lingua franca, including in academia. Curran (2020) explained how “good English” has been conflated with the myth of “authentic” English, often equated with that spoken by privileged White native speakers from the West. “Good English” also exerts stratifying power on knowledge work in communication (Suzina, 2021). I too, despite being “good at English” according to Minsoo, spend much of my academic writing time making non-substantial edits to speak with a “better accent,” if not laboring on (incommensurable) transliterations and translations. Minsoo’s trouble may also have been exacerbated by his lack of familiarity with standard English academic writing. This could have been an epistemological struggle, not a linguistic, formal one: Korean composition traditionally recognizes dugwalsik (main idea at the beginning—i.e., the common English model) and migwalsik (main idea at the end) structures, which correspond to deductive and inductive reasoning, respectively. Given everything, he may have felt compelled to choose what seemed to be the best guarantee of “success,” which in Korean society tends to be equated with high scores and the high-earning jobs gate-kept by them. To win at life (Ahmed, 2010), Minsoo must consecutively make the best choices to stay on the path to success; deviations from the standards and mistakes become losing choices, not ways to explore and learn (Kim, 2023). The rhetoric of choice hides that seeing and executing the winning choices (e.g., writing a good English essay) presumes certain privileged conditions (e.g., being good at English), and that this process can reaffirm the embedded values (e.g., “authentic” English is good) while reproducing the definition of success (e.g., high scores). Contextualized this way, what surfaces as a more foundational solution to Minsoo’s predicament is a curricular adjustment—and ultimately a cultural shift—to adequately support and empower him.

2.1.2 Algorithmic instruction problem and AI

Critical algorithm studies helps connect this Minsoo with the current-day Minsoos. I contend that while AI’s availability and possibilities are new, the underlying context of the AI problem is not. We therefore need a structural solution. Likewise, Warner (2025) located the alienation of learning not in AI but in the system that has incentivized producing formulaic responses for standardized assessments, as opposed to the exploratory and expressive process of writing. Hence the ease and allure of AI outsourcing. I use “algorithmic instruction” to invite such critical reflections on procedural systematization in pedagogy. I invoke “algorithmic” to underline the socio-technical context, namely “the insertion of procedure into human knowledge and social experience” (Gillespie, 2016, p. 25). This is informed by Ananny’s (2016) conceptualization of “networked information algorithms” as assemblages to scrutinize how the linkages among various sites govern “what we ought to do”: “relationships [sic] producing, interpreting, and relying upon” algorithmic formations (p. 97). Like Warner, I suggest that formulaic assignments are linked with the procedural datafication of student progress into quantifiable, computationally measurable bodies (Cheney-Lippold, 2017). This operates with value-laden categories like “good English.” The problem is less transparency per se (e.g., detailed guidelines and rubrics) (Ananny and Crawford, 2018) than algorithmic instruction itself: the seeming naturalness of its embedded values (e.g., formulaic writing in “authentic” English as the standard) and insufficient attention to its linkages.

I offer this lens as a means to think otherwise about the system (Gunkel, 2018), not to simplistically discredit established techniques or to diminish their practical benefits. For instance, step-by-step instructions can help both confused students and overworked teachers. What is troubling is how the formulaic convenience can leave little room for engaged reflections and varied definitions of success. AI can be threatening if the instructional context prioritizes sorting students’ learning into measurable bodies, thereby contributing to an educational relationship ripe for automation.

In this section, I identified algorithmic instruction as an important context of the AI problem, and student empowerment through curricular adjustment as the core challenge at hand. Minsoo reminds me that learning should not be a project predicated on completing tasks as per algorithmic requirements, but a process to be driven, discovered, and deepened by students. Algorithmic instruction is prone to outsourcing, now to AI. It is a cultivated accomplice of an algorithmic, automated culture, in which education, as a powerful cultural process, extends the organization of humans according to the logic of automation, a logic that machines thrive on (Andrejevic et al., 2023; Seaver, 2017; Striphas, 2015).

2.2 Reflexivity: the student is the learning instrument par excellence

I propose situated knowledges-based reflexivity as a solution to the AI problem. Tracy (2007), writing about communication research and methodology, encouraged researchers to “take the plunge,” that is, to focus on problems and their in situ contexts, stressing the vital analytic role of power in untangling them. As advised, I identified the powerful process of algorithmic instruction as an important context of the AI problem. Extending the advice, I suggest that “taking the plunge” can also be an effective pedagogical strategy to reconstruct the power relations in learning. I first explain the feminist epistemology of “situated knowledges” and the qualitative methodological practice of “reflexivity.” Then, in the form of four mottos, I detail how they have guided me, with examples from my personal teaching experiences.

2.2.1 Situated knowledges

Haraway (1991) theorized situated knowledges as “feminist objectivity” (p. 188), a doctrine and practice rooted in “the sciences of the multiple subject with (at least) double vision” (p. 195). It builds on the recognition that technologies of knowledge, including the supposedly objective, “dis-engaged” (p. 201) instruments of science, are “active perceptual systems” that translate the world and promulgate specific visions (p. 190). Thus, knowledge work must be accountable for establishing the patterns of reality. To this end, Haraway recommended contextually engaging with embodied experiences: “resonance, not dichotomy” (p. 195). This is not merely a moral call, but one for accuracy and innovation. She is interested in “views from somewhere” because they enable “connections and unexpected openings” (p. 196). Romanticizing or appropriating subjugated standpoints is explicitly warned against (p. 191). Situated knowledges’ link to the AI problem lies in the relationship between context and knowledge. Flyvbjerg (2011) elucidated that contexts are central to the process of human learning and that context-dependent knowledge and experiences are fundamental to expert activities. Situated knowledges demands deep contexts from algorithmic instruction.

2.2.2 Reflexivity

“(Self-)Reflexivity” is a core qualitative concept (Tracy, 2013). It refers to the ongoing careful reflection on how the researcher’s positionality—i.e., how their perspectives are rooted in experiences emerging from their social and personal situatedness (Jadallah, 2024)—affects their research process and outcome (Berger, 2015; Tracy, 2013). Simply put, it is the continual critical dialog between the researcher’s context and the research context. The researcher is the research instrument in qualitative methodology (Hammersley and Atkinson, 2019; Tracy, 2013), and thus reflexivity is the key guiding principle and practice for rigor in all steps of research (Berger, 2015; Braun and Clarke, 2023; Morse, 2018).

2.2.3 Thinking with situated knowledges-based reflexivity: four mottos

Situated knowledges laid the groundwork for my re-imagination, and reflexivity has supplied practical inspirations. The core inspiration was that both researchers and students engage in knowledge work, and therefore reflexivity can benefit the students’ learning processes, too. This means that students should contextualize their learning by continuously reflecting on their and others’ (including AIs’) respective positionalities. This resonates with situated knowledges’ emphasis on “views from somewhere.” It also follows its urging to recognize objects as actors and to problematize binaries so as to activate passive categories (Haraway, 1991). With reflexivity, knowledge is recognized not as fixed, dis-engaged products but as ongoing, engaged processes. Students are activated as participants, re-imagined from the acquirer and recipient roles in the teacher-student binary. The two are linked; approaching knowledge as processes allows foregrounding the students’ co-ownership and co-producership. Consequently, “success” becomes a negotiable place of personalized growth, not an algorithmically pre-set label prone to outsourcing. Finally, reflexivity works with situated knowledges’ demand for cultural accountability. Whether by teacher or student, knowledge work with/via AI must be reflexively considered in light of its implications and consequences, including through our cultural positionality.

2.2.3.1 Building from positionality and reflexivity

Thinking about AI and algorithmic instruction through this lens has led me to four teaching and assessment mottos. 1. Building from positionality and reflexivity concerns fostering a critical self-understanding in students and encouraging learning through situated visions. AI can aid but not substitute for this process because the student is the learning instrument. Early in the semester, I use personal and social identity wheels in class activities (see University of Michigan, 2025) and/or assign a reflection on students’ personal experiences related to the course subject. A creative iteration of this is my videogame course’s “alien ship” exercise, where students must quickly sketch a diagram of the human body to appease hypothetical alien abductors (Kim, 2022). Often, they model themselves or the “standard” adult male body. We look at our own bodies and discuss whether we can all be “human” across the diagrams. We then discuss how games may also presume certain players and how to mitigate this. These exercises serve as embodied contexts for the ensuing coursework.

2.2.3.2 Memorization to (aided) storytelling

2. Memorization to (aided) storytelling relates to “taking the plunge.” For example, my exams prioritize demonstrating contextual understanding. I avoid questions that ask for simple regurgitation and instead use narrativized questions that put concepts and information into contexts, often borrowing from in-class examples and student life. For instance, the diffusion of innovations theory’s “trialability” is represented not by the verbatim definition, but by “[student name] has been a [streaming platform] subscriber since they used their first complimentary month to check out [popular show among my students].” Importantly, students are invited to create sample questions for extra credit before the exam. This becomes their collective study guide. I incentivize taking the plunge, i.e., writing narrativized questions. This also allows me to see the content from their standpoints, and I adjust and adapt accordingly. This approach does not bar AI involvement, and/but helps reposition students as storytellers (Krathwohl, 2002), not passive examinees.

2.2.3.3 “I” to “beyond-I” scaffolding

3. “I” to “beyond-I” scaffolding focuses on elevating the first motto to “(at least) double vision.” An example would be the progression from a self-dialogic essay my students individually write (see Kim, 2021) to a group podcast they collectively produce in my pop culture course. During preparation, they give feedback on group members’ essays. After production, they engage with peer groups’ podcasts by leaving appreciative-but-constructive audience comments. They develop one of these comments into their final paper, which should make original research contributions to the conversation their peers started. This progression allows students to thoroughly consider differently situated visions and to connectively (re-)think with nuance. AIs can be introduced during the process, either by the instructor or (covertly) by the students, and/but the learning subject remains intact. The basis in self-reflexivity and peer participation, as well as the interconnection among the steps, seems to have promoted continued engagement and accountability.

2.2.3.4 Evaluation to celebration

4. Evaluation to celebration acknowledges and appreciates how students’ multiple visions enrich learning. For instance, with class presentations, I clarify that our goal is to share and celebrate our learning and that the grading criterion is “contribution to the collective learning process.” Most students meet this criterion excellently by helping us think together with their unique, context-rich views. We celebrate with snacks, and students often linger to continue discussing. I modeled this after fandoms’ logic of the gift economy, in which members give and reciprocate out of goodwill and passion for communal benefit (Jenkins, 2009). Although this logic is not necessarily oppositional to algorithmic culture (Yin, 2020), the communal model is harmonious with situated knowledges-based reflexivity. I conjecture that individualistic AI abuse is unappealing when gifting, not competitive taking and trading, grounds learning. Moreover, celebration embraces diverse growth-based successes, which vastly expands opportunities for fresh partnerships with AI.

2.2.4 Suggestions for applications

The above represents concerted efforts toward empowering students to practice their learning membership. Granted, these mottos are situated in my teaching experiences (e.g., subjects, levels, teaching persona), and thus their application should be contextually considered. AI-related strategies should be imagined accordingly. For example, the curricular activities provided under mottos 2 and 3 can be supplemented by asking students to critically compare the vision(s) underlying their storytelling with those in AI’s versions. Some teachers may need to prioritize simple memorization of certain information or formulas. AI could serve as an effective retention coach or roleplaying partner in such cases (e.g., key information-reiterating problems personalized to each student’s interests or applicable scenarios, repeated as per individual progress), but it could also cause mundane dependence or limited training (cf. the “alien ship” exercise), which may be detrimental depending on the course topic and objectives. Research that expands on students’ experiences (e.g., Abbas et al., 2024), particularly through the structural lens of algorithmic instruction, is recommended. Finally, available resources and expected labor should be carefully assessed, including immaterial dimensions (Hardt, 1999) (e.g., personalized attentiveness to students’ developing self-reflexivity). These suggestions are not conclusive. If adopted well, perhaps for some in creative partnership with AI, the student co-ownership model could ease the work.

3 Conclusion

I believe situated knowledges and reflexivity can be productive bases for sustainable pedagogy in the generative AI era. Hughes (1994) warned that technologies can become so entrenched and pervasive that intervening in their “momentum” becomes difficult. I picture a rolling snowball. The best way to prevent a crash would be to watch where it gets packed or to skillfully redirect its course by working with the landscape. This is why we should tackle the AI problem contextually. Identifying the key context as algorithmic instruction, I explained situated knowledges and reflexivity as alternative lenses and shared four practical mottos that have guided my teaching and assessment practices. I believe this “positioned rationality” can inform and inspire, and thereby contribute to our collective knowledge building around the momentum of AI: “The only way to find a larger vision is to be somewhere in particular” (Haraway, 1991, p. 196).

Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

Author contributions

DK: Conceptualization, Investigation, Writing – review & editing, Writing – original draft.

Funding

The author(s) declare that no financial support was received for the research and/or publication of this article.

Acknowledgments

I thank my teaching mentors and students. I appreciate the New Media and Digital Cultures working group’s pedagogy discussion at the 2023 Cultural Studies Association conference, where I shared the early-stage ideas that inspired me to begin developing and writing this paper. I would also like to thank the editors and reviewers.

Conflict of interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author declares that no Gen AI was used in the creation of this manuscript.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Abbas, M., Jam, F. A., and Khan, T. I. (2024). Is it harmful or helpful? Examining the causes and consequences of generative AI usage among university students. Int. J. Educ. Technol. High. Educ. 21:10. doi: 10.1186/s41239-024-00444-7

Ahmed, S. (2010). The promise of happiness. Durham, NC; London, UK: Duke University Press.

Ananny, M. (2016). Toward an ethics of algorithms: convening, observation, probability, and timeliness. Sci. Technol. Hum. Values 41, 93–117. doi: 10.1177/0162243915606523

Ananny, M., and Crawford, K. (2018). Seeing without knowing: limitations of the transparency ideal and its application to algorithmic accountability. New Media Soc. 20, 973–989. doi: 10.1177/1461444816676645

Andrejevic, M., Fordyce, R., Li, L., and Trott, V. (2023). Automated culture: introduction. Cult. Stud. 37, 1–19. doi: 10.1080/09502386.2022.2042579

Baym, N. K. (2015). Personal connections in the digital age: Digital media and society series. 2nd Edn. Cambridge, UK; Malden, MA: Polity Press.

Berger, R. (2015). Now I see it, now I don’t: researcher’s position and reflexivity in qualitative research. Qual. Res. 15, 219–234. doi: 10.1177/1468794112468475

Braun, V., and Clarke, V. (2023). Is thematic analysis used well in health psychology? A critical review of published research, with recommendations for quality practice and reporting. Health Psychol. Rev. 17, 695–718. doi: 10.1080/17437199.2022.2161594

Cheney-Lippold, J. (2017). We are data: Algorithms and the making of our digital selves. New York: NYU Press.

Curran, N. M. (2020). Intersectional English(es) and the gig economy: teaching English online. Int. J. Commun. 14, 2667–2686. Available online at: https://ijoc.org/index.php/ijoc/article/view/11310 (Accessed March 22, 2025).

Flyvbjerg, B. (2011). “Case study” in The SAGE handbook of qualitative research. eds. N. K. Denzin and Y. S. Lincoln (Thousand Oaks, CA: SAGE), 301–316.

Gillespie, T. (2016). “Algorithm” in Digital keywords: A vocabulary of information society and culture. ed. B. Peters (Princeton, NJ: Princeton University Press), 18–30.

Gunkel, D. J. (2018). Gaming the system: Deconstructing video games, games studies, and virtual worlds. Bloomington, IN: Indiana University Press.

Hammersley, M., and Atkinson, P. (2019). Ethnography: Principles in practice. London, UK: Routledge.

Haraway, D. (1991). Simians, cyborgs, and women: The reinvention of nature. London, UK: Free Association Books.

Hardt, M. (1999). Affective labor. Boundary 2 26, 89–100.

Hughes, T. (1994). “Technological momentum” in Does technology drive history? The dilemma of technological determinism. eds. M. R. Smith and L. Marx (Cambridge, MA: MIT Press), 101–113.

Jadallah, C. C. (2024). Positionality, relationality, place, and land: considerations for ethical research with communities. Qual. Res. 25, 227–242. doi: 10.1177/14687941241246174

Jenkins, H. (2009). “What happened before YouTube” in YouTube: Online video and participatory culture. eds. J. Burgess and J. Green (Cambridge, UK: Polity), 109–125.

Kim, D. O. (2021). The Joy Kill Club: on Squid Game (2021), a roundtable-monologue by a Korean female aca-fan (part one). Pop junctions: reflections on entertainment, pop culture, activism, media literacy, fandom and more. Available online at: https://henryjenkins.org/blog/2022/12/11/games-as-social-technologya-syllabus (Accessed March 22, 2025).

Kim, D. O. (2022). Games as social technology—a syllabus. Pop junctions: reflections on entertainment, pop culture, activism, media literacy, fandom and more. Available online at: https://henryjenkins.org/blog/2022/12/11/games-as-social-technologya-syllabus (Accessed March 22, 2025).

Kim, D. O. (2023). “Pay for your choices”: deconstructing neoliberal choice through free-to-play mobile interactive fiction games. New Media Soc. 25, 943–962. doi: 10.1177/14614448211018177

Krathwohl, D. R. (2002). A revision of Bloom’s taxonomy: an overview. Theory Pract. 41, 212–218. doi: 10.1207/s15430421tip4104_2

Morse, J. (2018). “Reframing rigor in qualitative inquiry” in The SAGE handbook of qualitative research. eds. N. K. Denzin and Y. S. Lincoln. 5th ed (Thousand Oaks, CA: SAGE), 1373–1409.

Napoli, P. M., and Caplan, R. (2017). Why media companies insist they’re not media companies, why they’re wrong, and why it matters. First Monday 22:7051. doi: 10.5210/fm.v22i5.7051

Seaver, N. (2017). Algorithms as culture: some tactics for the ethnography of algorithmic systems. Big Data Soc. 4, 205395171773810–205395171773812. doi: 10.1177/2053951717738104

Striphas, T. (2015). Algorithmic culture. Eur. J. Cult. Stud. 18, 395–412. doi: 10.1177/1367549415577392

Suzina, A. C. (2021). English as lingua franca. Or the sterilisation of scientific work. Media Cult. Soc. 43, 171–179. doi: 10.1177/0163443720957906

Tracy, S. J. (2007). Taking the plunge: a contextual approach to problem-based research. Commun. Monogr. 74, 106–111. doi: 10.1080/03637750701196862

Tracy, S. J. (2013). Qualitative research methods: Collecting evidence, crafting analysis, communicating impact. West Sussex, UK: John Wiley & Sons.

University of Michigan. (2025). Sample activities. Available online at: https://sites.lsa.umich.edu/equitable-teaching/category/sample-activities/ (Accessed March 21, 2025).

Warner, J. (2025). More than words: How to think about writing in the age of AI. New York, NY: Basic Books.

Winner, L. (1992). The whale and the reactor: A search for limits in an age of high technology. Chicago, IL: University of Chicago Press.

Yin, Y. (2020). An emergent algorithmic culture: the data-ization of online fandom in China. Int. J. Cult. Stud. 23, 475–492. doi: 10.1177/1367877920908269

Keywords: artificial intelligence, algorithmic culture, algorithmic instruction, situated knowledges, reflexivity, pedagogy

Citation: Kim DO (2025) Reflection-AI: artificial intelligence or algorithmic instruction problem? Empowering students through situated knowledges-based reflexivity. Front. Commun. 10:1598082. doi: 10.3389/fcomm.2025.1598082

Received: 22 March 2025; Accepted: 06 May 2025;
Published: 21 May 2025.

Edited by:

Kelly Merrill, University of Cincinnati, United States

Reviewed by:

Magda G. Sánchez-Trujillo, Autonomous University of the State of Hidalgo, Mexico
Mihaela Brumen, University of Maribor, Slovenia

Copyright © 2025 Kim. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Do Own (Donna) Kim, doownkim@uic.edu
