- Department of Curriculum Studies, Faculty of Education, Stellenbosch University, Stellenbosch, South Africa
Introduction
I frame this paper from my perspective as an art education scholar, focusing on how creativity thrives within the realms of ambiguity, hesitation, and the productive discomfort of not knowing, at least not yet. A persistent question I have been grappling with is whether seeking immediate answers from AI undermines the essential space of uncertainty that nurtures curiosity, creativity, and critical thinking. These are habits we exercise daily in art education, and they are crucial for developing new ideas in scholarship.
My reflections on this issue deepened after I responded, on a professional platform, to a colleague's invitation to discuss AI illiteracy and its implications for scholarly credibility. This discussion illuminated a key problem: the challenge is not solely whether to use AI, but how we can use it in ways that uphold the thinking habits essential for trustworthy scholarship and teaching.
In this article, I argue that credible scholarship and teaching in the age of AI hinge on AI literacy practices that promote inquiry through the formation of meaningful questions, verification against primary sources, transparent disclosure, and assessment designs that make thinking visible. I begin by defining credibility and outlining the risks associated with AI illiteracy. Next, I examine the impact of AI on research practices and assessment. I conclude by explaining why creativity requires time in uncertainty and by offering practical guidance for educators and researchers navigating these challenges. While examples are primarily drawn from studio practice, the underlying approach can be generalized. Requirements for disclosure and author responsibility in medical publishing, policy guidance for course-level declarations, reflective activities in higher education, and AI-aware assessment trials in environmental data science all demonstrate that verification, disclosure, and transparent assessment processes are being implemented beyond the realms of art and design (Yousaf, 2025; Elshall and Badir, 2025; Walter, 2024; Luo et al., 2024; Li et al., 2025; Wang et al., 2024; El Arab et al., 2025; Huh et al., 2025; Freeman, 2025; Nguyen et al., 2024).
Two interconnected issues are central to the challenges of AI illiteracy and concerns regarding credibility in research. The first is the allure of rapid, metrics-driven publication, which can short-circuit inquiry: authors may simply insert what an AI model generates into their articles, bypassing the rigorous yet rewarding processes of experimenting with ideas, testing theories, and refining arguments through iterative work. This practice can result in polished prose that includes fabricated or distorted citations if the outputs are not properly verified, ultimately undermining the integrity of the research record (Chelli et al., 2024; Aljamaan et al., 2024). The second issue pertains to teaching and learning contexts. Discussions about assessment are often driven by anxiety, leading to significant effort being focused on bans and detection methods. Comparative studies indicate that AI text detectors frequently misclassify human writing and disproportionately flag non-native English speakers, resulting in potentially unfair outcomes and diverting attention from meaningful learning design (Weber-Wulff et al., 2023; Liang et al., 2023).
This is a concern that many non-English-speaking countries can relate to, particularly in my own context of South Africa, which has 11 official languages. This situation poses a significant risk for many of our students from diverse linguistic backgrounds. The unintended consequence is an assessment process that lacks pedagogical soundness and fails to provide adequate challenge. Rather than accurately gauging whether learning has occurred and the extent to which students have mastered the material, tasks often devolve into mere compliance checks.
Scholarship credibility: protecting the curiosity, creativity, critical-thinking nexus
Publish-to-perform and the short-circuiting of inquiry
Empirical evaluations reveal significant rates of hallucinated or inaccurate references, as well as confidently stated errors, in AI-generated text when authors neglect to verify the outputs (Chelli et al., 2024; Aljamaan et al., 2024). Incorporating such unverified material undermines essential scholarly practices, including close reading, triangulation, and transparent methodology, thereby weakening the scholarly voice. Recent guidance consistently emphasizes that authors must take full responsibility for accuracy and systematically disclose and verify any AI-assisted steps in their work (Yousaf, 2025; El Arab et al., 2025).
Curiosity as the engine of credible research
Curiosity serves as a fundamental driver of lifelong learning. Research indicates that it activates midbrain reward centers and enhances hippocampus-dependent memory for both the main information and incidental details encountered during exploration (Gruber and Ranganath, 2019; Murayama et al., 2019). Furthermore, intellectual curiosity has been shown to predict academic success beyond mere cognitive ability and conscientiousness (von Stumm et al., 2011). If the use of AI consistently reduces the time scholars spend in a state of uncertainty, it may systematically undermine the conditions that promote originality and careful judgment.
Question formation, automation bias, and premature closure
Meaningful inquiry begins with identifying problems rather than jumping straight to solutions. Research into human interaction with automation reveals that individuals often over-rely on algorithmic suggestions, an issue known as automation bias or algorithm appreciation (Parasuraman and Manzey, 2010; Logg et al., 2019). Less attention has been paid, however, to the risk that novelty is lost altogether when problem identification is neglected. When the initial framing of a problem is delegated to a model, scholars may spend their effort refining the model's questions instead of engaging in the challenging process of formulating their own. This can lead to premature closure that bypasses essential critical thinking. This consideration led me to reflect on creativity and the conditions necessary for it to flourish in the scholarly journey.
Creativity requires ambiguity and diversity
Research consistently demonstrates that tolerance for ambiguity is positively correlated with creativity, and design studies indicate that novelty often emerges at the boundaries of coherent solution spaces, where uncertainty is highest (Zenasni et al., 2008). Large-scale experimental studies reveal that while AI can raise average creativity ratings for individuals, it tends to reduce the diversity (novelty) of outputs across a group (Doshi and Hauser, 2024; Freeman, 2025). When numerous writers follow similar formulas, the field risks losing the variety of styles and ideas that signal credibility and foster innovation. At present, the evidence from studio settings is limited: there is a lack of cohort-level evaluations in art and design exploring whether shared prompts or common models lead to stylistic convergence in the work being assessed. Dispersion metrics for outputs and longitudinal studio studies would help clarify the effects of such factors on originality within the field.
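As a purely illustrative sketch of what such a dispersion metric might look like (my own example under simple assumptions, not a procedure drawn from the cited studies), a cohort's written submissions could be scored by their mean pairwise lexical distance, where lower values would flag possible stylistic convergence:

```python
# Illustrative only: a simple lexical dispersion score for a set of student texts.
# A lower mean pairwise distance suggests the cohort's outputs are converging
# (for example, because many students relied on the same model or prompt).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances

def cohort_dispersion(texts: list[str]) -> float:
    """Mean pairwise cosine distance between TF-IDF vectors of the texts."""
    vectors = TfidfVectorizer().fit_transform(texts)
    distances = cosine_distances(vectors)
    upper = distances[np.triu_indices(len(texts), k=1)]  # count each pair once
    return float(upper.mean())

if __name__ == "__main__":
    submissions = [
        "An exploration of memory through layered charcoal erasures.",
        "A study of memory using layered charcoal and repeated erasure.",
        "Site-specific sound work mapping commuter routes across the city.",
    ]
    print(f"Cohort dispersion: {cohort_dispersion(submissions):.3f}")
```

Tracked over successive cohorts or before and after the introduction of a shared tool, such a score could offer one crude but concrete signal of whether outputs are narrowing, though it would of course need to be complemented by human judgement of conceptual and material diversity.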
Practical stance for researchers: AI as a cognitive gym rather than a prosthetic
Credibility is best safeguarded when AI enhances inquiry rather than bypasses it. Practical habits that help include starting with questions and criteria generated by humans, using AI to identify counterarguments and highlight disagreements, and systematically auditing all substantive claims against primary sources. Maintaining a concise verification log that records prompts, outputs, checks, and corrections is equally important, as is disclosing the use of AI while taking full responsibility for accuracy. These practices align with contemporary expectations in medical publishing, where journals mandate explicit disclosure of AI assistance and hold authors accountable for the accuracy of their work (Yousaf, 2025; El Arab et al., 2025). A simple diagnostic test can guide practice: did the tool extend the duration of inquiry, or did it merely accelerate movement past it?
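One deliberately lightweight way to keep such a verification log, offered purely as an illustration rather than a prescribed format (the file name and field names below are hypothetical), is to capture each AI-assisted step as a structured record:

```python
# Illustrative only: a minimal CSV-based verification log for AI-assisted steps.
from dataclasses import dataclass, asdict, fields
from datetime import date
import csv
import os

@dataclass
class VerificationEntry:
    entry_date: str        # when the AI-assisted step occurred
    prompt: str            # what was asked of the tool
    output_summary: str    # what the tool returned, in brief
    check_performed: str   # how claims or references were verified against primary sources
    correction: str        # what was changed or rejected as a result

def append_entry(path: str, entry: VerificationEntry) -> None:
    """Append one entry to a CSV log, writing a header row if the file does not yet exist."""
    write_header = not os.path.exists(path)
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(VerificationEntry)])
        if write_header:
            writer.writeheader()
        writer.writerow(asdict(entry))

if __name__ == "__main__":
    append_entry("verification_log.csv", VerificationEntry(
        entry_date=str(date.today()),
        prompt="Suggest recent studies on AI text detectors and non-native English writers.",
        output_summary="Five references returned; two could not be located.",
        check_performed="Searched each reference in the cited journals' own archives.",
        correction="Removed the two unverifiable references; retained Liang et al. (2023).",
    ))
```

A plain spreadsheet or notebook serves the same purpose; what matters is that the record of prompts, checks, and corrections exists and can accompany a disclosure statement.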
Teaching and assessment credibility: moving from fear to literacy
The limitations of prohibition and detection
Detector performance continues to exhibit significant inconsistencies, characterised by documented false positives and notable fairness issues, particularly affecting non-native English speakers (Weber-Wulff et al., 2023; Liang et al., 2023). An over-reliance on detection technologies can lead to tasks that are easier to monitor rather than assessments that genuinely demand interpretation and disciplinary judgement. Discourse from engineering education further suggests that the issue is not whether students will encounter generative AI, but how educators design, implement, and evaluate its use within assessment and learning activities (Keith et al., 2025).
Constructive alternatives are well documented: AI-aware feedback at task, process, and self-regulation levels; authentic assessments requiring process evidence, brief oral examinations, and statements of uncertainty; student declarations and justifications of AI use; and course-level policy clarity (Hattie and Timperley, 2007; Elshall and Badir, 2025; Walter, 2024; Luo et al., 2024; Su and Yang, 2023; Keith et al., 2025; Freeman, 2025).
Credibility in teaching is enhanced when institutions move from a policing approach to one that prioritizes learning and educational outcomes. However, existing studies often overlook how detection-centric policies may influence students' trust, their willingness to take intellectual risks, the design of meaningful assessment activities that challenge students, and the validity of assessments in multilingual contexts (Walter, 2024; Luo et al., 2024; Li et al., 2025; Wang et al., 2024; Freeman, 2025).
Feedback that drives learning, not compliance
Effective feedback should enhance learning outcomes rather than simply document policy violations. Hattie and Timperley's influential model highlights that effective feedback addresses three key questions for learners: Where am I going? How am I progressing? Where should I go next? (Hattie and Timperley, 2007). Research indicates that feedback is most effective at the task, process, and self-regulation levels, while it is least effective when it relies solely on general praise.
In an AI-aware classroom or studio, these feedback levels translate into specific requirements: connecting accepted AI suggestions to explicit criteria, justifying at least one rejected suggestion, explaining strategies for source integration or image curation, and maintaining a brief verification and reflection log that records checks and future actions, with at least one component completed without AI assistance to ensure cognitive engagement (Hattie and Timperley, 2007; Su and Yang, 2023).
While some reports offer guidance on implementing these requirements, evidence regarding their effects remains limited. There is a lack of comparative studies across disciplines that assess the impact on learning outcomes, academic integrity, and equity. Future research should include randomized or quasi-experimental comparisons testing portfolios with annotations, short oral examinations, and uncertainty statements against traditional written tasks. Such studies should also evaluate validity, reliability, workload, and student trust in addition to integrity metrics (Hattie and Timperley, 2007; Elshall and Badir, 2025; Walter, 2024; Luo et al., 2024; Li et al., 2025; Wang et al., 2024; Nguyen et al., 2024; Freeman, 2025).
Authentic, AI-aware assessment
Authentic assessments require students to apply disciplinary criteria to complex problems while making their processes transparent. Documented practices in environmental data science integrate permitted AI tool usage with high-stakes evaluations and explicit disclosures. These practices utilize formats such as process portfolios, annotated drafts, brief oral examinations of decision-making, and statements of uncertainty that detail what was verified and the methods used (Elshall and Badir, 2025). Policy-oriented research in higher education advocates for course-level declarations of AI usage and structured reflective activities aimed at fostering metacognitive control, rather than relying solely on detection mechanisms (Walter, 2024; Luo et al., 2024; Li et al., 2025; Wang et al., 2024; Freeman, 2025).
Art and design education: preserving studio habits of mind
I propose that we can gain valuable insights from studio learning, which fundamentally relies on risk-taking, iteration, and critique. To ensure that curiosity and originality remain central when students engage with text-to-image or code-based tools, programs should implement several strategies. First, they can mandate human problem identification prior to any prompting, requiring students to articulate the challenges they wish to address. Second, process annotations should be required to document the tools used, the prompts generated, the choices made in curation, and any post-processing steps undertaken. Finally, incorporating both blind and labeled critiques can illuminate how cues of origin influence judgments of creativity. These approaches align with research indicating that creativity flourishes in contexts of ambiguity and that fostering group-level diversity requires careful balancing, especially when commonly used models dominate the discourse (Zenasni et al., 2008; Doshi and Hauser, 2024; Freeman, 2025). To synthesise these risks and opportunities across domains, Table 1 summarises how AI illiteracy undermines credibility and how AI literacy can protect inquiry and scholarly creativity in higher education.
Practical implications for teaching and assessment in the AI era
The shift toward generative AI in education has highlighted two key issues: first, that detector-led approaches can lead to unfair outcomes; and second, that credible assessment in this context relies less on policing and more on designing tasks that make student thinking observable. In my own practice, I began recognizing these implications early in the widespread adoption of AI tools.
Teaching implication 1: move from detection to “visible thinking” assessment design
When AI began gaining traction in educational settings, I assigned a research-based project to my second-year students. While marking the submissions, I noticed that some of the work was written in a style that did not align with the typical capabilities of students at this level in my teaching context. The prose was unusually polished and professionally structured. At that time, I relied on Turnitin's AI detection tools to assess whether AI might have been used. This approach led to complications: many students were disadvantaged by the limitations of the detector-driven process, especially given the large enrolments, which made it nearly impossible to investigate every submission thoroughly or conduct individual follow-ups.
This experience prompted a shift in assessment design away from increased surveillance. I continued to use written assignments but added an authentication component that emphasizes reasoning rather than mere confessions. Students now submit a short video in which they discuss their work and explain the key ideas, decisions, and sources they utilized. The guidelines explicitly instruct students not to read from their assignments but to articulate their arguments and learning process. This method does not simply aim to “catch” students; rather, it makes learning visible. Students who understand their work can explain and defend it effectively, while those who relied heavily on AI tools often struggle to present a coherent account. Importantly, this approach supports multilingual students more equitably than detector-led judgments, as it assesses whether learning has indeed taken place rather than conformity to a specific assessment format. What this changes in practice: instead of asking “Did you use AI?”, the assessment asks, “Can you demonstrate ownership of your thinking and evidence?”
Teaching implication 2: reduce punitive escalation and design for dignity
Another experience involved suspected AI misuse that triggered a formal escalation process, with assignments moving from the lecturer to the head of department and into disciplinary structures. In practice, this route often felt tedious and, at times, inhumane, especially when the initial suspicion rested mainly on detector outputs rather than verified evidence. As the limitations of AI detection have become clearer, including documented false positives and a heightened risk of disadvantaging second-language writers, it is increasingly difficult to justify disciplinary escalation as a default response.
A more credible and humane approach is to treat assessment design as the first line of response. When tasks require process evidence (draft annotations, decision rationales, short oral or video explanations, and statements of uncertainty), integrity concerns can be addressed through learning-focused mechanisms before they become disciplinary matters. This does not remove accountability; it relocates it into educational practice. In practice, the implication is that the default response becomes pedagogical: a redesign and a visible process, with discipline reserved for clear, demonstrable misconduct.
Teaching implication 3: AI pressures educators toward pedagogical innovation
One unintended consequence of generative AI is that it exposes overly relaxed assessment practices. When assessment depends mainly on predictable written outputs, it becomes easier for students to outsource thinking. In this sense, AI compels educators to be more deliberate and innovative: to design assessments that value interpretation, judgement, and explanation rather than the reproduction of plausible text. This moves us beyond an overreliance on sit-down written tests and traditional exams toward varied assessment opportunities that better align with authentic learning, particularly in fields such as art education, where critique, iteration, and decision-making are central. In practice, assessments then focus less on simply submitting a product and more on evaluating the process, judgement, and accountability involved.
Practical implications for research practice: protecting scholarly voice in the AI era
From a research perspective, I do not position myself as an AI specialist. The practical implication is not that researchers must become technical experts, but that AI literacy (or competence) must be paired with curiosity and the ability to work through the problems one studies. In other words, AI can assist, but it cannot substitute for the intellectual labor of problem-framing, uncertainty, and judgement that makes research credible.
Research implication 1: treat AI as support for expression, not a replacement for thinking
In research, a scholarly voice remains one of the most important aspects of academic work. This is particularly evident when art is used as a metaphor: even when multiple people attempt a similar image or sculpture, originality is still imprinted in the work through material choices and decision-making. Similarly, credible research requires that a scholar's voice remain present in the writing, evident in how problems are framed and how evidence is interpreted. This matters because generative AI tools can produce fluent text that is often generic. Without clear instruction and careful judgement, the output can shift the meaning of what the researcher is trying to say and weaken the distinctiveness of the argument.
Research implication 2: use AI as an editor with constraints (voice-preserving use)
A practical example from my own practice is using AI tools to support clarity in writing rather than content generation. As a non-native English speaker, I often use tools such as Grammarly and, at times, ChatGPT to correct grammar and improve clarity. However, I specifically request that the tool preserve my style and scholarly voice, because I have observed that when prompt engineering is not precise, the text becomes polished but generic and no longer reflects what I intend to argue. Used this way, AI functions as a language and clarity aid, while the researcher keeps ownership of the argument, interpretation, and voice. Thus, the tool enhances communication without diminishing the originality that underpins scholarly credibility.
Discussion
The two credibility challenges serve as reflections of each other. In academia, publishing work based on unverified AI outputs can stifle curiosity, shorten the time spent in productive uncertainty, and dilute the scholarly voice. In education, the fear of AI can shift assessments toward detection technologies and compliance-focused tasks that provide limited insights into what students know or can accomplish. A viable response in both areas lies in fostering AI literacy, which encompasses technical understanding alongside verification, reflection, transparent disclosure, and assessment designs that prioritize visible thinking.
The literature on curiosity highlights the importance of navigating the uncertain space of not knowing for learning and innovation (Gruber and Ranganath, 2019; Murayama et al., 2019). Research on automation bias and on creativity under AI assistance underscores the necessity of human-led question formation and intentional engagement with ambiguity for credible intellectual work (Parasuraman and Manzey, 2010; Logg et al., 2019; Zenasni et al., 2008; Doshi and Hauser, 2024). Feedback and assessment studies suggest designs that keep interpretation and judgment at the forefront of evaluation (Hattie and Timperley, 2007; Elshall and Badir, 2025; Walter, 2024; Luo et al., 2024).
Examples from fields beyond art education demonstrate that similar principles are already being applied: medical publishing mandates disclosure and author accountability for AI-assisted processes; higher education policy development encourages course-level declarations and structured reflection; and environmental data science reports utilize AI-aware assessment formats that emphasize process and verification (Yousaf, 2025; Elshall and Badir, 2025; Walter, 2024; Luo et al., 2024; Li et al., 2025; Wang et al., 2024; Peng et al., 2025; El Arab et al., 2025; Huh et al., 2025; Freeman, 2025; Nguyen et al., 2024). These initiatives support the assertion that credibility is enhanced when inquiry time, verification, disclosure, and transparent reasoning are integrated into research and assessment.
However, several gaps persist. We need measures of time spent in uncertainty under both AI and non-AI conditions, tools to evaluate question quality and risks in problem framing, studies that assess the diversity of outputs at studio and cohort levels, evaluations of detection policies in multilingual contexts, and outcome studies examining whether disclosure and verification enhance credibility. When AI is used to extend inquiry rather than replace it, the likelihood of maintaining credibility increases.
Author contributions
PC: Writing – original draft.
Funding
The author(s) declared that financial support was not received for this work and/or its publication.
Conflict of interest
The author declares that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that generative AI was used in the creation of this manuscript. I used generative AI tools solely for language editing (grammar, clarity, and readability). I reviewed and verified all text and accept full responsibility for the work.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Aljamaan, F., Aljaffary, A., and Alreshidi, N. (2024). Reference hallucination score for medical AI chatbots: development and validation. JMIR Med. Informat. 12:e54345. doi: 10.2196/54345
Chelli, M., Descamps, J., Lavoué, V., Trojani, C., Azar, M., Deckert, M., et al. (2024). Hallucination rates and reference accuracy of ChatGPT and Bard for systematic reviews: comparative analysis. J. Med. Internet Res. 26:e53164. doi: 10.2196/53164
Doshi, A. R., and Hauser, O. P. (2024). Generative AI enhances individual creativity but reduces the collective diversity of novel content. Sci. Adv. 10:eadn5290. doi: 10.1126/sciadv.adn5290
El Arab, R. A., Al Moosa, O. A., Abuadas, F. H., and Somerville, J. (2025). The role of AI in nursing education and practice: umbrella review. J. Med. Internet Res. 27:e69881. doi: 10.2196/69881
Elshall, A. S., and Badir, A. M. (2025). Balancing AI-assisted learning and traditional assessment: the FACT assessment in environmental data science education. Front. Educ. 10:1596462. doi: 10.3389/feduc.2025.1596462
Freeman, J. (2025). Student Generative AI Survey 2025 (HEPI Policy Note 61). Oxford: Higher Education Policy Institute (HEPI) and Kortext.
Gruber, M. J., and Ranganath, C. (2019). How curiosity enhances hippocampus-dependent memory: the PACE framework. Trends Cogn. Sci. 23, 1014–1025. doi: 10.1016/j.tics.2019.10.003
Hattie, J., and Timperley, H. (2007). The power of feedback. Rev. Educ. Res. 77, 81–112. doi: 10.3102/003465430298487
Huh, M. B., Miri, M., and Tracy, T. (2025). Students' perceptions of generative AI image tools in design education: insights from architectural education. Educ. Sci. 15:1160. doi: 10.3390/educsci15091160
Keith, M., Keiller, E., Windows-Yule, C., Kings, I., and Robbins, P. (2025). Harnessing generative AI in chemical engineering education: Implementation and evaluation of the large language model ChatGPT v3.5. Educ. Chem. Eng. 51, 20–33. doi: 10.1016/j.ece.2025.01.002
Li, M., Xie, Q., Enkhtur, A., Meng, S., Chen, L., Yamamoto, B. A., et al. (2025). A framework for developing university policies on generative AI governance: a cross-national comparative study. arXiv preprint arXiv:2504.02636. doi: 10.48550/arXiv.2504.02636
Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., and Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns 4:100779. doi: 10.1016/j.patter.2023.100779
Logg, J. M., Minson, J. A., and Moore, D. A. (2019). Algorithm appreciation: people prefer algorithmic to human judgment. Organ. Behav. Hum. Decis. Process. 151, 90–103. doi: 10.1016/j.obhdp.2018.12.005
Luo, A., Lee, M., and Denman, B. (2024). A policy approach to AI in assessment in higher education: key issues and principles. Assess. Eval. High. Educ. 49, 1–15.
Murayama, K., FitzGibbon, L., and Sakaki, M. (2019). Process account of curiosity and interest: a reward-learning perspective. Educ. Psychol. Rev. 31, 875–895. doi: 10.1007/s10648-019-09499-9
Nguyen, A., Hong, Y., Dang, B., and Huang, X. (2024). Human-AI collaboration patterns in AI-assisted academic writing. Stud. High. Educ. 49, 847–864. doi: 10.1080/03075079.2024.2323593
Parasuraman, R., and Manzey, D. H. (2010). Complacency and bias in human use of automation: an attentional integration. Hum. Factors 52, 381–410. doi: 10.1177/0018720810376055
Peng, J., Zhang, H., Tu, X., Zhang, Z., Wu, Q., Wang, Y., et al. (2025). Effectiveness of AI-assisted medical education for Chinese undergraduate medical students: a meta-analysis. BMC Med. Educ. 25:1207. doi: 10.1186/s12909-025-07770-y
Su, J., and Yang, W. (2023). Unlocking the power of ChatGPT: A framework for applying generative AI in education. ECNU Rev. Educ. 6, 1–12. doi: 10.1177/20965311231168423
von Stumm, S., Hell, B., and Chamorro-Premuzic, T. (2011). The hungry mind: intellectual curiosity is the third pillar of academic performance. Perspect. Psychol. Sci. 6, 574–588. doi: 10.1177/1745691611421204
Walter, Y. (2024). Embracing the future of artificial intelligence in the classroom: the relevance of AI literacy, prompt engineering, and critical thinking in modern education. Int. J. Educ. Technol. High. Educ. 21:15. doi: 10.1186/s41239-024-00448-3
Wang, H., Dang, A., Wu, Z., and Mac, S. (2024). Generative AI in higher education: seeing ChatGPT through universities' policies, resources, and guidelines. Comput. Educ. Artif. Intellig. 7:100326. doi: 10.1016/j.caeai.2024.100326
Weber-Wulff, D., Anohina-Naumeca, A., Bjelobaba, S., Foltýnek, T., Guerrero-Dib, J., Popoola, O., et al. (2023). Testing of detection tools for AI-generated text. Int. J. Educ. Integrity 19:26. doi: 10.1007/s40979-023-00146-z
Yousaf, M. N. (2025). Practical considerations and ethical implications of using AI-generated citations in academic writing. ACG Case Rep. J. 12:e01629. doi: 10.14309/crj.0000000000001629
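Zenasni, F., Besançon, M., and Lubart, T. (2008). Creativity and tolerance of ambiguity: an empirical study. J. Creat. Behav. 42, 61–73.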
Keywords: AI literacy, higher education, scholarship, assessment, curiosity, creativity, art education
Citation: Chisale PB (2026) Protecting creativity in the age of generative AI: productive uncertainty, and visible thinking in scholarship and assessment. Front. Educ. 10:1694819. doi: 10.3389/feduc.2025.1694819
Received: 24 November 2025; Revised: 23 December 2025;
Accepted: 26 December 2025; Published: 30 January 2026.
Edited by:
Rawan Nimri, Griffith University, Australia
Reviewed by:
Arthur William Fodouop Kouam, Sanya University, China
Farah Shishan, University of Jordan, Jordan
Copyright © 2026 Chisale. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Paseka Blessing Chisale, pbchisale@sun.ac.za