
ORIGINAL RESEARCH article

Front. Educ., 01 September 2025

Sec. Digital Learning Innovations

Volume 10 - 2025 | https://doi.org/10.3389/feduc.2025.1628019

This article is part of the Research Topic “Digital Learning Innovations: Trends Emerging Scenario, Challenges and Opportunities.”

Exploring ethical dilemmas and institutional challenges in AI adoption: a study of South African universities

  • Department of Media and Language and Communication, Durban University of Technology, Durban, South Africa

Introduction: Artificial intelligence tools like ChatGPT and DeepSeek are increasingly shaping higher education. However, their integration into student learning remains underexplored. This study investigates how university students in South Africa use AI-based tools in their academic practices and the specific tasks these tools support. It also examines the ethical challenges and considerations arising from their use, highlighting the need for structured institutional guidelines.

Methods: A qualitative approach was employed, involving in-depth semi-structured interviews with 50 students from four South African universities.

Results: Findings reveal that students widely but informally use AI tools for tasks such as essay writing and assignment preparation. The absence of formal institutional guidance has led to ethical ambiguities and inconsistent usage practices.

Discussion: The study accentuates the urgency for universities to develop institutional AI frameworks. These frameworks should promote the responsible and effective use of AI tools while addressing academic support needs and ethical considerations in higher education.

1 Introduction

Artificial intelligence technologies are rapidly changing higher education worldwide, shaping student learning, teaching, and assessment practices at institutions (Dwivedi et al., 2021; Kumar, 2024). In the South African context, this change is taking place in an environment with its own complexities, characterized by historical inequalities, digital divides, and varying institutional capacities. Many students are first-generation university students with little access to technology. These conditions make implementing AI in education promising yet problematic (Khoalenyane and Ajani, 2024; Bosch et al., 2023).

Students in South Africa are using ChatGPT and DeepSeek for learning, drawn by instant feedback and efficient access to information (Chauke et al., 2024; Firat, 2023; Crček and Patekar, 2023). This use, however, raises ethical as well as institutional issues. Students face questions about authorship, originality, and the extent to which AI support in assignments is ethically justifiable, questions that touch on academic integrity, data privacy, and intellectual autonomy (Khatri and Karki, 2023; Ajlouni et al., 2023). These problems are compounded by unclear AI policies, uneven digital infrastructure, and disparate staff capacity for managing AI-supported learning (Rapanyane and Sethole, 2020; Kamukapa et al., 2025).

Global scholarship on AI in universities largely focuses on institutional benefits and teacher innovations, with minimal consideration of informal, student-driven AI uses and their ethical implications (Rabatseta et al., 2024; Opesemowo and Adekomaya, 2024). This is especially the case in South Africa, where students use AI tools on their own, particularly in less resourced institutions and those with unequal digital access (Chauke et al., 2024; Khoalenyane and Ajani, 2024). There has been research on the institutional use of AI: Khoalenyane and Ajani (2024) explain how personalized learning and predictive analytics improve academic performance; Sanders and Mukhari (2024) discuss AI's role in individualized learning but stress the need for institutional investment; and Cele et al. (2025) illustrate how the accounting curriculum is adapting through the incorporation of 4IR skills.

Despite these findings, there remains limited empirical attention to how students themselves adopt and navigate AI tools. The Unified Theory of Acceptance and Use of Technology (UTAUT) is a useful framework for analyzing this. It helps explain both individual motivations—such as perceived usefulness, ease of use, and social influence—and broader structural barriers, including institutional support and technology access (Kamukapa et al., 2025). UTAUT thus enables a dual analysis of user behavior and environmental constraints.

Drawing on the Unified Theory of Acceptance and Use of Technology (UTAUT), this paper examines how university students in South Africa incorporate AI tools, such as ChatGPT and DeepSeek, into their learning practices. It explores the motivations, barriers, and ethical considerations shaping students' AI adoption and usage. The paper employs a qualitative design focusing on students drawn from four universities in the KwaZulu-Natal province of South Africa (Durban University of Technology, University of KwaZulu-Natal, Mangosuthu University of Technology, and University of Zululand) to understand how AI tools are utilized to support academic tasks and the broader ethical implications of these tools for students' learning experiences. Two empirical questions guide this inquiry:

i. How are university students in South Africa integrating AI-based tools, such as ChatGPT and DeepSeek, into their academic practices, and what specific tasks do these tools primarily support within their studies?

ii. What challenges and ethical considerations arise from the use of AI tools among students, and how can universities develop structured guidelines to facilitate responsible and effective use of these technologies in academic contexts?

Following this introduction, the paper is structured as follows. Firstly, it discusses the UTAUT as the theory that underpins the study. The second section reviews relevant literature on the use of AI among university students and the ethical concerns around its use. The paper then describes the research methodology used to collect and analyze data. After that, the findings from the participants' responses are presented and analyzed. The last section discusses these findings and provides concluding remarks.

2 Theoretical underpinnings

2.1 Unified theory of acceptance and use of technology (UTAUT)

The Unified Theory of Acceptance and Use of Technology (UTAUT) model, developed by Venkatesh et al. (2003), offers a comprehensive framework for understanding the factors influencing technology adoption (Kelly et al., 2023; Chen et al., 2024). It is highly relevant to this study, which examines how South African university students integrate AI-based tools like ChatGPT and DeepSeek into their academic practices. UTAUT aligns with the study's objectives by exploring how performance expectancy, effort expectancy, social influence, and facilitating conditions shape students' acceptance and usage behaviors in higher education (Acosta-Enriquez et al., 2024). By applying UTAUT, this study addresses existing gaps in the literature on informal, student-driven AI tool adoption, thereby contributing to the discourse on individual-centric and institutional support for AI in academia.

UTAUT was conceptualized by synthesizing eight prominent models of technology acceptance, including the Technology Acceptance Model (TAM) and the Theory of Planned Behavior (TPB) (Kanont et al., 2024; Li, 2023). Since its inception, UTAUT has been adapted for diverse sectors such as healthcare and education, demonstrating flexibility across cultural and contextual domains (Mulaudzi and Hamilton, 2024). The model's applicability to higher education has gained traction, particularly in studies assessing students' intentions and behaviors related to emerging educational technologies (Obenza et al., 2024; Zou and Huang, 2023). This adaptability underscores its relevance in analyzing how students perceive AI tools like ChatGPT within the university context, emphasizing behavioral intentions driven by performance expectations and ease of use.

In the UTAUT model, four core constructs are pivotal in understanding students' adoption of AI tools in higher education. Performance expectancy, which gauges a technology's perceived usefulness, is crucial as students are more likely to use AI tools like ChatGPT if they believe they enhance academic performance and learning efficiency (Kelly et al., 2023; Chen et al., 2024). Research indicates that such tools increase productivity and optimize learning outcomes (Li, 2023). Effort expectancy, or the perceived ease of use, also plays a significant role; students tend to adopt technologies that are intuitive and simple to integrate into their routines, with ChatGPT's user-friendly interface exemplifying this characteristic (Kanont et al., 2024; Obenza et al., 2024). Social influence, the degree to which adoption is encouraged by peers or institutional recommendations, further impacts student uptake, especially where peer endorsements or academic expectations align with technology use (Acosta-Enriquez et al., 2024; Mulaudzi and Hamilton, 2024). Lastly, facilitating conditions assess the available resources and institutional support for technology use, which can vary significantly in South African universities. Technical support and infrastructure availability will likely shape student adoption, especially in environments where formalized structures for AI use are limited (Zou and Huang, 2023; Chen et al., 2024). Together, these constructs provide a comprehensive lens for analyzing the motivations and barriers to AI adoption among students.

Although UTAUT is robust, it has been criticized for its complexity and lack of parsimony, which may limit its generalizability (Acosta-Enriquez et al., 2024). Additionally, UTAUT has been critiqued for underrepresenting subjective aspects, such as ethical considerations and trust, that are increasingly relevant in AI adoption studies (Chen et al., 2024; Zou and Huang, 2023). This study seeks to address these limitations by integrating qualitative insights from students to capture ethical concerns associated with AI tools like ChatGPT, adding depth to UTAUT's application in educational contexts. Despite these weaknesses, UTAUT's constructs informed the research design, guiding the development of the interview guide. Constructs such as performance expectancy and social influence were integral in structuring questions that elicit students' motivations, perceived benefits, and social pressures related to AI use. This approach ensures that the data collection is aligned with UTAUT's framework, allowing for a comprehensive analysis of AI acceptance factors among students in South Africa (Kanont et al., 2024; Li, 2023).

UTAUT provides a robust theoretical foundation for analyzing students' use of AI tools, emphasizing the importance of expectations, ease of use, social dynamics, and resource availability. Applying UTAUT, this study contributes valuable insights into how students independently adopt AI in educational settings.

2.2 AI ethics framework

To complement the Unified Theory of Acceptance and Use of Technology (UTAUT), this study adopts an AI Ethics Framework to critically explore the ethical dimensions of artificial intelligence adoption in South African higher education. UTAUT allows analysis of user intentions and conditions but does not address the ethical and systemic issues related to AI. An AI Ethics Framework allows for a holistic analysis by focusing on fairness, accountability, transparency, and power dynamics within AI learning ecosystems (Memarian and Doleck, 2023; Holmes et al., 2022).

Artificial intelligence in higher education raises ethical issues such as authorship, data privacy, algorithmic bias, and the line between facilitation and academic cheating (Slimi and Carballido, 2023; Chauke et al., 2024). Tools like ChatGPT and DeepSeek help students complete assignments faster, yet they heighten plagiarism risks and can weaken critical thinking (Sam and Olbrich, 2023; Airaj, 2024). In South Africa, unequal access to digital resources exacerbates existing gaps and challenges the fairness of AI-supported learning (Patel and Ragolane, 2024; Opesemowo and Adekomaya, 2024).

The AI Ethics Framework supports the study's two research questions. The first question, on the integration of AI tools, focuses on fairness and agency, concerning students' autonomy within institutional limits. The second question, on ethical concerns, aligns with the framework's focus on transparency and moral governance. Together with UTAUT, the AI Ethics Framework offers an understanding of both individual behavior and the systemic forces affecting AI adoption in higher education.

2.3 Socio-technical systems theory (STS)

Socio-Technical Systems (STS) Theory analyzes social and technical interactions within organizations. It holds that the best performance occurs when both social (people, structures, norms) and technical (tools, infrastructure, processes) aspects are optimized jointly (Yu et al., 2023). The theory views technology adoption as shaped by institutional culture, values, and the behavior of people (Kudina and van de Poel, 2024).

Key STS principles are joint optimization, flexibility, and mutual shaping. Technology must be tailored to user needs and institutional goals (Owusu, 2024). In an educational context, AI tools like ChatGPT and DeepSeek can aid instruction by addressing specific issues (Tarisayi, 2024). Institutions must provide flexible infrastructure and ethical leadership to deploy AI appropriately.

STS theory complements UTAUT by extending beyond individual behavioral intentions to include the institutional and technical environment. It also supports the AI Ethics Framework by emphasizing human-centered values, participatory design, and transparency in technology use (Swist and Gulson, 2023). This theory underpins the study's research questions. The first question examines how students utilize AI tools in learning. STS analyzes this by examining the relationship between technology, institutional support, and social norms (Alshahrani et al., 2024). The second question touches on ethics and policy matters. STS provides means to evaluate the effects of structural voids, such as undefined policies, uneven access, and poor training, on the ethical deployment of AI (Thomas, 2024; Aseeri and Kang, 2023). Thus, STS allows this study to account for not only what students do with AI, but also how institutional ecosystems either enable or constrain those practices. It foregrounds the interdependence of people, policies, and technologies in shaping educational futures.

3 Literature review

3.1 AI integration in South African universities

Current research attests that university students in South Africa utilize AI tools, specifically ChatGPT and DeepSeek, in their academic work, though their practices, methods, and rationales vary across institutions. Chauke et al. (2024) report that students utilize ChatGPT to understand material and build competencies, not solely to generate content. This is supported by Khraisha et al. (2023), who report that students utilize GPT-4 to break down difficult texts. By contrast, whereas Chauke et al. foreground AI as a scaffolding device for learning, Beck and Levine (2024) view AI as a support for creativity in the early stages of writing. This contrast suggests that while AI supports several activities, guidelines within institutions are fragmented, leading to disjointed usage.

Technical uses of AI, even in STEM disciplines, also vary by setting. Hassani and Silva (2023) show students applying ChatGPT to coding assignments, enhancing feedback loops, while Kohnke et al. (2023) emphasize its use in mitigating linguistic barriers for non-native speakers of English. Such variation indicates the flexibility of AI tools across subject areas, but it also reflects an absence of institutional frameworks defining usage. Patel and Ragolane (2024) maintain that South African institutions lack consensus plans for the adoption of AI, a point echoed by Lubinga et al. (2023), who emphasize that 4IR readiness is uneven. Zeb et al. (2025) contend that successful implementation of ChatGPT in learning and library environments relies on continuous digital literacy campaigns, which are largely absent in the South African context.

Ethical issues are globally recognized but responded to differently. Cotton et al. (2024) and Storey (2023) warn against violations of academic integrity, yet Sam and Olbrich (2023) go a step further, calling for AI ethics to be ingrained in institutional culture. Bond et al. (2024) call for community-wide frameworks to inform responsible use, yet few South African universities have taken up the call. Holmes et al. (2022) and Airaj (2024) both stress transparency and equity, yet institutional uptake is slow. Slimi and Carballido (2023) found AI policies to be vague even internationally. Chan (2023) suggests a policy education model to bridge this divide, in line with Zembylas (2023), who criticizes the Western bias of AI ethics and demands a decolonial stance, a view particularly pertinent in South African institutions characterized by structural inequality.

South African universities lag in institutional readiness. Opesemowo and Adekomaya (2024) note that AI adoption is student-driven with minimal institutional regulation. This is consistent with Tarisayi (2024), who argues that AI adoption in South Africa is socio-technically uneven and infrastructurally incoherent. Shah et al. (2024) and Geok et al. (2024), writing on Pakistani universities, illustrate how institutional environments, policy certainty, and stakeholder engagement drive green and digital transformations, factors largely lacking in their South African counterparts. A similar dynamic appears in Ullah et al. (2024), where financial literacy moderated AI's performance effects, a consideration rarely encountered in South African AI policies.

Though students are adopting AI for scholarship, the absence of ethical guidelines, policy alignment, and institutional support can undermine long-term gains. The current research fills these lacunae by theoretically analyzing how students make sense of and negotiate AI adoption in the absence of clear institutional guidance. It also unveils how unofficial use is configured through social, infrastructural, and ethical ambiguities, thereby moving beyond purely instrumental accounts of AI as a technology.

3.2 Ethical challenges of AI in student academics

Empirical studies reveal that students' informal, individualized use of AI presents significant ethical challenges. A growing body of research highlights concerns over academic integrity, specifically the potential for students to misuse AI tools in ways that undermine learning and academic standards (Cotton et al., 2024; Simpson, 2023). For instance, Simpson (2023) observed that AI-driven text generation tools facilitate plagiarism, allowing students to submit work that is not entirely their own. The study illustrates how AI tools intended for support can lead to “academic shortcuts” that compromise the authenticity of student submissions. Similarly, Zhai et al. (2024) argue that unsupervised AI use encourages dependency, where students rely on AI-generated solutions without developing their problem-solving skills.

Another ethical issue centers on data privacy and security. Cotton et al. (2024) discuss how AI tools require access to personal data, raising concerns about student privacy and data misuse. These authors argue that the absence of clear data protection guidelines leaves students vulnerable, especially when using third-party AI tools with unclear privacy policies. Additionally, AI algorithms often collect and process data from diverse sources, creating a risk of unintended data sharing (Cotton et al., 2024). The need for universities to consider these risks when permitting AI use in academic contexts is underscored by Huang (2023), who notes that students are rarely informed about the extent of data AI tools collect, leading to uninformed consent.

Access to AI tools also creates equity challenges. Ifenthaler and Schumacher (2023) highlight that informal use of AI in academic contexts may advantage students with access to advanced technology. Their study demonstrates that wealthier students with access to paid AI tools perform better academically compared to their less privileged peers. This unequal access perpetuates disparities, resulting in an “AI divide” that mirrors existing social and economic inequalities (Bell and Korinek, 2023; Akiba and Fraboni, 2023). This access imbalance highlights the need for universities to establish equitable structures for AI tool use.

Scholars advocate for the development of structured guidelines to promote responsible AI use in response to these challenges. A study by Liang (2023) suggests that universities adopt AI literacy programs to educate students on the ethical use of these technologies. Their research shows that students with a clear understanding of AI's limitations use it more responsibly, aligning with academic integrity. Likewise, Perino et al. (2022) propose mandatory AI training that addresses data privacy, encouraging students to make informed choices when selecting AI tools. They argue that structured guidelines combined with transparency policies can mitigate the risks associated with AI usage.

3.3 Institutional challenges

AI incorporation into higher education has led to differing institutional responses, with diverse states of readiness and governance. While AI holds the potential to improve teaching as well as administration, universities, particularly in South Africa, face significant challenges in realizing this potential.

Literature suggests that AI policy implementation in higher education is disjointed. Sam and Olbrich (2023) theorize that institutions often lack coherent AI governance, leading to piecemeal and reactive approaches to ethical, pedagogical, and infrastructure requirements. Chan (2023) states that while some institutions globally are developing AI policy education guidelines, most universities' planning lags behind the pace of technology adoption.

In South Africa, infrastructural inequalities, digital divides, and low institutional capacity compound these issues. Patel and Ragolane (2024) observe that as universities adopt AI tools like ChatGPT, policy and infrastructural support remain patchy. Institutions are often unable to train staff members and students to use AI responsibly. This is particularly concerning in under-resourced institutions, where digital access and literacy are still uneven.

One of the primary issues is academic staff training. Airaj (2024) explains that universities invest little in AI literacy among lecturers, leaving them confused when using AI tools. Sanders and Mukhari (2024) further state that while some lecturers acknowledge the value of AI in blended learning, others cannot effectively guide students or assess AI-generated materials. This indicates a gap: the technology is evolving, but the teaching corps is unprepared.

There is no overarching national AI policy in the South African higher education system. Bond et al. (2024) highlight the need for ethics-based alignment and consistent governance, yet universities operate in silos without any standardization. van Rensburg and van der Westhuizen (2024) propose a literacy-focused approach to ethical AI, but it is primarily theoretical and yet to be tested. Zembylas (2023) criticizes this policy silence on digital neocolonialism, warning that AI tools will inevitably present Western perspectives and exclude local knowledge unless they are localized.

There are still gaps in knowledge about how students navigate the absence of AI policies. Chauke et al. (2024) reveal that postgraduate students use AI independently for academic success, exposing them to ethical risk and systemic unfairness. This study fills this gap by focusing on student-centered application of AI in South African higher education and the effect of institutional silence on student attitudes, behavior, and ethics. Institutions engage with AI to some extent, but the absence of integrated policies, staff training, and infrastructure is significant. This study explores how these shortcomings affect student experiences and how better AI governance can be fostered in South African universities.

4 Methodology

4.1 Research design

The research utilized a qualitative framework to examine students' experiences of, and ethical deliberations on, AI tools in education. Qualitative methods yield in-depth context on emerging phenomena like AI in higher education (Creswell and Poth, 2016). The design allowed the researcher to explore the comprehension and usage of AI tools like ChatGPT and DeepSeek among students and their awareness of related policies and ethics (Merriam and Tisdell, 2016). Given the scarcity of earlier studies on South African student-led AI practices, a qualitative exploratory design was suitable.

4.2 Participants

The study involved 50 university students drawn from four South African institutions: Durban University of Technology (DUT, n = 18), University of KwaZulu-Natal (UKZN, n = 15), University of Zululand (n = 10), and Mangosuthu University of Technology (MUT, n = 7). Participants were selected through convenience sampling, primarily via Facebook posts on student forums. While this method facilitated rapid recruitment, it may have excluded students without consistent internet access or social media engagement, particularly those from under-resourced backgrounds. This limitation is acknowledged, and future studies are encouraged to use broader recruitment methods to enhance representation (Muringa and Adjin-Tettey, 2024). Convenience sampling was deemed appropriate given the exploratory nature of the research and the objective of capturing diverse perspectives within the constraints of digital accessibility. The sample consisted of 31 female and 19 male students, ranging in age from 19 to 27, across various disciplines.

4.3 Data collection

Data were collected through semi-structured interviews conducted via Microsoft Teams, providing flexibility and ease of access for participants across different locations. Semi-structured interviews were selected for their ability to facilitate in-depth discussions while maintaining a focused structure aligned with the study's objectives (Muringa and Shava, 2024). An interview guide with eight open-ended questions directed the conversation toward critical areas, including students' specific uses of AI, their understanding of ethical boundaries, and any perceived gaps in institutional support. Conducting interviews remotely allowed for broader accessibility and participant comfort, facilitating engagement with students from multiple universities (Muringa and Shava, 2025).

4.4 Data analysis

Data were analyzed using Braun and Clarke's (2006) thematic analysis, a widely accepted method for identifying and interpreting patterns within qualitative data. Each interview transcript was coded sequentially as Participant 1 through Participant 50. Thematic analysis provided a systematic way to identify recurring themes and patterns in AI usage and ethical perceptions, allowing for a rigorous, detailed examination of students' responses and uncovering themes related to task-specific usage, ethical ambiguities, and the need for institutional support (Braun and Clarke, 2006).
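For readers who organize such coding digitally, the short sketch below illustrates one way a keyword-assisted tally over transcript files might orient the early, descriptive pass of analysis. It is a minimal illustration only, not the study's actual workflow: the directory name, file naming scheme, and keyword codebook are hypothetical, and the interpretive coding at the heart of Braun and Clarke's method remains a manual, judgment-based process.

    # Illustrative sketch only, not the study's actual workflow.
    # Tallies keyword hits per candidate theme across interview transcripts
    # to help orient manual thematic coding; paths and keywords are hypothetical.
    from collections import Counter
    from pathlib import Path

    # Hypothetical analyst-defined codebook: theme -> indicative keywords
    CODEBOOK = {
        "learning_support": ["explain", "simplify", "understand", "summarize"],
        "idea_generation": ["outline", "structure", "brainstorm", "organize"],
        "ethical_ambiguity": ["plagiarism", "rules", "guidelines", "allowed"],
    }

    def tally_themes(transcript_dir):
        """Count keyword occurrences per theme over all transcript files."""
        counts = Counter()
        for path in sorted(Path(transcript_dir).glob("participant_*.txt")):
            text = path.read_text(encoding="utf-8").lower()
            for theme, keywords in CODEBOOK.items():
                counts[theme] += sum(text.count(word) for word in keywords)
        return counts

    if __name__ == "__main__":
        # Example: transcripts saved as participant_01.txt ... participant_50.txt
        for theme, n in tally_themes("transcripts").most_common():
            print(f"{theme}: {n}")

A tally of this kind can flag which transcripts merit closer reading, but it cannot replace line-by-line interpretive coding.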

To ensure trustworthiness, the research followed credibility, dependability, confirmability, and transferability guidelines. Member-checking was conducted with five participants to verify transcript accuracy and thematic fit. An audit trail was kept, and analytic memos recorded interpretive decisions along the way.

4.5 Ethical considerations

The data collection procedure for the study adhered to the ethical research guidelines set out by the Human Sciences Research Council of South Africa. Participants were informed about potential risks and their right to withdraw at any time and were assured that their data would remain anonymous and confidential (British Psychological Society, 2018). All participants signed consent forms. Anonymity was maintained through pseudonym coding, and rigorous protocols were followed to protect participant confidentiality. As the researcher held a postdoctoral position during data collection, ethical sensitivity to power dynamics was prioritized, ensuring interviews were conversational and respectful to mitigate perceived hierarchies (British Psychological Society, 2018). These measures safeguarded participant wellbeing, promoted ethical rigor, and supported data integrity throughout the study.

5 Findings

This section presents the key findings from the interviews conducted with South African university students on using AI-based tools, specifically ChatGPT and DeepSeek, within their academic practices. The findings are organized in response to the study's main research questions, focusing on how students integrate these tools into their studies, the tasks these technologies primarily support, and the ethical considerations arising from their use. The thematic analysis highlights both the benefits and challenges experienced by students and provides insights into their motivations for using AI and the need for structured guidance within academic institutions.

5.1 Integration of AI tools in academic practices and task-specific support

This section focuses on the diverse ways students integrate AI into their academic practices, illustrating its role as a flexible tool for foundational understanding and assignment preparation. In response to the first research question, two themes emerged: Learning and Understanding Support, where students use AI for conceptual clarity and comprehension, and Idea Generation and Structuring, where students leverage AI to brainstorm and organize academic content.

5.1.1 Learning and understanding support

The theme Learning and Understanding Support explores how students use AI-based tools like ChatGPT and DeepSeek to aid comprehension of challenging academic material. Rather than relying on AI solely to complete assignments, students describe using it as an interactive resource for clarifying complex topics, thus bridging gaps in their understanding. The following excerpts from participant interviews provide insights into how students utilize AI tools for learning support and emphasize AI's significant role in their independent study practices.

Participant 45 described using ChatGPT to break down concepts that feel overwhelming, especially in technical subjects. The participant explained that AI tools act as a bridge by simplifying dense material, helping them overcome learning obstacles without needing direct assistance from educators.

Sometimes, especially with my science courses, the concepts feel so overwhelming. I'll sit there reading the same paragraph over and over, and it just doesn't make sense. That's when I turn to ChatGPT—I'll type in the part I'm stuck on and ask it to explain in simpler terms. It's like having a tutor available all the time. ChatGPT and DeepSeek breaks things down in a way that makes sense to me, and I can finally move forward with my work. (Participant 45)

Participant 2 described the role of AI tools in providing foundational knowledge that aids comprehension. They stated that using AI to clarify terminology allows students to internalize key ideas, thus building their confidence in expressing these concepts independently.

Before I start writing an assignment, especially in subjects like economics, I usually ask ChatGPT to clarify certain terms or give me a rundown on key concepts. For example, last time, I wasn't sure about ‘market equilibrium,' so I asked, and it gave me a really good example that made things click. It's not like I'm copying what it says, but it helps me understand enough to put it in my own words. (Participant 2)

A participant from the University of Zululand described their use of AI to simplify academic readings, particularly dense journal articles. This use case highlights how AI tools can make complex scholarly material more accessible, allowing students to focus on understanding key ideas rather than getting bogged down by intricate language.

Sometimes, the articles we're assigned are really dense, with complex language and long sentences. I use ChatGPT to summarize or rephrase some of those sections, so I can actually understand the main points. I remember one article on cultural theories—ChatGPT rephrased it so well that I finally understood what the author was saying. Without that help, I'd probably be completely lost. (Participant 8)

A participant from Durban University of Technology explained how AI provides immediate post-lecture support, helping clarify topics that may have been inadequately covered in class. This ability to independently access clarification through AI fosters greater autonomy in the learning process and underlines the importance of AI tools as supplementary academic resources, allowing students to strengthen their understanding outside of class.

After lectures, I sometimes use ChatGPT and DeepSeek to go over things that weren't explained well in class. For instance, if I'm not sure about a concept, I'll ask ChatGPT or DeepSeek for a simpler explanation, and it usually gives me something that's easy to understand. It's reassuring because I don't have to wait for office hours or bother my classmates; I can just get an answer right then and there. (Participant 21)

These responses collectively signal that AI tools have become a companion for serious independent learning when timely academic support is unavailable. By simplifying complex ideas, clarifying difficult texts, and summarizing dense materials, AI tools build students' academic understanding, confidence, and independence in study. This theme illustrates that AI is increasingly positioned to support education by making students active agents in identifying and resolving their own learning needs.

5.1.2 Idea generation and structuring

The theme Idea Generation and Structuring explores how university students utilize AI tools to refine their ideas, brainstorm content, and structure their academic work. This approach enabled students to explore multiple perspectives and scaffold their writing, from essays to feature articles, enhancing coherence and quality. The following excerpts from participants offer insight into this application and demonstrate the value of AI as a tool for intellectual support.

A participant from Mangosuthu University of Technology described using ChatGPT and DeepSeek for initial brainstorming and structure formulation, stating that by outlining key sections, these tools provide a foundational framework that guides exploration of the topic. This approach showcases AI's utility as a cognitive partner, fostering independent idea development while maintaining academic integrity.

When I'm starting a new essay, I often find it challenging to get the right structure or even know where to begin. I'll type my topic into ChatGPT, asking for ideas on how to approach it or what sections I might include. For instance, when I wrote about media ethics, ChatGPT helped me organize the main points, like sections on privacy, bias, and accountability. It didn't write for me, but it outlined a roadmap I could follow, and I found that so helpful. (Participant 3)

A journalism student from Durban University of Technology used an AI tool as a guide for structuring complex journalism assignments. The participant stated that this interaction improves the quality of content organization, making AI an instrumental tool for meeting academic standards. The participant's experience underscores AI's supportive role in helping students align their work with journalistic and academic expectations.

In journalism, we have to be very concise but also cover a lot of ground. Sometimes I feel lost about organising my thoughts, especially for feature articles. ChatGPT gives me examples of structures I can follow. I remember asking it how to approach a piece on social media trends, and it broke it down into introduction, main trends, and impact sections. That structure helped me a lot, especially in getting my article approved. (Participant 22)

An Arts student from the University of Zululand described the way AI tools assist students in organizing their ideas, particularly on complex or theoretical topics. They stated that ChatGPT and DeepSeek reduce the cognitive load associated with structuring intricate subjects, allowing the student to concentrate on content creation and analysis. This aspect of AI use reflects its role in empowering students to tackle demanding academic content with greater confidence.

There was a time when I needed to write about cultural theories for an assignment, and I had no idea how to start. I asked ChatGPT for an outline, and it suggested starting with historical context, followed by key theorists and applications. That outline wasn't something I would have thought of on my own, and it made the writing process much smoother. I could focus on filling in the content rather than worrying about the structure. (Participant 30)

Participant 4 from the University of KwaZulu-Natal emphasized the role of AI tools in creative assignments that lack a formal structure. The tool's guidance on structuring narrative flow enables students to express their ideas better and enhances creativity by providing a flexible yet coherent framework.

For creative writing projects, I find myself stuck because there's no clear format like with essays. When I use ChatGPT and DeepSeek, I can ask for ways to arrange my thoughts or generate a basic story structure. It doesn't tell me what to write, but it suggests sequences, like ‘character introduction, conflict, resolution,' which helps me get my ideas on paper and stay organized. (Participant 4)

These responses show how AI tools support students in organizing their thoughts, structuring their work, and generating ideas. Such tools lessen the initial cognitive barriers a student may face with a new or challenging assignment by providing an outline or framework for what needs to be addressed.

5.2 Challenges and ethical considerations in the informal use of AI tools

This section covers the obstacles students encounter in their independent use of AI tools and the moral conundrums they face in the absence of formal guidelines.

5.2.1 Ethical ambiguities and academic integrity

The theme Ethical Ambiguities and Academic Integrity reveals the challenges students face in setting ethical limits on AI tool usage in academic work, particularly around originality and plagiarism. Students were in a dilemma regarding how much they could rely on AI content without crossing into academic dishonesty. This is often attributable to the lack of clear institutional guidelines, which leaves students to interpret ethical boundaries independently.

Almost all participants from the sampled universities expressed a cautious approach to using AI tools, recognizing the tools' benefits while being mindful of ethical constraints. Participant 18 from Mangosuthu University of Technology described using AI to better understand assignment expectations, particularly when struggling with structure or content.

I use ChatGPT to get an idea of what is expected of me, like when I don't know how to structure my assignment, but I don't copy and paste; instead, I try to write it in my own words after understanding the answer it provides. (Participant 18)

The response above reveals a nuanced engagement with AI, using it as a learning tool while consciously maintaining academic integrity by rephrasing and integrating information independently.

Participants from Durban University of Technology illustrated the ethical dilemmas faced when using AI tools for academic work, indicating a lack of confidence in the boundaries of ethical AI use. This uncertainty shows the necessity of institutional support in guiding students toward responsible AI engagement and highlights the complexities students face in maintaining academic integrity without standardized protocols. For instance, one recounted using ChatGPT to “scan question papers and essays,” noting the dual function of AI in assisting with comprehension and verifying answers. However, they expressed concern over balancing the benefits of AI with the potential risks of dependency and inadvertent plagiarism.

I use it mostly to understand the topics better, but it's hard to know how much is too much without any clear rules from the university. (Participant 15)

Participant 14 shared a sense of skepticism around AI usage, influenced by peer discussions about the potential risks of overreliance on technology for academic tasks. The participant stressed that they used ChatGPT primarily for structuring ideas in essays, explaining,

I use it sparingly, just for organising my thoughts, because I've heard from friends that it can be easy to depend on it too much. I'm careful not to let it replace my own ideas. (Participant 14)

This response suggests that while the participant acknowledges the utility of AI in aiding their academic work, there is an underlying concern about maintaining originality and avoiding excessive reliance. The participant's reflection on peer influence points to the broader issue of student-driven ethical standards in the absence of formalized institutional policies.

5.2.2 Reliance on individual interpretation

The theme Reliance on Individual Interpretation shows that, without formal guidelines, decisions about the extent and ethics of AI-generated content in academic work are left to individual judgment. Without institutional guidance, students adopt varied approaches to AI tools, which inevitably results in inconsistency in how these tools are integrated into their work.

Almost all participants from various universities described their approaches to using AI in the absence of formal institutional guidelines. For instance, Participant 1 shared their approach to using AI tools without formal guidance, noting the challenges of setting personal limits.

Since there aren't any real rules from the university, I just go by what feels right. For instance, if I'm stuck on a section, I'll use ChatGPT to help me out, but I always try to make sure it sounds like me in the end. It's hard because I don't really know if there's a line I'm crossing, but without guidelines, I just have to trust my own judgment (Participant 1).

The response above shows students' personal uncertainty in deciding what is permissible. They rely on subjective standards to determine ethical boundaries, which can vary significantly between students, underscoring the need for standardized guidance.

A participant from the University of KwaZulu-Natal described feeling uncertain about how much to rely on AI for content creation vs. support, stressing that

I use ChatGPT for things like structuring my essays or generating ideas, but I always wonder if it's too much. I mean, without any guidelines, I'm just guessing what's allowed. I try to add my own thoughts, but it's tricky because I don't know if I'm following the rules or if there even are any. (Participant 5)

This quote demonstrates the ambiguity that students encounter when using AI as they attempt to balance its utility with ethical considerations in the absence of formalized rules. The participant's experience indicates that reliance on personal judgment may lead to inconsistent academic standards without explicit boundaries.

Participant 34 discussed their approach to interpreting permissible AI usage, pointing out how different students may have varying thresholds.

I know some classmates use ChatGPT for almost everything, while I try to use it only for outlines or clarifications. We all have different ideas of what's okay, but it would be helpful if the university just told us what's acceptable. Right now, it's all up to us to decide, and that makes it confusing. (Participant 34)

The participant's statement reflects the inconsistency among students regarding AI use, as each individual applies their own criteria without an institutional standard to guide them. This variability in personal interpretation underscores the need for clear guidelines from universities to ensure a consistent and fair approach to AI integration in academic work.

These responses reflect that reliance on individual judgment creates an uneven playing field in which students apply varied standards to the question of what uses of AI are appropriate. This theme conveys the importance of universities providing clear, consistent guidance on AI applications to support students in making ethically sound decisions and to promote fairness in academic practice.

5.2.3 Need for institutional support and resources

The theme Need for Institutional Support and Resources reflects students' struggles with the absence of formal guidance, the lack of resources, and the lack of structured support for ethically integrating AI tools into academic work. Many participants voiced feelings of underpreparedness in using AI, relying on peer advice or self-directed learning. This points to an institutional gap in providing comprehensive training or resources to help students use AI responsibly and productively.

Participant 22 expressed the need for more structured guidance from the university, indicating that most of their understanding of AI tools came from trial and error rather than formal instruction.

Honestly, I just figured out how to use ChatGPT on my own. There hasn't been any real training or resources from the university, so I mostly just ask my friends or try things myself. It would be great if the university offered workshops or even just guidelines on how to use it responsibly—right now, we're all just guessing. (Participant 22)

The participant's experience underscores a common reliance on informal support, highlighting the potential value of institutional workshops or guidance that could standardize responsible AI use across the student body.

Participant 42 also emphasized the lack of resources and the importance of official support for understanding the ethical implications of AI.

I use AI tools for structuring my work, but I'm never really sure if I'm using them correctly. If the university offered some kind of training or even an online module on how to use these tools in a way that aligns with academic integrity, it would make things a lot easier. Right now, I just follow what my friends say, but everyone has different ideas on what's acceptable. (Participant 42)

The participant's response points to the confusion that arises from unstructured learning about AI tools. The reliance on peer guidance creates inconsistencies in AI usage, reinforcing the need for standardized training to ensure all students understand and apply AI responsibly and effectively.

Participant 31 from the University of Zululand further highlighted the institutional gap, noting that AI is becoming essential in academic work, yet no formal resources support its usage.

I think the university should do more to help us understand AI tools because it's part of how we study now. If they held seminars or provided resources on using AI for things like research or writing, it would be really helpful. Right now, we're on our own, and sometimes it feels like guessing. (Participant 31)

The participant's remarks underscore the growing importance of AI in academia and the responsibility of institutions to offer accessible, structured support that equips students to use these tools ethically and effectively.

Participant 19 from Mangosuthu University of Technology described their experience with AI as somewhat isolating. They expressed that they struggled to understand how to use it responsibly without institutional support.

I've tried using ChatGPT for assignments, but I'm never sure if I'm crossing any lines. It would be really helpful if the university organized workshops or at least offered a guide on what's acceptable and what isn't. Right now, it's just me figuring things out alone, and I sometimes feel like I could be using it better if I had some support. (Participant 19)

The participant's account underscores the potential benefit of university-led initiatives, such as workshops, that could equip students with skills and knowledge to maximize AI's advantages while adhering to ethical guidelines.

Participant 6 from Durban University of Technology expressed frustration over the lack of institutional resources and felt that relying on friends' advice often left them with inconsistent guidance.

Everyone seems to be using AI tools differently, and there's no standard way we've been taught to use them. I end up asking friends, but their advice is all over the place. The university should really step in here; even an online tutorial would help to clear up what's okay and what's not. (Participant 6)

The participant's response highlights the inconsistencies that arise when students rely on peer guidance, stressing the need for a unified approach to AI usage that a university-led resource could provide.

Participant 12 from the University of KwaZulu-Natal shared similar concerns, indicating that they felt unprepared to use AI tools effectively without formal training.

AI has become such a big part of how we do our work now, but I don't feel confident using it because no one really showed us how. I think having official guidelines from the university would make a big difference. Right now, I feel like I'm just guessing, and that's not a great feeling in academic work. (Participant 12)

The responses above point to a common need for formal resources, guidelines, and support from the university regarding AI usage. Students feel strongly the need for workshops, tutorials, or a policy that could help standardize how AI is used across the student population, promoting coherence, adherence to ethical standards, and confidence in using AI as an academic resource.

6 Discussion

This study revealed how university students in South Africa are integrating AI-based tools, such as ChatGPT, into their academic practices. These tools are specifically used for essay writing, assignment preparation, and article creation. However, findings also highlight significant ethical ambiguities, inconsistent standards, and a strong reliance on self-directed learning due to the absence of institutional guidelines. This gap has left students uncertain about the ethical boundaries of AI use and dependent on informal peer networks and personal judgment, leading to varied practices across the student body.

The first finding highlights that students use AI-based tools, such as ChatGPT and DeepSeek, for specific academic tasks, including essay writing, assignment preparation, and article structuring. This focused use aligns with Fauzi et al.'s (2023) findings that AI enhances student productivity by assisting with task-oriented needs, such as generating ideas and improving organization. In particular, quickly accessing structured outlines and guidance through AI allows students to streamline their work. It provides them with practical support to reduce the cognitive load of initial task planning and content organization. The emphasis on task-specific use indicates that students approach AI to enhance their efficiency rather than replace independent thinking. This usage pattern demonstrates how AI tools are becoming embedded in students' academic workflows, transforming traditional approaches to academic tasks and supporting Beck and Levine's (2024) framework, which suggests that AI can be both a benefit and a limitation, depending on its application.

The study also revealed students' ethical concerns around AI use, particularly regarding originality and the risk of inadvertently crossing plagiarism-related boundaries. Many students reported uncertainty about how much AI-generated content they could use without compromising academic integrity. This concern was echoed by Cotton et al. (2024), who argued that AI complicates traditional standards of originality. This finding suggests that without formal guidelines, students are left to independently determine the ethical limits of AI use, which can lead to inconsistencies and potential breaches of academic integrity. Unlike previous studies focusing on AI's productivity benefits, this research exposes a critical ethical gap in AI adoption within education.

The study found that, without formal institutional guidelines, students interpret acceptable AI use on their own, resulting in varying standards across the student body. This reliance on individual interpretation can lead to inconsistencies in academic practices, creating a disparity in how students use AI tools in their work. As Beck and Levine (2024) suggest, AI's integration into academic environments necessitates clear policies to prevent students from inadvertently breaching academic standards. This finding aligns with Abdaljaleel et al. (2024), who noted that students in different cultural and educational contexts similarly struggle with a lack of guidance. The varying practices underscore the importance of institutional intervention to establish consistent standards, thereby reducing confusion and promoting fair academic practices across the student population.

Finally, the study identifies a significant gap in institutional support for AI use, with students expressing a need for resources, training, and formal guidelines on how to use AI tools responsibly and effectively. The lack of structured support has led students to rely on informal channels, such as peer guidance and self-directed learning, which can result in inconsistent practices and potential misunderstandings. This finding aligns with Akiba and Fraboni's (2023) call for AI-supported academic advising, as well as Huang's (2023) recommendation for structured training on AI ethics and privacy. Universities could help standardize AI usage, supporting students in making ethically informed decisions and enhancing their academic experience. Providing such resources would also alleviate students' concerns around the ethical use of AI, ensuring that they can maximize the benefits of these tools without risking academic integrity.

The findings of this study offer valuable insight into how the Unified Theory of Acceptance and Use of Technology (UTAUT) can be reconceptualized for student-led AI adoption. While the basic UTAUT constructs of performance expectancy, effort expectancy, social influence, and facilitating conditions remain relevant, this study identifies gaps in how they function in a setting of institutional ambiguity. Students exhibited high performance expectancy of AI tools, especially for assignment organization and learning support. Yet, since there was no institutional facilitation, i.e., policy or formal training, students navigated AI adoption through trial and error and peer influence. This suggests that, in the absence of regulation, social influence is not only a driver of adoption but also a substitute for institutional support. The irregularity and ambivalence among the students run contrary to UTAUT's assumption that facilitating conditions are sufficient to guarantee successful adoption. Instead, facilitating conditions must involve the normative and moral scaffolding that UTAUT in its present formulation neglects.

The findings of the research also justify the application of the AI Ethics Framework in demonstrating how ethical ambiguities play a critical role in shaping student conduct. Students consistently showed concern about crossing boundaries of originality, plagiarism, and appropriate use of AI. In the absence of clear ethical guidance, they depended on subjective judgment or peer agreement, resulting in variable practices and uncertainty in their actions. These results validate the contention by Memarian and Doleck (2023) and Holmes et al. (2022) that there is no ethical use of AI without institutional commitments to fairness, transparency, and accountability. The findings call for the development of ethical frameworks with practical, contextualized guidance in student-facing contexts. Moreover, ethical issues were not marginal but at the forefront of students' decisions, implying that AI ethics must be conceived as integral to the adoption of technology in education, rather than as an add-on or afterthought.

The study's findings align with Socio-Technical Systems (STS) Theory, which emphasizes the interaction among human actors, institutional frameworks, and technological tools. Students' use of AI was not simply a matter of individual choice but was shaped by a systemic lack of institutional infrastructure. The shortage of organized support, training, and resources produced fragmented practices across campuses. STS theory explains these findings by showing that technology implementation in education must be co-designed with social and organizational systems to ensure it is fair and meaningful (Yu et al., 2023; Tarisayi, 2024). The results highlight that the lack of institutional investment in digital literacy and ethical governance forces students to self-regulate within a system not yet equipped to support them. The research therefore confirms the need to balance technological innovation with robust institutional ecosystems capable of fostering responsible and effective student engagement with AI.

The findings carry significant implications for universities, particularly regarding policy development, academic training, and resource allocation. To bridge the gap in understanding ethical AI use, universities should consider adopting structured programs that include workshops, ethical guidelines, and AI literacy modules (Huang, 2023). Implementing these changes would provide students with consistent frameworks for responsible AI usage, minimizing reliance on informal guidance. This is particularly relevant at a time when AI tools like ChatGPT and DeepSeek are becoming integral to academic practices globally (Simpson, 2023). Moreover, as students increasingly depend on AI to aid writing and learning, policies must also address data privacy concerns in AI applications (Perino et al., 2022).

This study has several limitations. Convenience sampling, carried out largely through Facebook student groups, may have introduced selection bias by favouring digitally competent participants who are active on social media and have stable internet connectivity. Consequently, students from under-resourced backgrounds, students who are not active on social media, and students at less well-resourced institutions with less stable digital infrastructure may be underrepresented in the sample. In addition, all participants were recruited from four KwaZulu-Natal universities, which may restrict generalisability to the wider South African higher education context. The use of self-reported data gathered through online interviews may also have narrowed the range of experiences captured, particularly among students less familiar with technology or less at ease with virtual communication. These constraints suggest that future studies should use stratified sampling across multiple platforms and include institutions from different regions to produce more inclusive and representative findings.

This study provides a critical perspective on AI integration in higher education, showing that while AI tools offer productivity benefits, the lack of structured institutional support leads to ethical ambiguity and inconsistent practices. The findings suggest that South African universities urgently need to create standardized guidelines and educational resources that equip students to use AI responsibly. By addressing these gaps, educational institutions can foster a balanced, ethical approach to AI in academia, ensuring that AI tools serve as empowering resources rather than ethical liabilities and contributing to a more cohesive and accountable academic environment.

7 Conclusion

This study explored how university students in South Africa integrate AI-based tools into their academic practices and identified the challenges and ethical considerations that arise from informal usage. Findings revealed that students primarily use AI tools like ChatGPT and DeepSeek for academic support tasks, including generating ideas, structuring assignments, and enhancing understanding. This indicates a substantial reliance on AI across academic tasks and underscores its potential as an educational resource for enhancing productivity and understanding in learning environments. However, the study also highlighted significant ethical ambiguities and concerns around academic integrity, as students often lack formal guidelines on acceptable AI use; this gap leads to inconsistent academic standards and approaches across institutions. Furthermore, participants expressed a desire for structured institutional support, such as training and resource access, to facilitate responsible and effective AI usage. The evidence from this study emphasizes the need for clear institutional policies and resources to support ethical AI usage, which may help maintain academic integrity while allowing students to maximize the benefits of AI in their studies.

In addressing these gaps, this study contributes to the field of educational technology by highlighting the dual role of AI as both a support tool and a source of ethical challenges in academic contexts. The findings offer valuable insights for universities aiming to integrate AI more formally into their learning support systems, suggesting that workshops and resources on AI could foster a more standardized approach to ethical AI usage. One limitation of this study is its reliance on self-reported data, which may be influenced by social desirability bias, as students could underreport their dependence on AI tools due to concerns about academic misconduct. Additionally, the sample was restricted to four South African universities, which may limit the generalisability of the findings to other cultural or institutional contexts. The absence of a longitudinal design also means that this study provides only a snapshot of current AI usage and ethical concerns, without accounting for evolving attitudes over time. Future research should explore AI usage in a broader range of educational and cultural contexts to validate the generalisability of these findings. A longitudinal study could examine how student perceptions and usage patterns of AI tools change over time, especially as institutions begin to introduce guidelines and resources.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author contributions

TM: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that no financial support was received for the research and/or publication of this article.

Acknowledgments

Special acknowledgments go to the M&G Research Pty Ltd team for assisting with data collection and transcription.

Conflict of interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that generative AI was used in the creation of this manuscript. ChatGPT and DeepSeek were used to brainstorm ideas for the manuscript; generative AI was not used to write the manuscript. Grammarly was used to edit the paper.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Abdaljaleel, M., Barakat, M., Alsanafi, M., Salim, N. A., Abazid, H., Malaeb, D., et al. (2024). A multinational study on the factors influencing university students' attitudes and usage of ChatGPT. Sci. Rep. 14:1983. doi: 10.1038/s41598-024-52549-8

Acosta-Enriquez, B. G., Arbulú Ballesteros, M. A., Huamaní Jordan, O., López Roca, C., and Saavedra Tirado, K. (2024). Analysis of college students' attitudes toward the use of ChatGPT in their academic activities: effect of intent to use, verification of information and responsible use. BMC Psychol. 12:255. doi: 10.1186/s40359-024-01764-z

Airaj, M. (2024). Ethical artificial intelligence for teaching-learning in higher education. Educ. Inf. Technol. 29, 17145–17167. doi: 10.1007/s10639-024-12545-x

Ajlouni, A. O., Wahba, F. A. A., and Almahaireh, A. S. (2023). Students' attitudes towards using ChatGPT as a learning tool: the case of the University of Jordan. Int. J. Interact. Mob. Technol. 17, 99–117. doi: 10.3991/ijim.v17i18.41753

Akiba, D., and Fraboni, M. (2023). AI-supported academic advising: exploring ChatGPT's current state and future potential toward student empowerment. Educ. Sci. 13:885. doi: 10.3390/educsci13090885

Alshahrani, B. T., Pileggi, S. F., and Karimi, F. (2024). A social perspective on AI in the higher education system: a semisystematic literature review. Electronics 13:1572. doi: 10.3390/electronics13081572

Aseeri, M., and Kang, K. (2023). Organisational culture and big data socio-technical systems on strategic decision making: case of Saudi Arabian higher education. Educ. Inf. Technol. 28, 8999–9024. doi: 10.1007/s10639-022-11500-y

Beck, S. W., and Levine, S. (2024). The next word: a framework for imagining the benefits and harms of generative AI as a resource for learning to write. Read. Res. Q. 59, 706–715. doi: 10.1002/rrq.567

Bell, S., and Korinek, A. (2023). AI's economic peril. J. Democr. 34, 151–161. doi: 10.1353/jod.2023.a907696

Bond, M., Khosravi, H., De Laat, M., Bergdahl, N., Negrea, V., Oxley, E., et al. (2024). A meta systematic review of artificial intelligence in higher education: a call for increased ethics, collaboration, and rigour. Int. J. Educ. Technol. High. Educ. 21:4. doi: 10.1186/s41239-023-00436-z

Bosch, T., Jordaan, M., Mwaura, J., Nkoala, S., Schoon, A., Smit, A., et al. (2023). South African university students' use of AI-powered tools for engaged learning. SSRN Electron. J. 4595655. doi: 10.2139/ssrn.4595655

Braun, V., and Clarke, V. (2006). Using thematic analysis in psychology. Qual. Res. Psychol. 3, 77–101.

British Psychological Society (2018). BPS website. Retrieved July 1, 2025, from https://www.bps.org.uk

Cele, S. M. K., Pietersen, D., and Gaillard, C. (2025). Black African students' social and academic identities in South African universities vis-à-vis student drop out: A social justice and philosophical perspective. JCVE 8, 240–251. doi: 10.46303/jcve.2025.14

Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. Int. J. Educ. Technol. High. Educ. 20:38. doi: 10.1186/s41239-023-00408-3

Chauke, T. A., Mkhize, T. R., Methi, L., and Dlamini, N. (2024). Postgraduate students' perceptions on the benefits associated with artificial intelligence tools on academic success: In case of ChatGPT AI tool. J. Curric. Stud. Res. 6, 44–59. doi: 10.46303/jcsr.2024.4

Chen, G., Fan, J., and Azam, M. (2024). Exploring artificial intelligence (AI) chatbots adoption among research scholars using unified theory of acceptance and use of technology (UTAUT). J. Libr. Inf. Sci. doi: 10.1177/09610006241269189

Cotton, D., Cotton, P., and Shipway, J. (2024). Chatting and cheating: ensuring academic integrity in the era of ChatGPT. Innov. Educ. Teach. Int. 61, 228–239. doi: 10.1080/14703297.2023.2190148

Crček, N., and Patekar, J. (2023). Writing with AI: University students' use of ChatGPT. J. Lang. Educ. 9, 128–138. doi: 10.17323/jle.2023.17379

Creswell, J. W., and Poth, C. N. (2016). Qualitative Inquiry and Research Design: Choosing Among Five Approaches. Sage Publications.

Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., et al. (2021). Artificial Intelligence (AI): multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. Int. J. Inf. Manag. 57:101994. doi: 10.1016/j.ijinfomgt.2019.08.002

Fauzi, F., Tuhuteru, L., Sampe, F., Ausat, A., and Hatta, H. (2023). Analysing the role of ChatGPT in improving student productivity in higher education. J. Educ. 5, 14886–14891. doi: 10.31004/joe.v5i4.2563

Firat, M. (2023). What ChatGPT means for universities: perceptions of scholars and students. J. Appl. Learn. Teach. 6, 57–63. doi: 10.37074/jalt.2023.6.1.22

Geok, T. K., Shah, A., Goh, G. G. G., and Zeb, A. (2024). The links between sustainability dimensions and green campus initiatives in mountain universities. J. Infrastruct. Policy Dev. 8:7653. doi: 10.24294/jipd.v8i9.7653

Hassani, H., and Silva, E. (2023). The role of ChatGPT in data science: how AI-assisted conversational interfaces are revolutionizing the field. Big Data Cogn. Comput. 7:62. doi: 10.3390/bdcc7020062

Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., Shum, S. B., et al. (2022). Ethics of AI in education: towards a community-wide framework. Int. J. Artif. Intell. Educ. 32, 504–526. doi: 10.1007/s40593-021-00239-1

Huang, L. (2023). Ethics of artificial intelligence in education: student privacy and data protection. Sci. Insights Educ. Front. 16, 2577–2587. doi: 10.15354/sief.23.re202

Ifenthaler, D., and Schumacher, C. (2023). Reciprocal issues of artificial and human intelligence in education. J. Res. Technol. Educ. 55, 1–6. doi: 10.1080/15391523.2022.2154511

Kamukapa, T. D., Lubinga, S., Masiya, T., and Sono, L. (2025). Assessing the integration of AI competencies in undergraduate public administration curricula in selected South African higher education institutions. Teach. Public Adm. 43, 108–125. doi: 10.1177/01447394241266443

Kanont, K., Pingmuang, P., Simasathien, T., et al. (2024). Generative-AI, a learning assistant? Factors influencing higher-ed students' technology acceptance. Electron. J. e-Learn. 22, 18–33. doi: 10.34190/ejel.22.6.3196

Kelly, S., Kaye, S.-A., and Oviedo-Trespalacios, O. (2023). What factors contribute to the acceptance of artificial intelligence? A systematic review. Telemat. Inform. 77:101925. doi: 10.1016/j.tele.2022.101925

Khatri, B. B., and Karki, P. D. (2023). Artificial intelligence (AI) in higher education: growing academic integrity and ethical concerns. Nepal. J. Dev. Rural Stud. 20, 1–7. doi: 10.3126/njdrs.v20i01.64134

Khoalenyane, N. B., and Ajani, O. A. (2024). A systematic review of artificial intelligence in higher education-South Africa. Soc. Sci. Educ. Res. Rev. 11, 17–26. doi: 10.5281/zenodo.15258127

Khraisha, Q., Put, S., Kappenberg, J., Warraitch, A., and Hadfield, K. (2023). Can large language models replace humans in the systematic review process? Evaluating GPT-4's efficacy in screening and extracting data from peer-reviewed and grey literature in multiple languages. arXiv [Preprint]. abs/2310.17526. doi: 10.1002/jrsm.1715

Kohnke, L., Zou, D., and Zhang, R. (2023). Zoom supported emergency remote teaching and learning in teacher education: a case study from Hong Kong. Knowl. Manag. E-learn. 15, 192–213. doi: 10.34105/j.kmel.2023.15.011

Kudina, O., and van de Poel, I. (2024). A sociotechnical system perspective on AI. Minds Machines 34:21. doi: 10.1007/s11023-024-09680-2

Kumar, S. (2024). Digital Transformation, Artificial Intelligence and Society: Opportunities and Challenges. Singapore: Springer.

Li, K. (2023). Determinants of college students' actual use of AI-based systems: an extension of the technology acceptance model. Sustainability 15:5221. doi: 10.3390/su15065221

Liang, Y. (2023). Balancing: the effects of AI tools in educational context. Front. Hum. Soc. Sci. 3, 7–10. doi: 10.54691/fhss.v3i8.5531

Lubinga, S., Maramura, T. C., and Masiya, T. (2023). The fourth industrial revolution adoption: challenges in South African higher education institutions. J. Cult. Values Educ. 6, 1–17. doi: 10.46303/jcve.2023.5

Memarian, B., and Doleck, T. (2023). Fairness, accountability, transparency, and ethics (FATE) in artificial intelligence (AI) and higher education: a systematic review. Comput. Educ. 5:100152. doi: 10.1016/j.caeai.2023.100152

Merriam, S. B., and Tisdell, E. J. (2016). Qualitative Research: A Guide to Design and Implementation (4th ed.). San Francisco, CA: Jossey-Bass.

Mulaudzi, L., and Hamilton, J. (2024). Student perspectives on optimising AI tools to enhance personalised learning in higher education. Interdiscip. J. Educ. Res. 6, 1–15. doi: 10.38140/ijer-2024.vol6.s1.03

Muringa, T., and Adjin-Tettey, T. D. (2024). Assessing the responsiveness of journalism curricula to the labor market needs in South Africa: a systematic review. J. Mass Commun. Educ. doi: 10.1177/10776958251356372

Muringa, T., and Shava, E. (2024). Examining the efficacy of core competencies of municipal leaders in transforming local government in South Africa. Int. J. Public Leaders. 20, 330–346. doi: 10.1108/IJPL-04-2024-0032

Muringa, T., and Shava, E. (2025). “It's all talk but no action”-navigating political and administrative will in transforming local government. Cogent Soc. Sci. 11:2516078. doi: 10.1080/23311886.2025.2516078

Obenza, B. N., Caballo, J. H. S., and Caangay, R. B. R. (2024). Analyzing university students' attitude and behavior toward AI using the extended unified theory of acceptance and use of technology model. Am. J. Appl. Stat. Econ. 3, 1–19. doi: 10.54536/ajase.v3i1.2510

Opesemowo, O. A. G., and Adekomaya, V. (2024). Harnessing artificial intelligence for advancing sustainable development goals in South Africa's higher education system: a qualitative study. Int. J. Learn. Teach. Educ. Res. 23, 67–86. doi: 10.26803/ijlter.23.3.4

Owusu, A. (2024). Knowledge management systems implementation effects on university students' academic performance: the socio-technical theory perspective. Educ. Inform. Technol. 29, 4417–4442. doi: 10.1007/s10639-023-11999-9

Patel, S., and Ragolane, M. (2024). The implementation of artificial intelligence in South African higher education institutions: opportunities and challenges. Tech. Educ. Hum. 9, 51–65. doi: 10.47577/teh.v9i.11452

Perino, D., Katevas, K., Lutu, A., Marin, E., and Kourtellis, N. (2022). Privacy-preserving AI for future networks. Commun. ACM 65, 52–53. doi: 10.1145/3512343

Rabatseta, P. C., Modiba, M., and Ngulube, P. (2024). Utilisation of artificial intelligence for the provision of information services at the University of Limpopo libraries. S. Afr. J. Libr. Inf. Sci. 90, 1–8. doi: 10.7553/90-2-2394

Rapanyane, M. B., and Sethole, F. R. (2020). The rise of artificial intelligence and robots in the 4th industrial revolution: implications for future South African job creation. Contemp. Soc. Sci. 15, 489–501. doi: 10.1080/21582041.2020.1806346

Sam, A. K., and Olbrich, P. (2023). “The need for AI ethics in higher education,” in AI Ethics in Higher Education: Insights from Africa and Beyond, eds. A. K. Sam and P. Olbrich (New York: Springer International Publishing), 3–10.

Sanders, D. A., and Mukhari, S. S. (2024). The perceptions of lecturers about blended learning at a particular higher institution in South Africa. Educ. Inf. Technol. 29, 11517–11532. doi: 10.1007/s10639-023-12302-6

Shah, A., Tan, K. G., Goh, G. G. G., and Zeb, A. (2024). Readiness assessment of mountain universities of Pakistan for green campus initiatives. J. Infrastruct. Policy Dev. 8. doi: 10.24294/jipd9597

Simpson, D. (2023). Educators, students, and plagiarism in age of AI. BMJ 381:1403. doi: 10.1136/bmj.p1403

Slimi, Z., and Carballido, B. V. (2023). Navigating the ethical challenges of artificial intelligence in higher education: an analysis of seven global AI ethics policies. TEM J. 12, 689–697. doi: 10.18421/TEM122-02

Storey, V. (2023). AI technology and academic writing: knowing and mastering the “craft skills”. Int. J. Adult Educ. Technol. 14, 1–15. doi: 10.4018/IJAET.325795

Swist, T., and Gulson, K. N. (2023). Instituting socio-technical education futures: encounters with/through technical democracy, data justice, and imaginaries. Learn. Media Technol. 48, 181–186. doi: 10.1080/17439884.2023.2205225

Tarisayi, K. S. (2024). ChatGPT use in universities in South Africa through a socio-technical lens. Cogent Educ. 11:2295654. doi: 10.1080/2331186X.2023.2295654

Thomas, A. (2024). Digitally transforming the organization through knowledge management: a socio-technical system (STS) perspective. Eur. J. Innov. Manag. 27, 437–460. doi: 10.1108/EJIM-02-2024-0114

Ullah, R., Ismail, H. B., Khan, M. T. I., and Zeb, A. (2024). Nexus between Chat GPT usage dimensions and investment decisions making in Pakistan: moderating role of financial literacy. Technol. Soc. 76:102454. doi: 10.1016/j.techsoc.2024.102454

van Rensburg, Z. J., and van der Westhuizen, S. (2024). “Ethical AI integration in academia: developing a literacy-driven framework for LLMs in South African higher education,” in AI Approaches to Literacy in Higher Education, ed. M. Cloete (London: IGI Global), 23–48.

Venkatesh, V., Morris, M. G., Davis, G. B., and Davis, F. D. (2003). User acceptance of information technology: toward a unified view. MIS Quart. 27, 425–478. doi: 10.2307/30036540

Yu, X., Xu, S., and Ashton, M. (2023). Antecedents and outcomes of artificial intelligence adoption and application in the workplace: the socio-technical system theory perspective. Inform. Technol. People 36, 454–474. doi: 10.1108/ITP-04-2021-0254

Zeb, A., Rehman, F. U., Bin Othayman, M., and Rabnawaz, M. (2025). Artificial intelligence and ChatGPT are fostering knowledge sharing, ethics, academia and libraries. Int. J. Inform. Learn. Technol. 42, 67–83. doi: 10.1108/IJILT-03-2024-0046

Zembylas, M. (2023). A decolonial approach to AI in higher education teaching and learning: strategies for undoing the ethics of digital neocolonialism. Learn. Media Technol. 48, 25–37. doi: 10.1080/17439884.2021.2010094

Zhai, C., Wibowo, S., and Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students' cognitive abilities: a systematic review. Smart Learn. Environ. 11:28. doi: 10.1186/s40561-024-00316-7

Zou, M., and Huang, L. (2023). To use or not to use? Understanding doctoral students' acceptance of ChatGPT in writing through technology acceptance model. Front. Psychol. 14:1259531. doi: 10.3389/fpsyg.2023.1259531

Keywords: artificial intelligence, higher education, student learning, ChatGPT, South Africa

Citation: Muringa TP (2025) Exploring ethical dilemmas and institutional challenges in AI adoption: a study of South African universities. Front. Educ. 10:1628019. doi: 10.3389/feduc.2025.1628019

Received: 13 May 2025; Accepted: 23 July 2025;
Published: 01 September 2025.

Edited by:

Indira Boutier, Glasgow Caledonian University, United Kingdom

Reviewed by:

Ali Zeb, Multimedia University, Malaysia
NtandokaMenzi Dlamini, University of South Africa, South Africa

Copyright © 2025 Muringa. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Tigere Paidamoyo Muringa, tigerem589@gmail.com
