ORIGINAL RESEARCH article

Front. Educ., 12 January 2026

Sec. Higher Education

Volume 10 - 2025 | https://doi.org/10.3389/feduc.2025.1711539

This article is part of the Research Topic: Research Ethics and Integrity in the Artificial Intelligence Era.

AI use in academic research: Examining the mediating effect of user experience and the moderating effect of disciplinary context on perceived integrity

  • Postgraduate Programme and Research, Mogadishu University, Mogadishu, Somalia

The study investigates academic researchers’ use of AI in higher education research, drawing on the Unified Theory of Acceptance and Use of Technology (UTAUT) and integrity-trust frameworks. It examines the mediating effect of user experience and the moderating effect of disciplinary context on perceived integrity. A quantitative cross-sectional design was employed, collecting data via questionnaire from a sample of 100 academic researchers and lecturers across universities in Mogadishu, Somalia. Data analysis using SPSS and JAMOVI (PROCESS macro) involved correlation, mediation (bootstrapping), and moderation analyses. The overall trend in the findings suggests a positive and growing acceptance of AI for academic research, as researchers believe it helps them produce better research outcomes. User experience significantly mediates the relationship between AI use and perceived integrity, but disciplinary context does not moderate this relationship. Although perceived integrity is generally high, researchers retain some concerns about the reliability and ethical use of AI in research.

Introduction

The rapid development of artificial intelligence (AI) technology has presented both new opportunities and heightened ethical concerns in academic research. AI systems are now routinely employed in data analytics, manuscript composition, and even peer review, as well as in other processes where greater operational efficiency is desired (Lund and Naheem, 2023; Leung et al., 2023). This assimilation, however, brings with it challenges regarding authorship, transparency, and academic fraud.

On the one hand, AI-assisted processes such as data analysis and manuscript drafting can augment the quality of research outputs. For example, AI assistance in manuscript drafting and editing can significantly reduce literature synthesis times and help researchers identify gaps (Lund and Naheem, 2023; Leung et al., 2023). Owing to these advantages, many journals and academic institutions are now framing policies to define the involvement of AI in authorship, manuscript production and editing, and peer review (Lund and Naheem, 2023; Leung et al., 2023). These policies help not only to resolve questions of contribution recognition but also to protect the scientific communication system from abuse (Lund and Naheem, 2023).

On the other hand, integrating AI into research also generates substantial research ethics and research integrity issues. Several publications have documented instances of AI-generated work resulting in duplication of effort, reinforcement of bias, and even the creation of fabricated yet authentic-looking manuscripts (Chen et al., 2024; Májovský et al., 2023). Such practices risk undermining the integrity of scientific journals and demand caution in how AI tools are used to disseminate research (Chen et al., 2024; Májovský et al., 2023). These issues call for rigorous, regularly updated journal policies that set standards for acceptable levels of AI assistance and guard against unethical practices such as ghostwriting or covert data manipulation (Lund and Naheem, 2023; Guleria et al., 2023; Khlaif et al., 2023).

To address both the positive and negative aspects of AI’s role in science, scholars have proposed guidelines and ethical principles for the use of this technology. Their initiatives range from policies on acknowledging (or disallowing) AI contributions to explicit requirements for the transparency, accountability, and validation of AI-generated content (Lund and Naheem, 2023; Leung et al., 2023; Hryciw et al., 2023). These efforts also involve researchers and journal editors in constructing guidelines for the responsible and ethical use of AI, so that the technology advances research without contravening the ethical principles of academia (Leung et al., 2023; Hryciw et al., 2023). The authors further argue that regularly updating these guidelines as the technology evolves will allow the community to capture emerging opportunities while mitigating potential harms such as unreliable data production and threats to replicability across sectors (Hryciw et al., 2023).

While this global discourse is vigorous, its empirical evidence base is skewed toward developed nations. There is a distinct lack of research on how AI is being adopted and perceived within the specific context of academic researchers and lecturers in higher education institutions (HEIs) in developing regions like Somalia. In such contexts, resource constraints, varying levels of digital literacy, and distinct disciplinary cultures may shape AI adoption and its ethical implications in unique ways. This study addresses this gap by focusing on academic researchers in Somali universities. It aims to understand not just if AI is used, but how the experience of using it (user experience) shapes perceptions of its integrity, and whether this relationship differs across academic disciplines. By testing these mediation and moderation mechanisms, the study provides novel, context-specific insights that move beyond descriptive accounts of AI use. The findings are intended to inform the development of relevant training and ethical frameworks for AI integration in Somali HEIs and similar settings, contributing a unique perspective from an under-researched region to the global literature.

Problem statement

The integration of Artificial Intelligence (AI) into academic research is accelerating within Somali higher education, driven by its potential to enhance productivity and innovation. However, this rapid uptake coincides with significant concerns among researchers regarding the ethical integrity and reliability of AI-generated outputs (e.g., plagiarism risks, data fabrication). This creates a tension: AI use is rising, yet trust in its integrity is not automatic. The mechanisms through which AI use translates into perceptions of integrity are unclear. Specifically, it is unknown whether a positive User Experience (UX) acts as a crucial mediator in building trust, or if the relationship between AI use and perceived integrity is contingent upon the norms and practices of different Disciplinary Contexts (DC). Existing literature has largely examined these factors in isolation or within well-resourced, Western academic systems, neglecting the unique socio-technical environment of Somali universities. Consequently, there is no evidence-based guidance for Somali HEIs on how to foster responsible AI adoption. This study, therefore, investigates two core questions to address this gap: (1) To what extent does User Experience mediate the relationship between AI use and Perceived Integrity among Somali academic researchers? (2) Does Disciplinary Context moderate this relationship? By answering these, the study provides foundational evidence for developing tailored AI governance strategies that support research growth while safeguarding academic credibility in Somalia.

Value of the study

This study holds significant value across academic, practical, and policy dimensions. Academically, it advances the understanding of AI in research by exploring the understudied mediating role of user experience and moderating effect of disciplinary context on perceived integrity, particularly in Somalia’s research context - filling crucial gaps in the existing literature. For practitioners like Somali researchers and academic administrators, the findings provide actionable insights for responsibly integrating AI tools to enhance research productivity while maintaining ethical standards, potentially informing training programs to improve AI literacy. At the policy level, the study offers evidence-based recommendations for developing context-specific AI governance frameworks and ethical guidelines in Somali higher education, supporting national efforts to strengthen research capacity and align with global academic standards.

Literature review

The integration of Artificial Intelligence (AI) into academic research has seen a significant rise, impacting various facets of scholarly communication. A critical evaluation reveals both facilitative benefits and ethical dilemmas concerning integrity.

AI benefits and use in research

AI tools are increasingly recognized for enhancing research efficiency and quality. They assist in systematic literature reviews, data analysis, and the drafting and editing of manuscripts, potentially leading to more robust outputs and clearer communication (Marmoah et al., 2024; Forgas et al., 2024). However, the effectiveness of these tools often depends on the availability of structured training, as a lack of formal training may result in misinformation and reduced content quality (Bhavsar et al., 2024; Ng et al., 2024). The perceived usefulness and ease of use of these technologies are key drivers of their adoption, as explained by technology acceptance models like UTAUT (Venkatesh, 2000).

Ethical risks and integrity concerns

Conversely, AI integration raises profound ethical questions. Issues of plagiarism, originality, bias reinforcement, and even the generation of fraudulent content threaten academic integrity (Amirjalili et al., 2024; Chen et al., 2024). Although AI can promote critical thinking and collaboration, it also raises ethical challenges that require awareness and educational adaptations (Aisyah et al., 2024). Likewise, although AI can improve writing and analysis, these advantages must be balanced against the need for careful evaluation of AI-generated content, as highlighted by ongoing discussions about authenticity and integrity (Kouam, 2024). This has spurred calls for clear institutional policies and ethical guidelines to govern AI use (Perkins and Roe, 2024; Ajwang and Ikoha, 2024). Trust in AI, therefore, is not based solely on performance but heavily on Perceived Integrity: the belief that the system operates ethically and transparently (Lalot and Bertram, 2025).

The role of user experience

The pathway from using AI to trusting it may depend heavily on UX. A system that is user-friendly, efficient, and reliable can foster positive attitudes and greater trust (Kim, 2024; Park et al., 2023). Conversely, poor UX, marked by inaccuracy or complexity, can undermine confidence and perceived integrity. Thus, UX is a plausible mediator in the relationship between AI use and integrity perceptions.

The influence of disciplinary context

Disciplinary norms and epistemic cultures significantly shape technology adoption. Fields with strong computational traditions (e.g., Engineering) may embrace AI more readily and trust its outputs differently compared to theory-heavy or interpretive fields (e.g., Humanities) (Liu et al., 2024; Fontaine et al., 2024). This suggests Disciplinary Context could moderate how AI use influences integrity perceptions.

The Somalia-specific evidence gap

A synthesis of the literature reveals a critical omission: nearly all empirical studies are situated in developed, well-resourced academic systems. There is a stark lack of investigation into how AI is adopted, experienced, and perceived in higher education contexts of developing nations like Somalia, where infrastructure, training, and research cultures present unique challenges and opportunities. This study directly addresses this gap.

Theoretical frameworks

This study is grounded in the Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh, 2000) and trust-in-technology frameworks that emphasize Perceived Integrity as a key dimension (Lukyanenko et al., 2022). UTAUT posits that performance expectancy (usefulness) and effort expectancy (ease of use) drive behavioral intention to use a technology, a notion supported by empirical studies on AI adoption in academia (Roy et al., 2024). In this context, “AI Use” represents the behavioral outcome. We extend this model by proposing that the consequences of use—specifically, the perception of the technology’s integrity—are not direct but are shaped by the user’s experience and their disciplinary environment.

AI use in academic research

AI use in academic research is operationalized as the frequency and extent of applying AI tools to core research tasks such as literature discovery and synthesis (Granjeiro, 2025; Pinzolits, 2023), data analysis, and scholarly writing (Sharma et al., 2024).

User experience

User experience refers to the pragmatic and hedonic qualities of interacting with an AI system. AI variants can tailor interaction to user needs, enhancing satisfaction (Luo et al., 2024; Kim et al., 2021). UX encompasses usability, efficiency, reliability, and overall satisfaction (Kim, 2024; Dave et al., 2023; Wang et al., 2023), and is crucial for user comfort and continued engagement (Park et al., 2023; Wei, 2024). Acceptance depends on understanding user factors (Jiang et al., 2024), including perceptions of effectiveness and likability (Zheng and Wang, 2022). We hypothesize that a positive UX translates the functional use of AI into a trusting belief in its integrity.

Perceived integrity

Perceived integrity generally refers to the extent to which users believe that an AI system adheres to ethical principles of honesty, fairness, transparency, and accountability in its operations and outputs (Lalot and Bertram, 2025). It is a core antecedent of trust. Research indicates that positive attitudes toward AI tools, shaped by integrity perceptions, promote acceptance (Al-Bukhrani et al., 2025).

Disciplinary context

Disciplinary context encompasses the distinct norms, methodologies, and epistemic values of an academic field that shape how knowledge is produced and validated. Disciplinary differences significantly determine AI acceptance and integration (Liu et al., 2024), with fields varying in their epistemic readiness and evaluation biases (Seeber et al., 2022; Fontaine et al., 2024; Fecher and Hebing, 2021). We theorize it as a boundary condition that may strengthen or weaken the AI Use-Perceived Integrity relationship.

Conceptual model and hypotheses

The conceptual model (Figure 1) illustrates these relationships. It posits a direct positive effect of AI Use on Perceived Integrity (H1). More importantly, it proposes that this effect is transmitted through User Experience (H2: Mediation). Furthermore, it suggests that the strength of the direct relationship is conditional upon the Disciplinary Context (H3: Moderation). This model allows us to test not just if AI use relates to integrity, but how and under what conditions.

Figure 1. Conceptual research model.
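In regression terms (our notation, not the authors’), the hypothesized model can be written as a standard mediation and moderation specification consistent with the analyses reported below:

UX = a0 + a·AIU + e1
PI = c0 + c′·AIU + b·UX + e2
PI = β0 + β1·AIU + β2·DC + β3·(AIU × DC) + e3

where AIU denotes AI use, UX user experience, PI perceived integrity, and DC disciplinary context. H2 concerns the indirect effect a·b (the total effect equals c′ + a·b), and H3 is a test of the interaction coefficient β3.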

Based on this theoretical framework, the study tests the following hypotheses:

H1: AI use in academic research has a statistically significant positive effect on perceived integrity.

H2: User experience mediates the relationship between AI use in academic research and perceived integrity.

H3: Disciplinary context moderates the relationship between AI use in academic research and perceived integrity.

Method

Research design

A quantitative, cross-sectional research design was employed to examine the hypothesized relationships.

Study setting and population

The study was conducted among academic researchers (including lecturers and postgraduate students actively engaged in research) from three major universities in Mogadishu, Somalia: Mogadishu University, Somali National University and SIMAD University. These institutions were selected due to their established postgraduate programs and research activities, providing a relevant and accessible population. The total target population of active academic researchers across these institutions was estimated to be approximately 250.

Sampling and recruitment

A random sampling strategy was used. The sample size was determined a priori using G*Power 3.1 for a linear multiple regression (fixed model, R² increase) with an effect size f² = 0.35, α = 0.05, and power = 0.95, which suggested a minimum of 50 participants (Kang, 2021). To account for potential non-response and ensure robustness, we aimed for a larger sample. Eligibility criteria included: (1) being an active lecturer or postgraduate researcher at one of the target universities, and (2) having some level of awareness or experience with AI tools for research. Recruitment was conducted over 4 weeks through email lists and coordination with departmental heads. Of 150 invitations distributed, 100 completed responses were received (response rate = 66.7%). Non-response was primarily due to lack of time or no experience with AI.
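As an illustrative cross-check of this a priori calculation, the power analysis can be approximated outside G*Power with the noncentral F distribution. The sketch below is ours, not the authors’: it assumes three predictors in the regression model (a required input that is not reported above) and uses Cohen’s convention λ = f²·N for the R²-different-from-zero test.

from scipy.stats import f as f_dist, ncf

def power_for_n(n, u=3, f2=0.35, alpha=0.05):
    # Power of the "R-squared differs from zero" F-test for a sample of size n
    v = n - u - 1                           # denominator degrees of freedom
    lam = f2 * n                            # noncentrality parameter, lambda = f^2 * N
    f_crit = f_dist.ppf(1 - alpha, u, v)    # critical F under the null hypothesis
    return 1 - ncf.cdf(f_crit, u, v, lam)

n = 10
while power_for_n(n) < 0.95:                # smallest n reaching the target power
    n += 1
print(n, round(power_for_n(n), 3))          # roughly 50 for these inputs; shifts with the assumed predictor count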

Instrument and measures

Data were collected via a structured online questionnaire. All constructs were measured using scales adapted from prior studies on a 5-point Likert scale (1 = Strongly Disagree to 5 = Strongly Agree). The scales measured: AI Use in Research (AIU, 5 items, e.g., “I frequently use AI in different parts of my research”) (Al-Rousan et al., 2025); User Experience (UX, 5 items, e.g., “The AI system has a user-friendly interface”) (Yin and Li, 2024); Perceived Integrity (PI, 5 items, e.g., “I am confident that AI integration enhances research rigor”; Oyekunle, 2024); and Disciplinary Context (DC, 4 items, e.g., “I believe that the use of AI technologies enhances the learning experience within my academic discipline”) (Santos et al., 2024). The instrument was first piloted with 10 researchers to check for clarity and contextual relevance, and minor wording adjustments were made based on feedback. Table 1 summarizes the construct measurements.

Table 1. Construct measurement summary.

Data analysis plan

Data were analyzed using SPSS version 24 for descriptive statistics (weighted means and standard deviations) of the Likert-scale items, while the mediation and moderation analyses were conducted in JAMOVI version 2.6.44.
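As an illustration of the descriptive step, the construct-level means and standard deviations reported below can be computed from the item-level data. The sketch assumes a CSV of 1-5 Likert responses with item columns named AIU1-AIU5, UX1-UX5, PI1-PI5, and DC1-DC4 (our hypothetical naming, not the authors’) and uses simple unweighted means:

import pandas as pd

df = pd.read_csv("responses.csv")                      # hypothetical data file
constructs = {
    "AI Use in Academic Research": [f"AIU{i}" for i in range(1, 6)],
    "User Experience":             [f"UX{i}" for i in range(1, 6)],
    "Perceived Integrity":         [f"PI{i}" for i in range(1, 6)],
    "Disciplinary Context":        [f"DC{i}" for i in range(1, 5)],
}
for name, items in constructs.items():
    composite = df[items].mean(axis=1)                 # per-respondent composite score
    print(name, round(composite.mean(), 2), round(composite.std(), 2))   # grand mean and SD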

Ethical considerations

Participation was voluntary and anonymous. Informed consent was obtained electronically prior to starting the questionnaire. Data were stored securely on a password-protected computer and will be deleted after 5 years.

Demographic profile of respondents

Table 2 presents the demographic characteristics of the 100 respondents. The sample was predominantly male (88%), reflecting a known gender disparity in higher education roles in the region (Ahmed et al., 2023). The majority were aged 36–40 years (32%) and were Master’s level researchers (65%). Respondents came primarily from Social Sciences & Humanities (35%) and Health Sciences/Medicine (31%).

Table 2. Demographic profile of respondents (N = 100).

Results

Table 3 depicts Harman’s Single Factor Test results showing that the first component explains 38.564% of the total variance, which is below the 50% threshold. This suggests that common method bias (CMB) is not a significant concern in the data, as no single dominant factor accounts for the majority of variance.

Table 3. Result of Harman’s single factor test for common method bias (CMB).
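For illustration, Harman’s test can be reproduced outside SPSS as an unrotated principal component analysis of all scale items; a minimal sketch under the hypothetical column naming introduced in the data analysis plan:

import pandas as pd
from sklearn.decomposition import PCA

df = pd.read_csv("responses.csv")
items = df.filter(regex=r"^(AIU|UX|PI|DC)\d+$")        # all Likert items, no demographics
standardized = (items - items.mean()) / items.std()    # equivalent to PCA on the correlation matrix
pca = PCA().fit(standardized)
print(f"First component explains {pca.explained_variance_ratio_[0]:.1%} of variance")  # CMB flag if above 50%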

Table 4 presents the mean scores and standard deviations of survey items measuring four key constructs related to AI use in academic research:

Table 4. Results for the study factors’ items.

AI use in academic research

The findings indicate a high level of AI integration into research practices among Somali academics (Grand Mean = 3.76). Respondents reported frequent use of AI across various research stages and expressed confidence in selecting appropriate tools. Notably, there is a strong commitment to professional development related to AI (mean = 3.91), reflecting a proactive stance toward adopting and mastering emerging technologies.

User experience

Overall, user experience with AI systems was highly positive (Grand Mean = 3.77). Participants found AI interfaces user-friendly and easy to learn, and they valued the efficiency gains in their workflows. However, a notable exception was the moderate rating for the accuracy and reliability of AI outcomes (mean = 3.31), signaling a specific area of concern that may affect trust.

Perceived integrity

Participants generally perceived AI as enhancing research integrity (Grand Mean = 3.58). They agreed that AI benefits outweigh integrity risks and emphasized the need for continuous policy monitoring. Nevertheless, ratings regarding the ethical integrity of AI systems themselves were more moderate (mean = 3.31 on relevant items), suggesting nuanced trust.

Disciplinary context

AI was viewed as well-aligned with disciplinary norms and innovative practices across fields (Grand Mean = 3.76). Respondents believed AI enhances learning and problem-solving within their disciplines, indicating broad contextual acceptance.

Figure 2 presents a comparison of the grand means for the four study factors: AI Use in Academic Research, User Experience, Perceived Integrity, and Disciplinary Context. All means are relatively high, indicating positive responses across all factors, with User Experience having the highest mean. Standard deviations are relatively small, suggesting limited variability in responses, although Perceived Integrity shows slightly more variation than the other factors. Overall, the figure suggests that participants generally have positive perceptions of AI use, user experience, integrity, and disciplinary context in academic research.

Figure 2 (bar chart): AI Use in Academic Research (mean 3.76, SD 0.91), User Experience (mean 3.77, SD 0.92), Perceived Integrity (mean 3.58, SD 0.94), and Disciplinary Context (mean 3.76, SD 0.80).

Figure 2. Grand means and standard deviations of study factors. All grand means fell in the “High” range (3.40–4.19), indicating generally positive perceptions.

Table 5 presents the correlation matrix for the four key variables in AI adoption in academic research: AI Use in Academic Research, User Experience, Perceived Integrity, and Disciplinary Context. With a sample of 100 respondents, the study finds significant positive correlations at the 0.01 level. Notably, AI Use correlates strongly with User Experience (r = 0.658), Perceived Integrity (r = 0.615), and Disciplinary Context (r = 0.628). The strongest links are between User Experience and both Perceived Integrity (r = 0.678) and Disciplinary Context (r = 0.674), indicating that positive AI experiences boost trust and alignment with supportive disciplines. Additionally, Perceived Integrity correlates with Disciplinary Context (r = 0.607), emphasizing the connection between ethical perceptions and field norms. All correlations suggest moderately strong relationships without multicollinearity issues. Based on these findings, Hypothesis 1, which stated that AI use in academic research has a statistically significant positive effect on perceived integrity, is supported (p < 0.001, below the α = 0.05 threshold); the null hypothesis is therefore rejected in favor of the alternative.

Table 5. Correlation matrix.
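For illustration, the correlations in Table 5 can be reproduced from composite construct scores; a sketch under the same hypothetical data layout:

import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("responses.csv")
comp = pd.DataFrame({
    "AIU": df[[f"AIU{i}" for i in range(1, 6)]].mean(axis=1),
    "UX":  df[[f"UX{i}" for i in range(1, 6)]].mean(axis=1),
    "PI":  df[[f"PI{i}" for i in range(1, 6)]].mean(axis=1),
    "DC":  df[[f"DC{i}" for i in range(1, 5)]].mean(axis=1),
})
print(comp.corr().round(3))                            # Pearson correlation matrix
r, p = pearsonr(comp["AIU"], comp["PI"])               # bivariate test behind H1
print(f"AIU vs. PI: r = {r:.3f}, p = {p:.4f}")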

Table 6 presents the mediation analysis, which reveals important insights into the relationships among AI use, user experience, and perceived integrity. The indirect effect of AI use on perceived integrity through user experience is estimated at 0.280, with a standard error of 0.0628, yielding a Z-value of 4.47 and a highly significant p-value (< 0.001). This suggests that user experience significantly mediates the relationship between AI use and perceived integrity. The direct effect of AI use on perceived integrity is also significant, with an estimate of 0.262, a standard error of 0.0820, and a Z-value of 3.20 (p = 0.001), indicating that AI use directly influences perceived integrity as well. The total effect, combining both direct and indirect pathways, is 0.542, with a standard error of 0.0695 and a Z-value of 7.80 (p < 0.001), confirming the overall significance of the model. Thus, Hypothesis 2, which stated that user experience mediates the relationship between AI use in academic research and perceived integrity, was supported (p < 0.001).

Table 6. Mediation estimates.
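The indirect effect in Table 6 corresponds to the product of the a path (AI use to user experience) and the b path (user experience to perceived integrity, controlling for AI use). A minimal percentile-bootstrap sketch, not the authors’ JAMOVI procedure, under the same hypothetical data layout:

import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("responses.csv")
aiu = df[[f"AIU{i}" for i in range(1, 6)]].mean(axis=1).rename("AIU")
ux = df[[f"UX{i}" for i in range(1, 6)]].mean(axis=1).rename("UX")
pi = df[[f"PI{i}" for i in range(1, 6)]].mean(axis=1).rename("PI")

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params["AIU"]                          # path a
    b = sm.OLS(y, sm.add_constant(pd.concat([x, m], axis=1))).fit().params["UX"]   # path b
    return a * b

rng = np.random.default_rng(1)
n = len(df)
boot = []
for _ in range(5000):                                   # percentile bootstrap of a*b
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(aiu.iloc[idx].reset_index(drop=True),
                                ux.iloc[idx].reset_index(drop=True),
                                pi.iloc[idx].reset_index(drop=True)))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect = {indirect_effect(aiu, ux, pi):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")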

Table 7 displays the results of the moderation analysis, illustrating how disciplinary context affects the relationship between AI use and perceived integrity. The estimate for AI use is 0.33694, with a standard error of 0.0749 and a Z-value of 4.496, indicating a strong positive effect on perceived integrity that is statistically significant (p < 0.001). This suggests that higher AI use is associated with greater perceived integrity. The estimate for disciplinary context is even higher at 0.45606, also statistically significant (p < 0.001), indicating a robust positive influence on perceived integrity. However, the interaction term for AI use and disciplinary context has an estimate of −0.00294, a standard error of 0.0192, and a Z-value of −0.153, with a p-value of 0.878, suggesting no significant moderation effect. Thus, Hypothesis 3, which stated that disciplinary context moderates the relationship between AI use in academic research and perceived integrity, was not supported (p = 0.878).

Table 7. Moderation estimates.
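The interaction test in Table 7 amounts to an ordinary linear model with a product term. A sketch with mean-centered predictors (centering is common practice for such models, though the article does not state whether it was applied), under the same hypothetical data layout:

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("responses.csv")
data = pd.DataFrame({
    "PI":  df[[f"PI{i}" for i in range(1, 6)]].mean(axis=1),
    "AIU": df[[f"AIU{i}" for i in range(1, 6)]].mean(axis=1),
    "DC":  df[[f"DC{i}" for i in range(1, 5)]].mean(axis=1),
})
data["AIU"] = data["AIU"] - data["AIU"].mean()          # mean-center the predictors
data["DC"] = data["DC"] - data["DC"].mean()
model = smf.ols("PI ~ AIU + DC + AIU:DC", data=data).fit()
print(model.summary().tables[1])                        # H3 hinges on the AIU:DC coefficient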

Exploration of non-significant moderation

Given the non-significant interaction effect, we explored potential explanations. First, the measure of Disciplinary Context focused on perceived alignment and innovativeness of AI within a field, which may be uniformly high across disciplines in our sample, reducing variability. Second, the benefits of AI for research integrity (e.g., efficiency, rigor) may be perceived similarly regardless of discipline among Somali researchers who are in earlier stages of AI adoption. Third, the sample distribution across disciplines (e.g., fewer from Natural Sciences) may have limited our ability to detect moderation. These possibilities suggest that while disciplinary norms exist, their moderating role on the specific relationship between AI use and integrity perceptions may be less pronounced in this context.

Discussion of findings

AI use in academic research

The findings indicate that researchers are actively incorporating AI into their workflows, viewing it as valuable for enhancing literature reviews, data analysis, and fostering creativity. The strong commitment to professional development suggests a proactive approach to adopting these technologies, aligning with the performance expectancy dimension of UTAUT. This widespread adoption reflects AI’s growing integral role in the academic research landscape, consistent with prior studies on the transformative impact of AI tools (Ng et al., 2024; Kouam, 2024).

User experience

The generally positive user experience, characterized by user-friendly interfaces and efficient workflows, supports the effort expectancy dimension of UTAUT. However, concerns regarding the reliability and accuracy of AI outcomes highlight a critical area for improvement, as trust in AI systems is heavily influenced by perceived reliability (Nasra et al., 2024). This mixed UX—positive in usability but cautious in trust—underscores the nuanced role UX plays in mediating the relationship between use and integrity perceptions.

Perceived integrity

Participants generally believe AI can enhance research rigor and credibility while upholding ethical standards, and they emphasize the need for continuous policy monitoring. However, moderate scores on ethical integrity indicate lingering concerns. This aligns with discussions on the necessity of legal, regulatory, and ethical frameworks to ensure accountability and uphold integrity in AI-assisted research (O’Sullivan et al., 2019). The belief in AI’s accountability reinforces trust but points to the need for further efforts in ethical safeguards.

Disciplinary context

Researchers perceive AI as enhancing learning and problem-solving in their disciplines, consistent with current trends and standards. This suggests AI is well-integrated across various academic contexts, supporting its role as an adaptive and innovative educational tool (Abbas et al., 2023; Guerrero et al., 2020). However, the non-significant moderating effect of disciplinary context suggests that, in this setting, the perceived benefits and integrity of AI may transcend disciplinary boundaries.

Moderation analysis: the non-significant role of discipline

The moderation analysis found no significant interaction between AI use and disciplinary context on perceived integrity. This suggests that the positive effect of AI use on integrity perceptions is consistent across different fields in the Somali academic context. This finding contrasts with literature emphasizing strong disciplinary differences in technology adoption (Liu et al., 2024; Fontaine et al., 2024) but aligns with perspectives that core trust drivers like perceived integrity can be universal across contexts (Lalot and Bertram, 2025; Shinners et al., 2019). Possible explanations include the relatively nascent stage of AI adoption in Somalia, where researchers across disciplines share similar challenges and expectations, or the measurement of disciplinary context focusing on general alignment rather than deep epistemic differences.

Mediation analysis: the central role of user experience

The mediation analysis confirms that user experience significantly mediates the relationship between AI use and perceived integrity. The significant indirect effect underscores that a positive UX enhances trust in AI’s integrity, while the remaining direct effect suggests other unmeasured factors also contribute. This aligns with technology acceptance research where perceived usefulness and ease of use shape attitudes and trust (Liang et al., 2019; Venkatesh, 2000).

Theoretical implications

This study adds to two important theories: The Unified Theory of Acceptance and Use of Technology (UTAUT) and trust-based models of technology. First, while UTAUT usually focuses on why people start using technology, we show that actually using AI can shape how much people trust its integrity afterward. Second, we found that user experience (UX) is the key link explaining this connection—how easy and useful AI feels directly affects whether researchers see it as ethical. Third, contrary to expectations, disciplinary differences did not change this relationship in our Somali sample. This suggests that in settings where AI is still new and resources are limited, practical and ethical concerns may matter more than field-specific norms. Together, these insights call for a more context-aware approach to technology adoption theories, especially in developing academic systems where trust is critical.

Contribution and novelty

This study contributes novel insights by providing empirical evidence from a Somali academic context, where AI research is underexplored. It uniquely tests dual mechanisms (mediation and moderation) in the AI-integrity relationship, revealing the paramount importance of user experience over disciplinary boundaries in this setting. These findings challenge the assumed primacy of disciplinary culture in technology adoption literature and underscore the need for context-specific models of AI integration and governance in resource-constrained higher education environments.

Conclusion

This study aimed to examine how AI use in academic research is associated with perceived integrity, and to test whether user experience mediates and disciplinary context moderates this relationship within Somali higher education. The findings reveal a positive trend in AI adoption, with researchers integrating AI tools to enhance literature reviews, data analysis, and research outcomes, and demonstrating commitment to related professional development. Overall, perceived integrity was high, suggesting that researchers believe AI can enhance research rigor and ethical standards, though concerns about reliability and ethical integrity persist. Key synthesized takeaways are as follows: First, user experience plays a significant mediating role, indicating that a positive experience with AI systems strengthens perceptions of their integrity. Second, disciplinary context did not moderate the relationship, implying that the association between AI use and perceived integrity is consistent across different academic fields in this context. These insights advance our understanding of the mechanisms linking AI adoption to trust in an under-researched setting. The study contributes by providing evidence-based insights from Somalia, highlighting the central role of user experience and the context-dependent nature of disciplinary influences. However, limitations should be noted, including the cross-sectional design, which precludes causal inferences; the gender imbalance in the sample, which may affect generalizability; and the focus on a single national context. Future research should employ longitudinal designs, include more diverse samples, and explore additional potential moderators such as institutional support or researcher seniority. Such work will further inform the development of responsible AI integration policies tailored to the needs of developing higher education systems.

Recommendations

Based on the findings of this study, the following actionable recommendations are prioritized for academic institutions and researchers in Somalia and similar contexts:

• Develop and implement structured AI literacy training programs focused not only on tool usage but also on critical evaluation of AI outputs, ethical guidelines, and integrity verification. This addresses the UX-mediating role and reliability concerns identified in the study.

• Establish clear, context-appropriate disclosure guidelines for AI use in research proposals, manuscripts, and theses to promote transparency and accountability, thereby bolstering perceived integrity.

• Integrate mandatory AI-output verification and integrity checks into the research workflow and supervision processes to mitigate risks related to inaccuracy, plagiarism, and data fabrication.

• Form institutional AI ethics committees with clear Terms of Reference (TOR) to develop, review, and regularly update AI use policies, provide ethical oversight, and handle related grievances.

• Create equitable access plans to ensure AI tools and necessary computational resources are available across disciplines and to all researchers, preventing a digital divide that could exacerbate existing inequalities.

• Define and monitor key evaluation metrics for AI integration, such as user satisfaction scores, perceived integrity measures, and research output quality indicators, to iteratively improve support systems and policies.

• For researchers, it is recommended to: engage proactively with available training, apply critical scrutiny to AI-generated content, maintain transparent records of AI use, and participate in interdisciplinary dialogs to shape locally relevant best practices.

Data availability statement

The data supporting the findings of this study are available from the corresponding author upon reasonable request.

Author contributions

SA: Writing – original draft, Writing – review & editing. MA: Writing – original draft, Writing – review & editing.

Funding

The author(s) declared that financial support was not received for this work and/or its publication.

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that Generative AI was not used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Abbas, N., Ali, I., Manzoor, R., Hussain, T., and Hussain, M. (2023). Role of artificial intelligence tools in enhancing students' educational performance at higher levels. J. Artif. Intell. Mach. Learn. Neural Netw 35, 36–49. doi: 10.55529/jaimlnn.35.36.49

Ahmed, M. I., Spooner, B., Isherwood, J., Lane, M., Orrock, E., and Dennison, A. (2023). A systematic review of the barriers to the implementation of artificial intelligence in healthcare. Cureus, 15.

Aisyah, D., Yulianti, P. D., Yandhini, S., Putri Sari, A. D., Herawani, I., and Oktarini, I. (2024). The influence of AI on students’ mind patterns. BICC Proceedings, 2, 183–186. doi: 10.30983/bicc.v1i1.125

Ajwang, S., and Ikoha, A. (2024). Publish or perish in the era of artificial intelligence: which way for the Kenyan research community? Libr. Hi Tech News 41, 7–11. doi: 10.1108/LHTN-04-2024-0065

Al-Bukhrani, M., Alrefaee, Y., and Tawfik, M. (2025). Adoption of AI writing tools among academic researchers: a theory of reasoned action approach. PLoS One 20:e0313837. doi: 10.1371/journal.pone.0313837,

Al‐Rousan, A. H., Ayasrah, M. N., Salih Yahya, S. M., and Khasawneh, M. A. S. (2025). Design and psychometric evaluation of the artificial intelligence acceptance and usage in research creativity scale among faculty members: insights from the network analysis perspective. Eur J Educ, 60, e12927.

Amirjalili, F., Neysani, M., and Nikbakht, A. (2024). Exploring the boundaries of authorship: a comparative analysis of AI-generated text and human academic writing in English literature. Front. Educ. 9:421. doi: 10.3389/feduc.2024.1347421

Bhavsar, D., Duffy, L., Jo, H., Lokker, C., Haynes, R. B., Iorio, A., et al. (2024). Policies on artificial intelligence chatbots among academic publishers: a cross-sectional audit. medRxiv 10:148. doi: 10.1101/2024.06.19.24309148

Chen, Z., Chen, C., Yang, G., He, X., Chi, X., Zeng, Z., et al. (2024). Research integrity in the era of artificial intelligence: challenges and responses. Medicine 103:e38811. doi: 10.1097/MD.0000000000038811,

Dave, A., Saxena, A., and Jha, A. (2023). Understanding user comfort and expectations in AI-based systems. Res. Sq. 2023:320. doi: 10.21203/rs.3.rs-3135320/v1

Fecher, B., and Hebing, M. (2021). How do researchers approach societal impact? PLoS One 16:e0254006. doi: 10.1371/journal.pone.0254006,

Fontaine, S., Gargiulo, F., Dubois, M., and Tubaro, P. (2024). Epistemic integration and social segregation of AI in neuroscience. Appl. Netw. Sci. 9:618. doi: 10.1007/s41109-024-00618-2

Forgas, R., Koulouris, A., and Kouis, D. (2024). ‘AI-navigating’ or ‘AI-sinking’? An analysis of verbs in research articles titles suspicious of containing AI-generated/assisted content. Learn. Publ. 38:1647. doi: 10.1002/leap.1647

Granjeiro, J. (2025). The future of scientific writing: AI tools, benefits, and ethical implications. Braz. Dent. J. 36:471. doi: 10.1590/0103-644020256471,

Guerrero, A., López-Belmonte, J., Marín, J., and Costa, R. (2020). Scientific development of educational artificial intelligence in web of science. Future Internet 12:124. doi: 10.3390/fi12080124

Guleria, A., Krishan, K., Sharma, V., and Kanchan, T. (2023). ChatGPT: ethical concerns and challenges in academics and research. J. Infect. Dev. Ctries. 17, 1292–1299. doi: 10.3855/jidc.18738,

Hryciw, B., Seely, A., and Kyeremanteng, K. (2023). Guiding principles and proposed classification system for the responsible adoption of artificial intelligence in scientific writing in medicine. Front. Artif. Intell. 6:353. doi: 10.3389/frai.2023.1283353,

Jiang, P., Niu, W., Wang, Q., Yuan, R., and Chen, K. (2024). Understanding users’ acceptance of artificial intelligence applications: a literature review. Behav. Sci. 14:671. doi: 10.3390/bs14080671,

Kang, H. (2021). Sample size determination and power analysis using the G*power software. J. Educ. Eval. Health Prof. 18:17. doi: 10.3352/jeehp.2021.18.17,

Khlaif, Z. N., Mousa, A., Hattab, M. K., Itmazi, J., Hassan, A. A., Sanmugam, M., et al. (2023). The potential and concerns of using AI in scientific research: ChatGPT performance evaluation. JMIR Med. Educ. 9:e47049. doi: 10.2196/47049,

Kim, H. (2024). Investigating the effects of generative-AI responses on user experience after AI hallucination. In: Proceedings of the International Conference on Social Science and Humanities, pp. 92–101.

Kim, J., Kim, M., Kwak, D., and Lee, S. (2021). Home-tutoring services assisted with technology: investigating the role of artificial intelligence using a randomized field experiment. J. Mark. Res. 59, 79–96. doi: 10.1177/00222437211050351,

Kouam, A. (2024). AI in academic writing: ally or foe? Int. J. Res. Publ. 148:427. doi: 10.47119/IJRP1001481520246427

Lalot, F., and Bertram, A. K. (2025). When the bot walks the talk: investigating the foundations of trust in an artificial intelligence (AI) chatbot. J. Exp. Psychol. Gen. 154, 533–551. doi: 10.1037/xge0001696,

Leung, T. I., Cardoso, T., Mavragani, A., and Eysenbach, G. (2023). Best practices for using AI tools as an author, peer reviewer, or editor. J. Med. Internet Res. 25:e51584. doi: 10.2196/51584,

Liang, Y., Lee, S., and Workman, J. E. (2019). Implementation of artificial intelligence in fashion: are consumers ready? Cloth. Text. Res. J. 38, 3–18. doi: 10.1177/0887302X19873437

Liu, R., Mao, J., Li, G., and Cao, Y. (2024). Characterizing structure of cross-disciplinary impact of global disciplines: a perspective of the hierarchy of science. J. Data Inf. Sci. 9, 53–81. doi: 10.2478/jdis-2024-0008

Lukyanenko, R., Maass, W., and Storey, V. C. (2022). Trust in artificial intelligence: from a foundational trust framework to emerging research opportunities. Electron. Mark. 32, 1993–2020. doi: 10.1007/s12525-022-00605-4

Lund, B. D., and Naheem, K. T. (2023). Can ChatGPT be an author? A study of artificial intelligence authorship policies in top academic journals. Learn. Publ. 37, 13–21. doi: 10.1002/leap.1582,

Luo, T., Mohamed, A., and Yusof, N. A. (2024). Travel choices and perceived images influenced by AI interactive approaches of travel apps: an evidence from Chinese mobile travel users. SAGE Open 14:393. doi: 10.1177/21582440241290393

Májovský, M., Černý, M., Kasal, M., Komarc, M., and Netuka, D. (2023). Artificial intelligence can generate fraudulent but authentic-looking scientific medical articles: Pandora’s box has been opened. J. Med. Internet Res. 25:e46924. doi: 10.2196/46924,

Marmoah, S., Adika, D., Haryati, S., and Yurni, Y. (2024). Leveraging AI to optimize English academic writing (EAW) in intelligent decision support systems (IDSS). J. Ilm. Univ. Batanghari Jambi 24:1265. doi: 10.33087/jiubj.v24i2.5483

Nasra, M., Jaffri, R., Pavlin-Premrl, D., Kok, H. K., Khabaza, A., Barras, C., et al. (2024). Can artificial intelligence improve patient educational material readability? A systematic review and narrative synthesis. Intern. Med. J. 55, 20–34. doi: 10.1111/imj.16607,

Ng, J. Y., Maduranayagam, S., Suthakar, N., Li, A., Lokker, C., Iorio, A., et al. (2024). Attitudes and perceptions of medical researchers towards the use of artificial intelligence chatbots in the scientific process: a large-scale, international cross-sectional survey. medRxiv 7, 94–102. doi: 10.1101/2024.02.27.24303462

O’Sullivan, S., Nevejans, N., Allen, C., Blyth, A., Léonard, S., Pagallo, U., et al. (2019). Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery. Int. J. Med. Robot. Comput. Assist. Surg. 15:e1968. doi: 10.1002/rcs.1968,

Oyekunle, A. (2024). Trust beyond technology algorithms: a theoretical exploration of consumer trust and behavior in technological consumption and AI projects. J. Comput. Commun. 12, 102–123. doi: 10.4236/jcc.2024.126006

Park, S., Kim, H., Park, J., and Lee, Y. (2023). Designing and evaluating user experience of an AI-based defense system. IEEE Access 11, 122045–122056. doi: 10.1109/ACCESS.2023.3329257

Perkins, M., and Roe, J. (2024). Academic publisher guidelines on AI usage: a ChatGPT supported thematic analysis. F1000Res 12:1398. doi: 10.12688/f1000research.142411.2,

Pinzolits, R. (2023). AI in academia: an overview of selected tools and their areas of application. MAP Educ. Hum. 4, 37–50. doi: 10.53880/2744-2373.2023.4.37

Roy, R., Ashmika, R., Chakraborty, A., and Sharafat, I. (2024). “Future trends in AI and academic research writing” in AI-assisted specialized translation. ed. A. S. Kumar (IGI Global), 232–254.

Santos, J. M., Varela, C., Fischer, M., and Kerridge, S. (2024). Beyond the bench: The professional identity of research management and administration. High. Educ. Policy. 1–23.

Seeber, M., Vlegels, J., and Cattaneo, M. (2022). Conditions that do or do not disadvantage interdisciplinary research proposals in project evaluation. J. Assoc. Inf. Sci. Technol. 73, 1106–1126. doi: 10.1002/asi.24617

Sharma, A., Rao, P., Ahmed, M., and Chaturvedi, K. (2024). Artificial intelligence in scientific writing: opportunities and ethical considerations. Int. J. Res. Med. Sci. 13, 532–542. doi: 10.18203/2320-6012.ijrms20244167

Shinners, L., Aggar, C., Grace, S., and Smith, S. (2019). Exploring healthcare professionals’ understanding and experiences of artificial intelligence technology use in the delivery of healthcare: an integrative review. Health Informatics J. 26, 1225–1236. doi: 10.1177/1460458219874641,

Venkatesh, V. (2000). Determinants of perceived ease of use: integrating control, intrinsic motivation, and emotion into the technology acceptance model. Inf. Syst. Res. 11, 342–365. doi: 10.1287/isre.11.4.342.11872,

Wang, Y., Duan, J., Talia, S., and Zhu, H. (2023). A study of comfortability between interactive AI and human. arXiv 2023:14360. doi: 10.48550/arXiv.2302.14360

Wei, L. (2024). Explore the factors and influences of the frequency of use of artificial intelligence technology in entertainment software. Lecture Notes Educ. Psychol. Public Med. 42, 187–194. doi: 10.54254/2753-7048/42/20240787

Yin, Y., and Li, C. (2024). Application and innovation of artificial intelligence in economics and management courses in universities. J. Serv. Sci. Manag. 17, 345–353.

Zheng, B., and Wang, A. (2022). Business management reference on AI product marketing strategies. In Proceedings of the 2022 2nd international conference on business administration and data science. Paris, France: Atlantis Press, pp. 301–314.

Keywords: academic research, AI use, mediating user experience, moderating disciplinary context, perceived integrity

Citation: Abubakar S and Adan MY (2026) AI use in academic research: Examining the mediating effect of user experience and the moderating effect of disciplinary context on perceived integrity. Front. Educ. 10:1711539. doi: 10.3389/feduc.2025.1711539

Received: 23 September 2025; Revised: 14 December 2025; Accepted: 17 December 2025;
Published: 12 January 2026.

Edited by:

Patrick Ngulube, University of South Africa, South Africa

Reviewed by:

Diana Atuase, University of Cape Coast, Ghana
Metwaly Eldakar, Minia University, Egypt

Copyright © 2026 Abubakar and Adan. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Said Abubakar, said@mu.edu.so
