ORIGINAL RESEARCH article
Front. Artif. Intell.
Sec. AI in Business
Volume 8 - 2025 | doi: 10.3389/frai.2025.1611024
This article is part of the Research Topic: Integrating AI in Social Engineering: Impacts and Ethics across Business, Medicine, and Industry
Navigating Ethical Minefields: A Multi-Stakeholder Approach to Assessing Interconnected Risks in Generative AI Using Grey DEMATEL
Provisionally accepted
1 IBM (India), Bengaluru, India
2 International School of Management Excellence, Bangalore, Karnataka, India
3 Symbiosis Institute of Management Studies, Pune, Maharashtra, India
The rapid advancement of generative artificial intelligence (AI) technologies has introduced unprecedented capabilities in content creation and human-AI interaction while simultaneously raising significant ethical concerns. This study examined the complex landscape of ethical risks associated with generative AI (GAI) through a novel multi-stakeholder empirical analysis, applying the grey Decision-Making Trial and Evaluation Laboratory (grey DEMATEL) methodology to quantitatively analyse the causal relationships between risks and their relative influence on AI deployment outcomes. Through a comprehensive literature review and expert validation across three key stakeholder groups (AI developers, end users, and policymakers), we identified and analysed 14 critical ethical challenges across the input, training, and output modules, covering both traditional and emerging risks such as deepfakes, intellectual property rights, data transparency, and algorithmic bias. The study analysed the perspectives of these stakeholders to understand how ethical risks are perceived, prioritised, and interconnected in practice. Using Euclidean-distance analysis, we identified significant divergences in risk perception among stakeholders, particularly regarding adversarial prompts, data bias, and output bias. Our findings contribute to a balanced ethical-risk framework by categorising risks into four distinct zones: critical enablers, mild enablers, independent enablers, and critical dependents. This categorisation supports both technological advancement and responsible AI deployment. The study addresses current gaps in the academic literature by providing actionable recommendations for risk-mitigation strategies and policy development, while highlighting the need for collaborative approaches among stakeholders in the rapidly evolving field of GAI.
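To make the methodology concrete, below is a minimal Python sketch of a grey-DEMATEL pipeline of the kind the abstract describes. The five-level grey rating scale, the midpoint whitening step, the toy expert matrices, and the stakeholder prominence vectors are all illustrative assumptions, not the paper's actual data; published grey-DEMATEL studies often use a modified-CFCS conversion rather than midpoint whitening.

```python
import numpy as np

# Assumed five-level linguistic scale mapped to grey numbers [lower, upper];
# the paper's exact scale may differ.
GREY_SCALE = {0: (0.0, 0.1), 1: (0.1, 0.3), 2: (0.3, 0.5),
              3: (0.5, 0.7), 4: (0.7, 0.9)}

def grey_dematel(expert_ratings):
    """expert_ratings: list of n x n integer matrices, one per expert.
    Returns (prominence D+R, relation D-R) for each of the n risks."""
    n = expert_ratings[0].shape[0]
    # Average the grey lower/upper bounds across experts.
    low = np.mean([np.vectorize(lambda v: GREY_SCALE[v][0])(m)
                   for m in expert_ratings], axis=0)
    up = np.mean([np.vectorize(lambda v: GREY_SCALE[v][1])(m)
                  for m in expert_ratings], axis=0)
    # Whiten grey numbers to crisp values (simple midpoint whitening here,
    # an assumed simplification of the usual modified-CFCS step).
    Z = (low + up) / 2.0
    # Normalise by the largest row/column sum.
    N = Z / max(Z.sum(axis=1).max(), Z.sum(axis=0).max())
    # Total-relation matrix: T = N (I - N)^-1.
    T = N @ np.linalg.inv(np.eye(n) - N)
    D = T.sum(axis=1)  # influence each risk dispatches to the system
    R = T.sum(axis=0)  # influence each risk receives from the system
    return D + R, D - R

# Toy example: 3 risks rated by 2 experts (diagonal self-influence is 0).
e1 = np.array([[0, 3, 2], [1, 0, 4], [2, 1, 0]])
e2 = np.array([[0, 4, 1], [2, 0, 3], [1, 2, 0]])
prominence, relation = grey_dematel([e1, e2])
for i, (p, r) in enumerate(zip(prominence, relation)):
    role = "enabler (net cause)" if r > 0 else "dependent (net effect)"
    print(f"risk {i}: D+R = {p:.2f}, D-R = {r:+.2f} -> {role}")

# Stakeholder divergence as in the abstract: Euclidean distance between two
# groups' prominence vectors (hypothetical numbers for illustration only).
p_developers = np.array([2.1, 3.4, 1.8])
p_policymakers = np.array([1.6, 3.9, 2.5])
print("divergence:", np.linalg.norm(p_developers - p_policymakers))
```

The sign of D-R separates net causes (enablers) from net effects (dependents), while the magnitude of D+R indicates how central a risk is; together they underpin the four-zone categorisation (critical enablers, mild enablers, independent enablers, critical dependents) reported in the study.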
Keywords: generative AI, foundation models, ethical AI, AI risks, responsible AI
Received: 13 Apr 2025; Accepted: 09 Jun 2025.
Copyright: © 2025 Jonnala, Thomas and Mishra. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Sridhar Jonnala, IBM (India), Bengaluru, India
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.