
MINI REVIEW article

Front. Polit. Sci., 28 October 2025

Sec. Politics of Technology

Volume 7 - 2025 | https://doi.org/10.3389/fpos.2025.1570384

The inclusion and participation of actors involved in artificial intelligence governance applied to public administrative systems and procedures

  • 1Facultad de Filosofía y Letras, Instituto de Investigaciones Sociales, Universidad Autónoma de Nuevo León, Monterrey, Mexico
  • 2Autonomous University of Nuevo León, San Nicolás de los Garza, Mexico

The primary objective was to build a model that complements the provisions of the recent European Union AI Act, the National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework—which operationalizes the US President’s Executive Order 14110—and the first international standard for artificial intelligence management systems (ISO/IEC 42001:2023) of the International Organization for Standardization (ISO). This objective is a priority because Articles 28 and 57 of the recent European Union regulation set deadlines of August 2025 and August 2026 for designating an authority responsible for evaluations and testing before artificial intelligence (AI) systems are put into service. The analysis approaches these regulations and provisions from the perspective of AI governance (AIG); that is, it seeks to balance the empowerment of algorithms in public administration with citizens’ aspiration to empowerment. The method analytically reviews the stages and actions of an AI system in order to infer the design, actions, and responsibilities of the actors involved in the AIG process from end to end. The results yield a general AIG model for public administration that uses AI, covering the period before, during, and after the system’s complete life cycle. The conclusions call for a holistic vision that includes both social and technical infrastructure.

1 Introduction

The absolute sovereignty of States cannot be transferred to the sovereignty of artificial intelligence (AI). Therefore, public administrative systems (in the political sphere) and the autonomous guidance of AI in procedures (in the technological sphere) present significant challenges for AI governance (AIG). This article addresses the challenge of the proportionate and appropriate inclusion and participation of all stakeholders in AIG.

Specifying the stages of AI in which the various stakeholders should be involved is the main contribution of this text, as European (European Union, 2024b), North American, and international guidelines lack this precision.

2 Theoretical framework

Governance can be so adjective-driven that it runs the risk of strangling the noun. Theorists and scholars write about centralized or decentralized governance, organizational structure, administrative procedures, corporations, markets, networks, multi-level or transversal systems, global models, and a perilously long list of others. Choi and Park (2023) reviewed extensive literature and empirical practices in Korea, concluding with five archetypes of AIG in the public sector. Tan (2023) identified seven approaches to AI and noted overlapping dimensions and unique aspects in each. Scholars agree, albeit in a disjointed manner, on common elements and point to peculiar ones that still need to be systematized. The review of successful practices in the Netherlands, Sweden, the UK (Safarov, 2019), Canada, and Mexico (Aguirre, 2022) supports the AIG model presented here because it indicates who should participate, when, and how throughout the entire AI lifecycle. In addition, algorithmic impact assessments are included. The AIG model is based on these theoretical and practical lessons; it organizes and complements them by designing the assignment of roles and responsibilities within common ethical principles and by expanding the legal frameworks that protect them.

3 Methods and results

To establish the responsibilities and scope of each actor involved in AI, this text highlights the deficits of ISO/IEC 42001:2023 of the International Organization for Standardization (ISO), the National Institute of Standards and Technology’s AI Risk Management Framework (AI RMF 1.0) (National Institute of Standards and Technology (NIST), 2024), and the European Union’s AI Act (European Union, 2024a; Keber et al., 2024). Based on the identification of these inadequacies, a general model for the governance of public administration through AI was developed, encompassing actions before, during, and after the lifecycle of an AI system.

While ISO/IEC 42001 provides a comprehensive overview of the technical aspects, its main limitation is that it restricts the allocation of authority to AI development organizations, without considering user and citizen participation. For example, only organizations decide on the prevention or rejection of risk impacts (International Organization for Standardization (ISO), 2023, p. 8). Development organizations are solely responsible for overseeing assessments (International Organization for Standardization (ISO), 2023, p. 10 and Table A.1, p. 18). Although risks may materialize for individuals and societies, developers have exclusive authority to analyze and determine the level of risk and to prioritize its management throughout the AI system lifecycle (International Organization for Standardization (ISO), 2023, p. 9 and Table A.6, p. 18). In a nutshell, ISO/IEC 42001 grants a monopoly on authority and certification exclusively to corporate developers, ignoring affected individuals and groups by omitting opportunities for intervention and by failing to assign them responsibilities.

The NIST AI Risk Management Framework does not assign specific responsibilities to the actors involved in the risks. Its govern, map, measure, and manage functions (National Institute of Standards and Technology (NIST), 2024, pp. 13–46) suggest general actions without distinguishing between the actors involved and the instruments needed to carry them out, or specifying which actors should use which instruments. Nor does it indicate at what point in the AI lifecycle the suggested actions should be carried out.

The main loophole in the European Union AI Act is found in Article 27, which mandates assessments of high-risk impacts. These assessments exclude critical infrastructure (road traffic, water, gas, and electricity) and limit participation to those responsible for AI deployment. Where risks materialize, Article 27.1.f confines the measures to the internal governance of the deployers, who alone can establish grievance mechanisms. In other words, the AI Act establishes no mandatory mechanism for public assessment or for challenges brought by those affected, and thus does not guarantee that fundamental rights will not be disregarded. Similar gaps exist in Recital 125 and Article 43.2, under which high-risk assessments are carried out only through internal control, without the involvement of a notified body. Article 72 weakens post-market surveillance by relying on suppliers’ internal systems. Recital 25 exempts the pre-market phases of AI in scientific research and development from assessments, ethical standards, or professional standards.

The successive application of this method and its results are reflected in Table 1, titled “AI governance in the full AI life cycle,” which is divided into three sections: “Governance actions before operating the AI system,” “AI life cycle correlated with responsibilities and governance actions,” and “Governance actions after operating the AI system.”


Table 1. “AI governance in the full AI life cycle”—a general model for the governance of public administration using AI.
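The three-phase structure of the model can be made concrete with a toy sketch. Only the three phase names are taken from Table 1; the example actions, the named actors, and the phase-gate logic below are purely illustrative assumptions for exposition, not the actual contents of the table or of the model’s role assignments.

```python
from dataclasses import dataclass

@dataclass
class GovernanceAction:
    name: str
    responsible_actors: list   # who must participate (illustrative names only)
    approved: bool = False     # whether the required actors have signed off

@dataclass
class Phase:
    name: str
    actions: list

def all_approved(phase: Phase) -> bool:
    """Phase gate: every governance action must be approved before advancing."""
    return all(action.approved for action in phase.actions)

# The three sections of the model, with hypothetical example actions
before = Phase("Governance actions before operating the AI system",
               [GovernanceAction("algorithmic impact assessment",
                                 ["developers", "notified body", "citizen panel"])])
during = Phase("AI life cycle correlated with responsibilities and governance actions",
               [GovernanceAction("continuous monitoring",
                                 ["deployers", "oversight agency"])])
after = Phase("Governance actions after operating the AI system",
              [GovernanceAction("post-market surveillance and grievance review",
                                ["regulator", "affected groups"])])

pipeline = [before, during, after]

# Simulate sign-off of the pre-operation phase only
for action in before.actions:
    action.approved = True

assert all_approved(before)        # pre-operation gate cleared
assert not all_approved(during)    # operation phase still pending sign-off
```

The point of the sketch is the gate logic: under the model, an AI system advances to the next phase only when every governance action of the current phase has been approved by its designated actors, rather than by the developing organization alone.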

4 Conclusion and recommendations

The AIG model presents a holistic view of the entire social and technical infrastructure. We agree with the AINOW Institute (2023) on the strategy that all stakeholders “can require additional testing, inspections, disclosures, or modifications prior to approval [and throughout the lifecycle of an AI system]. A public version of all system documentation should be published in a database upon approval.” This overcomes the endogamous governance that can exist within an organization. Consequently, this model encompasses elements of Tan’s (2023) seven approaches and Choi and Park’s (2023) 25 variables. Essential elements should be protected by an AIG organization, agency, or tribunal composed of both government officials and citizens.

The AIG guidelines should not be limited to the number of interventions but rather include binding procedures for AI Impact Assessments. This should be included in a manual that allows for ISO/IEC 42001 certification.

The main contribution of the AIG model is to prevent the monopolization of AI and to point out the instances, mechanisms, and moments when final decisions should be made by humans.

Author contributions

JA-S: Writing – original draft.

Funding

The author declares that financial support was received for the research and/or publication of this article from the Universidad Autónoma de Nuevo León.

Conflict of interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that no Gen AI was used in the creation of this manuscript.


Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Aguirre, J. (2022). Especificando la responsabilidad algorítmica. Teknokultura 19, 265–275. doi: 10.5209/tekn.79692


AINOW Institute (2023). Zero trust AI governance. AI Now Institute and Accountable Tech. Available online at: https://www.ainowinstitute.org/publication/zero-trust-ai-governance (Accessed February 21, 2025).


Choi, H., and Park, M. (2023). To govern or be governed: an integrated framework for AI governance in the public sector. Sci. Public Policy 1–14. doi: 10.1093/scipol/scad045


European Union (2024a) Regulation (EU) 2024/1689 of the European Parliament and of the council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending regulations (EC) no 300/2008, (EU) no 167/2013, (EU) no 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (artificial intelligence act). Available online at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689 (Accessed February 21, 2025).


European Union (2024b) Responsibilities of member states. Available online at: https://artificialintelligenceact.eu/es/responsibilities-of-member-states/ (Accessed February 21, 2025).


European Union and United States, Trade and Technology Council (2024) Terminology and taxonomy for artificial intelligence second edition. Available online at: https://www.nist.gov/artificial-intelligence/technical-contributions-ai-governance (Accessed February 21, 2025).


International Organization for Standardization (ISO) (2023). ISO/IEC 42001:2023 Information technology — Artificial intelligence — Management system. Available online at: https://www.iso.org/obp/ui/es/#iso:std:iso-iec:42001:ed-1:v1:en (Accessed February 21, 2025).


Keber, T., Schwartmann, R., and Zenner, K. (2024). The EU AI act: a practice-oriented interpretation: an initial overview. Comput. Law Rev. Int. 25, 114–120. doi: 10.9785/cri-2024-250403


National Institute of Standards and Technology (NIST) (2024). NIST AI 600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile. Gaithersburg, MD: NIST. Available online at: https://doi.org/10.6028/NIST.AI.600-1 (Accessed February 21, 2025).


Safarov, I. (2019). Institutional dimensions of open government data implementation: evidence from the Netherlands, Sweden, and the UK. Public Perform. Manag. Rev. 42, 305–328. doi: 10.1080/15309576.2018.1438296


Tan, E. (2023). Designing an AI compatible open government data ecosystem for public governance. Inf. Polity 28, 541–557. doi: 10.3233/IP-220020


Keywords: AI stages, AI products, actors involved, AIG actions, AI governance

Citation: Aguirre-Sala JF (2025) The inclusion and participation of actors involved in artificial intelligence governance applied to public administrative systems and procedures. Front. Polit. Sci. 7:1570384. doi: 10.3389/fpos.2025.1570384

Received: 03 February 2025; Accepted: 14 August 2025;
Published: 28 October 2025.

Edited by:

Alberto Asquer, SOAS University of London, United Kingdom

Reviewed by:

John R. T. Bustard, Ulster University, United Kingdom

Copyright © 2025 Aguirre-Sala. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Jorge Francisco Aguirre-Sala, jorge.aguirres@uanl.mx
