
ORIGINAL RESEARCH article

Front. Polit. Sci.

Sec. Politics of Technology

This article is part of the Research Topic: Human Rights and Artificial Intelligence

Polycentrism, not polemics? Squaring the circle of non-discrimination law, accuracy metrics and public/private interests when addressing AI bias

Provisionally accepted
Sue Anne Teo, Raoul Wallenberg Institute, Lund University, Lund, Sweden

The final, formatted version of the article will be published soon.

Lon Fuller famously argued that polycentric issues are not readily amenable to binary and adversarial forms of adjudication. When it comes to resource allocations involving various interested parties, binary, polemical forms of decision-making may fail to capture the polycentric nature of the dispute, namely the fact that an advantage conferred on one party invariably affects, often detrimentally, the interests of others in an interconnected web. This article applies Fuller's idea to artificial intelligence systems and examines how the human right to equality and non-discrimination takes on a polycentric form in AI-driven decision-making and recommendations. This is where bias needs to be managed, including through the specification of impacted groups, error types, and acceptable error rates disaggregated by group. For example, while the typical human rights response to non-discrimination claims involves the adversarial assertion of the rights of protected groups, this response is inadequate and does not go far enough in addressing polycentric interests, where groups are differentially impacted by the debiasing measures adopted when designing for 'fair AI'. Instead, the article frontloads the contention that a triangulation of polycentric interests has to be acknowledged, namely respecting the demands of the law, system accuracy, and the commercial or public interest pursued by the AI system. In connecting theory with practice, the article draws illustrative examples from the use of AI in migration and border management and in the detection of offensive and hate speech on online platforms to examine how these polycentric interests are weighed when addressing AI bias. It demonstrates that the problem of bias in AI can be managed, though not eliminated, through social policy choices and ex-ante tools such as human rights impact assessments, which assess the competing interests affected by algorithmic design and enable the statistical impacts of polycentrism to be accounted for. However, this has to be complemented with transparency and other backstop measures of accountability to close techno-legal gaps.
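To make concrete what "error types and acceptable error rates disaggregated by group" can look like in practice, the following is a minimal sketch of the kind of disaggregated error analysis an ex-ante assessment might rely on: computing false positive and false negative rates per protected group from a classifier's outputs, so that acceptable error rates can be weighed group by group. It is illustrative only and not drawn from the article; the function name, group labels, and data are hypothetical.

```python
# Hypothetical sketch: per-group false positive and false negative rates
# for a binary classifier (e.g., a hate speech detection model).
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, actual, predicted) with binary 0/1 labels."""
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "pos": 0, "neg": 0})
    for group, actual, predicted in records:
        c = counts[group]
        if actual == 1:
            c["pos"] += 1
            if predicted == 0:
                c["fn"] += 1  # missed positive (false negative)
        else:
            c["neg"] += 1
            if predicted == 1:
                c["fp"] += 1  # wrongly flagged negative (false positive)
    return {
        group: {
            "false_positive_rate": c["fp"] / c["neg"] if c["neg"] else None,
            "false_negative_rate": c["fn"] / c["pos"] if c["pos"] else None,
        }
        for group, c in counts.items()
    }

# Invented example data: (group, actual, predicted)
sample = [
    ("group_a", 1, 1), ("group_a", 0, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 0, 0), ("group_b", 0, 0), ("group_b", 1, 1),
]
print(error_rates_by_group(sample))
```

A gap between the per-group rates (here, group_a bears both higher false positive and false negative rates than group_b) is the statistical face of the polycentric trade-off the abstract describes: lowering one group's error rate typically shifts errors, or accuracy costs, onto others.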

Keywords: AI Bias, Human Rights, Polycentric, Non-discrimination law, Equality

Received: 11 Jun 2025; Accepted: 31 Oct 2025.

Copyright: © 2025 Teo. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Sue Anne Teo, sue_anne.teo@rwi.lu.se

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.