
ORIGINAL RESEARCH article

Front. Polit. Sci., 18 November 2025

Sec. Politics of Technology

Volume 7 - 2025 | https://doi.org/10.3389/fpos.2025.1645160

This article is part of the Research Topic: Human Rights and Artificial Intelligence.

Polycentrism, not polemics? Squaring the circle of non-discrimination law, accuracy metrics and public/private interests when addressing AI bias

  • Raoul Wallenberg Institute of Human Rights and Humanitarian Law, Lund, Sweden

Lon Fuller famously argued that polycentric issues are not readily amenable to binary and adversarial forms of adjudication. When it comes to resource allocations involving various interested parties, binary polemical forms of decision making may fail to capture the polycentric nature of the dispute, namely the fact that an advantage conferred to one party invariably involves (detrimentally) affecting the interests of others in an interconnected web. This article applies Fuller’s idea to artificial intelligence systems and examines how the human right to equality and non-discrimination takes on a polycentric form in AI-driven decision making and recommendations. This is where bias needs to be managed, including through the specification of impacted groups, error types, and acceptable error rates disaggregated by group. For example, while the typical human rights response to non-discrimination claims involves the adversarial assertion of the rights of protected groups, this response is inadequate and does not go far enough in addressing polycentric interests, where groups are differentially impacted by debiasing measures when designing for ‘fair AI’. Instead, the article frontloads the contention that a triangulation of polycentric interests, namely respecting the demands of the law, system accuracy, and the commercial or public interest pursued by the AI system, has to be acknowledged. In connecting theory with practice, the article draws illustrative examples from the use of AI within migration and border management and within offensive and hate speech detection on online platforms to examine how these polycentric interests are considered when addressing AI bias. It demonstrates that the problem of bias in AI can be managed, though not eliminated, through social policy choices and ex-ante tools such as human rights impact assessments, which assess the contesting interests impacted by algorithmic design and enable the statistical impacts of polycentrism to be accounted for. However, this has to be complemented with transparency and other backstop measures of accountability to close techno-legal gaps.

1 Introduction

Lon Fuller famously argued that polycentric issues are not amenable to adversarial dispute resolution, such as legal adjudication, that results in binary outcomes for the parties. Fuller centered his argument around the (in)appropriateness of adjudicating over the use and allocation of resources, wherein binary polemical forms of decision-making fail to capture the polycentric nature of the problem, namely the fact that an advantage conferred to one party invariably involves (detrimentally) affecting the interests of others in an interconnected web. This article applies Fuller’s argument to the problem of bias in artificial intelligence (‘AI’) systems and examines its impact on the human right to equality and non-discrimination. While human rights in general are considered a particularly strong class of moral claims that find their primary expression and protection as legal claims through human rights law, the pervasive and dispersed use of AI technologies within society, including in consequential areas of public administration such as social welfare, healthcare and education, exposes the polycentric nature of interests impacted by its use (Kleinberg et al., 2016; Balayn and Gürses, 2021; Wachter et al., 2021; Zehlike et al., 2022; Mittelstadt et al., 2024). A polemical assertion of rights claims may cause inadvertent harm to the interests (of other groups) that are interconnected and mediated through AI systems, and may also, ironically, fail to advance the normative reasons behind the right to equality and non-discrimination of protected groups in the first place (Binns, 2018; Mittelstadt et al., 2024). Further, purported solutions forwarded to address non-discrimination concerns in relation to AI systems, such as human rights impact assessments, may create a techno-legal gap that can only be addressed by consciously designing complementary backstop accountability measures.

To start off, a polycentric problem is defined in this article as a problem that has multiple centers of gravity, representing varied centers of interest that potentially pull in different or opposite directions. The article takes as its point of departure the premise that human rights and freedoms are increasingly accessed, mediated and afforded through AI systems (Hildebrandt, 2015; Commissioner for Human Rights, Council of Europe, 2019; Bakiner, 2023). As AI increasingly comes to play a role in determining access to goods, services and opportunities, a distributive justice element comes into play. The article argues that human rights in the context of AI systems can be better protected and made accountable by acknowledging the polycentric nature of the interaction between human rights and AI rather than by addressing the impacts of AI upon human rights in a polemical manner. While the human rights framework is adept at balancing interests by weighing the necessity and proportionality of actions and impacts on human rights (Letsas, 2015; Verdirame, 2015; see however Greene, 2021), fast-paced technological changes such as artificial intelligence and its increasingly diffused use in society mean that this balancing process is increasingly being moved ‘upstream’, to the level of design, rather than left to traditional ‘downstream’ legal (and therein polemical) determination of rights violations, including in the court of law. The article traces the difficulties of moving from the polemical (downstream) to the polycentric upstream.

The article proceeds as follows. Part 2 traces the expanding scope of the applicability of the human rights framework on account of new harms and threats to human rights, noting the expanding forms of obligations, duty bearers and normative frames as a natural and inevitable trajectory of the ‘living instrument’ of human rights. However, this section argues that the move upstream, to the level of the design of AI systems, is a novel development entailing increasing responsibilities that human rights practitioners, policymakers and business entities, including potentially AI developers and deployers, have to meaningfully contend with. I argue that the change is one of kind, not merely of scope. Part 3 unpacks how the polycentric lens features in the well-known problem of bias within AI systems and how the rights to equality and non-discrimination can potentially be impacted. This part analyses the polycentric interests at play when addressing bias and designing fair AI systems, noting where problems can arise and how the polycentric lens is accommodated within the design of AI systems. It does so by analysing two case study examples of addressing AI bias, namely within border and migration management and within offensive and hate speech detection algorithms used by online platforms. These case studies were chosen as they illustrate the scaled and distributive effects of algorithmic choice making in two distinct contexts, public and private. This section problematises the polemical and adversarial approach of human rights accountability, which fails to acknowledge or accord with the polycentric concerns brought forth by contesting interests mediated through AI. Having outlined the contours of how polycentrism takes form, Part 4 addresses objections and considers how to operationalise polycentrism in practice. This includes the acknowledgment of social policy choices made in addressing bias and the deployment of tools such as human rights impact assessments, wherein the contesting interests of algorithmic design can be identified, assessed and mitigated, albeit imperfectly. It further highlights the limitations of the polycentric lens, noting that it does not seek to exhaust all human rights concerns raised by the design and deployment of AI systems. The final section concludes.

2 The expanding scope of human rights protection

Human rights law has been said to be a ‘living instrument’1 wherein changing circumstances, be they cultural, societal, technological or environmental, can be accommodated within a purposive interpretation of human rights (Schulz and Raman, 2020). This teleological lens has seen how human rights, such as the right to privacy, have been widened to encompass every aspect of private and family life, including that of prisons, the workplace and the family.2 Human rights law has also been widened to encompass new actors when such actors have the power and means to cause human rights harms. The UN Guiding Principles on Business and Human Rights, under which businesses have the duty to respect human rights and remedy violations, is a case in point.3 Human rights and the environment is also an emerging area, one that acknowledges the intertwined needs of humanity and the environment in which we belong. The rising tide of cases, from Europe4 and the Americas,5 reflects attempts to broaden the human rights language to encompass new pressing concerns with the moral force of (legal) human rights (Rodríguez-Garavito, 2021). Thus, new challenges faced by humanity can be claimed and encompassed within the human rights lens.

In this sense, the challenges introduced by technology, including artificial intelligence, are not per se a novel concern for human rights. Research and media coverage have highlighted how AI impacts upon the right to privacy, equality and non-discrimination, freedom of expression, freedom of thought and the right to an effective remedy, amongst others in a non-exhaustive list (Buolamwini and Gebru, 2018; McGregor et al., 2019; European Union Agency for Fundamental Rights, 2020; Su, 2022). As such, the human rights impacts of AI are not contested here. Instead, as human rights and freedoms are increasingly accessed, mediated and afforded through AI systems, the technological novelty of artificial intelligence has been associated with the problem of the scale of its impact (Heikkilä, 2022), the speed of uptake (Chow, 2023) and the relative invisibility of algorithmic mediation (Van Den Eede, 2011). These are not easily amenable to resolution through a ‘normative equivalency’ paradigm, wherein human rights offline need merely be transposed into an online equivalent (Dror-Shpoliansky and Shany, 2021). This approach, long favoured by the international human rights community in addressing challenges posed by technology to human rights, shows procedural and normative gaps (Dror-Shpoliansky and Shany, 2021). In addition, despite the popular conception of technology as neutral, Winner has argued that technology has politics (Winner, 1980) and therein encompasses, amongst others, the value systems of its designers. This allows certain value-laden goals to be achieved through purposive design, often at the implicit and unconscious exclusion of other perspectives and outcomes (Yeung, 2018; Brownsword, 2019). To further unpack the value-laden nature of technologies such as AI systems, the next section examines the problem of bias in AI systems from a polycentric lens, highlighting that the polemical lens of human rights law is unsuitable for countering the many-centered interests that measures taken to address bias tend to impact.

3 Polycentrism, not polemics

3.1 Definitions and methodology

In order to unpack the polycentrism of AI and how it relates to the human rights framework, we first need to define both artificial intelligence and what we mean by the term polycentrism. Despite its widespread use and deployment, artificial intelligence does not have a fixed definition. Initial attempts to define the term have converged around attempting to define intelligence, an endeavor that remains elusive even when it comes to elucidating the nature of human intelligence (Gardner, 1983; Goodlad, 2023). Instead, efforts have been directed at examining what such systems do. Thus, Russell and Norvig (2010) have defined artificial intelligence as agents that perceive their environment and respond with actions. The EU AI Act, the world’s first comprehensive AI regulation, which was adopted in August 2024, similarly identifies what these systems do. The Act defines AI as a ‘machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’.6 For the purposes of this paper, this functional definition is adequate and illuminative of the challenges I will subsequently unpack.

As for the term polycentric, the Cambridge dictionary defines it as an adjective meaning ‘having more than one center’ (McIntosh, 2013). The Merriam-Webster dictionary definition elaborates that this can be used to refer to multiple centers of control or development (The Merriam-Webster Dictionary, 2022). Returning to how Lon Fuller used the term, the unsuitability of polycentric disputes for adjudication is due to the fact that such disputes tend to ‘involve many affected parties and a somewhat fluid state of affairs’ (Fuller and Winston, 1978, p. 397) wherein ‘each crossing of strands is a distinct center for distributing tension.’ Fuller argued that:

(T)he more interacting centers there are, the more the likelihood that one of them will be affected by a change in circumstances, and, if the situation is polycentric, this change will communicate itself after a complex pattern to other centers (Fuller and Winston, 1978, p. 397).

The article applies Fuller’s polycentrism as a theoretical framework in which to address the question of AI bias and non-discrimination law. It does so by presenting a ‘triangulation’ thesis (see Figure 1), namely examining the requirements of the law, accuracy metrics and the (public or private) purpose of AI deployment. The article takes a conceptual legal-theoretical approach grounded in philosophy of law and science & technology studies (STS) and employs a case study methodology in illustrating the theoretical points. The case studies are not meant to engage in empirical data generation and will be used illustratively rather than as a systematic analysis.

Figure 1. The triangulation thesis: a triangle connecting the requirements of the law, accuracy metrics and the (public or private) purpose of AI deployment.

To start off, we can already at this stage raise a preliminary objection to the polycentric framing, namely the fact that polycentric problems are not new to the law. Human rights law is not merely polemical; it adjudicates upon and impacts the interests of the many. Fuller himself recognised that the adjudication of disputes before the courts itself bears different polycentric shades and is not merely ‘a question of distinguishing black from white’ (Fuller and Winston, 1978, p. 398).

On the one hand, the polycentric nature of human rights disputes is arguably taken into account within the human rights framework. In gauging whether or not a human rights violation has taken place, a court has to, in many cases, undertake a balancing exercise, examining the alleged breach on the one hand and the legality, necessity and proportionality of the action on the other. This acknowledges that many rights do not function in absolute terms but are built with measures to take conflicting interests into account. In this sense, the acknowledgement that a particular dispute can involve many different gravitational centers is not peripheral to human rights but lies at the very core of the operationalisation of human rights protection. Another way polycentrism is acknowledged within the human rights law framework is through the notion of the margin of appreciation conferred upon states as duty bearers. This recognises that states are in the best position not only to weigh conflicting interests, as mentioned, but also to concretely operationalise human rights protections, having taken those conflicting interests into account. A concrete illustration can be gleaned from public security concerns weighed against potential impacts on individual rights, such as the right to privacy. Thus, the deployment of AI-driven remote facial recognition systems has been critiqued as a form of mass surveillance posing a potentially severe impact on the right to privacy. Nonetheless, such a practice may be allowed in order to help authorities combat threats to national and public security.7 In turn, these actions must be legal, necessary and proportionate.

At the same time, the polycentrism afforded by AI systems goes beyond the existing frames of the margin of appreciation and the legality, necessity and proportionality considerations. First, while they may seem to be polycentric disputes, the examples of polycentrism ostensibly accommodated within human rights law actually take on polemical forms. The paradoxical observation can be explained thus: while examples such as the need to uphold public security versus the rights of individuals, such as privacy, appear to pull the levers of interest in different directions, the dispute in question is in effect one that pulls in only two directions. The interests being weighed in the examples given are, on the one hand, that of security, and on the other, that of respecting individual rights. A heavier weight attached to the prioritisation of security in effect negatively impacts individual rights. Were the dispute visually represented, it would resemble a weighing scale that tips in one direction rather than a web representing different centers of interest and tension. In other words, this is an example of the polemical weighing of benefits and burdens, and not a dispute consisting of many centers of gravity.8 A polycentric dispute brings about plural effects based upon a discrete action. While it is the case that a steady undermining of the right to privacy through increasing surveillance in the interest of security can have knock-on effects such as chilling the exercise of free speech, freedom of association and other rights (Penney, 2016; Stevens et al., 2023), these are cumulative effects and not per se polycentric ones.

The next section introduces the polycentric nature of bias in AI systems and examines how it impacts upon the right to equality and non-discrimination.

3.2 The polycentrism of AI bias and the impact upon the right to equality and non-discrimination

3.2.1 Introduction

The right to equality and non-discrimination is a paradigmatic and fundamental human right. While scholarship offers different normative motivations for non-discrimination law, the key reasons for protection have been traced to the equal status and worth of persons (Alexander, 1992), the need to protect and respect individual autonomy (Eidelson, 2013) and the affordance of conditions in which everyone is able to lead flourishing lives by removing group disadvantage (Khaitan, 2015). In turn, certain groups or protected characteristics come within the scope of equality and non-discrimination law, including, but not limited to: ‘sex, race, colour, language, religion, political or other opinion, national or social origin, association with a national minority, property, birth or other status’.9 The protected characteristics and groups within non-discrimination law are meant to address the historical and, in many cases, enduring injustices and inequality faced by certain groups in society. Thus, no one should be treated in less favourable terms, including when it comes to rights and access to goods, services and opportunities, on account of the prohibited grounds. The implicit aim of the law is not only to seek formal equality in terms of access to opportunities but to ensure substantive equality (Wachter, 2022; Weerts et al., 2023). Thus, non-discrimination law covers not only direct discrimination, where one is expressly discriminated against on the basis of the protected grounds, but also indirect discrimination, where actions disproportionately impact those from the protected groups even if no intention to discriminate against those groups was present (Barocas and Selbst, 2016).10 This distinction is also present in other jurisdictions, for example in US law, where these are, respectively, classed as disparate treatment and disparate impact. Non-discrimination claims involve the adversarial assertion of the rights of protected groups, where the disadvantaged party has to, at a minimum, demonstrate a prima facie claim of discrimination.

When it comes to the applicability and operationalisation of equality and non-discrimination within AI systems, the issues are typically examined through the lens of ‘fairness’, in order to address what is now known within the field as ‘AI bias.’ This is by now an established field of research within the AI community, both in the technical and AI ethics fields. In turn, the machine learning community has been actively addressing the issue of bias and fairness in algorithms, including a whole conference dedicated to that topic (ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT), n.d.). The focus on AI bias and fairness is thus an earnest attempt, by both industry and academia, to operationalise equality and non-discrimination principles within the AI system itself.

Scholarship has demonstrated that AI systems can have biased effects, notably impacting marginalised, minority and vulnerable population groups (Eubanks, 2017; Noble, 2018; Snow, 2018; Gerards and Zuiderveen Borgesius, 2022). Facial recognition systems have been demonstrated to have the highest levels of inaccuracy when it comes to the African American population, who are misidentified at a higher rate than other population groups (Buolamwini and Gebru, 2018; Grother et al., 2019). The COMPAS decision support algorithm used by judges in the US to determine the risk of recidivism has similarly been found to be biased, wherein the rate of false positives towards African Americans was found to be almost double that for white Americans (Angwin et al., 2016). Incidences of AI-driven bias have also been observed within online platforms such as social media, where minority populations have been mislabeled within images and misrepresented (Hern, 2018; Mac, 2021). Offensive speech detection algorithms have also been alleged to be biased against minority and marginalised groups (European Union Agency for Fundamental Rights, 2022; Díaz and Hecht-Felella, 2023; Luscombe, 2023; Elswah, 2024).

The persistence of these incidents has rightly led to calls for those designing and developing AI systems to tackle the biases present. The measures called for range from using more representative datasets that reflect the actual demographic composition, including of minority populations, to hiring more diverse teams within the organisation and paying attention to design choices, including model parameters and the use of proxy data that can have indirectly biased effects on protected groups within non-discrimination law (Balayn and Gürses, 2021). Some of these measures are also expressed as legal requirements through AI regulation, such as the EU AI Act.

3.2.2 Designing fair AI systems and addressing bias

Within human rights adjudication and judicial determination, a non-discrimination claim does not typically look like a polycentric problem. Adjudication in a non-discrimination case involves determining whether or not a subject was discriminated against within a particular domain or in accessing services, goods and opportunities, be it within hiring and recruitment, access to education, access to social services or other areas impacting upon welfare, self-development and life opportunities. As Teo (2022) has demonstrated, in order to be able to claim a human rights violation, one needs to be cognisant of the harm one has suffered, be relatively able to articulate the causal elements of the harm and be enabled to seek accountability measures.

However, scholarship demonstrates that the ‘black-box’ nature of machine learning AI systems not only complicates but systemically challenges the ability of a rights holder to seek accountability. It may not always be clear that an AI system was used for decision making or recommendation. Further, even if this is known, the claimant is unlikely to have a detailed understanding of the workings of the algorithmic model that led to a given harm. Knowledge of the use of AI is normally limited to its high-level goals, for example assisting in ascertaining recidivism risk, assessing suitability for fast medical treatment, and similar. The optimisation parameters of the algorithmic model are not usually made known, on account of trade secrets but also due to fears of abuse and gaming. In addition, it can be onerous for an individual to gain an understanding of how the data of others are used and correlated to infer a given decision or recommendation that led to discrete individual impacts (Weerts et al., 2023; Custers and Vrabec, 2024). Lastly, even if all these epistemic hurdles are cleared, the resulting harm may be a minor inconvenience or not serious enough to warrant the cost and effort of pursuing a human rights claim. On the other hand, some algorithmic harms impact society as a whole, affecting individuals only tangentially or indirectly (Smuha, 2021). More generally, these difficulties point towards the distributive affordances of AI systems. Choice making through the design of AI systems that applies to many people simultaneously does not bear the hallmarks of non-discrimination cases adjudicated before the courts. Designing AI systems to respect equality and non-discrimination by addressing possible biases goes beyond a mere conceptual transposal of the polemical mindset of human rights accountability. This is because AI decision making and recommendation systems apply on a one-to-many scale and, as mentioned, ‘distribute justice’ in this way. Thus, instead of focusing upon non-discrimination, the AI community has focused on the wider meaning of fairness: what does it mean for an AI system to be fair and how can it fairly distribute benefits and burdens? It is through the lens of fairness that the question of bias is addressed.

However, this can also mean that choices in design, including those taken ostensibly to address bias and ensure non-discriminatory outcomes, can have plural (what we term here polycentric) and therein potentially undesired effects. Huq agrees: ‘(c)hanges to a reward function or an interface, for example, are almost certain to have complex and plural effects. Efforts to reduce rates of false negatives, for example, are mathematically certain to change the rate (and the distribution) of false positives’ (Cuéllar and Huq, 2020, p. 11). Thus, while attempts to address bias within a dataset or to adjust the error rates of an algorithmic model to minimise biased effects seem intuitively correct, the polycentrism of AI design can mean that as these measures are taken, their plural effects can entail an overall less accurate model, thereby potentially affecting the interests of the majority or of larger groups.

Mittelstadt et al. (2024) argue that if an algorithmic model is deployed in highly consequential areas that affect human welfare and well-being, a more just and holistic response entails not only ensuring that minority and vulnerable groups are not discriminated against in terms of access but also ensuring that the majority of the population are not inadvertently denied or negatively impacted in their access to human rights, for example the rights to healthcare, education and access to other valuable social goods.

In turn, the sources of bias are not homogenous or of singular origin, but can be manifold. The label ‘bias’ belies the varied sources of bias, stemming from societal bias, individual prejudice, model design, the choice of parameters and target variables, biased or missing data and, once the model is deployed, the dynamic emergence of bias as the algorithmic model encounters new data (Mann and Matzner, 2019; Buyl and De Bie, 2024). In turn, while awareness of bias in AI is increasing, it can be onerous to think through and test all possible permutations and manifestations of bias. For example, Raji et al. (2020) revealed that transgender Uber drivers had difficulties logging on to the application as the facial recognition models worked poorly for them. This was despite a bias audit having been performed for the algorithm. Further, the inferences and classifications generated by algorithmic models can sometimes fall outside of legal protection or even the human understanding of social concepts accommodated within the law (Wachter, 2022), making accountability for unjust outcomes difficult.

The measures taken to address AI bias inadvertently involve policy and choice making that can demonstrate tensions, namely between the requirements of human rights law, the optimisation goals of the AI system, which may be driven by commercial interests, and the accuracy of the AI system. This ‘trilemma’ epitomises the polycentrism that Fuller articulated: the problem of AI bias and the measures to correct for it relate to many centers of interest, each exerting a synchronous gravitational pull towards respecting these diverse interests.

However, policymakers and regulatory and ethical standards requirements addressing the bias of AI systems rarely address either the polycentric nature of the interests and stakeholders that are potentially impacted (Bambauer and Zarsky, 2025) or the socially embedded causes of discrimination and inequality (Balayn and Gürses, 2021). The next section examines how the question of fairness has featured in algorithmic design and how it pertains to respecting the human right to equality and non-discrimination.

3.2.3 Fairness and non-discrimination law: an examination of implicit choices and aims

3.2.3.1 Setting the stage on the multiple ideas of fairness

To set the stage, we might start by considering the well-known example of contesting notions of fairness in the COMPAS case. As mentioned, COMPAS is an algorithmic system used by the judiciary in the United States to assist in determining the risk of recidivism within criminal justice. As such, the use of the algorithm bears strong consequential human rights impacts upon the defendant, as the right to a fair trial, the right to liberty and the right to an effective remedy could be detrimentally affected. The COMPAS algorithm recommends a risk score to the judge presiding over the case, wherein a high risk score indicates a potential to re-offend while a lower one indicates lower risk. The algorithm was studied by the civil society organisation ProPublica, which found that when the predicted outcomes were disaggregated by race, the model demonstrated a much higher false positive error rate towards Black Americans compared to white Americans (Angwin et al., 2016). This plainly meant that Black Americans were being unjustly recommended for sentencing or denied bail at much higher rates compared to others, a clear breach of non-discrimination principles. However, the (then) proprietary owner of the algorithmic model, Northpointe, argued that race did not even feature in the design of the model nor within the questionnaire that was filled in by defendants to feed into the COMPAS algorithm. Instead, it was argued that when considered from the lens of each risk bucket group, where higher numbers denote a higher risk of re-offending and lower numbers a lower risk, the error rates were almost uniform throughout the different risk buckets. Quite simply, this meant that the error percentage for those judged to have a higher risk of reoffending was similar to that for those judged to possess lower re-offending risks. In other words, the risk bucket error rates, as the implicit measure of fairness adopted as a goal, were deemed acceptable across the board, as no particular risk group had an unjustifiably high error rate, nor were the error rates inconsistent when compared across different risk-bucket groups. The error rate consistency between the various risk groups was acceptable enough for Northpointe to deploy the model commercially. What this example shows is that an algorithmic model can be assessed through different fairness lenses and, while acceptable from a commercial perspective (Northpointe’s view), may fall foul of human rights standards (ProPublica’s argument). In turn, attempts to reconcile these two positions may prove to be computationally and mathematically intractable, save for some very trivial situations (Kleinberg et al., 2016). This is popularly known as the impossibility theorem, wherein it is impossible to satisfy multiple notions of fairness at the same time when groups differ in their underlying rates of a certain outcome. This example shows that when designing for fairness, different ideas as to what is considered fair can play out (Narayanan, 2018), and an explicit social policy choice on the most appropriate notion of fairness needs to be made.
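To make the incompatibility concrete, the short sketch below uses purely illustrative numbers (not the actual COMPAS statistics) and a standard confusion-matrix identity: if two groups differ in their underlying base rates of the predicted outcome, a model with the same precision and recall for both groups must have different false positive rates. The function name and figures are hypothetical and serve only to illustrate the impossibility result discussed above.

```python
# Illustrative only: equal precision and recall across two groups with
# different base rates forces unequal false positive rates.

def false_positive_rate(base_rate, precision, recall):
    # From the confusion matrix: TP = recall * base_rate * N,
    # FP = TP * (1 - precision) / precision, negatives = (1 - base_rate) * N, so
    # FPR = base_rate / (1 - base_rate) * (1 - precision) / precision * recall
    return base_rate / (1 - base_rate) * (1 - precision) / precision * recall

for group, base_rate in [("group A", 0.5), ("group B", 0.3)]:
    fpr = false_positive_rate(base_rate, precision=0.7, recall=0.6)
    print(f"{group}: base rate {base_rate:.0%} -> false positive rate {fpr:.1%}")

# group A: base rate 50% -> false positive rate 25.7%
# group B: base rate 30% -> false positive rate 11.0%
```

Holding the score equally predictive across groups (Northpointe's implicit standard) thus mathematically entails unequal false positive rates (ProPublica's standard) whenever base rates differ.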

3.2.3.2 Addressing algorithmic bias in conformity with equality and non-discrimination law

Given that designing AI systems for fairness might not always align with the requirements and normative aims of non-discrimination law, this section examines how existing efforts have attempted to address non-discrimination by taking polycentric considerations into account. Wachter et al. (2021) note that developers and designers of AI systems have three choices open to them: inaction, taking action through the lens of formal equality, or taking measures to address substantive equality. This dovetails with Fuller’s contention that when faced with polycentric disputes, courts will either fail in their task to adjudicate, evade the polycentric aspect of the dispute or reformulate the question from a polycentric to a polemical one (Fuller and Winston, 1978, p. 401).

In assessing whether it is enough for developers and designers of AI systems to take no action and instead only optimise for overall system accuracy, Wachter further argues that non-discrimination law, through both formal and substantive equality, aims not only to ensure that persons are not systemically disadvantaged due to the protected groups to which they belong, but also to undo the structures and systems that lead to such disadvantages. Xenidis in turn argues that, given the overwhelming body of studies and the epistemic consensus within the community, bias is no longer an outlier problem within AI systems but is by now an intrinsic and computationally inescapable concern, in other words, akin to a default condition (Xenidis, 2022). Taking this into account, it can be argued that a presumption of bias can be applied to algorithmic systems. On this presumption, positive measures taken to address bias are not optional but essential and necessary, as the inescapability of bias within algorithmic systems is compounded by the increasing difficulty for discrete individuals to prove the existence of bias, let alone to prove non-discrimination harms. Where those designing AI systems do not undertake positive measures to address bias or are negligent in doing so, a presumption of responsibility can be applied to them (Xenidis, 2022). While such a presumption is not yet reflected in any general or sector-based AI legislation, we concur with the contention that AI bias remains a pertinent problem requiring technical, policy and legislative attention.11

However, unlike non-discrimination and equality claims, which are couched in polemical terms for legal accountability to take hold, bias and discrimination within machine learning AI systems do not bear this shade of clarity. Algorithmic measures taken using a polemical and single-axis lens will inevitably fall foul of the substantive demands of non-discrimination law (Weerts et al., 2023). In examining measures taken by the algorithmic community to address AI bias, Mittelstadt argues that the majority of such measures resemble ‘levelling down’, where ‘certain groups are needlessly made worse off for the sake of mathematical convenience’ (Mittelstadt et al., 2024, p. 7; see also Zietlow et al., 2022). Achieving equality of outcomes has also been empirically observed as a preferred means for data scientists to address statistical imbalances (Portela et al., 2024, sec. 7.2.3). Such measures prioritise formal equality over substantive equality. Thus, the notion of equality pursued here is seen through a polemical lens, namely how one equalises the outcomes of one group against another. In essence, it involves degrading the performance for advantaged group(s) ‘solely to reduce disparity in a given property (e.g., recall, decision rates) between groups’ (Mittelstadt et al., 2024, p. 9). This would fulfil the requirements attached to formal equality. However, in strictly equalising between groups by levelling down, it undermines the normative purposes of equality and non-discrimination law (see Section 3.2.1), which aim at substantive equality. Mittelstadt argues that:

Levelling down is a symptom of the decision to measure fairness solely in terms of equality, or disparity between groups in performance and outcomes, while ignoring other relevant features of distributive justice such as absolute welfare or priority, which are more difficult to quantify and directly measure (Mittelstadt et al., 2024, p. 5).

Two key issues pertain to formal equality measures taken to address AI bias. First, levelling down fails to account for the polycentric interests at play when addressing AI bias. Many data relationships (and therein interests) can be impacted at once. Thus, measures to align the algorithm with non-discrimination law, ensuring that protected groups are not detrimentally impacted by the algorithm, cannot be taken in isolation. We have demonstrated that the polemical approach is unsuitable for addressing the demands of equality and non-discrimination law; however, polycentric concerns are also pertinent. In effect, taking such measures can also impact upon the accuracy of the overall system (Barocas and Selbst, 2016). As an example, equalising the performance of a cancer screening algorithm between the worst performing groups, which may correspond to protected groups under equality law, and the majority group may mean that the algorithm becomes less accurate for the majority group. Mittelstadt notes that in such situations, it can be the case that ‘more cases of cancer will be missed for advantaged groups than would have otherwise been the case’ (Mittelstadt et al., 2024, p. 10). Thus, attempting to equalise performance across both groups as far as possible can negatively impact the right of access to healthcare for many belonging to the majority group. Beyond that, Zietlow et al. (2022) also demonstrated that levelling down within computer vision lowers the performance for all groups, not just the best performing groups. In effect, every group is made worse off (p. 10400). Thus, levelling down measures should not be used ‘where the accuracy of any group is a primary concern’ (Zietlow et al., 2022, p. 10400).
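A minimal numerical sketch, using entirely hypothetical case numbers and recall figures rather than any real screening programme, illustrates the levelling down dynamic described above: equalising recall across groups by degrading the better-served group increases the absolute number of missed cases without improving outcomes for the disadvantaged group.

```python
# Hypothetical figures only: levelling down recall to the worst-performing group.

groups = {
    # group name: (true cases per year, recall before equalisation)
    "advantaged group": (10_000, 0.95),
    "disadvantaged group": (2_000, 0.80),
}

target_recall = min(recall for _, recall in groups.values())  # level down to 80%

for name, (cases, recall) in groups.items():
    missed_before = cases * (1 - recall)
    missed_after = cases * (1 - target_recall)
    print(f"{name}: missed cases {missed_before:.0f} -> {missed_after:.0f} "
          f"(recall {recall:.0%} -> {target_recall:.0%})")

# advantaged group: missed cases 500 -> 2000 (recall 95% -> 80%)
# disadvantaged group: missed cases 400 -> 400 (recall 80% -> 80%)
```

Recall parity is achieved, but 1,500 additional cases are missed in the advantaged group and none fewer in the disadvantaged group, which is precisely the harm-blind form of equality that Mittelstadt et al. (2024) criticise.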

Secondly, formal equality measures such as levelling down fail to fulfil the formative aims of non-discrimination law, as they favour equality over harm reduction (Mittelstadt et al., 2024, p. 3) and can paradoxically cement further inequalities instead of dismantling them. Levelling down measures have also been criticised within the philosophical literature (Brown, 2003; Christiano and Braynen, 2008). Instead, the optimal measure to address bias and non-discrimination involves thinking through, from a polycentric lens, the ‘type of harm that should be equalised among groups’ (Mittelstadt et al., 2024, p. 9) as well as the normative or policy goals to be pursued (Weerts et al., 2023; Bambauer and Zarsky, 2025). Mittelstadt et al. (2024) have argued that levelling down in order to comply with formal equality is an inadequate measure and may even lead to further harm through a potential increase in misdiagnoses for the majority population. In other words, levelling down is unsuitable in situations where ‘performance is inherently valuable for patient health and welfare’ (p. 27). In turn, it can also potentially lead to externalities such as unravelling social solidarity and failing to undo systemic inequalities (Brake, 2004, p. 607).

In addition, there are other ways to pursue formal equality without adopting levelling down measures. A key case study demonstrating the inappropriateness of fairness as ‘comparative justice’, taken here to mean the equality of outcomes when compared across different years, is the Ofqual algorithm employed to predict A Level scores (Tieleman, 2025, p. 13). This was deployed as students in the UK could not sit exams due to Covid-related school closures. The algorithm prioritised historical test scores from each A Level exam centre and deprioritised the test score predictions provided by students’ teachers. The measure could be considered fair in that the predicted outcomes were comparable, measurable and replicable. However, student protests and widespread dissatisfaction arose over the predicted scores, as the algorithm was accused of perpetuating socio-economic inequalities in society. It has been called England’s ‘greatest policy failure of modern times’ (Kelly, 2021, p. 725). In effect, students from traditionally under-performing centres, many of which are situated in areas with lower socio-economic demographics, had predicted scores conforming to centre-based year-on-year patterns, even where individual students performed well. Such an approach pulls down the achievements of well-performing ‘outliers’ and repeats and cements inequalities between socio-economic groups. It also had an impact on future life opportunities, as university place allocations would have been informed by the algorithmically predicted grades. The public outcry caused the UK government to retract the grades. Tieleman argues that:

The question then should not be whether algorithms can be inherently fair, but what account of fairness is promoted when such systems are implemented in contexts that thrive on inequality (Tieleman, 2025, p. 15).
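The centre-based moderation effect described above can be sketched in deliberately simplified form. The snippet below is not Ofqual’s actual model; it is a hypothetical illustration of the general approach described above, in which a centre’s historical grade distribution constrains the grades available to its current students, so that a strong student at a historically weak centre is capped regardless of individual merit. All names and figures are invented.

```python
# Hypothetical illustration only (not the actual Ofqual implementation):
# a centre's historical grade distribution caps what current students can receive.
import numpy as np

def centre_moderated_grades(teacher_rank_order, historical_distribution, grades):
    """Assign grades so the centre's output matches its historical distribution.

    teacher_rank_order: student identifiers ordered best to worst by the teacher.
    historical_distribution: fraction of students historically awarded each grade.
    grades: grade labels ordered best to worst.
    """
    n = len(teacher_rank_order)
    counts = np.round(np.array(historical_distribution) * n).astype(int)
    counts[-1] += n - counts.sum()  # absorb rounding so the counts sum to n
    assigned, idx = {}, 0
    for grade, count in zip(grades, counts):
        for student in teacher_rank_order[idx: idx + count]:
            assigned[student] = grade
        idx += count
    return assigned

# A historically weak centre where only 10% of students were ever awarded an A:
# even if the teacher would have predicted As for the top three students, only
# one of these ten can receive one under centre-based moderation.
ranking = [f"student_{i}" for i in range(10)]
history = [0.0, 0.1, 0.3, 0.4, 0.2, 0.0, 0.0]  # shares for A*, A, B, C, D, E, U
print(centre_moderated_grades(ranking, history, ["A*", "A", "B", "C", "D", "E", "U"]))
```

The sketch makes the ‘comparative justice’ logic visible: the centre’s output is replicable year on year, while the individual outlier is structurally prevented from exceeding the centre’s history.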

If inaction and measures taken solely to formally comply with equality demands are insufficient, algorithmic design has to contend more readily with social policy choices in addition to complying with the requirements of human rights law. In other words, the focus should be on finding out ‘why a particular distribution of burdens and benefits is right in a given context, and ultimately, who should bear the costs of inequality’ (Weerts et al., 2023, p. 814). Even though the formative aim of non-discrimination is the ultimate dismantling of the structures and enablers of discrimination, the polycentric nature of the interests and relationships impacted through algorithmic systems makes this a novel problem at the intersection of law and technology.

Instead of pursuing formal equality, scholars have proposed levelling up when it comes to addressing AI bias. This entails taking a ‘harms-based’ perspective, namely actively reducing harms to a tolerable level across the groups of interest mediated by an algorithm (Mittelstadt et al., 2024). Others, such as Bambauer and Zarsky, argue that so long as a ‘conscientious and well-considered decision for prioritising values’ is made, such AI systems can be considered ‘fair enough’ (Bambauer and Zarsky, 2025, p. 12).

At the same time, while some attempts to address inequalities through social policy choices in algorithmic design may be straightforward, others can be less so. By the same token, this seems to be the best, albeit imperfect, means of achieving the normative aim of non-discrimination law, namely substantive equality. The next section looks into how.

3.2.4 Triangulating polycentric interests: accuracy, non-discrimination law and the underlying commercial or public interests

We have thus seen that while non-discrimination law should feature as a guiding principle in algorithmic design, the fact that AI is used on a one-to-many basis means that a mere transposition is neither ideal nor sufficient. What has been highlighted so far is that the design of fair algorithmic systems foregrounds three concerns. This can be visualised (see Figure 1) as a triangulation of polycentric interests between i. compliance with equality and non-discrimination law; ii. the commercial and public interests pursued through the algorithmic system; and iii. accuracy (and therein the corresponding error types and error rate specification).

Designing AI with the aim of falling in line with equality and non-discrimination law also entails an awareness of how existing inequalities animate social relationships and access to the social goods and services which are in turn mediated by the algorithmic system. These inequalities are reflected within the datasets used to train algorithmic models. Thus, it is insufficient merely to attend to the interests of, and therein the accuracy rates for, protected groups as demanded by law; one must also take note of how the AI system navigates the distribution of goods and services. This can entail engaging with the normative purposes not only of equality and non-discrimination law but also of the discrete areas of law impacted by the AI system in question, be it within human rights law, such as the freedom of expression, refugee and migration law or even competition law. For example, within the field of recruitment, it may arguably be less morally objectionable to increase the performance for the worst performing groups, even if this entails narrowing job opportunities for the best performing majority group, who may not face structural hurdles to the same degree as the worst performing groups, where the latter correspond to racial, ethnic or other protected groups within non-discrimination law. Such measures have been encouraged in certain areas, for instance by increasing the pool of minority groups being shown on dating apps (Appelman, 2023; Veiligheid, 2023). Zehlike et al. (2025) demonstrate that fairness incompatibilities are not intractable but require an express explication of the goals of the algorithmic model. In turn, other commercial or public interests may be pursued, such as maximising sales in the case of a private interest or increasing access to healthcare in the case of a public interest. These goals must not fall foul of the discrete branches of applicable law and must at the same time respect the demands of equality and non-discrimination law.

The third element of the triangle is the metric of accuracy. However, accuracy is not a standalone metric measured against an incontrovertible ground truth. It has to take into account the other two triangulating elements. Here we move from the relatively clearly defined terrain of non-discrimination law into the murkier terrain of what types of accuracy metrics should feature, including questions on what types of errors, and therein what error rates, are acceptable for different disaggregated groups. Balayn and Gürses argue that a system that is ‘“unbiased,” or “fair,” according to a single metric is a far cry from a system free of discrimination’ (Balayn and Gürses, 2021, p. 57).
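The point can be made concrete with a small, hand-constructed example (the labels and predictions below are synthetic and chosen purely for illustration): two groups can share an identical overall accuracy while the types of error, and thus who bears their cost, differ sharply once the results are disaggregated.

```python
# Synthetic illustration: identical overall accuracy, very different error types.
import numpy as np

def disaggregated_report(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "false_positive_rate": fp / (fp + tn),
        "false_negative_rate": fn / (fn + tp),
    }

# Group A's errors are missed positives (false negatives).
a_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
a_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
# Group B's errors are wrongly flagged negatives (false positives).
b_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
b_pred = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]

print("group A:", disaggregated_report(a_true, a_pred))
print("group B:", disaggregated_report(b_true, b_pred))
# Both groups show 80% accuracy, but group A has a 40% false negative rate
# while group B has a 40% false positive rate.
```

A single aggregate accuracy figure would report the two groups as equally well served, even though the burdens they carry are of entirely different kinds.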

Thus, using the example of the design and use of algorithms within the criminal justice sector, the question of the overriding normative purpose of criminal justice, and therein of the algorithm designed for this purpose, has to be asked and answered. Huq argues that normative judgements permeate algorithmic design:

The manner in which predictions are reported, the feasibility of verifying the basis for predictions, and the nature of any dynamic updating all depend on normative judgments as much as the choice of training data and reward function. Worse, technical judgments (say, about what reward function is used) can be entangled in complex ways with system design choices (say, the manner in which predictions are expressed in a user interface) (Cuéllar and Huq, 2020, p. 12).

Bakiner concurs:

The fit between the model and new data is calculated probabilistically. In other words, some degree of error is intrinsic to estimations and predictions. From potentially affected citizens’ point of view, a medical test should not produce too many false negatives (i.e., failure to diagnose an actual case of illness). Likewise, a crime prediction algorithm should not produce too many false positives (i.e., predict nonoffenders as likely criminals), but the operators of such a system (i.e., the law enforcement community) may not prioritize the perspective of the adversely affected. In other words, there are multiple definitions of error built into these systems that defy simple identification of the risk of harm coming from AI systems (Bakiner, 2023, p. 9).

Coming back to the COMPAS example, making a policy choice on AI fairness involves asking whether it is more important for the AI system to minimise false positives, meaning individuals determined to be at high risk of re-offending who would not in fact have gone on to reoffend and who are thereby unjustly detained or denied bail, or false negatives, meaning individuals assessed as low risk who do go on to reoffend. Studying the former can be complicated or negated by impossible or unavailable counterfactual scenarios: detention negates the condition antecedent of not reoffending (Bambauer and Zarsky, 2025). In other words, one is unable to examine a condition that is disabled at the outset. What this triangulation shows is that engagement with the normative purposes entails a social policy choice for which the law is only an imperfect guide (Gabriel, 2020), noting that the pursuit of statistical parity or of particular rates of false positives or negatives can have different sectoral, legal and societal costs (Zehlike et al., 2025). While these social policy choices are typically shaped through policy deliberations and decision making within the executive branch, the widespread use of AI systems across different domains builds these policy choices directly into algorithmic design from the outset. Instead of viewing fairness criteria (accuracy, false negative and false positive rates) as separate concerns, Zehlike et al. (2025) propose a ‘fair interpolation method’ which is able to interpolate and transport the desired outcome across a putative triangular fairness map that addresses these and other policy concerns and choices. This article does not take a position on the suitability of this method across different domains but concurs with the finding that designing for fairness in AI models requires the articulation of the normative aims of the algorithmic system and a justification of that (algorithmic) choice.
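Since the balance between the two error types is ultimately set by where the decision threshold is drawn, the short synthetic simulation below (invented score distributions, not real recidivism data) shows how moving that threshold trades false negatives against false positives; choosing among these operating points is the social policy choice discussed above.

```python
# Synthetic scores only: threshold choice trades false negatives against false positives.
import numpy as np

rng = np.random.default_rng(1)
scores_pos = rng.normal(0.65, 0.15, 1_000)  # individuals who would in fact reoffend
scores_neg = rng.normal(0.35, 0.15, 4_000)  # individuals who would not reoffend

for threshold in (0.3, 0.5, 0.7):
    fnr = np.mean(scores_pos < threshold)    # genuinely high-risk individuals missed
    fpr = np.mean(scores_neg >= threshold)   # low-risk individuals wrongly flagged
    print(f"threshold {threshold:.1f}: false negative rate {fnr:.1%}, "
          f"false positive rate {fpr:.1%}")
# A low threshold catches nearly every prospective reoffender but wrongly flags
# a large share of non-reoffenders; a high threshold does the reverse.
```

There is no threshold that drives both error rates to zero; any chosen operating point embeds a judgement about whose interests the system is designed to protect.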

The section below will examine polycentrism in algorithmic design that aims to triangulate the purpose of the AI system, non-discrimination and equality law demands and accuracy and error rates, through two examples of existing work that engage with human rights concerns, namely AI technologies deployed within border and migration management and hate and offensive speech detection within social media platforms.

3.2.4.1 Polycentrism in border and migration management and hate and offensive speech detection

Within the field of border and migration management, the increasing use of AI-driven tools such as biometric identification, emotion detection, algorithmic risk assessment and tools for migration monitoring and forecasting is by now well-documented (Dumbrava, 2021). Even though states have wide discretion in exercising their power to determine who may enter their borders, the exercise of this power is not unlimited and must comply with international obligations, including those found within refugee law and international human rights law. As an example of the polycentric interests pursued in border and migration management, the prioritisation of error types and error rates in facial recognition algorithms depends on the normative aims of border and migration management. Border and migration management at the EU’s external borders has been increasingly characterised by the logic of securitisation (Vavoula, 2021) and the strengthening of EU external borders (Reynolds, 2020). In turn, the logic of securitisation is increasingly shifting the balance from the need to secure borders towards the criminalisation of migration and mobility. AI technologies play a critical role in prediction, interdiction and decision support in migration management, including through controversial technologies such as emotion recognition (Sánchez-Monedero and Dencik, 2022).

Facial recognition algorithms, for example, are deployed to ascertain the probability of a match between an individual’s face and images in a database. As mentioned, the accuracy and therein the error rates, including the prioritisation of error types, need to be defined and adjusted when designing the system. To take one specific example, according to the Commission Implementing Decision (EU) laying down the specifications for the quality, resolution and use of fingerprints and facial images for biometric verification and identification in the EU Entry/Exit System (EES), the performance value for biometric accuracy is a maximum false positive identification rate of 0.1% (1 per 1,000) and a false negative identification rate of 1% (European Commission, 2019, sec. 1.2). What these percentages signify is that it is more important for the facial biometric system to miss a correct identification (expressed through the higher false negative identification error rate allowance) than it is to incorrectly identify someone (expressed through the lower error rate tolerance for false positive identifications). These error rate specifications engage human rights considerations, as the differentiated tolerable error rates say something about the balance struck between border and migration management and human rights. On one reading, these error rate specifications prima facie prioritise human rights, including the right to be presumed innocent until proven guilty, the right to privacy, non-discrimination and human dignity. However, such normative reasons are implicit within algorithmic design and, unless one probes into the numbers, the polycentric interests of securing borders, ensuring respect for fundamental rights generally and ensuring accuracy rates across different groups will remain buried. Another way to look at this example is that even though the 0.1% error rate tolerance seems low and justifiable, the deployment of this system in a border and migration management context that is expected to process in excess of 50 million third country national records would mean that thousands could be misidentified (Vavoula, 2021, p. 481). In other words, a seemingly low allowable error rate in one context can be considered disproportionate or unsuitable in another. The algorithmic design can potentially impact tens of thousands of people within this migration context. In turn, those falling within the false positive category may not know why they have been flagged, nor be able to meaningfully contest the finding, as AI systems used within securitised border and migration settings are typically secretive in order to prevent abuse and gaming. Another potential aggravating factor is that where false positives are generated in one system, this might reverberate across the interoperable systems operational within the border and migration management space, not only in terms of error rates but in relation to the specific groups encompassed within the choice of errors (Blasi Casagran, 2021). This is a disconcerting prospect against a backdrop where research has demonstrated how such systems work poorly on individuals from marginalised or minority groups (Report of the Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance, 2021), and where distinct border and migration management databases are increasingly driven towards interoperability.
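A back-of-the-envelope calculation, resting on the simplifying and admittedly unrealistic assumptions that each of the roughly 50 million records corresponds to about one identification attempt and that errors are uniformly and independently distributed, gives a sense of the absolute numbers these seemingly small percentages imply.

```python
# Purely illustrative arithmetic under the stated simplifying assumptions.
records = 50_000_000
false_positive_rate = 0.001  # maximum FPIR of 0.1% in the EES specification
false_negative_rate = 0.01   # FNIR of 1%

print(f"expected false positive identifications: {records * false_positive_rate:,.0f}")
print(f"expected false negative identifications: {records * false_negative_rate:,.0f}")
# expected false positive identifications: 50,000
# expected false negative identifications: 500,000
```

In practice the distribution of errors is neither uniform nor independent, which is precisely the point made in the next paragraph: the burden of these errors tends to fall disproportionately on particular groups.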

Against the backdrop of the increasing reliance on AI systems in one-to-many scenarios, such as recommendations in border and migration management, the specification of a 0.1% tolerable false positive error rate may translate into the disproportionate targeting of certain minority groups. Error rates are not equally distributed: they are informed by existing societal biases and prejudices that shape the 'ground truth' baseline data, or by the lack of representative data in training the system. Further, reliance on automated decision-making or recommendation systems can lead to automation bias, where one trusts the outputs of the system more than one's own judgement (Glickman and Sharot, 2024). This is especially true in high-stress and high-workload scenarios, such as border and migration management.
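The point that error rates are not equally distributed can also be illustrated numerically. The figures below are invented for illustration only; they show how a system can satisfy an aggregate 0.1% false positive ceiling while a smaller group bears a far higher within-group rate.

```python
# Illustration of how an aggregate error rate can mask unequal group-level
# rates. Group sizes and per-group error counts are hypothetical.

groups = {
    # group: (number of identification attempts, false positives)
    "majority_group": (9_500_000, 5_700),  # ~0.06% within group
    "minority_group": (500_000, 4_000),    # 0.8% within group
}

total_attempts = sum(n for n, _ in groups.values())
total_fp = sum(fp for _, fp in groups.values())
print(f"Aggregate FPR: {total_fp / total_attempts:.3%}")  # ~0.097%, within the ceiling

for name, (attempts, fp) in groups.items():
    print(f"{name}: FPR {fp / attempts:.3%}")
# The aggregate figure satisfies the 0.1% ceiling even though members of the
# minority group are wrongly matched at roughly 13 times the majority rate.
```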

The act of designing an AI system is never a neutral activity, as decisions on 'weighting probabilities, sensitivity and accuracy thresholds happen behind the scenes' (Dumbrava, 2021, p. 32). As it is computationally impossible to remove errors from such systems, decisions on accuracy, error types and error rates are infused with politics and deserve a more comprehensive assessment involving the triangulation of polycentric interests laid out above, rather than being relegated as technical questions to those designing such systems.
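To make the 'behind the scenes' nature of these choices more tangible, the following minimal sketch, using invented similarity scores, shows how moving a single decision threshold shifts the balance between false positives and false negatives; it is not a description of any deployed system.

```python
# Minimal sketch of how the choice of a decision threshold trades false
# positives against false negatives. The similarity scores below are invented
# for illustration; real systems are calibrated on large evaluation datasets.

genuine_scores = [0.91, 0.84, 0.79, 0.73, 0.66, 0.58]   # same-person pairs
impostor_scores = [0.62, 0.48, 0.41, 0.33, 0.27, 0.12]  # different-person pairs

def error_rates(threshold):
    false_negatives = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    false_positives = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return false_positives, false_negatives

for threshold in (0.5, 0.65, 0.8):
    fpr, fnr = error_rates(threshold)
    print(f"threshold={threshold:.2f}  FPR={fpr:.2f}  FNR={fnr:.2f}")
# Raising the threshold suppresses false positives (wrongful matches) at the
# cost of more false negatives (missed matches): the prioritisation that
# otherwise happens 'behind the scenes'.
```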

In relation to offensive and hate speech detection by online social media platforms, there is increasing reliance on automated and algorithmic systems to identify and remove content considered to breach the terms of service or community standards of the various platforms (European Union Agency for Fundamental Rights, 2022, p. 54). According to a report by the European Union Agency for Fundamental Rights (FRA), reliance on algorithmic systems for the removal of hate and offensive speech has steadily increased, with figures as high as 96% (Rosen, 2022; The Oversight Board, 2024). The algorithmic effort is complementary to human oversight: AI tools help to flag content, which can then be passed on to humans to decide upon the appropriate action to take (European Union Agency for Fundamental Rights, 2022, p. 54).

Online platforms are bound by law to address unlawful speech, hate speech and incitement to violence. Beyond those legal obligations, however, companies can determine the type of content they deem appropriate for their platforms, including by limiting so-called 'awful but lawful' speech. This is premised upon a business model of user engagement, including through positive user experiences. A high volume of spam content would have detrimental impacts upon engagement, and spam is removed to a large degree on all large online platforms. Allowed and disallowed content is, however, reflected differently in the terms of the contract – often the terms of service – of the various platforms. In turn, discretion is exercised over the types of content allowed on or removed from the platforms (Díaz and Hecht-Felella, 2021). At the same time, although the term discretion often invokes human judgement and choice, discretion in content moderation is largely driven by algorithmic determinations and is generally not primarily reliant upon human decision-making.12 There is thus a distinction between the legal obligations of the platform, which in turn vary across jurisdictions, and the removal of expression deemed to go against the platform's own terms of service. While the latter is neither illegal nor legally mandated, in practice it can be difficult to precisely determine the contours of illegal, problematic and unproblematic expression, especially at scale. Even where content does not amount to illegal speech, hate speech or incitement to violence, platforms can exercise their powers to remove expression that aims to bully or harass, inauthentic speech such as spam or impersonation, or forms of misinformation and disinformation.

Much like in the border and migration management setting, the practice of content moderation has been found to have disproportionate detrimental impacts upon minority populations, with expression and content from minority groups being removed at a higher frequency than that of others (Díaz and Hecht-Felella, 2023; Luscombe, 2023; Nicholas and Bhatia, 2023). This raises the issue of AI bias and therein equality and non-discrimination concerns. One measure to address this is to increase transparency around these removals and to use disaggregated data to unearth the disproportionate impacts on minority groups.

Hate and offensive speech detection on online platforms also displays polycentric interests pulling in opposing directions at the same time, and this tension is similarly taken into account in algorithmic design. While similar issues pertain to audio and visual content, this analysis focuses on text-based expression. Hateful expression is encountered through words. Yet, in examining algorithmic moderation in practice, the FRA report revealed that words are brittle indicators of the problematic nature of expression, especially when applied in different contextual environments. It was found that inserting the word 'love' into a problematic phrase – the example 'Kill all Europeans' was given – 'reduces the likelihood of content being rated as offensive' and correspondingly tricks the algorithms meant to detect hate speech (European Union Agency for Fundamental Rights, 2022, p. 66). In another context, the Myanmar civil society group Myanmar Witness found that violent and misogynistic posts targeting minorities remained live on social media platforms for many weeks despite such content clearly breaching the platforms' terms of service. Amongst other reasons, such posts escaped detection because coded slang, such as the derogatory use of the word 'wife' for those showing support for the minority Muslim population, was added to the posts (Crystal, 2023). Such 'hacks' meant that hateful speech targeting the oppressed escaped detection. Inaction against such speech is extremely disconcerting when situated against the backdrop of rising hostilities in conflict or post-conflict situations. In the case of Myanmar, inflammatory speech on social media led to allegations that the platform facilitated genocide in the country (Mozur, 2018).
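The brittleness described by the FRA can be reproduced as a simple perturbation test. The sketch below uses a deliberately crude keyword-based stand-in for a moderation model (not any real classifier); the point is the testing pattern of scoring the same sentence with and without an innocuous word inserted.

```python
# Toy illustration of the perturbation test described by the FRA report: the
# same sentence is scored with and without an innocuous word inserted. The
# scoring function is a crude keyword stand-in, not a real moderation model.

def toy_offensiveness_score(text: str) -> float:
    """Crude stand-in: counts 'violent' keywords and discounts 'positive' ones."""
    violent = {"kill", "exterminate", "destroy"}
    positive = {"love", "peace", "hope"}
    words = text.lower().split()
    score = sum(w in violent for w in words) - 0.5 * sum(w in positive for w in words)
    return max(score, 0.0)

original = "Kill all Europeans"
perturbed = "Love Kill all Europeans"

for sentence in (original, perturbed):
    flagged = toy_offensiveness_score(sentence) >= 1.0
    print(f"{sentence!r}: flagged={flagged}")
# The perturbed sentence slips under the flagging threshold even though its
# hateful content is unchanged, mirroring the brittleness found by the FRA.
```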

In another instance, the 7 October 2023 attack in Israel and the subsequent retaliation have ignited controversies over how Facebook handles content moderation. A Human Rights Watch report demonstrated that the practice unfairly impacted Palestinians in particular, as even newsworthy content about the conflict, especially where Hamas was mentioned, was removed by the platform. In this way, algorithmic speech detection and removal, without the requisite human oversight that enables contextualisation of the expression in question, is brittle: it both over-removes legitimate expression and is easily circumvented (Human Rights Watch, 2023; European Union Agency for Fundamental Rights, 2022, p. 66).

In addition, one can only test for, and therein design against, a limited set of variations and permutations of words and expressions. For example, one can test and design guardrails against pre-existing problematic terminologies targeting minority groups, but this is limited to what is known or reasonably foreseeable. Even the best intentions can be insufficient. As observed in the FRA report, even where models are designed with cultural sensitivity and the diverse participation of stakeholders, an algorithmic model can at best detect a subset of biased expressions, leaving hate and offensive speech to fester in online contexts.

The FRA report offers a concrete example:

The wrong choice of template word (e.g. in German, including ‘Asylant’ (‘asylum seeker’) or ‘Flüchtling’ (‘refugee’) in the identity terms) could miss some possible source of bias in a model. Furthermore, the language models are trained on such huge bodies of text that they may include some correlations that fall outside our preconceived notions of grounds for prejudice (European Union Agency for Fundamental Rights, 2022, p. 68).

This finding is unsurprising, as human speech and expression is dynamic and contextual, and words can bear different meanings over time according to changing social and political contexts. Problematic expressions might even be empowering, depending on who is doing the talking (or, in this case, writing), when and how. Problematic language can also be (re)appropriated as a means to counter narratives and to counter power. In essence, the political (mis)use of words and terminologies cannot be adequately captured by data as a syntactic means of informational transfer, as meaning lies in its semantic and contextually situated use (Purtova, 2018). Expression can also be coded. This is a form of 'malign creativity' employed to ensure that only 'in-group' participants can understand and therein decode the expression (Jankovicz et al., 2021). This renders it difficult to detect legally problematic expression, or indeed to prevent harms towards minority groups. At the same time, it is important to get the balance right, as over-removal impacts upon the human right to freedom of expression.
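The FRA's point about template words can likewise be illustrated as a small audit script. The sketch below is hypothetical: the templates, identity terms and the toy keyword 'classifier' (which deliberately omits 'Asylant' from its known target list) are invented to show how the wrong choice of identity terms leaves a source of bias invisible.

```python
# Sketch of a template-based bias probe in the spirit of the FRA experiment:
# the same hostile templates are instantiated with different identity terms
# and the flag rate is compared per term. The 'classifier' is a toy keyword
# check whose blind spot ('Asylant') is built in for illustration.

TEMPLATES = [
    "I hate every {term}",
    "All {term} should be deported",
    "No {term} deserves respect",
]
IDENTITY_TERMS = ["refugee", "migrant", "Asylant"]  # illustrative, not exhaustive

KNOWN_TARGET_TERMS = {"refugee", "migrant"}  # the toy classifier never learned 'Asylant'

def toy_flagged(sentence: str) -> bool:
    lowered = sentence.lower()
    hostile = any(cue in lowered for cue in ("hate", "deported", "deserves respect"))
    targets_known_group = any(term in lowered for term in KNOWN_TARGET_TERMS)
    return hostile and targets_known_group

for term in IDENTITY_TERMS:
    flags = [toy_flagged(template.format(term=term)) for template in TEMPLATES]
    print(f"{term:<8} flag rate: {sum(flags) / len(flags):.0%}")
# A 0% flag rate for 'Asylant' would stay invisible unless that identity term
# had been included in the audit in the first place.
```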

Bringing the triangulation back into the picture – respect for non-discrimination law on the one hand, accuracy rates, and the underlying purpose of the private entity (in this case, a business model reliant on attention and data) – the act of singling out particular words as problematic for content moderation, and therein for detecting hate and offensive speech, seems to strike an acceptable balance. It satisfies the need to ensure that bias is addressed by removing the worst excesses of expression directed at protected and minority groups. However, it also foregrounds technical limitations: content moderation is a socio-technical system where technology meets the indeterminacies of human expression and the people making, or being subjected to, those expressions. It is thus not merely about technical measures to detect and remove expression. As it is computationally impossible to remove all errors, the need to balance the interests of protected and minority groups, the human right to free speech, and the business-driven interest in ensuring a civil online environment can determine the error types and error rates of the detection algorithm. There has been, in other words, choice-making behind the algorithmically mediated and moderated content we encounter. The FRA report captures this balancing process: 'users of the algorithm need to ask themselves to what extent people with protected characteristics may be put at a disadvantage, for example through flagging too many or too few pieces of text as offensive, compared with other groups' (European Union Agency for Fundamental Rights, 2022, p. 72).
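The disaggregated check that the FRA quote calls for can be sketched in a few lines. The records below are invented; the sketch simply compares, per group, how often benign posts are wrongly flagged and how often hateful posts are missed.

```python
# Minimal sketch of the disaggregated comparison the FRA report calls for:
# per group, how often benign posts are wrongly flagged (false positives) and
# how often hateful posts are missed (false negatives). Records are invented.

records = [
    # (group, ground_truth_offensive, flagged_by_model)
    ("group_a", False, False), ("group_a", False, True),  ("group_a", True, True),
    ("group_a", True, True),   ("group_b", False, False), ("group_b", False, False),
    ("group_b", True, True),   ("group_b", True, False),
]

def group_rates(group):
    rows = [r for r in records if r[0] == group]
    benign = [r for r in rows if not r[1]]
    hateful = [r for r in rows if r[1]]
    fpr = sum(r[2] for r in benign) / len(benign)        # benign posts wrongly removed
    fnr = sum(not r[2] for r in hateful) / len(hateful)  # hateful posts left up
    return fpr, fnr

for g in ("group_a", "group_b"):
    fpr, fnr = group_rates(g)
    print(f"{g}: false flag rate={fpr:.0%}, missed hate rate={fnr:.0%}")
# In this toy data, half of group_a's benign posts are removed while half of
# the hateful posts concerning group_b go undetected: two different ways the
# same model can disadvantage a group.
```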

The finding that minority groups continue to suffer the brunt of unjustified removals of expression and content online means that a balance cannot easily be struck in all respects, and in equal measure, across the triangulation of polycentric interests. While human rights law is familiar with balancing competing interests, this is, as mentioned, usually carried out ex post, through a judicial proceeding or similar. Resorting to human-only content moderation might be a tempting proposition but looks increasingly unrealistic where content is generated at speed and scale, a problem exacerbated by the potential flood of content enabled through large language models such as ChatGPT. It can take altogether too long for humans to identify, certify and moderate all potentially problematic content. In turn, a simplified call for platforms to take action to respect the normative purposes of non-discrimination law also fails to capture the polycentric interests that navigate this space (Bambauer and Zarsky, 2025). At the same time, the example of reducing bias in hate and offensive speech detection reveals the triangulation model proposed here to be insufficient on its own. At a minimum, two additional elements further expand the polycentric interests: resource constraints and the unpredictability of language and its contextualised use. Platforms have begun turning to large language models and other forms of generative AI in order to better detect, contextualise and capture illegal and other forms of problematic speech. While said to hold potential, it remains to be seen how this development plays out in practice (The Oversight Board, 2024).

The necessity of this exercise does not mean that the human rights movement should throw in the towel. Online platforms have been described as 'networked publics' (Boyd, 2010; Tufekci, 2017), and their role in guarding and enabling freedom of expression has elevated platforms into a state-like function (De Gregorio, 2021). The striking of an appropriate balance between the polycentric interests should therefore be accompanied by other backstop measures of accountability. At the same time, the very technology (large language models) that can exacerbate these challenges may also offer a possible technological solution. We explore a snapshot of potential solutions in section 4.

3.2.4.2 Polycentrism of AI bias and measures of human rights accountability

By way of summary, the article has demonstrated that while human rights accountability can involve balancing the interests of one group against another, the practical functioning of AI systems balances various algorithmically mediated interests in a different manner. Three distinctions are relevant here.

First, the algorithmic model is optimised for an objective function – essentially, a desired output that is the goal of the algorithmic system. These outputs may or may not impact upon human rights. As the COMPAS example shows, some outcomes may ostensibly be entirely in line with human rights standards or even serve the cause of human rights, such as the right to a fair trial. However, it might not be obvious that the interests of protected groups encompassed within non-discrimination law are being impacted. Indeed, bias within AI systems is a deviation from a certain value, skewed towards another value. This is not per se legally problematic. Equality and non-discrimination law is only engaged where the skewed value impacts upon protected groups or characteristics within the law without legal justification (Engelfriet, 2025).

However, an implicit polemical perspective informs non-discrimination law, as it is based on a determination of what equal treatment looks like by comparing how two groups – the disadvantaged group and the comparator, or privileged, group – are treated, using single-axis thinking. This differs from how, as this article has shown, algorithms mediate between many interests at once and at a far larger scale. Instead, the human rights take-away from the COMPAS and other examples is that the implicit idea of fairness encompassed within human rights law should be a priority concern. Thus, in order to comply with human rights law, the interests of protected groups, or of persons with certain protected characteristics, should feature highly in algorithmic design, or even be directly prioritised. This is not disputed here. Algorithmic systems that directly or indirectly affect protected groups in a disproportionate manner should not pass muster. This article highlights the trilemma involved where protected groups are not obviously disproportionately impacted but where a balance still needs to be struck between the demands of equality and non-discrimination law, accuracy rates (including error prioritisations) and the overall public or private interests pursued. Measures that can be taken to ensure a fair balancing process include having diverse teams in place, holding multistakeholder engagements (including with human rights lawyers) and technical design focusing on fairness. As not all forms of algorithmic bias rise to the level of discrimination, technical measures concentrated on data representation and correctness alone are insufficient (Engelfriet, 2025). An ex-ante human rights impact assessment can help to unearth the potential impacts of the AI system on protected groups.

Second, even if the fairness notion of non-discrimination law is taken into account in algorithmic design, this measure alone will not ensure that the letter of the law is translated into practice. To be fair, protection deficits are rampant even in the implementation and enforcement of human rights law generally. However, the implementation deficits arising from AI systems pertain not to cost, unwillingness to comply or ill-will, although these elements can also feature, but to the sheer computational impossibility of error-free enforcement. As computationally based systems, algorithmic design is premised on acceptable error rates, not error elimination. When designing AI systems, there is no mathematical means to perfectly balance the interests of the many against the interests of protected groups not to be subjected to discriminatory or biased outcomes in every single instance. Quite apart from intra-group complexities (e.g., intersectionality, data incompleteness, false data), the polycentrism of AI design and mediation entails acknowledging an acceptable compromise between respect for human rights, the accuracy of the algorithmic system itself and the (public/private) end goals of the AI system. In other words, the inevitability of errors within computational AI systems is not an anomaly but a feature of the system. This also means that the potential harms of inequality and bias inhere within the system design itself (Teo, 2022). Additionally, bias can emerge or change over time as an AI system encounters new data. Balayn and Gürses argue that systems can:

(a)ppear unbiased in development but be revealed as biased when deployed on the new data inputted to the system in deployment. Yet, there exists no principled method to deal with such biases arising. Such biases are due to differences in data distributions between development and deployment time (data shifts), that can arise for multiple reasons (Balayn and Gürses, 2021, p. 68).
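One practical, if partial, response to the problem the quote identifies is to monitor for such shifts after deployment. The sketch below is a hedged illustration, with invented baseline figures and an arbitrary alert margin, of recomputing group-wise error rates on fresh deployment data and comparing them against the development baseline.

```python
# Hedged sketch of post-deployment monitoring for the data shifts described
# in the quoted passage: periodically recompute group-wise error rates on
# fresh deployment data and compare them with the development baseline.
# All numbers are illustrative.

DEV_BASELINE = {"group_a": 0.04, "group_b": 0.05}  # error rates at development time
ALERT_MARGIN = 0.05                                # tolerated absolute increase

def check_drift(deployment_rates: dict) -> list:
    """Return the groups whose deployment-time error rate exceeds the
    development baseline by more than the tolerated margin."""
    return [
        group
        for group, rate in deployment_rates.items()
        if rate - DEV_BASELINE.get(group, 0.0) > ALERT_MARGIN
    ]

# Example: the new data distribution degrades performance for group_b only.
observed = {"group_a": 0.05, "group_b": 0.14}
print("Drift alerts:", check_drift(observed))  # -> ['group_b']
```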

Third, in complying with equality and non-discrimination requirements, designers of AI systems have to take measures to ensure that the impacts on certain (protected) groups are prioritised. However, in scenario-specific examples it might not be clear what legitimate measures should be undertaken. Where the ground-truth status quo is unequal, for example where applications from women are overwhelmingly represented for certain professions, does complying with equality and non-discrimination law require the explicit adjustment of weights to favour male applicants? What legitimate justifications need to be offered to motivate such algorithmic adjustments? The example of the Amazon hiring algorithm is instructive here. When the system was tested internally to facilitate technical hires within the company, the algorithm systematically favoured male candidates and downranked female ones (Dastin, 2018). The anomaly was discovered and the system was never deployed in a real-world setting. Had it been deployed, this would have been a situation of indirect discrimination. This example, however, aligns closely with intuitive understanding and existing scholarship demonstrating that male candidates are, at a minimum, unconsciously favoured for technical roles and have historically been more successful in attaining those positions. The entrenched injustice here was rightly addressed by not deploying the algorithm. Yet the algorithm was not flawed in a technical sense: it successfully learned from past data and predicted future scenarios based upon it. Not all examples of bias exhibit such clarity. The increased use of algorithms in all areas of life consequential to our freedoms and rights means leaving political forms of decision-making to the (hopefully) best efforts of AI developers. Some corrections, as the Amazon example shows, may be more easily justified through personal intuitions and research; others may not be so clear.

Wachter and colleagues argued that, at the very least, algorithmic design can assist in bringing forth and shedding light on forms of unfairness and bias that have thus far remained latent (Wachter et al., 2021). That discovery is a first step towards opening up a conversation on how to address fairness in design, who should be consulted, and the effects of biased algorithms on impacted individuals and society generally (Sunstein, 2019). However, awareness is one thing and the operationalisation of fairness another. To this, Binns quite rightly asks: 'which variables are legitimate grounds for differential treatment, and why? Are all instances of disparity between groups objectionable' (Binns, 2018, p. 2)? This leaves open the questions of who should make such a decision and where the legitimacy for that decision-making lies. Even when ostensibly laudable goals are pursued, correcting for an apparent imbalance is itself a social policy choice.

Further complications can also arise. In complying with data protection law, data pertaining to ethnicity, political beliefs, gender or other protected characteristics may not be gathered or processed except in specific circumstances. In effect, even when attempting to correct for biases, designing AI systems may entail the use of proxy data which can indirectly correlate with those very protected grounds.

While some proxy data can be more easily correlated with protected grounds, for example postal codes, which can indirectly reveal social class or ethnicity, other proxies are less obvious and arguably discoverable only through the detection of inferences and patterns by the AI model itself. The EU's AI Act more readily reckons with the difficulties of designing fair AI systems and expressly allows for the use of special categories of personal data in order to detect and correct for bias.13 However, this possibility applies only to high-risk AI systems – such as those within border and migration management, critical infrastructure and others14 – under the risk-based categorisation adopted in the AI Act. This leaves out non-high-risk AI systems, such as the algorithms used to curate and moderate content in the case study above. These systems still need to pursue visions of fairness and to tackle bias as a social policy choice, balancing the triangulation of interests highlighted thus far.
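Where protected attribute labels may lawfully be processed for bias detection, a simple proxy check can be run over candidate input features. The sketch below is illustrative only: the data, the feature (postal code) and the 0.8 threshold are invented, and the 'strength' measure is a deliberately simple predictability score rather than a formal statistical test.

```python
# Sketch of a simple proxy check of the kind that lawful access to protected
# attribute labels makes possible: examine how strongly a candidate input
# feature predicts group membership. Data and threshold are invented.

from collections import Counter, defaultdict

samples = [
    # (postal_code, protected_group)
    ("1001", "minority"), ("1001", "minority"), ("1001", "majority"),
    ("2002", "majority"), ("2002", "majority"), ("2002", "majority"),
    ("3003", "minority"), ("3003", "minority"),
]

def proxy_strength(feature_index: int = 0, label_index: int = 1) -> float:
    """Share of samples whose group is guessed correctly by always predicting
    the most common group within each feature value (1.0 = perfect proxy)."""
    by_value = defaultdict(list)
    for row in samples:
        by_value[row[feature_index]].append(row[label_index])
    correct = sum(Counter(labels).most_common(1)[0][1] for labels in by_value.values())
    return correct / len(samples)

strength = proxy_strength()
print(f"Postal code predicts protected group for {strength:.0%} of samples")
if strength > 0.8:  # illustrative threshold
    print("Treat postal code as a potential proxy for the protected ground")
```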

The discussion so far reveals that while respecting non-discrimination and equality principles is the first port of call for those designing AI systems, the measures taken in respect of these rights are far too narrow a lens through which to determine and mediate the complexities of designing for fair AI with distributive justice effects. This once again frontloads Lon Fuller's concerns about the inadequacy of the law in tackling just allocations of goods and resources, albeit seen through an algorithmic lens. Binns echoes this sentiment:

(t)hese accounts do not necessarily imply that algorithmic decision-making is always morally benign–only that its potential wrongness is not to be found in the notion of discrimination as it is traditionally understood (Binns, 2018, p. 2).

Ultimately, designing AI while respecting non-discrimination principles involves engaging with two overlapping but distinct normative aims. Non-discrimination law draws the line between what is right and wrong. Deploying AI involves asking what is a good enough outcome to achieve through the system, noting the distributive justice element pursued. This means that while respecting non-discrimination and equality laws is one part of the triangulation process, the other two elements resemble utilitarian calculations that aim for the best possible outcomes (for accuracy rates, and for the aims of the public/private deployment of AI). Human rights are not engaged at the end of the line but need to be upstreamed. In turn, the vision of fairness that is implicitly pursued within this choice-making should be made expressly clear (Table 1).


Table 1. This table illustrates the different features that pertain to the demands of equality and non-discrimination law and the triangulation thesis that takes into account polycentric interests.

4 Exploring solutions and addressing limitations

So far, we have argued that algorithmic mediation increasingly permeates our lives, in both public settings – using the example of border and migration management – and private ones – using the example of the content moderation practices of social media platforms. This necessitates the design of algorithmic systems in ways that respect human rights, notably the right to equality and non-discrimination. However, designing such systems with polemical mindsets, such as single-axis thinking that targets protections towards protected groups on account of race, ethnicity, political opinion or other markers, or that attempts to weigh impacts solely by focusing on the minority group, is unrealistic and insufficient to realise fair AI systems in which the interests of the many play out (Engelfriet, 2025). The polycentric lens offered here, which triangulates three elements – the (public/private) purpose served by the AI system, equality and non-discrimination laws, and accuracy metrics – better accounts for how bias, and therein fairness, should be addressed in algorithmic design.

How should such a triangulation of polycentric interests be operationalised? While it is beyond the aim of this paper to comprehensively spell out operationalisation pathways, three key considerations stand out. First, a reckoning of these polycentric interests can be made transparent, and therein taken into account, when conducting a human rights impact assessment (Mantelero, 2022). Rather than adopting a polemical approach when examining the impacts of the AI system on human rights, including when examining the duration and severity of possible breaches, a polycentric approach brings together the different elements examined in this article, which are typically not expressly encountered when conducting such assessments (Danish Institute for Human Rights, 2020). For example, while businesses are under a duty under the UN Guiding Principles to conduct due diligence on their human rights impacts, the element of an AI system's accuracy, including tolerable error rates and error type prioritisation, is typically pushed down the organisational chain and left to the individual decision-making of the AI engineer. In turn, existing models of human rights impact assessment, including those dealing with technology (Danish Institute for Human Rights, 2020; see however Ministry of the Interior and Kingdom Relations of the Netherlands, 2022), do not expressly account for the plurality of interests impacted by algorithmic design. A polycentric lens takes into account our new algorithm-driven realities and elevates these interests and trade-offs into elements of the human rights discourse. In turn, such assessments should be made transparent to auditing bodies, external evaluators and researchers. However, considering human rights impacts, including equality and non-discrimination law, when designing debiasing measures in this way is insufficient without backstop measures of accountability. Even the best impact assessments will not be able to imagine all scenarios of bias that could have discriminatory effects, nor can external audits or evaluations catch every problematic scenario. A techno-legal gap remains: best-effort technical measures informed by a human rights impact assessment will still leave legal accountability gaps. As such, other complementary measures need to be in place. These include adopting a human rights-based approach to such impact assessments, conducting the assessments periodically, using AI as a means of detection (Franco et al., 2023) and using model cards to convey the parameters and limitations of the system, including the data it has been trained on.
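As a final illustration, a model card can be as simple as a structured record accompanying the system. The skeleton below is a hypothetical placeholder, not a description of any real deployment; every field value would need to be filled in from the system's own documentation and impact assessment.

```python
# Illustrative skeleton of a model card for an AI system of the kind discussed
# above; every field value is a placeholder, not a description of any real
# deployed system.

model_card = {
    "model": "content-moderation-classifier (placeholder)",
    "intended_use": "Flag potentially hateful text for human review",
    "out_of_scope_use": ["Fully automated removal without human oversight"],
    "training_data": {
        "sources": ["<describe corpora and languages>"],
        "known_gaps": ["Low-resource languages", "coded or reclaimed slurs"],
    },
    "evaluation": {
        "overall_false_positive_rate": "<value>",
        "disaggregated_by_group": "<link to per-group error rates>",
    },
    "limitations": [
        "Accuracy degrades under data shift and adversarial rephrasing",
        "Bias audits cover only the identity terms included in testing",
    ],
    "human_rights_impact_assessment": "<reference to the most recent assessment>",
}

for key, value in model_card.items():
    print(f"{key}: {value}")
```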

The ‘human rights-based approach’ encompasses the values of participation, accountability, non-discrimination and equality, empowerment and legality, shortened to the acronym PANEL. In operationalising participation and inclusion within human rights impact assessments, Rahwan argued that where societal concerns arise or social values are impacted, wider stakeholder involvement that engages ‘society-in-the-loop’ is needed. Multi-stakeholder involvement can be gleaned through stakeholder focus group studies, surveys, thought experiments and sociotechnical foresight, carried out as part of a comprehensive human rights impact assessment, as mentioned above. Democratic feedback can also be gleaned from a sampling of the population and can therein inform the design of the AI system. Anthropic, a generative AI company, has been engaging with such a process, drawing on the ‘collective intelligence’ of a sampled population in designing the outputs of its large language model (Collective Constitutional AI: Aligning a Language Model with Public Input, n.d.). However, designing fair AI systems for use in consequential areas such as border and migration management, welfare and healthcare needs to include the involvement of the stakeholders most likely to be impacted by the system. General sampling of the population is not sufficient here, as it has been demonstrated that non-vulnerable populations more readily trade off accuracy for speed, a sentiment not echoed by vulnerable populations (Dong et al., 2023), and for good reason.

However, this short survey of possibilities cannot do justice to the intricacies of the solution space. Instead, this paper's theoretical contribution of the triangulation thesis aims to contribute to ongoing interdisciplinary research and efforts, including within human rights practice, to understand that, beyond obvious cases of discriminatory impact, addressing bias and designing for fairness involve triangulating polycentric interests and adopting a vision of fairness as a social policy choice.

This departs from how human rights are usually understood through the polemical language of violations. The polycentric lens does not guarantee a perfect outcome, as a techno-legal gap will remain; it merely ensures that these different considerations are weighed at the outset and that mitigation measures can be taken. Frontloading the polycentric interests at play in determining a 'fair enough' AI system can foreground transparency, build trust and contribute towards better backstop mechanisms of accountability. Further, where obvious discriminatory impacts can be seen and the tensions between the polycentric interests cannot be resolved, the outcome may be that the AI system is not deployed at all.

While it takes polycentric interests into account, this article does not aim to have the last word on the matter. Specifically, while it has raised the issue of dynamic and emergent forms of discrimination that are not based upon protected grounds under the law, it did not devote space to analysing these concerns, or to the possible need for new mechanisms, such as new rights, to address these emergent injustices. Further, the article did not explore issues that do not bear a polycentric shade; just because some issues are polycentric does not mean that all issues are polycentric in nature. For example, within the EU AI Act, some use cases of AI systems are deemed so detrimental to human dignity, fundamental rights, democracy, health and safety that they are prohibited from the outset. The article does not touch upon use cases where red lines have been drawn by policy- and lawmakers, as the human rights considerations within these areas cannot easily be 'traded off,' such as where dignitarian harms are concerned (Hoffmann, 2017; Valdivia et al., 2023). The polycentric interests examined thus pertain not to all human rights nor to all AI systems, but to a subset of AI systems with distributive effects and in whose design polycentric interests feature.

5 Conclusion

There is a popular conception of technology as neutral. This conception, however, hides the fact that technology embeds values, including key considerations that bear on equality and non-discrimination law. In addressing the latter, the article has demonstrated that the polemical approach implicit within human rights law is inadequate. Instead, it has reasoned that polycentric interests steer how bias and fairness are addressed within the design of AI systems. While legal obligations under equality and non-discrimination law must undoubtedly be upheld, polemical thinking about how to respect equality and non-discrimination principles is not fit for purpose in the context of algorithmic mediation, where polycentric interests play out. In turn, while measures to address human rights harms, such as human rights impact assessments of AI systems, are recommended and increasingly deployed, such measures leave a techno-legal gap which must be closed through complementary backstop measures of accountability. Thinking through the human right to equality and non-discrimination through this polycentric lens helps to nuance the ways in which AI can impact upon this right, while also noting that measures to address AI bias both weigh and trade off different interests.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

Author contributions

ST: Writing – original draft, Conceptualization, Writing – review & editing.

Funding

The author(s) declare that financial support was received for the research and/or publication of this article. The research was funded by the Marianne and Marcus Wallenberg Foundation under the ‘Future of Human Rights: The Raoul Wallenberg Visiting Chair in Human Rights and Humanitarian Law in Continuity and Change’ project.

Conflict of interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author declares that no Gen AI was used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Footnotes

1. ^Tyrer v. United Kingdom [GC], no. 5856/72, 25 April 1978.

2. ^Guide on Article 8 of the European Convention on Human Rights, European Court of Human Rights, 2021.

3. ^UN Guiding Principles on Business and Human Rights 2011.

4. ^Verein Klimaseniorinnen Schweiz and Others v. Switzerland, Grand Chamber, no. 53600/20, 9 April 2024.

5. ^See Request for an advisory opinion on the scope of the state obligations for responding to the climate emergency by Chile and Colombia, 2023.

6. ^Article 3(1), Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying down Harmonised Rules on Artificial Intelligence and Amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (2024). http://data.europa.eu/eli/reg/2024/1689/oj/eng (hereafter the EU AI Act).

7. ^Glukhin v. Russia, no. 11519/20, 4 July, 2023, §85. (‘The Court finds it to be beyond dispute that the fight against crime, and in particular against organised crime and terrorism, which is one of the challenges faced by today’s European societies, depends to a great extent on the use of modern scientific techniques of investigation and identification. However, while it recognises the importance of such techniques in the detection and investigation of crime, the Court must delimit the scope of its examination. The question is not whether the processing of biometric personal data by facial recognition technology may in general be regarded as justified under the Convention. The only issue to be considered by the Court is whether the processing of the applicant’s personal data was justified under Article 8 § 2 of the Convention in the present case.’)

8. ^This is however, not to discount that a plurality of intra-group interests can be impacted within the ‘security’ lens.

9. ^Article 14 European Convention on Human Rights.

10. ^Biao v. Denmark [GC], no. 38590/10, 24 May 2016.

11. ^The EU AI Act acknowledges the possibilities of AI bias (see Recitals 32, 54, 61, 67, and 70) and legislatively mandates, amongst others, that the training, validation and testing of datasets are robust. This includes requiring high risk AI providers to examine possible biases and take measures to detect, prevent and mitigate those biases under Article 10(2)(f) and (g) of the EU AI Act.

12. ^However, certain human discretion in content moderation is present, notably where the content is examined by human moderators or when a decision of removal or otherwise is made by an independent body such as the Facebook Oversight Board.

13. ^Article 10(5) EU AI Act.

14. ^See Article 6 EU AI Act.

References

ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT) (n.d.). Available online at: https://facctconference.org/ (Accessed October 21, 2025).

Alexander, L. (1992). What makes wrongful discrimination wrong biases, preferences, stereotypes, and proxies. Univ. Pa. Law Rev. 141:149. doi: 10.2307/3312397

Angwin, J., Larson, J., Mattu, S., and Kirchner, L. (2016). Machine Bias. ProPublica. Available online at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (Accessed October 21, 2025).

Appelman, N. (2023). Equal love: dating app breeze seeks to address algorithmic discrimination. Available online at: https://racismandtechnology.center/2023/09/29/equal-love-dating-app-breeze-seeks-to-address-algorithmic-discrimination/ (Accessed June 9, 2025).

Bakiner, O. (2023). The promises and challenges of addressing artificial intelligence with human rights. Big Data Soc. 10:20539517231205476. doi: 10.1177/20539517231205476

Balayn, A., and Gürses, S. (2021) Beyond Debiasing: regulating AI and its inequalities Available online at: https://edri.org/wp-content/uploads/2021/09/EDRi_Beyond-Debiasing-Report_Online.pdf (Accessed October 21, 2025).

Bambauer, J. R., and Zarsky, T. Z. (2025). Fair-enough AI. Available online at: https://yjolt.org/fair-enough-ai (Accessed June 6, 2025).

Barocas, S., and Selbst, A. D. (2016). Big data’s disparate impact. Calif. Law Rev. 104, 671–732. doi: 10.15779/Z38BG31

Binns, R. (2018). Fairness in machine learning: lessons from political philosophy, in Proceedings of the 1st conference on fairness, accountability and transparency (PMLR), 149–159.

Blasi Casagran, C. (2021). Fundamental rights implications of interconnecting migration and policing databases in the EU. Hum. Rights Law Rev. 21, 433–457. doi: 10.1093/hrlr/ngaa057

Boyd, D. (2010). “Social network sites as networked publics: affordances, dynamics, and implications” in A networked self (New York: Routledge).

Brake, D. (2004). When equality leaves everyone worse off: the problem of leveling down in equality law. Wm. Mary Law Rev. 46:513. Available online at: https://scholarship.law.wm.edu/wmlr/vol46/iss2/4

Brown, C. (2003). Giving up levelling down. Econ. Philos. 19, 111–134. doi: 10.1017/S0266267103001044

Brownsword, R. (2019). Law, technology and society: Re-imagining the regulatory environment. New York, NY: Routledge, Taylor & Francis Group.

Buolamwini, J., and Gebru, T. (2018) Gender shades: intersectional accuracy disparities in commercial gender classification., in Proceedings of machine learning research.

Buyl, M., and De Bie, T. (2024). Inherent limitations of AI fairness. Commun. ACM 67, 48–55. doi: 10.1145/3624700

Chow, A. R. (2023). How ChatGPT managed to grow faster than TikTok or Instagram. Time. Available online at: https://time.com/6253615/chatgpt-fastest-growing/ (Accessed April 28, 2023).

Christiano, T., and Braynen, W. (2008). Inequality, injustice and levelling down. Ratio 21, 392–420. doi: 10.1111/j.1467-9329.2008.00410.x

Collective Constitutional AI: Aligning a Language Model with Public Input (n.d.). Available online at: https://www.anthropic.com/news/collective-constitutional-ai-aligning-a-language-model-with-public-input (Accessed May 8, 2024).

Commissioner for Human Rights, Council of Europe (2019) Unboxing artificial intelligence: 10 steps to protect human rights. Strasbourg: Council of Europe.

Crystal, C. (2023) Facebook, telegram, and the ongoing struggle against online hate speech. Carnegie Endow. Int. Peace. Available online at: https://carnegieendowment.org/2023/09/07/facebook-telegram-and-ongoing-struggle-against-online-hate-speech-pub-90468 (Accessed April 25, 2024).

Cuéllar, M.-F., and Huq, A. Z. (2020). Toward the democratic regulation of AI systems: a prolegomenon. Public Law Leg. Theory Work. Pap. doi: 10.2139/ssrn.3671011

Custers, B., and Vrabec, H. (2024). Tell me something new: data subject rights applied to inferred data and profiles. Comput. Law Secur. Rev. 52:105956. doi: 10.1016/j.clsr.2024.105956

Danish Institute for Human Rights (2020) Guidance on human rights impact assessment of digital activities. Available online at: https://www.humanrights.dk/publications/human-rights-impact-assessment-digital-activities (Accessed October 21, 2025).

Dastin, J. (2018) Amazon scraps secret AI recruiting tool that showed bias against women. Reuters Available online at: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G (Accessed October 21, 2025).

De Gregorio, G. (2021). The rise of digital constitutionalism in the European Union. Int. J. Const. Law 19, 41–70. doi: 10.1093/icon/moab001

Díaz, Á., and Hecht-Felella, L. (2021) Double standards in social media content moderation. New York University School of Law: Brennan Center for Justice. Available online at: https://www.brennancenter.org/media/7951/download/Double_Standards_Content_Moderation.pdf?inline=1 (Accessed October 21, 2025).

Díaz, Á., and Hecht-Felella, L. (2023). Double standards in social media content moderation. Brennan Center for Justice. Available online at: https://www.brennancenter.org/our-work/research-reports/double-standards-social-media-content-moderation (Accessed April 21, 2024).

Dong, M., Bonnefon, J.-F., and Rahwan, I. (2023) False consensus biases AI against vulnerable stakeholders., in AI meets Moral Philosophy and Moral Psychology: An Interdisciplinary Dialogue about Computational Ethics. Available online at: https://neurips.cc/virtual/2023/77060 (Accessed June 10, 2025).

Dror-Shpoliansky, D., and Shany, Y. (2021). It’s the end of the (offline) world as we know it: from human rights to digital human rights – a proposed typology. Eur. J. Int. Law 32, 1249–1282. doi: 10.1093/ejil/chab087

Dumbrava, C. (2021) Artificial intelligence at EU borders: overview of applications and key issues. Available online at: https://op.europa.eu/en/publication-detail/-/publication/a4c1940f-ef4a-11eb-a71c-01aa75ed71a1 (Accessed October 21, 2025).

Eidelson, B. (2013). “Treating people as individuals” in Philosophical foundations of discrimination law. eds. D. Hellman and S. Moreau (Oxford: Oxford University Press), 203–227.

Elswah, M. (2024). Moderating Maghrebi Arabic content on social media. The Center for Democracy & Technology (CDT). Available online at: https://cdt.org/wp-content/uploads/2024/09/2024-09-26-CDT-Research-Global-South-Moderating-Report-English-Arabic-final.pdf (Accessed October 21, 2025).

Engelfriet, A. (2025). From correlation to violation: distinguishing Bias from discrimination in the AI act. iGlobal.Lawyer. Available online at: https://www.iglobal.lawyer/post/from-correlation-to-violation-distinguishing-bias-from-discrimination-in-the-ai-act (Accessed June 7, 2025).

Eubanks, V. (2017). Automating inequality: How high-tech tools profile, police, and punish the poor. First Edn. New York, NY: St. Martin’s Press.

European Commission (2019). Commission implementing decision (EU) of 25.2.2019 laying down the specifications for the quality, resolution and use of fingerprints and facial image for biometric verification and identification in the entry/exit system (EES). Available online at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=PI_COM%3AC%282019%291280 (Accessed October 21, 2025).

European Union Agency for Fundamental Rights (2020). Getting the future right – artificial intelligence and fundamental rights. Available online at: https://fra.europa.eu/en/publication/2020/artificial-intelligence-and-fundamental-rights (Accessed October 21, 2025).

European Union Agency for Fundamental Rights (2022). Bias in algorithms - artificial intelligence and discrimination. Available online at: https://fra.europa.eu/sites/default/files/fra_uploads/fra-2022-bias-in-algorithms_en.pdf (Accessed October 21, 2025).

Franco, M., Gaggi, O., and Palazzi, C. E. (2023) Analyzing the use of large language models for content moderation with ChatGPT examples., in Proceedings of the 3rd international workshop on open challenges in online social networks, (New York, NY, USA: Association for Computing Machinery), 1–8.

Fuller, L. L., and Winston, K. I. (1978). The forms and limits of adjudication. Harv. Law Rev. 92, 353–409. doi: 10.2307/1340368

Gabriel, I. (2020). Artificial intelligence, values, and alignment. Minds Mach. 30, 411–437. doi: 10.1007/s11023-020-09539-2

Gardner, H. (1983). Frames of mind: A theory of multiple intelligences. New York: Basic Books.

Gerards, J., and Zuiderveen Borgesius, F. J. (2022). Protected grounds and the system of non-discrimination law in the context of algorithmic decision-making and artificial intelligence. Colo. Technol. Law J. 20, 3–54. Available online at: https://ctlj.colorado.edu/?p=860

Glickman, M., and Sharot, T. (2024). How human–AI feedback loops alter human perceptual, emotional and social judgements. Nat. Hum. Behav. 9, 345–359. doi: 10.1038/s41562-024-02077-2

Goodlad, L. M. E. (2023). Editor’s introduction: humanities in the loop. Crit. AI 1:16. doi: 10.1215/2834703X-10734016

Greene, J. (2021). How rights went wrong: Why our obsession with rights is tearing America apart. Boston: Houghton Mifflin Harcourt.

Grother, P., Ngan, M., and Hanaoka, K. (2019). Face recognition vendor test part 3: Demographic effects. Gaithersburg, MD: National Institute of Standards and Technology.

Heikkilä, M. (2022) AI: decoded: a Dutch algorithm scandal serves a warning to Europe — the AI act won’t save us. POLITICO. Available online at: https://www.politico.eu/newsletter/ai-decoded/a-dutch-algorithm-scandal-serves-a-warning-to-europe-the-ai-act-wont-save-us-2/ (Accessed October 21, 2025).

Hern, A. (2018) Google’s solution to accidental algorithmic racism: ban gorillas. The Guardian. Available online at: https://www.theguardian.com/technology/2018/jan/12/google-racism-ban-gorilla-black-people (Accessed October 21, 2025).

Hildebrandt, M. (2015). Smart technologies and the end(s) of law, novel entanglements of law and technology. Cheltenham: Edward Elgar Publishing.

Hoffmann, A. L. (2017). Beyond distributions and primary goods: assessing applications of rawls in information science and technology literature since 1990. J. Assoc. Inf. Sci. Technol. 68, 1601–1618. doi: 10.1002/asi.23747

Human Rights Watch (2023). Meta’s broken promises: systemic censorship of Palestine content on Instagram and Facebook. Available online at: https://www.hrw.org/report/2023/12/21/metas-broken-promises/systemic-censorship-palestine-content-instagram-and (Accessed November 6, 2025).

Jankovicz, N., Pavliuc, A., Davies, C., Pierson, S., and Kaufmann, Z. (2021) Malign creativity: how gender, sex, and lies are weaponized against women online | Wilson Center. Wilson Center. Available online at: https://www.wilsoncenter.org/publication/malign-creativity-how-gender-sex-and-lies-are-weaponized-against-women-online (Accessed October 6, 2025).

Kelly, A. (2021). A tale of two algorithms: the appeal and repeal of calculated grades systems in England and Ireland in 2020. Br. Educ. Res. J. 47, 725–741. doi: 10.1002/berj.3705

Khaitan, T. (2015). A theory of discrimination law. Oxford: Oxford University Press.

Kleinberg, J., Mullainathan, S., and Raghavan, M. (2016). Inherent trade-offs in the fair determination of risk scores. Arxiv [Preprint] doi: 10.48550/arXiv.1609.05807

Letsas, G. (2015). “Rescuing proportionality” in Philosophical foundations of human rights. eds. R. Cruft, S. M. Liao, and M. Renzo (Oxford: Oxford University Press).

Luscombe, R. (2023). Meta censors pro-Palestinian views on a global scale, report claims. The Guardian. Available online at: https://www.theguardian.com/technology/2023/dec/21/meta-facebook-instagram-pro-palestine-censorship-human-rights-watch-report (Accessed January 18, 2024).

Mac, R. (2021) Facebook apologizes after a.I. Puts ‘Primates’ label on video of black men. N. Y. Times. Available online at: https://www.nytimes.com/2021/09/03/technology/facebook-ai-race-primates.html (Accessed October 21, 2025).

Mann, M., and Matzner, T. (2019). Challenging algorithmic profiling: the limits of data protection and anti-discrimination in responding to emergent discrimination. Big Data Soc. 6:205395171989580. doi: 10.1177/2053951719895805

Mantelero, A. (2022). “Human rights impact assessment and AI” in Beyond data: Human rights, ethical and social impact assessment in AI. ed. A. Mantelero (The Hague: T.M.C. Asser Press), 45–91.

McGregor, L., Murray, D., and Ng, V. (2019). International human rights law as a framework for algorithmic accountability. Int. Comp. Law Q. 68, 309–343. doi: 10.1017/S0020589319000046

McIntosh, C. (2013). Cambridge advanced learner’s dictionary. 4th Edn. Cambridge: Cambridge University Press.

Ministry of the Interior and Kingdom Relations of the Netherlands (2022) Impact assessment: fundamental rights and algorithms. Available online at: https://www.government.nl/documents/reports/2021/07/31/impact-assessment-fundamental-rights-and-algorithms (Accessed November 7, 2025).

Mittelstadt, B., Wachter, S., and Russell, C. (2024). The unfairness of fair machine learning: leveling down and strict egalitarianism by default. Mich. Technol. Law Rev. 30, 1–55. doi: 10.36645/mtlr.30.1.unfairness

Mozur, P. (2018) A genocide incited on Facebook, with posts from Myanmar’s military. N. Y. Times. Available online at: https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html (Accessed December 19, 2019).

Narayanan, A. (2018) 21 definitions of fairness and their politics. Available online at: https://www.youtube.com/watch?v=wqamrPkF5kk (Accessed October 21, 2025).

Nicholas, G., and Bhatia, A. (2023) Lost in translation: large language models in non-English content analysis. The Center for Democracy & Technology (CDT). Available online at: https://cdt.org/insights/lost-in-translation-large-language-models-in-non-english-content-analysis/ (Accessed April 30, 2024).

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York: NYU Press.

Penney, J. W. (2016). Chilling effects: online surveillance and Wikipedia use. Berkeley Technol. Law J. 31, 117–182. doi: 10.15779/Z38SS13

Portela, M., Castillo, C., Tolan, S., Karimi-Haghighi, M., and Pueyo, A. A. (2024). A comparative user study of human predictions in algorithm-supported recidivism risk assessment. Artif. Intell. Law 33, 471–517. doi: 10.1007/s10506-024-09393-y

Purtova, N. (2018). The law of everything. Broad concept of personal data and future of EU data protection law. Law Innov. Technol. 10, 40–81. doi: 10.1080/17579961.2018.1452176

Raji, I. D., Gebru, T., Mitchell, M., Buolamwini, J., Lee, J., and Denton, E. (2020) Saving face: investigating the ethical concerns of facial recognition auditing., in Proceedings of the AAAI/ACM conference on AI, ethics, and society, (New York, NY, USA: Association for Computing Machinery), 145–151.

Report of the Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance (2021). Racial and xenophobic discrimination and the use of digital technologies in border and immigration enforcement. Human rights council forty-eighth session. Available online at: https://documents-dds-ny.un.org/doc/UNDOC/GEN/G21/379/61/PDF/G2137961.pdf?OpenElement (Accessed October 21, 2025).

Reynolds, J. (2020). Fortress Europe, global migration & the global pandemic. AJIL Unbound 114, 342–348. doi: 10.1017/aju.2020.64

Rodríguez-Garavito, C. (2021). “Human rights 2030: existential challenges and a new paradigm for the human rights field” in The struggle for human rights: Essays in honour of Philip Alston. eds. N. Bhuta, F. Hoffmann, S. Knuckey, F. Mégret, and M. Satterthwaite (Oxford: Oxford University Press).

Rosen, G. (2022) Community standards enforcement report, first quarter 2022. Meta Available online at: https://about.fb.com/news/2022/05/community-standards-enforcement-report-q1-2022/ (Accessed January 19, 2024).

Russell, S. J., and Norvig, P. (2010). Artificial intelligence: A modern approach. 3rd Edn. Upper Saddle River: Prentice Hall.

Sánchez-Monedero, J., and Dencik, L. (2022). The politics of deceptive borders: ‘biomarkers of deceit’ and the case of iBorderCtrl. Inf. Commun. Soc. 25, 413–430. doi: 10.1080/1369118X.2020.1792530

Schulz, W. F., and Raman, S. (2020). The coming good society: Why new realities demand new rights. Cambridge, MA: Harvard University Press.

Smuha, N. A. (2021). Beyond the individual: governing AI’s societal harm. Internet Policy Rev. 10. doi: 10.14763/2021.3.1574

Snow, J. (2018). Amazon’s face recognition falsely matched 28 members of Congress with mugshots. Am. Civ. Lib. Union. Available online at: https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28 (Accessed April 27, 2022).

Stevens, A., Fussey, P., Murray, D., Hove, K., and Saki, O. (2023). ‘I started seeing shadows everywhere’: the diverse chilling effects of surveillance in Zimbabwe. Big Data Soc. 10:20539517231158631. doi: 10.1177/20539517231158631

Su, A. (2022). The promise and perils of international human rights law for AI governance. Law Technol. Hum. 4, 166–182. doi: 10.5204/lthj.2332

Sunstein, C. R. (2019). Algorithms, correcting biases. Soc. Res. 86, 499–511. doi: 10.1353/sor.2019.0024

Teo, S. A. (2022). How artificial intelligence systems challenge the conceptual foundations of the human rights legal framework. Nord. J. Hum. Rights 40, 216–234. doi: 10.1080/18918131.2022.2073078

The Merriam-Webster Dictionary (2022). Merriam-Webster. Available online at: https://www.merriam-webster.com/ (Accessed October 21, 2025).

The Oversight Board (2024). Content moderation in a new era for AI and automation. Available online at: https://www.oversightboard.com/wp-content/uploads/2024/09/Oversight-Board-Content-Moderation-in-a-New-Era-for-AI-and-Automation-September-2024.pdf (Accessed October 21, 2025).

Tieleman, M. (2025). Fairness in tension: a socio-technical analysis of an algorithm used to grade students. Camb. Forum AI Law Gov. 1:e19. doi: 10.1017/cfl.2025.6

Tufekci, Z. (2017). Twitter and tear gas: The power and fragility of networked protest. New Haven; London: Yale University Press.

Valdivia, A., Serrajòrdia, J. C., and Swianiewicz, A. (2023). There is an elephant in the room: towards a critique on the use of fairness in biometrics. AI Ethics 3, 1407–1422. doi: 10.1007/s43681-022-00249-2

Van Den Eede, Y. (2011). In between us: on the transparency and opacity of technological mediation. Found. Sci. 16, 139–159. doi: 10.1007/s10699-010-9190-y

Vavoula, N. (2021). Artificial intelligence (AI) at Schengen borders: automated processing, algorithmic profiling and facial recognition in the era of techno-solutionism. Eur. J. Migr. Law. 23, 457–484. doi: 10.1163/15718166-12340114

Veiligheid, M. J. (2023). Dating-app Breeze mag (en moet) algoritme aanpassen om discriminatie te voorkomen [Dating app Breeze may (and must) adjust its algorithm to prevent discrimination]. College voor de Rechten van de Mens. Available online at: https://www.mensenrechten.nl/actueel/nieuws/2023/09/06/dating-app-breeze-mag-en-moet-algoritme-aanpassen-om-discriminatie-te-voorkomen (Accessed June 9, 2025).

Verdirame, G. (2015). “Rescuing human rights from proportionality” in Philosophical foundations of human rights. eds. R. Cruft, S. M. Liao, and M. Renzo (Oxford: Oxford University Press).

Wachter, S. (2022). The theory of artificial immutability: protecting algorithmic groups under anti-discrimination law. Tulane Law Rev. 97:149.

Wachter, S., Mittelstadt, B., and Russell, C. (2021). Bias preservation in machine learning: the legality of fairness metrics under EU non-discrimination law. W. Va. Law Rev. 123:735. Available online at: https://researchrepository.wvu.edu/wvlr/vol123/iss3/4

Weerts, H., Xenidis, R., Tarissan, F., Olsen, H. P., and Pechenizkiy, M. (2023). “Algorithmic unfairness through the lens of EU non-discrimination law: or why the law is not a decision tree,” in Proceedings of the 2023 ACM conference on fairness, accountability, and transparency (New York, NY, USA: Association for Computing Machinery), 805–816.

Winner, L. (1980). Do artifacts have politics? Daedalus 109, 121–136.

Xenidis, R. (2022). Algorithmic neutrality vs neutralising discriminatory algorithms: for a paradigm shift in EU anti-discrimination law. Lav. Dirit. 4, 729–734.

Yeung, K. (2018). Algorithmic regulation: a critical interrogation. Regul. Gov. 12, 505–523. doi: 10.1111/rego.12158

Zehlike, M., Loosley, A., Jonsson, H., Wiedemann, E., and Hacker, P. (2022). Beyond incompatibility: trade-offs between mutually exclusive fairness criteria in machine learning and law. arXiv [Preprint]. doi: 10.48550/arXiv.2212.00469

Zehlike, M., Loosley, A., Jonsson, H., Wiedemann, E., and Hacker, P. (2025). Beyond incompatibility: trade-offs between mutually exclusive fairness criteria in machine learning and law. Artif. Intell. 340:104280. doi: 10.1016/j.artint.2024.104280

Zietlow, D., Lohaus, M., Balakrishnan, G., Kleindessner, M., Locatello, F., Schölkopf, B., et al. (2022). “Leveling down in computer vision: Pareto inefficiencies in fair deep classifiers,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), 10400–10411. doi: 10.1109/CVPR52688.2022.01016

Keywords: AI bias, human rights, polycentric, non-discrimination law, equality

Citation: Teo SA (2025) Polycentrism, not polemics? Squaring the circle of non-discrimination law, accuracy metrics and public/private interests when addressing AI bias. Front. Polit. Sci. 7:1645160. doi: 10.3389/fpos.2025.1645160

Received: 11 June 2025; Accepted: 31 October 2025;
Published: 18 November 2025.

Edited by:

Carlos Rodrigues, Fernando Pessoa University, Portugal

Reviewed by:

Arkadiusz Modrzejewski, University of Gdansk, Poland
Krunoslav Antoliš, Veleučilište kriminalistike i javne sigurnosti, Croatia

Copyright © 2025 Teo. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Sue Anne Teo, sue_anne.teo@rwi.lu.se

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.