CONCEPTUAL ANALYSIS article
Front. Big Data
Sec. Machine Learning and Artificial Intelligence
Volume 8 - 2025 | doi: 10.3389/fdata.2025.1532397
AI Biases as Asymmetries: A Review to Guide Practice
Provisionally accepted
Center for Equitable AI & Machine Learning Systems (CEAMLS), Morgan State University, Baltimore, Maryland, United States
The understanding of bias in AI is currently undergoing a revolution. Initially understood as errors or flaws, biases are increasingly recognized as integral to AI systems and sometimes preferable to less biased alternatives. In this paper we review the reasons for this changed understanding and provide new guidance on three questions. First, how should we think about and measure biases in AI systems, consistent with the new understanding? Second, what kinds of bias in an AI system should we accept or even amplify, and why? Third, what kinds should we attempt to minimize or eliminate, and why? In answer to the first question, we argue that biases are "violations of a symmetry standard" (following Kelly). On this definition, many biases in AI systems are benign, which raises the question of how to identify biases that are problematic or undesirable when they occur. To address this question, we distinguish three main ways that asymmetries in AI systems can be problematic or undesirable (erroneous representation, unfair treatment, and violation of process ideals) and highlight places in the pipeline of AI development and application where bias of each type can occur.
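The abstract's claim that a bias can be "preferable to less biased alternatives" is well illustrated by the bias-variance trade-off listed among the keywords. The following sketch, a standard statistical example not drawn from the article itself, compares two estimators of the variance of a normal distribution by Monte Carlo simulation: the classical unbiased estimator (dividing the sum of squared deviations by n-1) and the maximum-likelihood estimator (dividing by n), which is biased low but less variable. For small samples, the biased estimator achieves lower mean squared error.

```python
import random

def variance_mse(n, divisor, trials=20000, true_var=1.0, seed=0):
    """Monte Carlo estimate of the MSE of the variance estimator
    sum((x - xbar)^2) / divisor, for samples of size n drawn from
    a standard normal distribution (true variance = 1)."""
    rng = random.Random(seed)  # fixed seed so both estimators see the same samples
    total = 0.0
    for _ in range(trials):
        xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
        xbar = sum(xs) / n
        ss = sum((x - xbar) ** 2 for x in xs)
        total += (ss / divisor - true_var) ** 2
    return total / trials

n = 10
mse_unbiased = variance_mse(n, divisor=n - 1)  # unbiased, but higher variance
mse_biased = variance_mse(n, divisor=n)        # biased toward zero, lower variance
print(f"unbiased MSE: {mse_unbiased:.4f}, biased MSE: {mse_biased:.4f}")
```

Here the "bias" is a systematic asymmetry (the estimator is consistently low), yet it is benign in exactly the sense the abstract describes: accepting it yields estimates that are, on average, closer to the truth.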
Keywords: bias, artificial intelligence, machine learning, symmetry, statistical bias, cognitive bias, inductive bias, bias-variance trade-off
Received: 21 Nov 2024; Accepted: 11 Aug 2025.
Copyright: © 2025 Waters and Honenberger. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Phillip Honenberger, Center for Equitable AI & Machine Learning Systems (CEAMLS), Morgan State University, Baltimore, Maryland, United States
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.