REVIEW article
Front. Digit. Health
Sec. Digital Mental Health
Volume 7 - 2025 | doi: 10.3389/fdgth.2025.1431246
This article is part of the Research Topic: Cybersecurity in Digital Mental Health
Privacy, Ethics, Transparency, and Accountability in AI Systems for Wearable Devices
Provisionally accepted
Petar Radanliev, University of Oxford, Oxford, United Kingdom
The integration of artificial intelligence (AI) and machine learning (ML) into wearable sensor technologies has substantially advanced health data science, enabling continuous monitoring, personalised interventions, and predictive analytics. However, the rapid advancement of these technologies has raised critical ethical and regulatory concerns, particularly around data privacy, algorithmic bias, informed consent, and the opacity of automated decision-making. This study undertakes a systematic examination of these challenges, highlighting the risks posed by unregulated data aggregation, biased model training, and inadequate transparency in AI-powered health applications. Through an analysis of current privacy frameworks and an empirical assessment of publicly available datasets, the study identifies significant disparities in model performance across demographic groups and exposes vulnerabilities in both technical design and ethical governance. To address these issues, this article introduces a data-driven methodological framework that embeds transparency, accountability, and regulatory alignment across all stages of AI development. The framework operationalises ethical principles through concrete mechanisms, including explainable AI, bias mitigation techniques, and consent-aware data processing pipelines, while aligning with legal standards such as the GDPR, the UK Data Protection Act, and the EU AI Act. By incorporating transparency as a structural and procedural requirement, the framework presented in this article offers a replicable model for the responsible development of AI systems in wearable healthcare. In doing so, the study advocates for a regulatory paradigm that balances technological innovation with the protection of individual rights, fostering fair, secure, and trustworthy AI-driven health monitoring.
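To illustrate the kind of per-group performance audit the abstract alludes to, the following is a minimal sketch, not taken from the article: it trains a simple classifier on synthetic stand-in sensor data and reports the recall gap between two hypothetical demographic groups. The feature matrix, group labels, and threshold-free logistic model are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the article's method): measuring the
# disparity in model performance across demographic groups for a
# wearable-health classifier. All data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))                  # stand-in wearable sensor features
group = rng.choice(["A", "B"], size=n)       # hypothetical demographic attribute
# Synthetic outcome whose relationship to the features differs by group,
# so a single model can end up serving one group worse than the other.
y = (X[:, 0] + 0.5 * (group == "B") * X[:, 1]
     + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)
clf = LogisticRegression().fit(X_tr, y_tr)
pred = clf.predict(X_te)

# Recall (sensitivity) per group, plus the absolute gap between groups.
recalls = {g: recall_score(y_te[g_te == g], pred[g_te == g]) for g in ("A", "B")}
print(recalls, "gap:", abs(recalls["A"] - recalls["B"]))
```

In practice the same grouped evaluation would be run with real wearable datasets and the demographic attributes they record, and a large recall gap would trigger the bias mitigation step of the proposed framework.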
Keywords: wearable technology, artificial intelligence, machine learning, data privacy, ethical considerations, health data science
Received: 11 May 2024; Accepted: 26 May 2025.
Copyright: © 2025 Radanliev. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Petar Radanliev, University of Oxford, Oxford, United Kingdom
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.