- 1Computational Psychiatry Group, Department of Psychiatry, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada
- 2Department of Psychiatry, Division of Neuroscience and Translational Psychiatry, University of British Columbia, Vancouver, BC, Canada
- 3Department of Psychiatry, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada
Editorial on the Research Topic
Mental health in the age of artificial intelligence
Introduction
Artificial intelligence (AI) is rapidly maturing, with applications spanning the full spectrum of healthcare, from foundational research to clinical implementation. Its use has extended beyond lab-based proofs of concept into real-world settings (1), including but not limited to clinical documentation (2, 3), administrative triage (4, 5), diagnostic support (6), and conversational agents (7, 8). These innovations hold the potential to expand access to care (9, 10), personalize treatment (7, 11–13), predict risk (14, 15), and ease the administrative burdens (2, 4) on strained systems.
Despite the global prevalence and economic burden of mental health conditions (16), the clinical adoption of AI in psychiatry remains limited (13). This lag reflects the intrinsic complexity of the field, which challenges conventional algorithmic approaches and underscores the need for AI solutions that are ethically grounded, reproducible, contextually adaptive, and attuned to, and supportive of, the nuances of human experience.
The Research Topic, “Mental Health in the Age of Artificial Intelligence”, explores this rapidly shifting terrain through five timely contributions. Each publication offers a unique lens, spanning methodological, empirical, cultural, and existential dimensions, with ethical reflection as a common, unifying thread. Together they converge on a single, urgent imperative: for AI to be more widely trusted and in turn adopted in the mental health ecosystem, it must be designed, deployed, and evaluated in ways that are accountable, reproducible, fair, and deeply human-centered.
These themes surface immediately in the first contribution. The question of reproducibility and interpretability has become a cornerstone of responsible AI (13, 17), and Celeste et al. tackle it head-on in their contribution, “A software pipeline for systematizing machine learning of speech data”. They offer a suite of configurable software pipelines built with Python Luigi, which they use, as a proof of concept, to test the reproducibility of three machine learning studies involving depression, mild cognitive impairment, and aphasia. The authors then warn of the reproducibility crisis and argue that the ability to reproduce machine learning experiments, including model configurations, optimal hyperparameters, validation predictions, and performance metrics, is not merely a methodological ideal but a moral and professional responsibility.
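For readers unfamiliar with this style of tooling, the sketch below illustrates the general pattern of a Luigi workflow; it is our minimal illustration, with hypothetical task names, parameters, and file paths, and not the authors' actual pipeline. Each task declares its dependencies, parameters, and output artifacts, so re-running an identical configuration regenerates the same chain of results.

```python
# Minimal, hypothetical sketch of a Luigi pipeline for reproducible ML on speech data.
# Task names, parameters, and file paths are illustrative, not the authors' code.
import json
import luigi


class ExtractFeatures(luigi.Task):
    """Hypothetical step: derive speech features from a manifest of recordings."""
    manifest = luigi.Parameter()  # path to a list of recordings (assumed input)

    def output(self):
        return luigi.LocalTarget("features.json")

    def run(self):
        # Placeholder feature extraction; a real pipeline would call a feature toolkit here.
        with self.output().open("w") as f:
            json.dump({"source": self.manifest, "features": [0.1, 0.2]}, f)


class TrainClassifier(luigi.Task):
    """Hypothetical step: fit a model and persist its configuration and metrics."""
    manifest = luigi.Parameter()
    random_seed = luigi.IntParameter(default=42)  # pinned so the run is repeatable

    def requires(self):
        # Declared dependency: Luigi runs ExtractFeatures first and caches its output.
        return ExtractFeatures(manifest=self.manifest)

    def output(self):
        return luigi.LocalTarget("metrics.json")

    def run(self):
        with self.input().open() as f:
            features = json.load(f)
        # Placeholder "training"; record everything needed to reproduce the run.
        with self.output().open("w") as f:
            json.dump({"seed": self.random_seed,
                       "n_features": len(features["features"]),
                       "accuracy": None}, f)


if __name__ == "__main__":
    luigi.build([TrainClassifier(manifest="recordings.csv")], local_scheduler=True)
```

Because every intermediate artifact is tied to an explicit task and parameter set, the same configuration can be re-executed end to end, which is the property Celeste et al. argue is essential for reproducible machine learning research.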
The ethical conversation then turns the spotlight on the human experience. In “Artificial intelligence conversational agents in mental health: Patients see potential, but prefer humans in the loop,” Lee et al. gather feedback from patients with self-reported mild to moderate anxiety about their experiences with, perceptions of, and acceptance of mental health AI conversational agents (CAs). This timely study poses a clinically meaningful question at a critical juncture in the adoption of digital mental health tools, highlighting the importance of amplifying the patient’s voice within a values-based framework that is increasingly recognized as best practice (13, 18). The authors’ findings remind us that while participants perceived utility and benefit, particularly the potential for AI chatbots to increase access to care, a perceived lack of empathy, privacy concerns, and other technical limitations of these models remain leading reservations. Participants signalled a consistent preference for “human-in-the-loop” models in which AI serves as an extender of care, not a replacement for it. This notion is reinforced by a recent review: although AI-based CAs were found to be more effective for clinical and subclinical populations, the need persists to “untangle the complex interplay” among a variety of factors, including when “human support is indispensable” (19).
The ethical conversation continues in Denecke and Gabarron’s “The ethical aspects of integrating sentiment and emotion analysis in chatbots for depression intervention,” in which the authors explore the ethical dimensions of sentiment and emotion analysis in chatbots designed to deliver depression-specific interventions. Echoing cautions raised elsewhere in this Research Topic, they reiterate the importance of balance: misclassification of emotion or harm increases the risk of inappropriate or missed system responses, missed risk detection, and failures of risk escalation. The authors emphasize the importance of thoughtfully integrating chatbots into care settings under the supervision of qualified health professionals, and they stress that emotion should be treated as a complex, clinically significant, and ethically sensitive signal that demands careful and responsible handling. This reflects findings from a recent meta-analysis of AI-based CAs: while a large effect size was observed for the mitigation of psychological distress (especially with multimodal agents), these systems can still generate “unnatural or repetitive interactions, potentially reducing clinical effectiveness” (19).
Questions about acceptability and safety are magnified when AI systems are designed to generate text, make inferences, or interact dynamically with users. In “Applications of large language models in psychiatry: a systematic review,” Omar et al. synthesize a growing body of evidence on the use of large language models (LLMs) in mental health contexts. While they identify promising applications in clinical reasoning, educational tools, and even therapeutic support, the review also highlights critical issues, including the underestimation of suicide risk, inconsistency in complex scenarios, and a lack of rigorous safety evaluation. Their findings align with emerging international concern and caution in the field, such as declining medical safety messaging in generative AI models, estimated to have fallen from roughly 26% to 1% over the last 2–3 years (20). While LLMs offer flexibility and scalability, their implementation should proceed cautiously and with significant oversight.
Finally, Alkhalifah et al. remind us to reflect on the human experience. In “Existential anxiety about artificial intelligence (AI): is it the end of humanity era or a new chapter in the human revolution?,” the authors explore public perceptions of AI’s role in society and, in particular, its psychological and existential consequences. Drawing on survey data from the general public in Saudi Arabia, they find significant levels of AI-related anxiety, including fears of human obsolescence, concerns about unpredictability, and a sense of emptiness. The authors acknowledge this underlying unease as a sentiment that warrants attention and consideration, positioning it as pertinent to broader discussions about AI adoption, particularly its influence on social systems and its deeper implications for our understanding of what it means to be human.
Taken together, these five articles provide a cross-section of where the field stands, while offering an ethical and reflective pathway forward. Each makes clear that while the technological promise of AI is real and growing, its ethical, clinical, and human foundations require careful consideration. We must ensure systems are reproducible in their development, calibrated and fair in their outputs, interpretable in their logic and iterations, accountable in their consequences, and deeply and deliberately human-centered.
We must do all this while balancing the drive to fully realize the potential of these innovations against the need for appropriate safeguards to protect end users. In a world arguably gripped by what Günther Anders described as “Promethean shame,” a sense of human inadequacy in the face of our own technological creations, we find ourselves drawn to the promise of transcending our biological limits, even as we fail to grasp the full intended and unintended repercussions of these innovations and the ways they unsettle deeply held values, purposes, and understandings of meaning (21).
At the same time, the accelerating momentum of commercial AI development, often obscured by proprietary opacity, is outpacing our existing systems for evaluation, governance, and ethical oversight. As this monopolized and monetized structure threatens to consolidate power further, we must confront a pressing question: on whose terms is mental health care evolving? Again, we see that the future of AI in this space is not solely a technical matter. It is also a clinical, philosophical, and political one, requiring sustained dialogue, shared standards, and a commitment to human-centered care.
Author contributions
JN: Writing – original draft, Writing – review & editing. KH: Writing – original draft, Writing – review & editing. AG: Writing – original draft, Writing – review & editing.
Conflict of interest
The authors declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that generative AI was not used in the creation of this manuscript.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
1. Han R, Acosta J, Shakeri Z, Ioannidis J, Topol E, and Rajpurkar P. Randomised controlled trials evaluating artificial intelligence in clinical practice: a scoping review. Lancet Digital Health. (2024) 6:e367–73. doi: 10.1016/S2589-7500(24)00047-5
2. Olson K, Meeker D, Troup M, Barker T, Nguyen V, Manders J, et al. Use of ambient AI scribes to reduce administrative burden and professional burnout. JAMA Netw Open. (2025) 8:e2534976. doi: 10.1001/jamanetworkopen.2025.34976
3. Bracken A, Reilly C, Feeley A, Sheehan E, Merghani K, and Feeley I. Artificial intelligence (AI) – powered documentation systems in healthcare: A systematic review. J Med Syst. (2025) 49:28. doi: 10.1007/s10916-025-02157-4
4. Garcia P, Ma S, Shah S, Smith M, Jeong Y, Devon-Sand A, et al. Artificial intelligence–generated draft replies to patient inbox messages. JAMA Netw Open. (2024) 7:e243201. doi: 10.1001/jamanetworkopen.2024.3201
5. Hu D, Guo Y, Zhou Y, Flores L, and Zheng K. A systematic review of early evidence on generative AI for drafting responses to patient messages. NPJ Health Syst. (2025) 2:27. doi: 10.1038/s44401-025-00032-5
6. Martinez-Gutierrez J, Kim Y, Salazar-Marioni S, Tariq M, Abdelkhaleq R, Niktabe A, et al. Automated large vessel occlusion detection software and thrombectomy treatment times: A cluster randomized clinical trial. JAMA Neurol. (2023) 80:1182–90. doi: 10.1001/jamaneurol.2023.3206
7. Nayak A, Vakili S, Nayak K, Nikolov M, Chiu M, Sosseinheimer P, et al. Use of voice-based conversational artificial intelligence for basal insulin prescription management among patients with type 2 diabetes: A randomized clinical trial. JAMA Netw Open. (2023) 6:e2340232. doi: 10.1001/jamanetworkopen.2023.40232
8. Noble JM, Zamani A, Gharaat M, Merrick D, Maeda N, Lambe Foster A, et al. Developing, implementing, and evaluating an artificial intelligence–guided mental health resource navigation chatbot for health care workers and their families during and following the COVID-19 pandemic: protocol for a cross-sectional study. JMIR Res Protoc. (2022) 11:e33717. doi: 10.2196/33717
9. Habicht J, Viswanathan S, Carrington B, Hauser T, Harper R, and Rollwage M. Closing the accessibility gap to mental health treatment with a personalized self-referral chatbot. Nat Med. (2024) 30:595–602. doi: 10.1038/s41591-023-02766-x
10. Rahmati M, Smith L, Piyasena M, Bowen M, Boyer L, Fond G, et al. Artificial Intelligence improves follow-up appointment uptake for diabetic retinal assessment: a systematic review and meta-analysis. Eye. (2025) 39:2398–406. doi: 10.1038/s41433-025-03849-4
11. Kather J, Pearson A, Halama N, Jager D, Krause J, Loosen S, et al. Deep learning can predict microsatellite instability directly from histology in gastrointestinal cancer. Nat Med. (2019) 25:1054–6. doi: 10.1038/s41591-019-0462-y
12. McCutcheon R, Harrison P, Howes O, McGuire P, Taylor D, and Pillinger T. Data-driven taxonomy for antipsychotic medication: a new classification system. Biol Psychiatry. (2023) 94:561–8. doi: 10.1016/j.biopsych.2023.04.004
13. Sun J, Lu T, Shao X, Han Y, Xia Y, Zheng Y, et al. Practical AI application in psychiatry: historical review and future directions. Mol Psychiatry. (2025) 30:4399–408. doi: 10.1038/s41380-025-03072-3
14. Yao X, Rushlow D, Inselman J, McCoy R, Thacher T, Behnken E, et al. Artificial intelligence-enabled electrocardiograms for identification of patients with low ejection fraction: a pragmatic, randomized clinical trial. Nat Med. (2021) 27:815–9. doi: 10.1038/s41591-021-01335-4
15. Tomašev N, Glorot X, Rae J, Zielinski M, Askham H, Saraiva A, et al. A clinically applicable approach to continuous prediction of future acute kidney injury. Nature. (2019) 572:116–9. doi: 10.1038/s41586-019-1390-1
16. The Lancet Global Health. Mental health matters. Lancet Glob Health. (2020) 8:e1352. doi: 10.1016/S2214-109X(20)30432-0
17. Bienefeld N, Boss J, Lüthy R, Brodbeck D, Azzati J, Blaser M, et al. Solving the explainable AI conundrum by bridging clinicians’ needs and developers’ goals. NPJ Digit Med. (2023) 6:1–7. doi: 10.1038/s41746-023-00837-4
18. Womersley K, Fulford K, Peile E, Koralus P, and Handa A. Hearing the patient’s voice in AI-enhanced healthcare. BMJ. (2023) 383:2758. doi: 10.1136/bmj.p2758
19. Li H, Zhang R, Lee Y, Kraut R, and Mohr D. Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being. NPJ Digit Med. (2023) 6:236. doi: 10.1038/s41746-023-00979-5
20. Sharma S, Alaa A, and Daneshjou R. A longitudinal analysis of declining medical safety messaging in generative AI models. NPJ Digit Med. (2025) 8:592. doi: 10.1038/s41746-025-01943-1
Keywords: artificial intelligence, conversational agent, chatbot, mental health, psychiatry, digital mental health, large language model, ethical AI
Citation: Noble JM, Ha K and Greenshaw AJ (2026) Editorial: Mental health in the age of artificial intelligence. Front. Psychiatry 16:1750256. doi: 10.3389/fpsyt.2025.1750256
Received: 20 November 2025; Accepted: 18 December 2025; Revised: 20 November 2025;
Published: 06 January 2026.
Edited and reviewed by:
Andreea Oliviana Diaconescu, University of Toronto, Canada
Copyright © 2026 Noble, Ha and Greenshaw. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Jasmine M. Noble, jmbrown1@ualberta.ca