- 1 Faculty of Education, Tel Hai College, Upper Galilee, Kiryat Shmona, Israel
- 2 Faculty of Education, University of Haifa, Haifa, Israel
- 3 Department of Psychology, University of Houston – Clear Lake, Houston, TX, United States
- 4 Clinical Psychology and Psychotherapy – Methods and Approaches, Department of Psychology, Trier University, Trier, Germany
- 5 Department of Digital and Blended Psychotherapy and Psychosomatics, Psychosomatic Medicine, University Hospital and University of Basel, Basel, Switzerland
- 6 Department of Emergency Psychiatry and Acute Care, Lapeyronie Hospital, Montpellier, France
- 7 Institute of Functional Genomics, Hôpital La Colombière, University of Montpellier, Montpellier, France
- 8 Department of Psychology, Max Stern Yezreel Valley College, Emek Yezreel, Israel
- 9 The Lior Tsfaty Center for Suicide and Mental Pain Studies, Ruppin Academic Center, Emek Hefer, Israel
- 10 The Louis and Gabi Weisfeld School of Social Work, Bar-Ilan University, Ramat Gan, Israel
- 11 Copernicus Center for Interdisciplinary Studies, Jagiellonian University, Kraków, Poland
Editorial on the Research Topic
Empowering suicide prevention efforts with generative AI technology
Suicide claims approximately 746,000 lives each year, making it one of the leading causes of premature mortality worldwide and a major source of psychological distress (1). Precision risk assessment is especially challenging for high-vulnerability groups, such as military veterans, middle-aged men, and LGBTQ+ individuals. Additional challenges are posed by stigma in its internalized, anticipated, and public forms, which impedes help-seeking behaviors (2). Stigma in all its varieties is amplified by the growing influence of media, including user-generated content, which strongly shapes public perceptions of suicide (3, 4).
In recent years, the rapid advancement of Artificial Intelligence (AI), particularly Generative AI and the large language models (LLMs) that power it, has opened new avenues for suicide risk assessment, prevention, and intervention. Emerging evidence suggests that these technologies can contribute to more personalized and scalable screening tools, enhance the training of mental health professionals, reduce stigma, and support early detection in clinical and digital environments (5, 6). Researchers have also explored how cultural context can influence AI’s sensitivity to suicide risks (7) and how clinical variables such as history of depression, previous suicide attempts, or access to weapons can be integrated into AI models to improve prediction accuracy (5).
This Research Topic brings together multidisciplinary contributions that investigate how Generative AI technologies can be ethically and effectively harnessed to improve suicide prevention. The included articles offer diverse perspectives on technological applications, clinical insights, and ethical considerations, with the aim of promoting evidence-based innovation in one of the most urgent areas of mental health.
The articles included in this Research Topic demonstrate the multidisciplinary potential of Generative AI (GenAI) and LLMs to advance suicide prevention. The contributing researchers apply these technologies across diverse contexts, including risk assessment, professional training, public health monitoring, and qualitative analysis, and leverage machine learning to identify previously overlooked risk factors, improve diagnostic accuracy, and support complex clinical decision-making.
The study by Lissak et al. highlights boredom, particularly disengaged boredom, as a significant risk factor for suicide. This conclusion was reached through a hybrid approach that combined large-scale natural language processing with validated psychological measures. Lauderdale et al. examined the ability of three GenAI systems to assess suicide risk among U.S. military veterans. Although the models showed some alignment with clinical judgments regarding chronic risk, they tended to recommend more intensive interventions and displayed greater variability in evaluating acute risk. Zheng et al. developed a predictive model based on comprehensive epidemiological data from the SEER database, identifying heightened risk among older men, residents of rural areas, and patients diagnosed with acute myeloid leukemia. Lastly, Balt et al. evaluated the performance of the Llama 3 model in deductive coding of interviews with individuals bereaved by suicide. While the model demonstrated strong capacity for identifying central themes, the authors also reported concerns related to conceptual inaccuracies and overgeneralizations.
Together, these studies illustrate both the promise and the complexity of integrating GenAI into suicide prevention efforts. They emphasize the need for ongoing refinement of AI models, close collaboration with clinical professionals, and the application of ethical frameworks that ensure responsible, context-sensitive, and human-centered implementation.
Future directions: needs, opportunities, challenges, and perspectives
Generative AI holds transformative potential for suicide prevention, but progress demands a multidisciplinary, ethically grounded approach. As highlighted in several articles in this Research Topic, there is an urgent need to enhance model transparency, interpretability, and contextual sensitivity in both clinical and cultural settings (7, 8). These technologies should be viewed not as replacements for clinical expertise, but as supportive tools that help clinicians make ethically sound, context-aware decisions.
There are clear opportunities in developing personalized interventions, emotionally rich training simulations, and novel methods for detecting hidden suicide risk factors. For instance, boredom was identified as a central predictor of suicidality (Lissak et al.), while other studies demonstrated GenAI’s capacity to assess risk in high-vulnerability populations such as veterans (Lauderdale et al.) and leukemia patients (Zheng et al.). In parallel, Generative AI systems may help reduce stigma and encourage help-seeking behavior in marginalized communities (5).
Nevertheless, important challenges remain. These include algorithmic bias, digital inequities, and cultural variability in the expression and recognition of distress (9–11). AI-based tools must be designed with careful attention to gender, cultural diversity, and clinical nuance to ensure fairness and relevance across populations.
Meaningful progress in the field will require collaboration across disciplines, involving clinicians, ethicists, computer scientists, and policymakers, to establish ethical and regulatory frameworks that protect human dignity (12). As illustrated in the work of Balt et al., human feedback and iterative review can strengthen the validity of AI outputs and foster user trust.
As technological capabilities continue to expand, the application of Generative AI in suicide prevention must be guided by principles of responsibility, inclusivity, and a sustained commitment to the wellbeing of vulnerable individuals. Its success will depend on thoughtful implementation, ongoing professional oversight, and continuous ethical engagement.
Conclusion
The contributions to this Research Topic illustrate the transformative potential of Generative AI in suicide prevention. Across quantitative, qualitative, and theoretical frameworks, the included studies demonstrate how LLMs can support more nuanced risk detection, automate complex coding processes, and uncover novel psychological risk factors. Simultaneously, the limitations of these technologies, ranging from ethical concerns to contextual misinterpretation, reinforce the need for responsible implementation. Collectively, these studies offer a foundation for AI-assisted suicide-prevention tools, which must always complement, rather than replace, expert human judgment, cultural sensitivity, and empirical rigor.
Author contributions
IL: Writing – review & editing, Writing – original draft. ZE: Writing – review & editing. SL: Writing – review & editing, Writing – original draft. GM: Writing – review & editing, Writing – original draft. BN: Writing – review & editing. DH: Writing – review & editing. YL: Writing – review & editing. SS-A: Writing – review & editing. JG: Writing – review & editing.
Conflict of interest
GM received funding from the Stanley Thomas Johnson Stiftung & Gottfried und Julia Bangerter-Rhyner-Stiftung under projects no. PC 28/17 and PC 05/18, from Gesundheitsförderung Schweiz under project no. 18.191/K50001, complemented by funds from the Health Department of the Canton of Basel-Stadt, from the Swiss Heart Foundation under project no. FF21101, from the Research Foundation of the International Psychoanalytic University IPU Berlin under projects no. 5087 and 5217, from the Teaching Incentive Fund of Trier University under project no. TIF2024_01, from the Research Fund of Trier University under project no. FoF/A 2024-15, from the German Federal Ministry of Education and Research under budget item 68606, from the Hasler Foundation under project no. 23004, in the context of a Horizon Europe project from the Swiss State Secretariat for Education, Research and Innovation SERI under contract number 22.00094, and from Wings Health in the context of a proof-of-concept study. GM is co-founder and shareholder of Therayou AG, active in digital and blended mental healthcare. GM received royalties from publishing companies as author, including a book published by Springer, and an honorarium from Lundbeck for speaking at a symposium. Furthermore, GM is compensated for providing psychotherapy to patients, acting as a supervisor, serving as a self-experience facilitator ‘Selbsterfahrungsleiter’, and for postgraduate training of psychotherapists, psychosomatic specialists, and supervisors.
The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as potential conflicts of interest.
Generative AI statement
The author(s) declare that no Generative AI was used in the creation of this manuscript.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
1. Weaver ND, Bertolacci GJ, Rosenblad E, Ghoba S, Cunningham M, Ikuta KS, et al. Global, regional, and national burden of suicide 1990–2021: A systematic analysis for the Global Burden of Disease Study 2021. Lancet Public Health. (2025) 10:e189–202. doi: 10.1016/S2468-2667(25)00006-4
2. Carpiniello B and Pinna F. The reciprocal relationship between suicidality and stigma. Front Psychiatry. (2017) 8:35. doi: 10.3389/fpsyt.2017.00035
3. Levi-Belz Y, Groweiss Y, Shachar Lavie I, Shoval Zuckerman Y, and Blank C. “We’re all in this together”: The protective role of belongingness in the contribution of moral injury to mental health among participants in Israel’s civil protest movement. Eur J Psychotraumatol. (2025) 16:2474374. doi: 10.1080/20008198.2023.2474374
4. Nobile B, Gourguechon-Buot E, Gorwood P, Olié E, and Courtet P. Association of clinical characteristics, depression remission and suicide risk with discrepancies between self- and clinician-rated suicidal ideation: Two large naturalistic cohorts of outpatients with depression. Psychiatry Res. (2024) 335:115833. doi: 10.1016/j.psychres.2023.115833
5. Levkovich I and Omar M. Evaluating of BERT-based and large language models for suicide detection, prevention, and risk assessment: A systematic review. J Med Syst. (2024) 48:113. doi: 10.1007/s10916-024-02067-6
6. Shinan-Altman S, Elyoseph Z, and Levkovich I. Integrating previous suicide attempts, gender, and age into suicide risk assessment using advanced artificial intelligence models. J Clin Psychiatry. (2024) 85:57125. doi: 10.4088/JCP.23m15364
7. Levkovich I, Shinan-Altman S, and Elyoseph Z. Can large language models be sensitive to culture suicide risk assessment? J Cultural Cogn Sci. (2024) 8:275–87. doi: 10.1007/s41809-024-00136-3
8. Shinan-Altman S, Elyoseph Z, and Levkovich I. The impact of history of depression and access to weapons on suicide risk assessment: A comparison of ChatGPT-3.5 and ChatGPT-4. PeerJ. (2024) 12:e17468. doi: 10.7717/peerj.17468
9. Nobile B, Jaussent I, Kahn JP, Leboyer M, Risch N, Olié E, et al. Risk factors of suicide re-attempt: A two-year prospective study. J Affect Disord. (2024) 356:535–44. doi: 10.1016/j.jad.2023.12.187
10. Omar M, Soffer S, Agbareia R, Bragazzi NL, Apakama DU, Horowitz CR, et al. Sociodemographic biases in medical decision making by large language models. Nat Med. (2025) 31:1873–81. doi: 10.1038/s41591-025-03626-6
11. Schnepper R, Roemmel N, Schaefert R, Lambrecht-Walzinger L, and Meinlschmidt G. Exploring biases of large language models in the field of mental health: comparative questionnaire study of the effect of gender and sexual orientation in anorexia nervosa and bulimia nervosa case vignettes. JMIR Ment Health. (2025) 12:e57986. doi: 10.2196/57986
Keywords: suicide prevention, generative AI (GenAI), large language models (LLM), risk assessment, mental health, psychosocial autopsy, digital psychiatry
Citation: Levkovich I, Elyoseph Z, Lauderdale S, Meinlschmidt G, Nobile B, Hadar Shoval D, Levi-Belz Y, Shinan-Altman S and Grodniewicz JP (2025) Editorial: Empowering suicide prevention efforts with generative AI technology. Front. Psychiatry 16:1643893. doi: 10.3389/fpsyt.2025.1643893
Received: 09 June 2025; Accepted: 06 August 2025;
Published: 26 August 2025.
Edited and reviewed by:
Andreea Oliviana Diaconescu, University of Toronto, Canada
Copyright © 2025 Levkovich, Elyoseph, Lauderdale, Meinlschmidt, Nobile, Hadar Shoval, Levi-Belz, Shinan-Altman and Grodniewicz. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Inbar Levkovich, inbar.lev2@gmail.com