Editorial article

Front. Hum. Dyn., 30 July 2025

Sec. Digital Impacts

Volume 7 - 2025 | https://doi.org/10.3389/fhumd.2025.1654355

This article is part of the Research Topic "Ethical Dilemmas of Digitalisation of Mental Health".

Editorial: Ethical dilemmas of digitalisation of mental health

Pragya Lodha1* and Apurvakumar Pandya2

  • 1Lokmanya Tilak Municipal General Hospital, Mumbai, India
  • 2Department of Public Health, Indian Institute of Public Health Gandhinagar, Gandhinagar, India

Editorial on the Research Topic
Ethical dilemmas of digitalisation of mental health

Advancement with ethical considerations

The digitalisation of mental healthcare has emerged as a promising avenue to address the growing global burden of mental illness. Mobile apps, online therapy platforms, and chatbots offer increased accessibility, anonymity, and the potential for personalized interventions. However, this technological revolution presents a complex ethical landscape that necessitates careful consideration (Torous and Firth, 2018). This Research Topic presents: an evidence synthesis on ChatGPT for diagnosing mental health conditions and providing psychotherapy; applications of artificial intelligence (AI) for positive mental health; the digitalisation of brains (the impact of technology on mental health and brain functioning); a comprehensive bibliometric analysis of ethical concerns and dilemmas in the digital mental health domain; and a research measure for formulating ethical regulations for digital mental health services.

Thakkar et al. explore the role of AI in awareness, diagnosis, and intervention across various psychiatric disorders, neurodegenerative disorders, and intellectual disabilities, including schizophrenia, autism spectrum disorders, and mood disorders. Boulos discusses the crucial topic of how accelerated technology use is associated with cognitive and affective alterations in apparently healthy individuals, including increased feelings of isolation and stress, memory and attention deficits, and modifications in information and reward processing. Pandya et al. address the internet trend of the year, ChatGPT, commenting on the ethical challenges it raises for mental healthcare at various levels and outlining six major concerns, from the lack of accurate diagnosis of mental health conditions to the need to bridge the research gap. Sharma et al. present a comprehensive bibliometric analysis of scholarly articles on the ethical concerns and dilemmas of digital mental healthcare, using data extracted from the Scopus database for 2000 to 2024. Bapat and Jog propose a 30-item research measure, developed using Principal Component Analysis, to bridge a gap in the literature with a data-driven, empirical approach to formulating ethical regulations for digitized mental health services. The study's findings are intended to help developers of digital mental health apps, and organizations offering such services, ensure ethical standards and communicate them effectively to potential consumers.

We outline below some of the key ethical dilemmas uncovered by the above articles and suggest some strategies to navigate them while maximizing the benefits of technological advancements.

Data breaches and misuse of information

One of the primary concerns surrounding digital mental health tools is the potential for data breaches and the misuse of sensitive personal information. Mental health data are highly confidential, and their unauthorized exposure can exacerbate social stigma and discrimination and threaten individuals' employment (Ebert and Michaelis, 2020). To ensure ethical use, developers must prioritize robust data encryption protocols, implement clear opt-in mechanisms for data collection, and be transparent about data storage and usage practices (Chowdhary et al., 2020). Regulatory bodies need to establish and enforce data protection frameworks specific to mental health apps and platforms.
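
To make these two safeguards concrete, here is a minimal sketch in Python of encryption at rest combined with an explicit opt-in gate. It is our illustration rather than anything proposed in the articles above: the `cryptography` package's Fernet API is real, but `SecureStore`, `MoodEntry`, and the storage logic are hypothetical.

```python
# Minimal sketch: encrypt mental health entries at rest and refuse to store
# anything without an explicit opt-in. Illustrative only; requires
# `pip install cryptography`. Class and field names are hypothetical.
from dataclasses import dataclass
from cryptography.fernet import Fernet


@dataclass
class MoodEntry:
    user_id: str
    text: str


class SecureStore:
    def __init__(self, key: bytes):
        self._fernet = Fernet(key)
        self._records: dict[str, list[bytes]] = {}
        self._opted_in: set[str] = set()

    def record_consent(self, user_id: str) -> None:
        # Opt-in must be an affirmative act, never a pre-ticked default.
        self._opted_in.add(user_id)

    def save(self, entry: MoodEntry) -> None:
        if entry.user_id not in self._opted_in:
            raise PermissionError("No opt-in consent on file; refusing to store data.")
        # Encrypt before the entry ever reaches disk or a remote database.
        token = self._fernet.encrypt(entry.text.encode("utf-8"))
        self._records.setdefault(entry.user_id, []).append(token)


key = Fernet.generate_key()  # in production, fetch from a managed key service
store = SecureStore(key)
store.record_consent("user-42")
store.save(MoodEntry("user-42", "Felt anxious before the appointment."))
```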

Misdiagnosis and inappropriate treatment recommendations

Many digital mental health tools rely on algorithms to assess user needs and deliver interventions. However, algorithms can inherit and perpetuate societal biases present in the datasets used to train them. This can lead to misdiagnosis, inappropriate treatment recommendations, and widening of existing disparities in mental healthcare access and quality (Blease, 2020). Mitigating this risk requires employing diverse datasets that represent the intended user population and conducting ongoing audits to identify and address potential biases within algorithms. It also requires continuously adapting algorithms to local cultural contexts in order to dismantle the multiple layers of bias, stereotype, and stigma around mental health.
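
One concrete form such an audit can take is comparing error rates across demographic groups. The short Python sketch below is our own hypothetical illustration, not a method from the articles in this Research Topic: it computes a screening model's false-negative rate per group, since a missed diagnosis is often the costliest error in mental health screening, and flags groups that exceed an arbitrary, illustrative tolerance.

```python
# Minimal audit sketch: per-group false-negative rates for a screening model.
# Data, group labels, and the 0.2 tolerance are illustrative assumptions.
from collections import defaultdict


def false_negative_rates(records):
    """records: iterable of (group, true_label, predicted_label); label 1 = condition present."""
    misses = defaultdict(int)     # true cases the model failed to flag
    positives = defaultdict(int)  # true cases seen per group
    for group, truth, pred in records:
        if truth == 1:
            positives[group] += 1
            if pred == 0:
                misses[group] += 1
    return {g: misses[g] / positives[g] for g in positives}


audit = false_negative_rates([
    ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1),
])
for group, fnr in audit.items():
    flag = "  <-- review for bias" if fnr > 0.2 else ""  # illustrative tolerance
    print(f"{group}: false-negative rate {fnr:.2f}{flag}")
```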

Irreplaceable human connection

While digital tools offer valuable support, are cost-effective, and make mental health services far more accessible, the lack of a human element remains a crucial challenge to effective mental health treatment. Psychologists, counselors, and therapists working face to face have an edge over digital platforms: they provide a safe space for exploration, build trust and rapport, and tailor interventions to individual needs. Growing over-dependence on apps and chatbots can delay diagnoses, create a false sense of progress, and sideline the in-person therapeutic process facilitated by mental health professionals (Torous and Keshavan, 2019). Another challenge, despite the evolution of artificial intelligence, is that digital platforms and bots lack the genuine empathy and ingrained morals and values with which a human therapist inevitably approaches sensitive socio-political and cultural concerns. Digital platforms should aim to complement, not replace, the in-person presence of mental health professionals.

The digital divide and accessibility problem

The potential benefits of digital mental health tools can be limited by the digital divide: the gap between those who have access to and use technology and those who do not. Unequal access to technology and the internet risks exacerbating existing disparities in mental healthcare, a real challenge given that only about 60% of the global population has internet access. The digital divide may result from geographic disparities (urban, rural, and tribal geographies; developed, developing, and poor countries); socio-economic disparities (income level, education and digital literacy, age and generational gaps); and gender and ethnic disparities (gender minorities, marginalized populations, lower castes, language barriers, and cultural prohibitions). Individuals with limited access to technology or internet connectivity, or who lack the digital literacy to navigate these platforms, risk being excluded from mental healthcare service delivery. To ensure inclusivity, initiatives that subsidize devices, offer data-free access to mental health platforms, and provide digital literacy training are essential (Landers et al., 2022). Bridging the digital divide calls for investing in infrastructure and expanding internet connectivity, promoting digital literacy and skills development, ensuring affordability and accessibility, and fostering inclusive content and applications.

Apart from implementing solutions to address the above challenges, we propose multi-pronged strategies for promoting ethical use and developing a global digital mental health ecosystem.

Training of developers as well as mental healthcare service providers

Advanced technologies alone are not enough to make digital mental healthcare effective and secure. Developers need training in data privacy and security best practices, including encryption protocols and user consent procedures. They also need to understand potential biases in algorithms and how to mitigate them through diverse data collection, responsible AI development, and validation and evaluation frameworks for establishing accuracy and reliability (Ganatra et al., 2024). Mental health professionals, in turn, require training to evaluate the efficacy and limitations of digital tools, integrate them effectively into the therapeutic process, and identify situations where human intervention remains necessary (Jobin et al., 2019). By fostering such training and research, we can promote a development and implementation process that prioritizes user safety and ethical considerations from the outset and does not hamper individuals' progress despite the digital interface. Indeed, collaboration between developers, mental health professionals, and mental health researchers is warranted for generating evidence and promoting ethical, evidence-informed mental healthcare services.

Generating evidence on patient safety and cost-effectiveness

Promoting multidisciplinary research is essential for customizing digital tools to the needs of patients and mental healthcare service providers. By equipping developers and mental health professionals with the necessary knowledge, and by promoting research into the digital mental health domain and user training, we can ensure that technological advancements in mental health are not only innovative but also address the ethical challenges of the digital age. As mental health services are digitalised, stronger evidence is needed to ensure they remain high quality and effective. Ethical considerations need to be embedded in the design and testing phases of digital mental health tools (Torous and Keshavan, 2019). Any research or data collection from users requires an in-built mechanism that informs patients or users about the risks, benefits, and alternative mental healthcare services, together with a declaration that the tool is not a replacement for mental healthcare professionals in diagnosis, treatment, rehabilitation, and reintegration. Failing to obtain informed consent can result in mistrust between patients and mental healthcare service providers (Drake, 2023).
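
As a rough illustration of such an in-built mechanism, assuming nothing beyond the requirements stated above, the Python sketch below refuses to collect research data until the user has been shown the risks, benefits, alternatives, and the not-a-replacement declaration, and it keeps a timestamped record of exactly what was acknowledged. All names and wording in it are our own hypothetical assumptions.

```python
# Minimal consent-gate sketch: disclose, require acknowledgment, keep an
# auditable record. Field names and disclosure text are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

DISCLOSURE = {
    "risks": "Data breaches, misinterpretation of automated feedback.",
    "benefits": "Anonymous, low-cost access to self-help resources.",
    "alternatives": "In-person therapy, helplines, community mental health services.",
    "declaration": ("This app is not a replacement for a mental healthcare "
                    "professional for diagnosis, treatment, or rehabilitation."),
}


@dataclass
class ConsentRecord:
    user_id: str
    acknowledged: dict
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def obtain_consent(user_id: str, user_acknowledges: bool) -> ConsentRecord:
    if not user_acknowledges:
        raise PermissionError("Consent declined; no research data may be collected.")
    # Store exactly what was shown, so the consent is auditable later.
    return ConsentRecord(user_id=user_id, acknowledged=dict(DISCLOSURE))


record = obtain_consent("user-42", user_acknowledges=True)
print(record.timestamp, "-", record.acknowledged["declaration"])
```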

Moreover, generating robust evidence on the cost-effectiveness of digital health technologies is crucial; hence, multidisciplinary research is needed.

Regulatory mechanism

The global digital landscape is rapidly evolving, necessitating a robust regulatory framework to ensure the ethical use of digital tools and platforms. Established regulatory frameworks include the European Union's General Data Protection Regulation (GDPR),1 the United States' Health Insurance Portability and Accountability Act (HIPAA),2 and India's Digital Personal Data Protection Act (DPDP) 2023 (Ministry of Law and Justice, 2023). These frameworks address data privacy, security, and the ethical use of patient data. However, the rapid evolution of digital technologies necessitates a more comprehensive and global approach. A harmonized cross-national regulatory mechanism would ensure consistent standards, facilitate data sharing for research and improvement, and prevent regulatory arbitrage, in which companies exploit loopholes in national regulations. Such a mechanism could involve international collaborations, the development of global ethical guidelines, and the establishment of independent oversight bodies. Effective oversight of the digital ecosystem demands a multi-stakeholder approach involving government, industry, academia, and civil society.

The digitalisation of mental health presents a significant opportunity to expand access to care, promote mental wellbeing, and personalize interventions. This would not just combat stigma but also significantly reduce the burden of mental illness while bridging the overarching treatment gap. However, this can be achieved efficiently and sustainably only if ethical considerations are prioritized. Robust data protection, unbiased algorithms, inclusivity, and a focus on human-centric care are the challenges ahead of us in an era of growing artificial intelligence in the mental health landscape.

Author contributions

PL: Writing – original draft. AP: Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that no financial support was received for the research and/or publication of this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that no Generative AI was used in the creation of this manuscript.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Footnotes

1. ^General Data Protection Regulation (GDPR). Available online at: https://www.eumonitor.eu/9353000/1/j4nvk6yhcbpeywk_j9vvik7m1c3gyxp/vk3t7p3lbczq.

2. ^Health Insurance Portability and Accountability Act (HIPAA). Available online at: https://aspe.hhs.gov/reports/health-insurance-portability-accountability-act-1996.

References

Blease, C. (2020). 5 Insights into Digital Mind: AI Impact on Mental Health Ethics in the 21st Century. HyScaler.

Chowdhary, N., Agarwal, A., and Bansal, S. (2020). Examining ethical and social implications of digital mental health technologies through expert interviews and sociotechnical systems theory. Int. J. Mental Health Addict. 1–13.

Drake, K. (2023). Use of AI Chatbot by Mental Health Company Causes Ethical Concerns. Available online at: https://healthnews.com/news/use-of-ai-chatbot-by-mental-health-company-causes-ethical-concerns/ (Accessed May 10, 2025).

Ebert, S., and Michaelis, D. (2020). Navigating the ethical landscape of digital therapeutics. Psychiatr. Times 37, 1–5.

Ganatra, A., Panchal, B., and Doshi, D. (2024). “Introduction to explainable AI,” in Explainable AI in Health Informatics. Computational Intelligence Methods and Applications, eds. R. Aluvalu, M. Mehta, and P. Siarry (Singapore: Springer).

Jobin, A., Ienca, M., and Vayena, E. (2019). The global landscape of AI ethics guidelines. Nat. Mach. Intell. 1, 389–399. doi: 10.1038/s42256-019-0088-2

Landers, C., Wies, B., and Ienca, M. (2022). "Chapter 16 - Ethical considerations of digital therapeutics for mental health," in Digital Therapeutics for Mental Health and Addiction: The State of the Science and Vision for the Future, eds. N. Jacobson, T. Kowatsch, and L. Marsch (London: Academic Press). doi: 10.1016/B978-0-323-90045-4.00007-1

Ministry of Law and Justice (2023). The Digital Personal Data Protection Act. Available online at: https://www.meity.gov.in/static/uploads/2024/06/2bf1f0e9f04e6fb4f8fef35e82c42aa5.pdf (Accessed May 10, 2025).

Torous, J. N., and Firth, J. (2018). Ethical dilemmas in digital mental health. Int. Rev. Psychiatry 30, 1–6. doi: 10.1080/09540261.2017.1397734

Torous, J. N., and Keshavan, M. S. (2019). Precision medicine for mental health: challenges and opportunities. World Psychiatry 18, 104–111.

Keywords: ethical dilemmas, digitalisation, mental health, technological advancement, ethics in digital mental health

Citation: Lodha P and Pandya A (2025) Editorial: Ethical dilemmas of digitalisation of mental health. Front. Hum. Dyn. 7:1654355. doi: 10.3389/fhumd.2025.1654355

Received: 26 June 2025; Accepted: 14 July 2025;
Published: 30 July 2025.

Edited and reviewed by: Peter David Tolmie, University of Siegen, Germany

Copyright © 2025 Lodha and Pandya. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Pragya Lodha, pragya6lodha@gmail.com
