OPINION article

Front. Hum. Dyn.

Sec. Digital Impacts

Volume 7 - 2025 | doi: 10.3389/fhumd.2025.1519872

This article is part of the Research TopicThe Role of Artificial Intelligence in Everyday Well-BeingView all 6 articles

The Importance of Justified Patient Trust in Unlocking AI's Potential in Mental Healthcare

Provisionally accepted
Niko  MännikköNiko Männikkö1*Tita  Alissa BachTita Alissa Bach2
  • 1Oulu University of Applied Sciences, Oulu, Finland
  • 2DNV, Høvik, Norway

The final, formatted version of the article will be published soon.

"Mental health is a basic human right." WHO mental health (1) Mental health is "a state of mental well-being that enables people to cope with the stresses of life, realize their abilities, learn well and work well, and contribute to their community" (1) The WHO reports that in 2019 an estimated 970 million people worldwide were affected by mental disorders, with anxiety and depression being the most prevalent. For example, 1:5 U.S. adults and 1:6 individuals in Europe live with a mental illness (2,3). The World Economic Forum projects that these conditions will contribute to a cumulative global economic loss of $16.3 trillion between 2011 and 2030 (4). A recent study indicates rising suicide rates among individuals aged 10-24 across the UK, the USA, much of Central Latin America, and Australasia (5).As mental health challenges continue to increase in number and complexity, the shortage of mental healthcare providers has become more acute, creating gaps in care (6). Artificial Intelligence (AI)-enabled systems 1 or, AI systems, have the potential to revolutionize mental healthcare by addressing these gaps, offering solutions that range from digital diagnostics to therapeutic tools (7). AI systems have been used to help mental healthcare by directly interacting with patients through self-management mobile health apps to aid in the treatment of depression, anxiety, post-traumatic stress disorders, sleep disorders, and suicidal behaviors (8,9). They also assist in diagnosing behaviors or responses associated with mental health conditions, developing risk profiles, and deploying context-specific interventions (10).However, the success of these AI-driven innovations hinges on one crucial factor: patient trust. Without trust, patients may hesitate to engage with AI systems, limiting the technology's impact. Real-world cases have already highlighted the risks of diminished trust. For instance, the National Eating Disorders Association (NEDA) recently removed AI chatbot, Tessa, from a support hotline after concerns arose that it was providing harmful advice, potentially exacerbating the conditions of vulnerable users who were patients with eating disorders (11). Similarly, Sonde Health's voice analysis AI, which uses vocal biomarkers to assess depression, has been criticized for overlooking the diverse speech patterns of nontypical users, such as those with disabilities or regional and non-native speech differences (12). In addition, patient concerns about data privacy and potential biases in AI systems, how patient data is used, and the potential for AI systems to perpetuate existing inequalities have been reported as key trust barriers (6). These examples highlight the fragility of trust in AI systems, particularly in the sensitive domain of mental health, where patient vulnerability is already high at baseline (13).Trust is delineated as the "willingness to render oneself vulnerable" to a capability, founded on an evaluation of congruence in intentions or values (14). Trust relationships can be established among individuals and between individuals and technology (15). Trust is often described as a connection between a trustor and a trustee, with the hopeful anticipation that the trustee will meet the trustor's expectations (16). Trust relationships usually do not have legally binding obligations and are therefore susceptible to deceit. As a result, various factors contribute to and affect the dynamic of trust relationships.Here, we focus on the trust placed in AI systems by mental health patients who are also the direct users, highlighting the most sensitive and direct relationship between AI systems and those whose mental healthcare is impacted by them. In mental healthcare, AI systems can be used by mental health professionals (17,18), patients, and patients' families or caregivers (19). However, when patients are not 1 Any system that contains or relies on one or more AI components. AI components are distinct units of software that perform specific functions or tasks within an AI-enabled system. They consist of a set of AI models, data, and algorithms, which, through implementation, create the AI component (32) the direct users of these systems, their trust in them is likely to be indirect and mediated by their trust in the healthcare professionals or family members who utilize AI systems on their behalf. Patients as users have different trust needs and significantly higher risks in using these systems than do non-patient users.Trust in this context is thus delicate, as patients' emotional and cognitive states may make them particularly vulnerable to the risks of over-reliance on AI-enabled systems. Patients are usually the primary stakeholders of many AI-enabled mental health applications, irrespective of who the users may be (20). The user experience for patients is deeply personal, and their trust in these systems directly influences their engagement and, ultimately, the outcomes of their care. This is because patient commitment to actively engage in the care plan is the single most critical determinant of positive outcomes (10). Therefore, building patient trust is not only beneficial for empowering patients and giving them a sense of control over their care, it is absolutely vital for ensuring successful and meaningful care outcomes (10).In some cases, giving patients access to use AI systems without any support from mental healthcare professionals or caregivers requires careful consideration (21). Such cases arise when patients are deemed clinically incapable of making their own decisions. This complicates the trust equation, as patients may no longer be seen as users of an AI system even if they interact directly with it. As a result, their caregivers may be viewed as the users.While patients' lack of trust may slow the adoption of AI systems in mental healthcare, a more significant concern arises when patients "trust incorrectly" (22). Initial hesitation or scepticism is a natural and expected reaction when humans encounter new or unfamiliar technology, making it easier to anticipate and address. However, the risks associated with overtrust or blind trust in AI systems are less frequently discussed (23), which could lead to serious consequences. For example, the Dutch childcare benefits scandal, where thousands of low-income families were wrongly accused of fraud due to a biased algorithm, led to victims committing suicide, suffering severe mental health issues, and the removal of their children into foster care (24). A similar case happened with the Swedish Social Insurance Agency's algorithm (25).Trusting correctly means that the trust is justified and based on evidence, knowledge, experiences and/or skills (22,26,27). In mental healthcare, this means that while patients may place trust in an AI system, they should still engage in critical thinking while interpreting the output of the AI system. Such critical thinking can be encouraged by providing patients with skills to understand AI systems' capabilities and limitations (6), so that patients can recognize deviations of AI systems' operations and output.Justified trust requires transparency, reliability, and appropriate human oversight rather than blind reliance on AI outputs. For example, Sonde Health's voice analysis AI claims to offer "objective" depression detection by analyzing vocal biomarkers (28). However, if users assume its outputs are definitive diagnoses rather than probabilistic assessments, this could lead to misplaced trust. To foster justified trust, these systems must clearly communicate their limitations, and patients should retain control over their data, with options to review, modify, or delete AI-collected information (6). In care treatments where the relationships between patient-professionals are fundamental to positive outcomes, such as in psychotherapy (29,30), it is important to also provide patients with access to mental health professionals, alongside patients' use of AI systems (30).However, there is still too little research investigating the effectiveness of various AI systems in mental healthcare to build evidence (10). Forming justified trust thus needs to depend on users' and domain experts' experiences, knowledge and/or skills with the hope that over time they can build evidence on the positive and negative effects of an AI system on patients' mental health (22,26,27). Integrating patient AI literacy can be achieved through interactive educational modules within applications, offering insights into system capabilities, limitations, and evaluation best practices (6,10). A structured framework with updates, tailored learning, and feedback can sustain engagement and foster justified trust in AI (10).User trust in AI systems is dynamic and can change over time (14,31). A review by Cabiddu et al. (2022) highlights that initial trust is shaped by users' propensity to trust, the presence of human-like features, and the perceived usefulness of the system (31). Human-like traits enhance emotional connections, making AI interactions more familiar and trustworthy. Over time, trust is further influenced by social factors, familiarity, and system reliability (31). Users assess whether AI performance aligns with initial expectations, fostering justified trust based on experience and knowledge (26).As users become more familiar with AI systems, especially if they have strong social support to encourage continued use and develop a positive perception of its usefulness through consistent, reliable, and predictable AI output, sustained user trust is established. In such a scenario, established user trust can still transition into user distrust or mistrust, particularly when AI systems make errors that directly impact users, or when overtrust occurs, such as when users under time pressure and/or with low cognitive capacity act upon AI output without any evaluation or judgment (32).To maintain justified trust, it is crucial to continually promote critical thinking so that users may base their evaluation rooted in collected evidence, if any, as well as knowledge, experiences, and/or skills. Patient education on AI's capabilities and limitations and the incorporation of patients' feedback to improve the systems are extremely valuable for maintaining justified trust. Incorporating feedback can be done by, for example, allowing users to rate AI systems' responses and flag inaccuracies (33), which then can be used to improve the AI systems' ability to retrieve and present more relevant information (34,35).The downside of maintaining justified trust in this manner is that it requires a high cognitive load, and it depends on the patients' ability to think critically each time. This can become an issue for mental health patient as patients may use AI systems in their most vulnerable conditions, when their cognitive capacity is likely limited.It is only ethical and responsible to develop, deploy, and continuously improve AI systems together with patients (32), especially to understand what influences patients' cognitive capacity and critical thinking when using AI systems. It is crucial to match specific user populations' characteristics and needs to the design of AI systems, specifically AI interface and features where human-AI interaction happens (14,32).For example, this can be done by identifying users' needs to determine which key aspects of AI output are to be displayed, or not, in the interface.An AI system for patients with sensory sensitivity should use fit-for-purpose visuals and audio, avoiding bright colors, loud noises, and overstimulating displays. AI systems for PTSD or trauma can gradually introduce challenging topics as trust builds rather than overwhelming patients. Customizable trigger detection allows patients to specify distressing words, topics, or stimuli, enabling AI systems to adjust accordingly. All these examples show the importance of embedding an AI feature to personalize user interface based on users' preferences, as well as giving control to users to decide what, how much, how, and when preferred information is to be presented to them (32). Such personalization can help patients to evaluate AI output without requiring additional workload and within their cognitive capacity at the time of use, maintaining their justified trust. Developers can use adaptive learning models (36) that adjust responses based on user interactions and multimodal AI systems that combine voice, text, and biometric inputs for tailored recommendations (37). For example, AI-driven therapy platform Woebot adapts to users' mood patterns (38), ensuring more contextually relevant support.Given that mental healthcare already presents unique ethical and legal challenges, the integration of AI systems demands scrutiny and fit-for-purpose regulation (21). Regulators play a crucial role in ensuring that AI development and deployment adhere to responsible and ethical principles (21). For instance, they are responsible for verifying that the claimed benefits of using AI systems, particularly those made by forprofit vendors, are true.Since the use of AI systems in mental healthcare is still emerging, creating structured platforms for stakeholders to exchange insights is essential for identifying both obstacles and best practices (39). Future efforts should focus on evaluating real-world effectiveness, understanding long-term impacts on patient outcomes, and mitigating biases in AI-driven decision-making.In conclusion, ensuring that AI systems provide personalized, clinically effective care while maintaining justified user trust is fundamental. Continued interdisciplinary collaboration between researchers, clinicians, and policymakers is key to maximizing AI's benefits while safeguarding patient well-being.

Keywords: Trust, Trustworthy AI, AI system, patient engagement, Ethics

Received: 30 Oct 2024; Accepted: 14 May 2025.

Copyright: © 2025 Männikkö and Bach. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Niko Männikkö, Oulu University of Applied Sciences, Oulu, Finland

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.