Abstract
Evidence-Based Practice (EBP) has increased the availability of psychological treatments, yet many people do not benefit from therapy, some report deteriorations in symptoms, and dropout rates remain high. Precision Mental Health (PMH) is proposed as an extension of EBP by combining systematic measurement with predictive analytics to support the right intervention, at the right time, for the right person. Recent advances in Artificial Intelligence (AI) make PMH increasingly feasible in routine psychotherapy; however, the implementation of these approaches in routine care is still incipient. In this context, the present article has two main aims. First, we summarize key advances in PMH, particularly measurement-based care and data-informed decision making. Second, we introduce the NOVA project (Navigating Outcomes via Analytics), a multi-phase translational program designed to implement PMH in real-world psychological services. Guided by the Implementing Precision Methods framework, NOVA integrates (i) stakeholder-informed work on clinician acceptability and intention to use, (ii) pragmatic evaluation of decision support tools in routine care, (iii) development of robust and interpretable predictive models, and (iv) training and dissemination activities aligned with responsible innovation and professional competencies for AI-supported precision care. By detailing NOVA’s implementation pathway, we aim to provide a concrete roadmap for bridging AI innovation and psychological practice, accelerating the sustainable adoption of PMH in real-world settings.
Introduction
Over the last three decades, Evidence-Based Practice (EBP) has been the dominant paradigm in mental healthcare. Randomized controlled trials (RCTs), meta-analyses and clinical guidelines have provided a robust answer to the question: “On average, does this treatment work better than no treatment or an alternative?” Multiple treatments (e.g., cognitive-behavioral therapy, mindfulness, pharmacotherapy) show moderate average effects in the treatment of depression and anxiety disorders, leading to guidelines recommending them as first-line interventions (e.g., National Institute for Health and Care Excellence, 2022; American Psychological Association, 2019; World Health Organization, 2019). However, classic EBP has three well-documented limitations in informing clinical practice:
(1) It is built on group averages, but we treat individuals: unsurprisingly, psychological treatments designed for the average patient often lead to average results, and interventions that demonstrate efficacy for one individual may be less effective or even iatrogenic for another, underscoring the limitations of the ‘one-size-fits-all’ approach (Purgato et al., 2021). Evidence suggests that nearly half of patients undergoing psychological therapy fail to achieve clinically meaningful improvements and approximately 10% experience worsening symptoms (Schwartz et al., 2021).
(2) Treatment selection often relies on ‘trial and error’: although many outcome predictors have been identified, this knowledge has not yet translated into real-world treatment recommendations guiding which therapy to choose for whom (DeRubeis et al., 2014). Partly as a result, therapy non-completion (i.e., unilateral termination of therapy by the patient) remains a significant issue, with dropout rates estimated at around 30–40% (Cooper and Conklin, 2015). Importantly, a proportion of dropout occurs at very early stages of care (often within the first one to two sessions), before a specific treatment approach could reasonably be implemented, which may reflect factors such as expectations, logistics, or early therapeutic fit rather than ‘trial-and-error’ with treatment techniques per se (Fernandez et al., 2021).
(3) Complex real-world presentations exceed guideline simplicity: predictors of outcome such as contextual factors, comorbidity, chronicity, treatment history, and service constraints interact with treatment characteristics and therapist factors in complex ways that are rarely captured by group analyses.
In practice, this means that applying “the treatment of choice for diagnosis X” often leaves a significant proportion of patients insufficiently helped, and guidelines often under-specify how to personalize the intervention or what to do next when a patient is not improving. Surprisingly, despite improved and more widely disseminated psychological treatments, rates of depressive and anxiety disorders in the general population have not fallen, a finding described as the “treatment–prevalence paradox” (Ormel et al., 2022). Taken together, these findings suggest that the traditional EBP model is insufficient and that the lack of personalization remains a critical factor undermining the effectiveness of prevention, diagnosis, and treatment efforts.
Personalization has always been part of psychological practice, but it has been relatively unsystematic and guided primarily by clinical judgment. However, there is growing evidence showing that clinical judgment alone is limited for complex decisions such as when to change course in treatment, how to match a patient to a therapist, or which intervention components to prioritize. Classic work comparing clinical versus statistical prediction indicates that mechanical or actuarial methods of combining data are about 10% more accurate than unaided clinical judgment, that is, experience-based predictions made without explicit actuarial rules, statistical models, or algorithmic decision support (Grove et al., 2000). In psychotherapy, studies repeatedly find that therapists substantially underestimate the proportion of individuals who will deteriorate and overestimate the likelihood of improvement (Østergård et al., 2024). Moreover, large naturalistic datasets suggest that therapists’ average outcomes do not improve over years of experience (Goldberg et al., 2016), challenging the assumption that expertise alone corrects this bias. Even well-established EBPs are often under-implemented or delivered with substantial variability across routine care settings, reinforcing the science-to-practice gap and limiting population-level impact (Herschell et al., 2010).
Given the limitations of traditional EBP and of clinical judgment alone, a family of “precision-based approaches” has emerged as a natural extension, guiding more tailored and data-informed decisions for each individual patient. The concept of “precision” comes originally from general medicine. Precision Medicine aims to tailor prevention and treatment to the characteristics of each individual, using information on genes, environment and lifestyle to move beyond one-size-fits-all protocols (Collins and Varmus, 2015). In mental health, this idea has evolved into Precision Mental Health (PMH; also known as Precision Psychiatry or Precision Psychology), an approach to prevention and intervention that seeks a more accurate understanding of each person’s needs, context, preferences, risk profile, and likely prognosis, in order to guide more precise decisions about assessment, treatment selection, monitoring and adaptation (Bickman et al., 2016). PMH aims to deliver the right intervention, at the right time, by the right clinician, for a given individual, using data to inform these choices rather than relying solely on averages or intuition (Delgadillo and Lutz, 2026, in press). This is not a rejection of EBP, but an extension of it: it aims to determine which evidence-based option, in what sequence or combination, is likely to be optimal for this particular person, and how we can update that plan as new data arrive. Because EBP is always delivered in the context of a real encounter (i.e., a conversation between clinician and patient), the evidence must be translated and personalized in that dialog (León-García et al., 2023).
What does precision mental health look like in clinical practice?
Conceptually, the field has evolved along a development pathway: from identifying prognostic factors, to building and validating prediction models, and ultimately to implementing decision support tools that can guide treatment selection and ongoing adaptation in routine care (Delgadillo and Lutz, 2020). Within this pathway, conceptual and empirical work converges on two main pillars of PMH services (Bickman et al., 2016; Lutz et al., 2022a; Lutz et al., 2024):
Measurement-Based Care (MBC): in PMH, assessment is not an optional add-on but the infrastructure that makes prediction, personalization and adaptive intervention possible. MBC is the systematic assessment of clinically relevant information (e.g., outcomes, processes, preferences, context), becoming the backbone of data-informed decision-making. MBC integrates three core features (Barber and Resnick, 2023): (1) Collect: collecting patient data on a regular basis; (2) Share: providing visual feedback on psychotherapy progress; and (3) Act: adapting the focus or direction of therapy based on the feedback received (see Figure 1A). Compared to traditional practice, where progress is judged primarily by clinical judgment, MBC enables timely adjustment of treatment intensity or format by making change visible early. It also strengthens shared decision-making, as progress feedback (e.g., brief progress graphs or risk alerts) can be discussed collaboratively with patients to refine goals and expectations and to guide adaptations in the treatment plan. Finally, when aggregated across patients and clinicians, MBC data creates a naturalistic learning system that can inform service-level quality improvement and the refinement of prognostic tools and personalization strategies over time. In addition to clinical benefits, aggregated MBC data can support organizational decision-making by demonstrating service impact, informing program leadership and policy discussions, and strengthening the justification for implementation resources (e.g., training time, technology procurement) or continued funding.
Data-Driven Decision Making (DDDM): continuous assessment generates valuable data, but data alone does not guarantee better decisions. Therefore, the second building block of PMH is the use of predictive models to transform raw measurements into clinically usable recommendations. DDDM refers to the structured use of systematically collected data combined with algorithms and statistical models to inform decisions about care while keeping psychologists accountable for final judgment and action (Tanguay-Sela et al., 2022). As mentioned above, a substantial body of research has consistently demonstrated that statistical models frequently outperform clinical judgment, particularly among less experienced practitioners. Ranging from relatively simple prognostic models to more complex Artificial Intelligence (AI) and Machine Learning (ML) algorithms, these models can be used to estimate likely outcome trajectories, to inform treatment selection, treatment intensity or modality in stepped-care models, to support therapist–patient matching, or to trigger timely feedback and clinical risk alerts (see Figure 1B). This rationale closely aligns with large-scale, data-informed care pathways such as the IAPT/NHS Talking Therapies program (Wakefield et al., 2021), where routine outcome monitoring and algorithmic decision support are increasingly used to guide stepped-care decisions.
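To make this concrete, the following minimal Python sketch illustrates the kind of “not-on-track” rule a decision support tool might apply to routinely collected outcome scores. The exponential expected-response curve, the decay rate, and the tolerance band are purely illustrative assumptions for this sketch, not parameters of any validated model or of the tools cited in this article.

```python
# Illustrative sketch of a "not-on-track" alert, the kind of rule a PMH
# decision support tool might apply to routine outcome measures (e.g., a
# depression symptom score). All parameters below are hypothetical.

def expected_score(baseline: float, session: int, rate: float = 0.15) -> float:
    """Expected symptom score at a given session, assuming exponential decay
    toward remission (a common shape for expected-treatment-response curves)."""
    return baseline * (1 - rate) ** session

def on_track(baseline: float, observed: list[float], tolerance: float = 3.0) -> list[bool]:
    """Compare each observed score with the expected trajectory.
    True = on track; False = 'off-track' alert for clinician review."""
    flags = []
    for session, score in enumerate(observed, start=1):
        flags.append(score <= expected_score(baseline, session) + tolerance)
    return flags

# Example: a patient starting at a baseline score of 20 who stops
# improving after session 2 triggers alerts from session 3 onward.
print(on_track(20, [18, 17, 17, 17.5]))  # → [True, True, False, False]
```

In a real service, such a flag would not dictate action; as described above, it would be presented to the clinician as feedback to be weighed alongside clinical judgment.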
Figure 1
Over the past decade, mental health care has undergone a technological revolution, accelerating the digitization of psychotherapy (Domhardt et al., 2021), a trend further intensified by the COVID-19 pandemic (Appleton et al., 2021). Two complementary technological innovations have facilitated the implementation of PMH into clinical practice (Figure 2). First, Personal Digital Devices such as smartphones, wearables, and other sophisticated mobile technologies have significantly streamlined Experience Sampling Methods (ESM) and Ecological Momentary Assessment (EMA), enabling real-time, within-subject ambulatory assessments that capture intensive longitudinal data (Aan het Rot et al., 2012). These devices also support Digital Phenotyping (Montag et al., 2020), which uses sensor and usage data to infer contextual and behavioral patterns that can be leveraged to predict mental health status. Second, the vast databases generated through these assessment technologies can be analyzed using advanced AI and ML algorithms (Rutledge et al., 2019). Several studies have highlighted the potential advantages of applying AI and ML in mental health (Roca, 2025), for example, predicting the risk of mental health conditions, case formulation, treatment selection or reducing administrative and documentation burden.
Figure 2
PMH implementation challenges
Given the growing body of scientific literature demonstrating the positive impact of PMH technologies on treatment outcomes and adherence (Nye et al., 2023), both the American Psychological Association and the Roadmap for Mental Health Research in Europe have advocated for the integration of PMH technologies into routine clinical practice. However, implementation in clinical practice has been limited, resulting in a gap between scientific advancements and mental healthcare delivered in real-world settings. Implementation studies highlight numerous barriers that must be addressed for PMH to move from proof-of-concept models to standard practice. For instance, estimates suggest that only 14% of clinicians used progress measures (Jensen-Doss et al., 2018), and fewer than 20% employed PMH methods (Lewis et al., 2019). Moreover, a recent systematic review revealed that fewer than 1% of individualized prediction models in mental health research have been evaluated for potential implementation in real-world clinical settings (Salazar de Pablo et al., 2021).
Adopting PMH is not simply a matter of “plugging in” an algorithm: it entails cultivating new competencies (e.g., data literacy, critical appraisal), engaging with ethical and legal questions (e.g., consent, secondary use of data), and participating in organizational decisions about how these PMH systems are implemented and evaluated. This perspective is articulated in the Implementing Precision Methods (IPM; Deisenhofer et al., 2024) framework, which classifies implementation challenges into four main domains: (1) Attitudinal domain: effective implementation of PMH technologies requires addressing the needs and concerns of both clinicians and clients. Clear clinical value (e.g., fewer dropouts, better outcomes), loss of autonomy, damage to the therapeutic relationship, measurement burden for patients, integration into existing workflows, and increased workload are some of the critical determinants of PMH adoption; (2) Technological domain: software and hardware challenges, such as ensuring secure and efficient technological infrastructure, user-friendly interfaces, and interoperability with electronic health records (EHR), may pose barriers to the implementation of PMH technologies; (3) Statistical domain: when implementing PMH technologies, clinicians must have confidence in key features of the algorithms they rely on. Overfitting, poor generalizability in new populations, algorithmic bias (e.g., systematically under- or over-estimating risk for specific sociodemographic groups), long-term accuracy, and the tension between transparency and “black-box” models are issues that can undermine both accuracy and ethical acceptability; and (4) Contextual domain: certain challenges may arise in specific settings where service policies, organizational culture, and available resources are not aligned to support the adoption of PMH technologies. Professional training, organizational culture, regulatory frameworks, and reimbursement models shape whether PMH tools are used as intended or sidelined.
There is a rapid development of PMH technologies to help clinicians choose and adapt interventions for individual patients. Clinical decision support systems (CDSS) embedded into EHRs or dedicated platforms are among the most widespread ways of translating PMH into real-world settings (Lutz et al., 2022b). CDSS translate the core ingredients of PMH into therapists’ daily decision-making, presenting predictions and recommended actions in a user-friendly way (visual dashboards, alerts, language models, etc.). Rather than replacing clinicians, CDSS present predictions and recommendations that therapists can adopt or override, functioning as a form of technology-augmented case formulation. Clinical case studies illustrate how CDSS can enhance therapist awareness of subtle negative trends and support decisions such as adding specific modules, revisiting the therapeutic alliance, or considering referral to higher levels of care (Lutz et al., 2024). Internationally, there are some examples such as the Trier Treatment Navigator (TTN; Lutz et al., 2019), a CDSS developed in a university-based training clinic that integrates routine outcome monitoring with risk algorithms to provide case-level feedback, “off-track” alerts, and data-informed suggestions for case formulation and treatment adaptation. In Spain, platforms such as Psypilot are beginning to translate these ideas into scalable, practice-oriented software solutions for mental health professionals. Unlike other CDSS, Psypilot leverages AI to enhance implementation by offering a virtual assistant that interprets PMH recommendations through large language models, acting as a clinical copilot for the professional.
Discussion
Within this landscape, the NOVA (Navigating Outcomes via Analytics) project was born with the commitment to investigate how to implement PMH in the messy reality of everyday psychological practice. The project focuses on three recurrent challenges in clinical practice (suboptimal outcomes, high dropout, and limited use of data in decision-making) and tackles them through an integrated package of attitudinal research, clinical trials, algorithm development and policy translation. The NOVA project was explicitly designed around the two PMH pillars, combining measurement-based care with data-informed decision support, while using the IPM framework to anticipate and address implementation barriers. In doing so, the NOVA project aims to offer a concrete model for how PMH can move from theory and isolated pilots to sustainable use in everyday psychological practice.
NOVA is not limited to adopting new measures, algorithms or digital tools, but involves embedding a whole PMH pipeline, from systematic data collection to predictive modeling and decision support, within routine services. Therefore, each work package is designed to address one or more IPM domains: (1) attitudinal studies map psychologists’ attitudes, perceived benefits and concerns, informing how PMH technologies should be designed, communicated and translated; (2) clinical trials of a decision support system examine not only patient outcomes but also usability, workflow fit and impact on therapeutic processes; (3) ML work focuses on robust, interpretable models that can be updated and audited, rather than “black-box” prototypes; and (4) policy and dissemination activities explicitly connect project findings to national strategies in mental and digital health.
NOVA is a multi-phase program whose sequential phases move from stakeholder acceptability and implementation determinants, to pragmatic evaluation of a CDSS, to the development and pilot validation of prediction algorithms, and finally to training and policy translation (see Table 1). The project is conducted by a multi-stakeholder consortium combining clinical science, data science, implementation support, and real-world service partners. The project leadership and clinical research expertise are based in the Precision Mind Lab at Universidad Villanueva (Madrid, Spain), and the AI/ML and data mining expertise is provided by the Technical School of Computer and Telecommunication Engineering at the University of Granada. Routine-care implementation is enabled through a network of service partners that include hospital and outpatient clinic settings and broad professional networks. The CDSS platform used in the pragmatic trial is provided by the technological innovation partner Medea Mind, which supports adaptation, integration, and technical assistance for real-world deployment. International advisors contribute complementary expertise in PMH, digital health implementation, and outcomes research (e.g., University College London, Mayo Clinic, Complutense University).
Table 1
| Phase | Aim | Design | Participants | Outcomes | Deliverables |
|---|---|---|---|---|---|
| 1 | Identify determinants of acceptability and intention to use PMH technologies among psychologists | Online survey with model-based analyses (e.g., SEM) | Training and practicing psychologists | Acceptability, intention to use, perceived barriers/facilitators; predictors of adoption | Publication on an empirical, theory-guided PMH adoption model |
| 2 | Evaluate feasibility, acceptability, and impact of CDSS-supported care in routine psychotherapy | Pragmatic, multisite, 2-arm RCT (CDSS vs. TAU) | Adult outpatients and licensed psychologists in routine-care clinics | Clinical outcomes, functioning, dropout status; satisfaction, alliance; usability/acceptability; engagement/adherence | Publication on the feasibility and outcomes of the pragmatic RCT |
| 3 | Develop and validate ML-based patient–therapist matching to improve fit and outcomes | Two-step approach: (1) model training using Phase 2 data; (2) pilot/feasibility RCT (Match vs. CAU) | Routine-care clinic sample for model training; new pilot RCT sample for validation | Dropout/retention, satisfaction, alliance, clinical change, model performance and explainability | Publication on model development and validation |
| 4 | Translate findings into implementable training, guidance, and policy recommendations | Development of training materials and implementation guidance | Clinicians, patients, policymakers, professional bodies | Stakeholder feedback on usability/feasibility of guidance | Publication on the training effects and materials and policy briefs/executive summaries |
Overview of NOVA phases, designs, settings, and deliverables.
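The patient–therapist matching logic targeted in Phase 3 can be sketched in miniature: given model-predicted outcomes for each patient–therapist pairing, assign patients to the therapists with the best predicted fit. The prediction values, the greedy assignment rule, and the capacity handling in this Python sketch are hypothetical simplifications; a deployed system would rely on validated ML predictions and real service constraints.

```python
# Hypothetical sketch of ML-based patient–therapist matching (cf. Phase 3):
# assign each patient to an available therapist with the best predicted
# outcome. The prediction matrix is invented purely for illustration.

def greedy_match(predicted: dict[tuple[str, str], float],
                 capacity: dict[str, int]) -> dict[str, str]:
    """predicted maps (patient, therapist) -> predicted improvement
    (higher = better); capacity maps therapist -> remaining caseload slots."""
    remaining = dict(capacity)
    assignment: dict[str, str] = {}
    # Consider the strongest predicted pairings first.
    for (patient, therapist), _ in sorted(predicted.items(),
                                          key=lambda kv: -kv[1]):
        if patient not in assignment and remaining.get(therapist, 0) > 0:
            assignment[patient] = therapist
            remaining[therapist] -= 1
    return assignment

predicted = {
    ("p1", "tA"): 0.9, ("p1", "tB"): 0.4,
    ("p2", "tA"): 0.8, ("p2", "tB"): 0.7,
}
print(greedy_match(predicted, {"tA": 1, "tB": 1}))  # → {'p1': 'tA', 'p2': 'tB'}
```

Note that both patients are predicted to do best with therapist tA, but capacity limits force a trade-off; in practice, such assignments would be recommendations subject to clinician and patient preference, not automatic allocations.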
NOVA is expected to deliver a set of tangible outputs and infrastructures that extend beyond the project’s timeframe: (1) a standardized measurement and data infrastructure for PMH in participating services, including validated assessment batteries, digital workflows and interoperable data pipelines; (2) curated datasets suitable for ongoing research on prognostic modeling, treatment personalization and therapist effects, contributing to international efforts to refine PMH methods; (3) working prototypes of CDSS and algorithms, co-designed with clinicians and patients, and ready for further scaling or integration into existing platforms; (4) a trained community of professionals with competencies in PMH, an essential ingredient for the emergence of a “precision psychologist” profile in Spain; and (5) policy recommendations about facilitators and barriers to implementing PMH, mapped onto the IPM framework and directly connected to ongoing national strategies in mental health.
A critical implementation implication concerns workforce development, and NOVA explicitly treats this as a core target rather than a downstream dissemination activity. In many real-world settings, clinicians have limited formal training in predictive modeling, algorithm evaluation, and uncertainty interpretation, which can foster both under-trust (dismissal of potentially useful feedback) and over-trust (automation bias). Responsible PMH implementation therefore requires explicit competency-building (measurement literacy, basic data interpretation, understanding model limits and uncertainty, and ethical/legal literacy around consent and secondary data use), as well as organizational supports (workflow integration, local champions, consultation access, and supervision norms for discussing when and why to follow or override recommendations). There are emerging examples of integrating routine outcome monitoring and decision support into training and supervision contexts (e.g., university-based training clinics using CDSS such as the Trier Treatment Navigator). Building on these precedents, NOVA’s dissemination and training deliverables are intended to support not only service-level implementation but also integration into supervision structures and training curricula (e.g., guidance materials, workshops/webinars, and practical recommendations for interpreting and discussing PMH feedback in supervision).
Statements
Data availability statement
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.
Author contributions
PR: Project administration, Funding acquisition, Validation, Resources, Supervision, Writing – review & editing, Methodology, Writing – original draft, Investigation, Visualization, Conceptualization. SN: Project administration, Conceptualization, Writing – review & editing, Supervision, Investigation, Resources. ER-R: Investigation, Supervision, Conceptualization, Writing – review & editing, Project administration. CM-A: Writing – review & editing, Project administration, Investigation, Conceptualization, Supervision. FM-Z: Project administration, Supervision, Conceptualization, Writing – review & editing, Investigation. RZ: Writing – review & editing, Supervision, Investigation, Conceptualization, Resources, Project administration. EG: Supervision, Conceptualization, Investigation, Visualization, Project administration, Writing – review & editing, Resources. MS-P: Investigation, Conceptualization, Supervision, Visualization, Writing – review & editing, Project administration. ML-G: Supervision, Methodology, Conceptualization, Investigation, Writing – review & editing. DG: Writing – review & editing, Methodology, Supervision, Conceptualization, Investigation. AE: Methodology, Supervision, Conceptualization, Investigation, Writing – review & editing. RS: Methodology, Supervision, Investigation, Writing – review & editing, Conceptualization. LM: Methodology, Supervision, Investigation, Conceptualization, Writing – review & editing. MC: Investigation, Supervision, Writing – review & editing, Conceptualization, Methodology. CV: Methodology, Investigation, Supervision, Conceptualization, Writing – review & editing. SR-M: Supervision, Project administration, Writing – original draft, Resources, Conceptualization, Visualization, Writing – review & editing, Funding acquisition, Validation, Methodology, Investigation.
Funding
The author(s) declared that financial support was received for this work and/or its publication. This research was supported by the Grant PID2024-156740OA-I00 funded by MICIU/AEI/10.13039/501100011033 and by ERDF, EU.
Acknowledgments
The authors gratefully acknowledge the support of Universidad Villanueva. We also thank all researchers who were involved in any capacity in the NOVA project for their contributions and collaboration.
Conflict of interest
PR, RZ, EG, MS-P, and AE collaborate with a company that develops Precision Mental Health solutions for psychological practice, although the authors affirm that the work was conducted with full scientific independence.
The remaining author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that Generative AI was used in the creation of this manuscript. We acknowledge the use of generative AI tools to assist with writing and editing the manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
1. Aan het Rot, M., Hogenelst, K., and Schoevers, R. A. (2012). Mood disorders in everyday life: a systematic review of experience sampling and ecological momentary assessment studies. Clin. Psychol. Rev. 32, 510–523. doi: 10.1016/j.cpr.2012.05.007
2. American Psychological Association (2019). Clinical practice guideline for the treatment of depression across three age cohorts. Available online at: https://www.apa.org/depression-guideline [Accessed February 1, 2026].
3. Appleton, R., Williams, J., Vera San Juan, N., Needle, J. J., Schlief, M., Jordan, H., et al. (2021). Implementation, adoption, and perceptions of telemental health during the COVID-19 pandemic: systematic review. J. Med. Internet Res. 23:e31746. doi: 10.2196/31746
4. Barber, J., and Resnick, S. G. (2023). Collect, share, act: a transtheoretical clinical model for doing measurement-based care in mental health treatment. Psychol. Serv. 20, 150–157. doi: 10.1037/ser0000629
5. Bickman, L., Lyon, A. R., and Wolpert, M. (2016). Achieving precision mental health through effective assessment, monitoring, and feedback processes. Adm. Policy Ment. Health Ment. Health Serv. Res. 43, 271–276. doi: 10.1007/s10488-016-0718-5
6. Collins, F. S., and Varmus, H. (2015). A new initiative on precision medicine. N. Engl. J. Med. 372, 793–795. doi: 10.1056/NEJMp1500523
7. Cooper, A. A., and Conklin, L. R. (2015). Dropout from individual psychotherapy for major depression: a meta-analysis of randomized clinical trials. Clin. Psychol. Rev. 40, 57–65. doi: 10.1016/j.cpr.2015.05.001
8. Deisenhofer, A.-K., Barkham, M., Beierl, E. T., and Schwartz, B. (2024). Implementing precision methods in personalizing psychological therapies: barriers and possible ways forward. Behav. Res. Ther. 172:104443. doi: 10.1016/j.brat.2023.104443
9. Delgadillo, J., and Lutz, W. (2020). A development pathway towards precision mental health care. JAMA Psychiatry 77, 889–890. doi: 10.1001/jamapsychiatry.2020.1048
10. Delgadillo, J., and Lutz, W. (2026). “Precision mental health care for depression,” in APA handbook of depression, Vol. 2: Minoritized populations, lifespan development, assessment, and treatment. eds. T. Olino and J. Pettit (American Psychological Association).
11. DeRubeis, R. J., Cohen, Z. D., Forand, N. R., Fournier, J. C., Gelfand, L. A., and Lorenzo-Luaces, L. (2014). The personalized advantage index: translating research on prediction into individualized treatment recommendations. A demonstration. PLoS One 9:e83875. doi: 10.1371/journal.pone.0083875
12. Domhardt, M., Cuijpers, P., Ebert, D. D., and Baumeister, H. (2021). More light? Opportunities and pitfalls in digitalized psychotherapy process research. Front. Psychol. 12:544129. doi: 10.3389/fpsyg.2021.544129
13. Fernandez, D., Vigo, D., Sampson, N. A., Hwang, I., Aguilar-Gaxiola, S., Al-Hamzawi, A. O., et al. (2021). Patterns of care and dropout rates from outpatient mental healthcare in low-, middle- and high-income countries from the World Health Organization's World Mental Health Survey Initiative. Psychol. Med. 51, 2104–2116. doi: 10.1017/S0033291720000884
14. Goldberg, S. B., Rousmaniere, T., Miller, S. D., Whipple, J., Nielsen, S. L., Hoyt, W. T., et al. (2016). Do psychotherapists improve with time and experience? A longitudinal analysis of outcomes in a clinical setting. J. Couns. Psychol. 63, 1–11. doi: 10.1037/cou0000131
15. Grove, W. M., Zald, D. H., Lebow, B. S., Snitz, B. E., and Nelson, C. (2000). Clinical versus mechanical prediction: a meta-analysis. Psychol. Assess. 12, 19–30.
16. Herschell, A. D., Kolko, D. J., Baumann, B. L., and Davis, A. C. (2010). The role of therapist training in the implementation of psychosocial treatments: a review and critique with recommendations. Clin. Psychol. Rev. 30, 448–466. doi: 10.1016/j.cpr.2010.02.005
17. Jensen-Doss, A., Haimes, E. M. B., Smith, A. M., Lyon, A. R., Lewis, C. C., Stanick, C. F., et al. (2018). Monitoring treatment progress and providing feedback is viewed favorably but rarely used in practice. Adm. Policy Ment. Health Ment. Health Serv. Res. 45, 48–61. doi: 10.1007/s10488-016-0763-0
18. León-García, M., Humphries, B., Roca, P., Gravholt, D., Eckman, M. H., Bates, S. M., et al. (2023). Assessment of a venous thromboembolism prophylaxis shared decision-making intervention (DASH-TOP) using the decisional conflict scale: a mixed-method study. BMC Med. Inform. Decis. Mak. 23:250. doi: 10.1186/s12911-023-02349-3
19. Lewis, C. C., Boyd, M., Puspitasari, A., Navarro, E., Howard, J., Kassab, H., et al. (2019). Implementing measurement-based care in behavioral health. JAMA Psychiatry 76:324. doi: 10.1001/jamapsychiatry.2018.3329
20. Lutz, W., Rubel, J., Deisenhofer, A., and Moggia, D. (2022a). Continuous outcome measurement in modern data-informed psychotherapies. World Psychiatry 21, 215–216. doi: 10.1002/wps.20988
21. Lutz, W., Rubel, J. A., Schwartz, B., Schilling, V., and Deisenhofer, A.-K. (2019). Towards integrating personalized feedback research into clinical practice: development of the Trier Treatment Navigator (TTN). Behav. Res. Ther. 120:103438. doi: 10.1016/j.brat.2019.103438
22. Lutz, W., Schaffrath, J., Eberhardt, S. T., Hehlmann, M. I., Schwartz, B., Deisenhofer, A.-K., et al. (2024). Precision mental health and data-informed decision support in psychological therapy: an example. Adm. Policy Ment. Health Ment. Health Serv. Res. 51, 674–685. doi: 10.1007/s10488-023-01330-6
23. Lutz, W., Schwartz, B., and Delgadillo, J. (2022b). Measurement-based and data-informed psychological therapy. Annu. Rev. Clin. Psychol. 18, 71–98. doi: 10.1146/annurev-clinpsy-071720-014821
24. Montag, C., Sindermann, C., and Baumeister, H. (2020). Digital phenotyping in psychological and medical sciences: a reflection about necessary prerequisites to reduce harm and increase benefits. Curr. Opin. Psychol. 36, 19–24. doi: 10.1016/j.copsyc.2020.03.013
25. National Institute for Health and Care Excellence (2022). Depression in adults: treatment and management (NG222). Available online at: https://www.nice.org.uk/guidance/ng222 [Accessed February 1, 2026].
26
NyeA.DelgadilloJ.BarkhamM. (2023). Efficacy of personalized psychological interventions: a systematic review and meta-analysis. J. Consult. Clin. Psychol.91, 389–397. doi: 10.1037/ccp0000820,
27
OrmelJ.HollonS. D.KesslerR. C.CuijpersP.MonroeS. M. (2022). More treatment but no less depression: the treatment-prevalence paradox. Clin. Psychol. Rev.91:102111. doi: 10.1016/j.cpr.2021.102111,
28
ØstergårdO. K.GrønnebækL.NilssonK. K. (2024). Do therapists know when their clients deteriorate? An investigation of therapists’ ability to estimate and predict client change during and after psychotherapy. Clin. Psychol. Psychother.31:e70015. doi: 10.1002/cpp.70015,
29
PurgatoM.SinghR.AcarturkC.CuijpersP. (2021). Moving beyond a ‘one-size-fits-all’ rationale in global mental health: prospects of a precision psychology paradigm. Epidemiol. Psychiatr. Sci.30:e63. doi: 10.1017/S2045796021000500,
30
RocaP. (2025). ¿Puede una mente artificial sanar una mente natural? Aplicaciones de la inteligencia artificial en psicología. In OrtizJ. M.BenguríaJ. (Coords.), Un nuevo conocimiento transversal: la inteligencia artificial aplicada (pp. 145–165). Madrid: Tirant lo Blanch.
31
RutledgeR. B.ChekroudA. M.HuysQ. J. (2019). Machine learning and big data in psychiatry: toward clinical applications. Curr. Opin. Neurobiol.55, 152–159. doi: 10.1016/j.conb.2019.02.006,
32
Salazar de PabloG.StuderusE.Vaquerizo-SerranoJ.IrvingJ.CatalanA.OliverD.et al. (2021). Implementing precision psychiatry: a systematic review of individualized prediction models for clinical practice. Schizophr. Bull.47, 284–297. doi: 10.1093/schbul/sbaa120,
33
SchwartzB.CohenZ. D.RubelJ. A.ZimmermannD.WittmannW. W.LutzW. (2021). Personalized treatment selection in routine care: integrating machine learning and statistical algorithms to recommend cognitive behavioral or psychodynamic therapy. Psychother. Res.31, 33–51. doi: 10.1080/10503307.2020.1769219,
34
Tanguay-SelaM.BenrimohD.PopescuC.PerezT.RollinsC.SnookE.et al. (2022). Evaluating the perceived utility of an artificial intelligence-powered clinical decision support system for depression treatment using a simulation center. Psychiatry Res.308:114336. doi: 10.1016/j.psychres.2021.114336,
35
WakefieldS.KellettS.Simmonds-BuckleyM.StocktonD.BradburyA.DelgadilloJ. (2021). Improving access to psychological therapies (IAPT) in the United Kingdom: a systematic review and meta-analysis of 10-years of practice-based evidence. Br. J. Clin. Psychol.60, 1–37. doi: 10.1111/bjc.12259,
36
World Health Organization. (2019). mhGAP intervention guide - version 2.0. Available online at: https://www.who.int/publications/i/item/9789241549790 [Accessed February 1, 2026].
Keywords
artificial intelligence, clinical decision support systems, data-informed decision making, implementation science, measurement-based care, precision mental health
Citation
Roca P, Noheda S, Ramírez-Riveros E, Martín-Azañedo C, Martínez-Zaragoza F, Zangri RM, García del Valle EP, Sanchez-Pedreño M, León-Garcia M, Gravholt DL, Enrique A, Saunders R, Herrera LJ, Pegalajar MC, Vázquez C and Rodriguez-Moreno S (2026) Transforming psychological practice with precision mental health: introduction to the NOVA project. Front. Psychol. 17:1775489. doi: 10.3389/fpsyg.2026.1775489
Received
03 January 2026
Revised
06 February 2026
Accepted
09 February 2026
Published
27 February 2026
Volume
17 - 2026
Edited by
Jesus de la Fuente, University of Navarra, Spain
Reviewed by
Mira Deanne Hoffman Snider, West Virginia University, United States
Copyright
© 2026 Roca, Noheda, Ramírez-Riveros, Martín-Azañedo, Martínez-Zaragoza, Zangri, García del Valle, Sanchez-Pedreño, León-Garcia, Gravholt, Enrique, Saunders, Herrera, Pegalajar, Vázquez and Rodriguez-Moreno.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Pablo Roca, pablo.roca@villanueva.edu
Disclaimer
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.