Effect of Engagement With Digital Interventions on Mental Health Outcomes: A Systematic Review and Meta-Analysis

Digital mental health interventions (DMHIs) present a promising way to address gaps in mental health service provision. However, the relationship between user engagement and outcomes in the context of these interventions has not been established. This study addressed the current state of evidence on the relationship between engagement with DMHIs and mental health outcomes. MEDLINE, PsycINFO, and Embase databases were searched from inception to August 1, 2021. Original or secondary analyses of randomized controlled trials (RCTs) were included if they examined the relationship between DMHI engagement and post-intervention outcome(s). Thirty-five studies were eligible for inclusion in the narrative review and 25 studies had sufficient data for meta-analysis. Random-effects meta-analyses indicated that greater engagement was significantly associated with post-intervention mental health improvements, regardless of whether this relationship was explored using correlational [r = 0.24, 95% CI (0.17, 0.32), Z = 6.29, p < 0.001] or between-groups designs [Hedges' g = 0.40, 95% CI (0.097, 0.705), p = 0.010]. This association was also consistent regardless of intervention type (unguided/guided), diagnostic status, or mental health condition targeted. This is the first review providing empirical evidence that engagement with DMHIs is associated with therapeutic gains. Implications and future directions are discussed. Systematic Review Registration: PROSPERO, identifier: CRD42020184706.


INTRODUCTION
Mental illness is a worldwide public health concern, with an overall lifetime prevalence rate of ∼14% and accounting for 7% of the overall global burden of disease (1). The estimated impact of mental illness on quality of life has progressively worsened since the 1990s, with the number of disability-adjusted life years attributed to mental illness estimated to have risen by 37% between 1990 and 2013 (2). This rising mental health burden has prompted calls for the development of alternative models of care to meet increasing treatment needs, which are unlikely to be adequately serviced through face-to-face services into the future (3).
Digital mental health interventions (DMHIs) are one alternative model of care with enormous potential as a scalable solution for narrowing this service provision gap. By leveraging technology platforms (e.g., computers and smartphones), well-established psychological treatments can be delivered directly into people's hands with high fidelity and no-to-low human resources, empowering individuals to self-manage mental health issues (4). The anonymity, timeliness of access, and flexibility afforded by DMHIs also circumvent many of the commonly identified structural and attitudinal barriers to accessing care, such as cost, time, or stigma (5). While there is a growing body of evidence that DMHIs can have significant small-to-moderate effects on alleviating or preventing symptoms of mental health disorders (6)(7)(8)(9)(10), low levels of user engagement have been reported as barriers to both optimal efficacy and adoption into health settings and other translational pathways. Health professionals will require further convincing that digital interventions are a viable adjunct or alternative to traditional therapies before they are willing to advocate for their use with patients (11). Low engagement, defined as suboptimal levels of user access and/or adherence to an intervention (12), is touted as one of the main reasons why the potential benefits of these interventions remain unrealized in the real world (13,14).
Recognizing the need to promote better engagement with digital interventions, several review studies have sought to establish both the nature of engagement with DMHIs and the effectiveness of various engagement strategies to improve uptake of, and adherence to, DMHIs. A review of empirical studies found collectively low rates of DMHI completion, with over 70% of users failing to complete all treatment modules and more than 50% disengaging before completing half of them (15). Reviews of the effectiveness of various engagement strategies have shown that such strategies can have a positive impact on engagement with, and efficacy of, digital interventions. Reminders, coaching, and tailored feedback delivered via telephone or email have been found to have modest to moderate effects on increasing engagement with digital interventions targeting physical and mental health outcomes, compared with no strategy (16). In addition, a review of efficacy studies of smartphone apps for depression and anxiety found that apps incorporating more elements aimed at promoting user engagement had larger effect sizes (17).
Though these prior reviews are undoubtedly important to accelerating our understanding of how the benefits of DMHIs might be realized in real-world settings, it may be premature to invest substantial effort in engagement strategies before the engagement-outcome relationship itself has been established. To date, only one prior systematic review (18) has explored the effect of engagement on the effectiveness of digital interventions. That review was limited to a narrative synthesis, citing heterogeneity in how engagement is operationalized (e.g., number of logins, module completion, frequency of use, and time spent in the intervention) as a barrier to meta-analysis. Accordingly, no meta-analysis has yet empirically examined the pooled strength of the association between DMHI engagement and mental health outcomes. Despite the lack of meta-analytic evidence, it has been widely assumed that the association between face-to-face treatment engagement and treatment success extends to these tools, by virtue of the extensive replication of this relationship in the general intervention literature (19). However, extrapolating research findings from face-to-face therapies to their digital equivalents may not be appropriate, because differences in how content is delivered (e.g., without a developing human relationship) are likely to affect both engagement and associated treatment responses (20).
The increasing number of individual studies examining the engagement-outcome association published in recent years now enables a review and quantification of the literature. To address this gap in knowledge, and to justify the need for the development of engagement strategies, the primary purpose of this meta-analysis is to examine the relationship between level of engagement and change in mental health outcomes in the context of digital mental health interventions. Our primary hypothesis is that greater engagement is associated with greater improvements in mental health outcomes, given that individual studies suggest that users who engage poorly with DMHIs derive limited treatment benefit (21,22). This is the first meta-analysis to quantify the association between engagement and primary mental health outcome measures with respect to digital interventions and, as such, extends previous work that has been constrained by a small pool of studies characterized by substantial heterogeneity in how engagement with DMHIs is measured.

METHODS

Search Strategy
This study is registered with PROSPERO (CRD42020184706) and adheres to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) reporting guidelines (23).
A systematic search was undertaken in three online academic databases, from inception to articles indexed as of August 1, 2021: MEDLINE (from 1946), PsycINFO (from 1806), and Embase (from 1947). A test set of five papers meeting the inclusion criteria (described later) was obtained via manual search. From these papers, a set of primary search terms was developed using Medical Subject Heading (MeSH) terms, centered around four search blocks: (i) engagement/adherence, (ii) digital interventions, (iii) mental health, and (iv) study design. This search strategy was tested in MEDLINE, where it achieved 100% sensitivity against the initial test set, and was subsequently adapted for use in the other databases. A manual ancestry search of the reference lists of eligible studies identified from the database search was also conducted to identify other relevant studies that could have been missed. The search terms can be found in the Supplementary Material (p. 1).
Following removal of duplicate records, titles and abstracts were independently coded for relevance by two authors (DZQG and MT). Screening of full-text records was performed in a similar way (DZQG and LM). Disagreements were resolved via mutual discussion. Inter-rater agreement was acceptable for both title/abstract (κ = 0.86, p < 0.001) and full-text screening (κ = 0.80, p < 0.001).
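Agreement statistics of this kind can be reproduced from paired screening decisions with Cohen's kappa, which corrects raw percent agreement for chance. A minimal sketch; the decision vectors below are hypothetical, not the study's screening data:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' include/exclude decisions."""
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    labels = set(rater_a) | set(rater_b)
    # Chance agreement: product of each rater's marginal label rates
    p_chance = sum(
        (rater_a.count(lab) / n) * (rater_b.count(lab) / n) for lab in labels
    )
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical screening decisions (1 = include, 0 = exclude)
a = [1, 1, 0, 0, 1, 0, 0, 0]
b = [1, 1, 0, 0, 0, 0, 0, 1]
print(round(cohens_kappa(a, b), 2))  # 0.47
```

Kappa discounts the agreement expected by chance alone, which is why it runs lower than the raw agreement rate (6/8 = 0.75 in this example).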

Selection Criteria
Studies were eligible for inclusion if they were original or secondary analyses of randomized controlled trial (RCT) evaluations of digital interventions that were specifically designed for a mental health issue and that quantitatively examined the relationship between engagement and mental health. Only digital interventions that delivered manualised therapeutic content to users via a digital platform (e.g., smartphone, tablet, or computer) were included. The interventions could be unguided (self-directed, used independently without support or guidance from a trained health professional) or guided (health professional-assisted); differences between these approaches were accounted for by analyzing them separately in the subgroup analyses. The rationale for including only RCT studies was to establish whether the intervention being tested had a main effect on efficacy, so that the impact of engagement could be contextualized. Non-experimental studies cannot causally establish efficacy, making it moot whether user engagement improves outcomes for an intervention that cannot be shown to work. Engagement was defined as any objective indicator used to quantify the extent of intervention use. Continuous measures of engagement generally focused on the extent of content accessed (e.g., number of modules, sessions, or lessons completed) or the extent of intervention-related activity (e.g., number of logins or visits, time spent, specific activities or exercises completed, or number of online interactions with therapists or peers). Categorical measures of engagement focused on the percentage of users who downloaded, logged in to, and/or completed the intervention.
Studies were excluded if they were not peer-reviewed journal articles (e.g., dissertations, conference presentations, etc.), in a language other than English, or if they did not investigate the relationship between engagement and mental health outcomes. No restrictions were placed on the target population, setting, type of mental health condition targeted, intervention type, or language that the interventions were delivered in.
To improve comparability and reduce heterogeneity, studies were excluded if the digital intervention (i) was targeted at caregivers or health professionals (i.e., gatekeepers) rather than individuals with the mental health condition of interest, (ii) comprised solely activities (e.g., journaling and mood tracking) or psychoeducational material without any therapeutic content, or (iii) was delivered by other digital means (e.g., fully text-based interventions, pre-recorded videos, or DVDs).

Data Extraction and Analysis
Key study information was extracted and recorded in a custom spreadsheet by three authors (DZQG, MT, and LM). One author (DZQG) extracted data for all the included papers. Accuracy of the extraction was checked by another author (LM). Any differences were resolved through discussion, and in cases where no consensus was reached, a third author (MT) was consulted. The extracted information included the (i) study design, (ii) intervention characteristics, (iii) target population and recruitment, (iv) how engagement was measured, (v) primary mental health condition targeted, and (vi) findings pertaining to the relationship between DMHI engagement and the primary mental health outcome. Corresponding authors of studies were contacted by email if more information was needed to determine eligibility. Based on previous research (18), studies were expected to vary in the engagement measures and mental health outcomes reported. A narrative review was therefore undertaken to ensure a systematic discussion of the findings from all studies. Studies that utilized similar measures of engagement and that employed similar analytical techniques to test the association between engagement and outcome(s) were pooled together for meta-analysis.
Random-effects meta-analyses with the Pearson correlation coefficient (r) were used to examine the relationship between engagement and mental health outcomes. This differed from the effect size measure (Cohen's d) stated in the protocol; however, r was determined to be more appropriate as it was commonly reported by the included studies. Corresponding authors for 33 of the 35 studies were contacted for statistical or other study data. If correlation coefficients could not be obtained, estimates of r were derived from the data available. If nonparametric correlations were reported, estimates were computed using formulae provided by Gilpin (24); if beta regression coefficients were reported, estimates were computed using formulae proposed by Peterson and Brown (25). All correlations were standardized such that a positive coefficient indicated that greater DMHI engagement was associated with improvements in mental health (i.e., reduced symptom severity) at post-intervention.
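Such conversions can be sketched as follows. The Spearman conversion shown is the standard bivariate-normal relation underlying tabulated conversions such as Gilpin's (an assumption here, not necessarily the exact procedure the authors applied), and the Peterson and Brown approximation is intended for standardized betas roughly between −0.5 and 0.5:

```python
import math

def spearman_to_pearson(rho):
    # Bivariate-normal approximation relating Spearman's rho to Pearson's r
    return 2 * math.sin(math.pi * rho / 6)

def beta_to_r(beta):
    # Peterson & Brown (2005): r ~= beta + 0.05 * lambda,
    # where lambda = 1 if beta is non-negative and 0 otherwise
    return beta + (0.05 if beta >= 0 else 0.0)

print(round(spearman_to_pearson(0.5), 3))  # 2 * sin(pi/12) ~= 0.518
print(round(beta_to_r(0.20), 2))           # 0.25
```

Note that negative betas are passed through unadjusted under the Peterson and Brown rule, so the standardization of coefficient signs described above must be applied before conversion.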
Based on the available data, two separate analyses were conducted to address our a priori hypothesis that better engagement with digital interventions is associated with greater improvement in mental health outcomes at post-intervention. Two analyses were needed because effect sizes from studies using between-groups vs. correlational designs cannot be combined (26). One meta-analysis was performed for studies that examined this association using a correlational design, while the other was conducted for studies that investigated this relationship using a between-groups design. Quantitative data for each meta-analysis were summarized using the Pearson correlation coefficient r or the Hedges' g statistic (27), respectively. For studies where data were summarized using correlation coefficients, all analyses were performed on the transformed Fisher's z-values and subsequently transformed back to r to yield an overall summary correlation (27). Subgroup analyses were planned a priori and included comparisons between (i) level of assistance with engagement (guided or unguided interventions), (ii) user mental health severity (diagnosed or non-diagnosed), and (iii) primary mental health target (depressive- or anxiety-related symptoms).
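The two summary statistics can be sketched with their standard formulae: Fisher's z stabilizes the variance of r before pooling, and Hedges' g is a standardized mean difference with a small-sample bias correction. The group summary values below are illustrative, not study data:

```python
import math

def fisher_z(r):
    # Variance-stabilizing transformation applied before pooling...
    return 0.5 * math.log((1 + r) / (1 - r))

def z_to_r(z):
    # ...and its inverse, used to report an overall summary correlation
    return math.tanh(z)

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    # Standardized mean difference with Hedges' small-sample correction
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # correction factor J
    return d * j

print(round(z_to_r(fisher_z(0.25)), 2))                # round-trips to 0.25
print(round(hedges_g(8.0, 5.0, 40, 6.0, 5.0, 40), 2))  # ~0.4
```

The correction factor J shrinks g slightly toward zero, which matters most for the small per-study samples common in this literature.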
Study heterogeneity was evaluated for each meta-analysis using the I² statistic, with thresholds of 25, 50, and 75% denoting low, moderate, and high levels of heterogeneity, respectively (28). Ninety-five percent confidence intervals (CIs) for I² values were computed using formulae proposed by Borenstein et al. (29). For meta-analyses with a sufficient number of included studies (i.e., >10 studies), publication bias was assessed using funnel plots and Egger's regression test (30). All analyses were performed using Comprehensive Meta-Analysis version 3.0 (31).
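The I² statistic derives from Cochran's Q under inverse-variance weighting; a minimal sketch (the effect sizes and variances shown are illustrative):

```python
def i_squared(effects, variances):
    """Higgins' I^2: percentage of total variability in effect sizes
    attributable to between-study heterogeneity rather than chance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    # Cochran's Q: weighted squared deviations from the pooled estimate
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    return 0.0 if q <= df else 100.0 * (q - df) / q

# Identical effects imply no heterogeneity beyond sampling error
print(i_squared([0.3, 0.3, 0.3], [0.01, 0.02, 0.01]))  # 0.0
```

Because Q has expectation df under homogeneity, I² is truncated at zero when Q falls below its degrees of freedom.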
Risk of bias was assessed using Version 2 of the Cochrane risk-of-bias tool for randomized trials (RoB 2) (32). The RoB 2 comprises five domains and an overall risk domain, each of which was scored against a three-point rating scale corresponding to "low," "some," or "high" risk of bias. Ratings were independently conducted by two pairs of coders (40% by DZQG and LM, and 60% by DZQG and JH). Discrepancies were resolved through mutual discussion. An overall agreement rate of 93.3% was reached.

RESULTS
The search yielded a total of 10,623 articles. Following removal of duplicates and non-eligible studies, 35 unique studies were identified that met inclusion criteria (Figure 1). All studies were included in the narrative synthesis, and 25 of the 35 were included in the meta-analysis. Ten studies could not be included in the meta-analysis because study authors were either unable to provide the data required for effect size calculation (n = 7) or were uncontactable (n = 3). Table 1 summarizes key information from the 35 studies, which provided baseline data on a total of 4,484 participants who were assigned to receive the digital intervention, and 8,110 participants in total (intervention and control conditions). Consistent with the aims of the present study, from here on we report data only for participants assigned to the intervention condition. Intervention condition sample sizes ranged from 22 to 542 participants (Mdn = 81). Mean participant ages ranged from 11.0 (SD 2.57) to 58.4 (SD 9.0) years. The proportion of female participants ranged from 5.3 to 100% (Mdn = 68.8%). Most studies were conducted with adults (aged 18 years and above; 30 studies; 85.7%). Duration between baseline and post-intervention assessments for mental health outcomes ranged from 3 to 14 weeks (Mdn = 9). The studies were carried out in middle- to high-income countries across four continents: Europe (n = 21; 60%), North America (n = 9; 25.7%), Australia (n = 3; 8.6%), and Asia (n = 2; 5.7%). Most of the interventions were either fully or partially based on cognitive-behavioral therapy (27 studies, 80%). Digital interventions were designed to address a range of mental health symptoms; the most common symptoms targeted were anxiety (10 studies, 28.6%), depression (nine studies, 25.7%), and psychological distress/recovery (nine studies, 25.7%).
Thirty (85.7%) of the DMHIs were online programs accessed primarily via computer, while the remaining five were app-based interventions accessed via smartphones. The number of modules/activities/sessions completed was the most commonly reported engagement measure (33 studies, 94.3%). Other engagement metrics included number of logins (five studies, 14.3%), time spent using the intervention (five studies, 14.3%), and other actions performed in response to DMHI content, such as emails to a therapist or interactions with other users (four studies, 11.4%). Ten studies (28.6%) reported on more than one measure of engagement.

Risk of Bias
Assessment of the methodological quality of studies with the Cochrane Risk of Bias 2.0 tool found most studies to have some level of potential bias (Supplementary Material, p. 2). Selective reporting was identified as the largest source of bias, with 28 studies (80.0%) not reporting sufficient information, such as a prospectively published trial protocol, to rule out bias in this domain. Risk of outcome measurement bias was the second most common source of potential bias, with 19 studies (54.3%) reporting that outcome assessors, usually study participants themselves, were aware of the intervention received by study participants. Participants in 16 studies (45.7%) were not blinded to their assigned intervention. Most studies had sound random sequence generation and allocation concealment processes, and employed analytical techniques to minimize bias in missing post-intervention outcome data.

Main Analyses
Of the 25 studies included in the meta-analysis, 20 studies (20, 33-35, 37, 40, 41, 43, 47, 48, 51, 54-57, 59, 61, 63-65) employed correlational designs to investigate the association between engagement and mental health outcomes. The other five studies (36,38,50,52,66) used between-group mixed model comparisons to identify any differences in post-intervention outcomes between users who exhibited higher vs. lower levels of engagement. To maximize comparability across studies, the number of modules (also referred to as activities or sessions) completed was used as our primary measure of engagement, owing to its common use among the included studies.
Meta-analysis of the mean pooled correlation between number of modules completed and change in any mental health outcome (20 studies, N = 1,808 participants; Figure 2) showed a small, significant positive association [r = 0.25, 95% CI (0.17, 0.32), Z = 6.29, p < 0.001]. Leave-one-out analysis revealed that no single study rendered the random-effects model non-significant. Removal of Mira et al. (65) had the largest influence on the model, reducing the overall r from 0.25 to 0.21. There was a moderate level of heterogeneity in the distribution of individual study effect sizes (I² = 60.7%). Examination of the funnel plot (Supplementary Figure 5A, p. 6) and Egger's regression test (one-tailed p = 0.11) revealed no evidence of publication bias for this analysis.
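The leave-one-out procedure re-estimates the pooled effect with each study omitted in turn, flagging any study whose removal materially shifts the result. A simplified sketch using fixed-effect inverse-variance pooling on the Fisher's z scale (the random-effects model used here would add a between-study variance term to each weight; the study values below are illustrative, not data from the included studies):

```python
import math

def pooled_r(rs, ns):
    # Inverse-variance pooling on Fisher's z scale; var(z) ~= 1 / (n - 3)
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]
    ws = [n - 3 for n in ns]
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)  # back-transform to r

def leave_one_out(rs, ns):
    # Pooled estimate with each study omitted in turn
    return [
        pooled_r(rs[:i] + rs[i + 1:], ns[:i] + ns[i + 1:])
        for i in range(len(rs))
    ]

rs = [0.10, 0.25, 0.30, 0.45]  # hypothetical study correlations
ns = [80, 120, 60, 150]        # hypothetical sample sizes
print([round(r, 3) for r in leave_one_out(rs, ns)])
```

A study is influential when its omitted-estimate diverges most from the full-sample estimate, as reported for Mira et al. above.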
Meta-analysis of the effect of engagement on any mental health outcome among studies that used between-groups comparisons (n = 5 studies; Figure 3) showed that users with higher levels of content access had significant, moderate improvements in post-intervention mental health outcomes relative to users with lower levels of engagement [Hedges' g = 0.40, SE = 0.16, 95% CI (0.097, 0.705), Z = 2.59, p = 0.010]. Leave-one-out analysis revealed that no single study rendered the random-effects model non-significant. Omission of Oromendia et al. (66) had the largest influence on the model, reducing the Hedges' g value from 0.40 to 0.25. There was a moderate level of heterogeneity in the distribution of individual study effect sizes (I² = 65.8%).

Subgroup Analyses
Figures for all subgroup analyses can be found in the Supplementary Material (pp. 3-5).

Unguided and Guided Interventions
Data for nine of the 17 studies evaluating unguided interventions and for six of the 14 studies evaluating guided interventions were examined in two separate meta-analyses. To avoid ambiguity, studies that administered interventions in both guided and unguided formats (four studies) were excluded from these analyses. The overall mean pooled correlation for the meta-analysis of nine studies evaluating unguided interventions (N = 674; Supplementary Figure 4A) showed a small, significant positive association between engagement and post-intervention mental health outcomes [r = 0.23, 95% CI (0.16, 0.31), Z = 6.07, p < 0.001]. Leave-one-out analysis revealed that no single study rendered the random-effects model non-significant. Removal of Forand et al. (40) had the largest influence on the model, increasing the overall r from 0.23 to 0.25. There was no heterogeneity in the distribution of individual study effect sizes (I² = 0.0%).
The overall mean pooled correlation for the meta-analysis of six studies evaluating guided interventions (N = 423; Supplementary Figure 4B) showed a significant, moderate positive association between engagement and post-intervention mental health outcomes [r = 0.30, 95% CI (0.20, 0.38), Z = 6.13, p < 0.001]. Leave-one-out analysis revealed that no single study rendered the random-effects model non-significant. Removal of Spence et al. (61) had the largest influence on the model, reducing the overall r from 0.30 to 0.26. There was no heterogeneity in the distribution of individual study effect sizes (I² = 0.0%).

Mental Health Diagnostic Status
We analyzed studies that used self-report symptom screening (11 studies) separately from those that used diagnostic instruments (six studies). The overall mean pooled correlation for the meta-analysis of self-report symptom measures (N = 1,056; Supplementary Figure 4C) showed a significant positive association between engagement and post-intervention mental health outcomes [r = 0.24, 95% CI (0.17, 0.32), Z = 6.29, p < 0.001]. Leave-one-out analysis revealed that no single study rendered the random-effects model non-significant. Removal of Mira et al. (65) had the largest influence on the model, reducing the overall r from 0.27 to 0.17. There was a moderate level of heterogeneity in the distribution of individual study effect sizes (I² = 70.2%). Examination of the funnel plot (Supplementary Figure 5B, p. 6) and Egger's regression test (one-tailed p = 0.08) revealed no evidence of publication bias for this analysis.
The overall mean pooled correlation for the meta-analysis of six studies with participants who fulfilled criteria for a psychiatric diagnosis (N = 530; Supplementary Figure 4D) showed a significant positive association between engagement and post-intervention mental health outcomes [r = 0.28, 95% CI (0.19, 0.36), Z = 6.03, p < 0.001]. Leave-one-out analysis revealed that no single study rendered the random-effects model non-significant. Removal of Spence et al. (61) had the largest influence on the model, reducing the overall r from 0.28 to 0.26. Heterogeneity in the distribution of individual study effect sizes was minimal (I² = 11.6%).

Specific Mental Health Outcomes
Data for five of the 10 studies that investigated the relationship between engagement and anxiety-related symptoms, and for six of the 12 studies that investigated the relationship between engagement and depressive symptoms, were examined in two separate meta-analyses.
The overall mean pooled correlation for the meta-analysis of five studies with anxiety-related symptoms as the primary outcome (N = 411; Supplementary Figure 4E) showed a significant positive association between engagement and post-intervention mental health outcomes [r = 0.33, 95% CI (0.24, 0.41), Z = 6.76, p < 0.001]. Leave-one-out analysis revealed that no single study rendered the random-effects model non-significant. Removal of Spence et al. (61) had the largest effect on the model, reducing the overall r from 0.33 to 0.20. There was no heterogeneity in the distribution of individual study effect sizes (I² = 0.0%).
The overall mean pooled correlation for the meta-analysis of six studies with depressive symptoms as the primary outcome (N = 735; Supplementary Figure 4F) showed a significant positive association between engagement and post-intervention mental health outcomes [r = 0.33, 95% CI (0.13, 0.50), Z = 3.12, p = 0.002]. Leave-one-out analysis revealed that no single study rendered the random-effects model non-significant. Removal of Mira et al. (65) had the largest effect on the model, reducing the overall r from 0.33 to 0.17. There was a high level of heterogeneity in the distribution of individual study effect sizes (I² = 82.2%).

DISCUSSION
To our knowledge, this is the first systematic review and meta-analysis to quantitatively examine whether the level of user engagement with a digital intervention is associated with change in mental health outcomes after the intervention period. Although it is widely accepted that the extent of engagement with digital interventions will be positively associated with improvements in mental health, robust empirical evidence to support or validate this hypothesis is scant. While the narrative synthesis showed mixed support for a positive engagement-outcome relationship, the meta-analyses (main and subgroup) consistently supported our main a priori hypothesis. That is, the results support that greater engagement with digital interventions is modestly but significantly associated with improvements in mental health (effect size range: r = 0.23 to Hedges' g = 0.40) regardless of the level of guidance provided, the mental health symptom severity of users, or the type of mental health condition(s) targeted by the intervention.
Our findings validate the qualitative findings reported in the systematic review by Donkin and colleagues (18), who reported that improvements in mental health-related outcomes appeared to be associated with the number of modules accessed, but not with other engagement indicators (e.g., time spent, logins, and online interactions). In our study, it was not possible to quantitatively explore this relationship using these latter engagement indicators as too few of the included studies reported on such data. Future studies should consider reporting associations between multiple engagement measures and mental health outcomes to continue to build the evidence base for the impact of engagement on treatment outcomes, and to reach an understanding of what level or threshold of engagement is needed to achieve therapeutic benefits.
The study findings have several important implications for clinical practice and research. Firstly, they support the view that users' level of engagement with intervention content is likely a key mechanism for predicting the amount of treatment benefit obtained (18), justifying the development of strategies aimed at increasing engagement with digital interventions. There is already some promising research in this space, with several studies finding that external strategies such as automated reminders (13,67), therapist-led coaching (68,69), and moderated peer-support groups (70) can be effective in promoting engagement with digital interventions for mental health. Though the literature on the use of such strategies is still emerging, it is worthwhile for healthcare, educational, or community-based organizations that may eventually recommend or deliver digital health interventions to consider incorporating such strategies into their implementation models. To ensure that digital interventions are built in ways that motivate users to engage with them, researchers should consider involving those with lived experience in the design and development process, so that these programs solve problems that users care about, are dynamic rather than static, feature well-integrated and meaningful gamification, and allow personalisation or tailoring to the user (12,71).
Secondly, our findings suggest that module completion may be one of the more acceptable measures of engagement to evaluate. Given that research on the impact of engagement on mental health outcomes has been hindered by the lack of consensus over a suitable engagement measure (72,73), we recommend that future studies consider including module completion as the primary engagement measure to facilitate future corroboration of the present findings.
Our study had several limitations. First, the included studies differed in many ways, such as in target population, the DMHIs employed, and the types of mental health conditions examined. Thus, specific analyses of the effect of a specific measure of engagement on a particular mental health outcome could not be conducted. Second, there were also differences in the statistical approaches studies employed to quantify the engagement-outcome relationship. As effect sizes from repeated-measures and between-subjects designs are not comparable (26), data provided by the two types of studies had to be analyzed separately. Third, Pearson's r had to be estimated for some of the studies that employed correlational designs. While estimating r from other indices may not be ideal, it is preferable to omitting studies without these data, so as to maximize objectivity and minimize selection bias (26,29). Finally, bivariate correlations between engagement and outcomes do not account for the possibility that other variables may influence this association. For example, one of the included studies reported that sessions completed and time spent were correlated with reduced depression scores at post-intervention; however, these associations were non-significant after email support was accounted for (63). Controlling for factors linked with engagement may be necessary for verifying the robustness of its relationship with outcomes.
This systematic review and meta-analysis provides the first meta-analytic evidence that the more users engage with digital interventions, the greater the improvements in mental health symptoms. Our findings speak to the importance of ensuring that individuals who are less motivated to engage, or who experience more barriers to engagement, have access to strategies that can overcome these challenges if we are to maximize therapeutic benefits. On a methodological level, the findings underscore the importance of standardizing measures of user engagement in future trials to build our certainty in this evidence. To further advance the field, future research should explore which engagement metrics (log-ins, sessions completed, time spent, etc.) have the greatest impact on mental health outcomes. This information will enable the targeted development of engagement strategies that support users to interact with interventions in the ways from which they are most likely to benefit.

DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

AUTHOR CONTRIBUTIONS
DZQG, MT, LM, and HC designed the study. DZQG and MT planned the statistical analysis. DZQG extracted and analyzed the data, with assistance from MT and LM. DZQG, MT, LM, and JH assessed study eligibility and quality. DZQG, MT, and LM wrote the first draft of the manuscript. All authors contributed to the interpretation and subsequent edits of the manuscript.