Investigating the Persuasive Effects of Testimonials on the Acceptance of Digital Stress Management Trainings Among University Students and Underlying Mechanisms: A Randomized Controlled Trial

Objective: This experiment aims to investigate the influence of narrative information, varying in the degree of perceived similarity and source credibility of supplemented testimonials, on the acceptance of digital mental health services (digi-MHSs). Methods: In fall 2020, n = 231 university students were randomly assigned to an active control group (aCG, n = 55, "information only") or to one of three intervention groups (IGs) receiving information plus testimonials presented either by nonacademic staff (IG1, n = 60), university students (IG2, n = 58), or experts (IG3, n = 58). We assessed mediation effects of similarity and credibility on acceptance in terms of attitudes and usage intentions. Results: Exposure to testimonials was associated with higher usage intentions (d = 0.50) and more positive attitudes toward digi-MHSs (d = 0.32) compared with mere information (aCG). Regarding source-related effects, one-way ANOVA showed group differences in intentions (ηp² = 0.13), which were significantly higher after exposure to testimonials targeted at students than in the other groups after adjusting for baseline intentions (ηp² = 0.24). Concerning underlying mechanisms, there were full mediation effects of similarity (IG1 versus IG2) on attitudes [95% CI (0.030, 0.441)] and intentions to use digi-MHSs [95% CI (0.100, 0.528)], and of credibility on attitudes [IG2 versus IG3; 95% CI (−0.217, −0.004)], all favoring students' testimonials. Conclusion: Overall, this study indicates that the acceptance of digi-MHSs can be substantially increased by a simple, context-sensitive information intervention including testimonials by university students. Since we identified mediating effects of credibility on cognitive attitudes and of similarity on affect-driven intentions, a future trial could vary these features using narrative versus statistical information on digi-MHSs.


INTRODUCTION
Mental health promotion for college and university students has become a central topic on the international research and health policy agenda in recent years, given the increasing prevalence of psychological problems in this population (Cuijpers et al., 2019). Still, there is an immense discrepancy between the supposed need for and actual uptake of mental health services by students worldwide. Moreover, since the onset of the Covid-19 pandemic, university students have been found to experience further psychosocial strain and help-seeking barriers (Benjet, 2020; Davenport et al., 2020; Kohls et al., 2021). Digital mental health services (digi-MHSs) provide additional options to increase the availability of health promotion and treatment offers (van Daele et al., 2020). In general, digi-MHSs include a broad range of interventions differing in theory base [e.g., internet-delivered cognitive behavioral therapy (iCBT)], application fields (e.g., stepped care), guidance (e.g., asynchronous feedback), and technical implementation (e.g., virtual reality; Ebert et al., 2019). To date, solid evidence exists for the efficacy of digi-MHSs in improving subjective wellbeing or coping with stress, anxiety, and depression across student populations (Harrer et al., 2018; Lattie et al., 2019). As an example, online stress management trainings have been demonstrated to be efficacious for distressed to moderately depressed traditional and nontraditional university students facing multiple challenges, like study-work-family conflicts.
Interestingly, research has indicated a higher acceptance of digi-MHSs among university students with personal use experience, yet the utilization rates of existing digital interventions remain very low (Dunbar et al., 2018; Hadler et al., 2021; Lavergne and Kennedy, 2021). Potentially, suitable digi-MHSs are not yet well known and thus seldom used by university students despite overall positive attitudes (Mayer et al., 2019; Apolinário-Hagen et al., 2021). Although many university students appear ready to use digital health solutions, they still report difficulties in finding reliable information online (Machleid et al., 2020; Dadaczynski et al., 2021). Accordingly, the willingness to use digital media for mental health purposes depends on appropriate, easily accessible information regarding core requirements, like data security (Montagni et al., 2020). Uncertainties grounded in limited or conflicting information, besides unmet preferences, may thus impede the adoption of evidence-based psychological services (Cunningham et al., 2014, 2017). Recent research suggests that tailored fact-based psychoeducational information can help increase intentions to use mental health services among university students (Ebert et al., 2018). Under "real world" conditions, consumer choices are oftentimes based on opinions, anecdotes, or recommendations from trustworthy sources. Hence, a commonly applied practice is to make use of the supposed impact of user reviews, including star ratings, quality claims, and expert statements, especially in order to advertise commercial mental health apps (Apolinário-Hagen et al., 2018a; Larsen et al., 2019). Narrative messages can facilitate experience-based heuristic decisions, based on rules of thumb or practical examples. Simple heuristics are particularly useful for pragmatic decisions in new situations in daily life (e.g., reducing complexity, dealing with limited information; Gigerenzer and Gaissmaier, 2011).
Consequently, dual-processing models, like the Elaboration Likelihood Model (Petty et al., 2009) and the Heuristic-Systematic Model (Chaiken, 1980), propose two main pathways of persuasion or attitude change (analytical versus heuristic) that depend on the individual ability and motivation to process health messages as well as on various contextual factors. To date, though, knowledge on the specific influence of different features of mental health information, especially those related to the context (e.g., expert heuristics, reputation) rather than the content (e.g., facts like duration, themes), is limited and inconclusive. Most research on health-related testimonials has dealt with prevention and treatment choices regarding somatic disorders and has yielded mixed findings on the benefits of statistical over narrative information, like testimonials (e.g., Zebregs et al., 2015; Perrier and Martin Ginis, 2017). Among message recipients without their own experience with mental health interventions, testimonials by past users may be more influential on hypothetical treatment choices than among recipients with first-hand treatment experience (Pruitt et al., 2012). In addition, educational material combining fact-based statistical information with testimonials may improve attitudes toward digi-MHSs such as iCBT among both concerned and unconcerned people (Soucy et al., 2016).
Regarding variables related to attitude change, perceived similarity between testimonial sources and oneself, as well as source credibility, have been identified as persuasive factors across various fields of health communication (Green and Clark, 2013; Shen et al., 2015; Shaffer et al., 2018). Medical students, for instance, have been shown to prefer digital interventions that are tailored to students and approved by trustworthy academic sources. Accordingly, testimonials on digi-MHSs may represent a simple way to facilitate their acceptance among university students, who are seldom familiar with such offers and may thus be particularly susceptible to heuristics based on perceived similarity or source credibility (Quintero Johnson et al., 2017). Taken together, little is known about the usefulness of testimonials, a widely applied marketing tool, for promoting the acceptance of digi-MHSs among university students, or about the mechanisms underlying testimonial effects, which could help tailor health messages.

Abbreviations: aCG, active control group (information only); APOI, attitudes toward psychological online interventions (questionnaire); digi-MHSs, digital mental health services; ETAM, e-therapy attitude measure; IG, intervention group (receiving information plus testimonials); PSS, perceived stress scale; PU, perceived usefulness (attitude short scale); RQ, research question; TPB, theory of planned behavior; UTAUT, unified theory of acceptance and use of technology.

Objectives
This study aimed to investigate the influence of information varying in the supposed similarity of narrators to oneself and in the source credibility of testimonials, compared with mere information, on the acceptance of digi-MHSs (in terms of attitudes and intentions) among university students. Another purpose was to explore whether perceived similarity and source credibility mediate the influence of testimonials on the acceptance of digi-MHSs, like digital stress management trainings. In view of the inconclusive evidence on testimonial effects, we postulated three research questions (RQs).

RQ1: Is there an added value of testimonials as a supplement to neutral information compared to mere information regarding the acceptance of digi-MHSs among university students?
We assumed positive influences on (RQ1a) attitudes and (RQ1b) intentions to use digi-MHSs among university students after the exposure to information augmented with testimonials compared to information only.

RQ2: Are there differences in students' acceptance of digi-MHSs following information varying in source credibility and perceived similarity?
We explored differences in (RQ2a) attitudes and (RQ2b) intentions based on the exposure to testimonials from different sources (i.e., employees working outside of academia versus university students versus qualified academic experts). We expected a stronger influence of university students' and experts' testimonials compared with nonacademic staff testimonials and information only.

RQ3: Do perceived similarity and source credibility mediate the effects of different testimonial sources on students' acceptance of digi-MHSs?
Concerning mechanisms underlying testimonial effects, we explored mediation effects of (RQ3a) perceived similarity with oneself and (RQ3b) source credibility on students' attitudes and intentions.

Study Design and Interventions
In a randomized controlled trial with four parallel information groups (study arms), we assessed differentiated effects of brief, written testimonials in addition to text-based information on attitudes and intentions to use digi-MHSs, as shown in Figure 1.
The anonymously conducted survey-based online experiment was designed based on previous work (Apolinário-Hagen et al., 2018a). Using a computer-based algorithm (balanced, 1:1:1:1) implemented in the online survey tool Unipark (Questback), participants were randomly assigned to an active control condition (aCG, "information only") or to one of three narrative intervention groups (IGs). The IGs received information differing in the supplemented fictitious testimonials, which were presented either by staff outside of academia (IG1), university students (IG2), or academic experts (IG3), in order to vary similarity to students (IG2 versus IG1 and IG3) and credibility (IG3 versus IG2 and IG1). All participants received the same general information on digi-MHSs, while the IGs additionally received three text-based testimonials that were constructed based on real-world examples, theoretical considerations, and pretested stimulus material. The complete contents of the interventions are shown in Supplementary Material 1. In contrast to the pilot studies (e.g., Apolinário-Hagen et al., 2021), we did not include brands of existing digi-MHSs, although the described service was based on an evidence-based, guided digital stress management training (Ebert et al., 2016; Harrer et al., 2021). Moreover, the revised study material was designed more consistently and in less detail (e.g., focusing exclusively on stress prevention, omitting the age of testimonial sources) while aiming for external validity (e.g., emulating existing testimonials, experts as an additional source).
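The internal allocation routine of the survey tool is not described in detail; as an illustration only, a balanced 1:1:1:1 assignment, in which group sizes never differ by more than one, can be sketched with permuted blocks. The function name and group labels below are ours, not part of the study software.

```python
import random

def balanced_allocation(n_participants, groups=("aCG", "IG1", "IG2", "IG3"), seed=None):
    """Balanced 1:1:1:1 allocation via permuted blocks: each block of four
    contains every group once in random order, so arm sizes stay within 1."""
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_participants:
        block = list(groups)
        rng.shuffle(block)  # randomize order within each block of four
        sequence.extend(block)
    return sequence[:n_participants]

# Example: allocate 231 participants across the four study arms.
arms = balanced_allocation(231, seed=42)
```

With 231 participants, three arms receive 58 participants and one receives 57, close to the reported group sizes (55/60/58/58, which also reflect post-exclusion attrition).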
The valence of the testimonials was positive, focused on advantages (personal experience in IG1 and IG2; third-party perspective in IG3), and the intended effect was persuasion (cf. Shaffer and Zikmund-Fisher, 2013). All testimonials were fictional in order to control for contextual factors related to varying knowledge and popularity of experts. The online survey was pretested with n = 10 university students. The average completion time was between 12 and 17 min. This study was approved by the ethics committee of the University of Hagen, Faculty of Psychology, Germany (EA_278_2020).

Sample and Recruitment
Inclusion criteria were self-reported student status, age of at least 18 years, and provided consent (click-to-agree). Data were collected online between September 3, 2020 and October 3, 2020 using Unipark. German-speaking participants were recruited via the virtual lab and Moodle groups of the University of Hagen, Germany's only state distance-learning university, social media (e.g., Facebook), flyers with a QR code distributed across different German universities, and emails (e.g., to student representatives). Psychology undergraduate students could receive study credits, while all completers had the chance to win book vouchers. Regarding the required sample size, a priori power analyses using G*Power (Faul et al., 2007) indicated n = 170 for RQ1 [two-tailed independent-sample t-test, unequal group ratio 3:1, moderate effect size (d = 0.5), power = 0.80] and n = 180 for RQ2 [one-way ANOVA, power = 0.80, alpha = 0.05, moderate effect size (f = 0.25)], respectively.
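These G*Power calculations can be reproduced approximately with statsmodels' power classes, a sketch under the stated assumptions (d = 0.5 with a 3:1 allocation ratio for RQ1; Cohen's f = 0.25 across four groups for RQ2), which lands close to the reported n = 170 and n = 180:

```python
from math import ceil
from statsmodels.stats.power import TTestIndPower, FTestAnovaPower

# RQ1: two-tailed independent t-test, d = 0.5, power = .80, alpha = .05,
# allocation ratio 3:1 (pooled IGs versus aCG); solve_power returns the
# size of the smaller group (nobs1), with nobs2 = ratio * nobs1.
n_small = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                      power=0.80, ratio=3,
                                      alternative="two-sided")
total_rq1 = ceil(n_small) + ceil(3 * n_small)

# RQ2: one-way ANOVA with four arms, Cohen's f = 0.25; returns the total N.
total_rq2 = FTestAnovaPower().solve_power(effect_size=0.25, alpha=0.05,
                                          power=0.80, k_groups=4)
```

Minor rounding differences from G*Power are expected, since the totals depend on how fractional group sizes are rounded.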

Measures and Procedure
The survey consisted of three main parts: (1) baseline, (2) intervention, and (3) post-intervention assessment. A combination of validated scales and pre-tested self-constructed scales was used to assess information effects on the acceptance of digi-MHSs, as illustrated in Table 1.
At baseline, participants were asked to answer a few background questions (e.g., age, gender, study model, and experience with digi-MHSs). Next, baseline attitudes and intentions regarding digi-MHSs were measured using three items each, on a response scale ranging from 1 ("fully disagree") to 7 ("fully agree"). Specifically, we assessed behavioral intentions based on the Unified Theory of Acceptance and Use of Technology (UTAUT; Venkatesh et al., 2003) using a German adaptation (Hennemann et al., 2016), while the attitude short scale emphasizing perceived usefulness (PU) was grounded in the Theory of Planned Behavior (TPB; Ajzen, 1991) and pretested in previous work. Perceived stress in the past 2 weeks was measured with the validated 10-item German version of the Perceived Stress Scale (PSS-10; Klein et al., 2016) on a Likert scale ranging from 1 ("never") to 5 ("very often"; adapted scale sum range: 10-50).
Next, participants were automatically randomized to one of four information groups, either the aCG ("information only") or one of three IGs: IG1 (employees), IG2 (university students), or IG3 (experts), each receiving different additional testimonials on an online stress management training, as documented in Supplementary Material 1.
At post-intervention, we assessed the mediators perceived similarity (five items, IGs only) and source credibility (three items, all four groups) in line with a pilot trial. Attitudes and intentions were measured again with the short scales described above. To extend the scope to further digi-MHS applications, attitudes toward online interventions were additionally assessed with the e-therapy attitude measure (ETAM).

FIGURE 1 | Study flow chart. Procedure of the online experiment comparing attitudes and intentions to use digital mental health interventions (digi-MHSs) between three narrative intervention groups (IGs) receiving information plus various testimonials and the active control group (aCG) receiving information only.

Statistical Analyses
Collected data were extracted from Unipark if marked as completed or screened out. Data handling was done in accordance with a pilot trial (Apolinário-Hagen et al., 2021), as documented in Figure S1 in the Supplementary Material 2. Imputation by mean was performed in the case of few missing values (e.g., one missing value in the ETAM). Research questions were tested at an alpha level of 0.05 (two-tailed) using IBM SPSS, version 26.
We conducted independent two-sample t-tests to compare the aCG with the IGs (RQ1) in attitudes and intentions at post-intervention (added value of testimonials, coding: aCG = 0, IGs = 1). Furthermore, we performed one-way ANOVA to determine differences in attitudes and intentions between the four study arms (RQ2; differentiated effects of testimonial sources) at post-intervention, including post-hoc tests (Bonferroni, multiple comparisons) and the adjustment of baseline values (ANCOVA for sensitivity analyses). Effect sizes were classified according to social sciences' conventions (Cohen, 1988).
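As a minimal sketch of this analysis pipeline (in Python rather than SPSS, and on simulated data, since the variable and column names below are ours), the RQ1 contrast, the RQ2 one-way ANOVA, and the baseline-adjusted ANCOVA could look as follows:

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Simulated data for illustration only; no group effect is built in.
rng = np.random.default_rng(0)
df = pd.DataFrame({"group": rng.choice(["aCG", "IG1", "IG2", "IG3"], size=231)})
df["baseline"] = rng.normal(4, 1, len(df))
df["intention_post"] = 0.5 * df["baseline"] + rng.normal(2, 1, len(df))

# RQ1: aCG (coded 0) versus pooled IGs (coded 1), two-sample t-test.
acg = df.loc[df.group == "aCG", "intention_post"]
igs = df.loc[df.group != "aCG", "intention_post"]
t, p = stats.ttest_ind(igs, acg)

# RQ2: one-way ANOVA across the four arms at post-intervention.
f, p_anova = stats.f_oneway(*[g["intention_post"] for _, g in df.groupby("group")])

# Sensitivity analysis: ANCOVA adjusting for baseline via OLS; Bonferroni-
# adjusted pairwise comparisons would follow as post-hoc tests.
ancova = smf.ols("intention_post ~ C(group) + baseline", data=df).fit()
```

The ANCOVA group coefficients are then the baseline-adjusted differences from the reference arm (here aCG).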
RESULTS

Sample Characteristics
The majority (46.3%) reported the general university entrance qualification as their highest educational attainment, followed by a bachelor's degree (26.0%), a master's degree (15.6%), and other qualifications (12.1%). Most participants (66.2%) were enrolled at a distance-learning university, while 26.9% attended a traditional university and 6.5% both study models simultaneously (other: 0.4%). In total, 61.0% (n = 141) studied full-time and 39.0% (n = 90) part-time.

Descriptive and Ancillary Analyses
Regarding awareness, 26.8% (n = 62) of the sample reported having heard about digi-MHSs ["no": n = 155 (67.1%); "not sure": n = 14 (6.1%)], while 7.8% (n = 18) reported having obtained more information on specific digi-MHSs and 4.8% (n = 11) indicated respective experience. Table 1 shows psychometric data of the assessed scales. Descriptive data differentiated by experimental group and ancillary analyses can be found in the Supplementary Material 2. For instance, perceived stress was moderately high according to the PSS-10, but only weakly correlated with intentions (r = 0.159, p = 0.015) and attitudes (ETAM; r = 0.161, p = 0.014) at post-intervention.

As shown in Table 2, Bonferroni-adjusted post-hoc tests of the ANCOVA revealed higher intentions to use digi-MHSs only after exposure to students' testimonials (ps < 0.001).
RQ3a: Perceived Similarity
In addition, there was a partial mediation effect of perceived similarity for IG3 versus IG2 [indirect effect = −0.364, 95% CI (−0.651, −0.101)], with higher intentions in the case of greater similarity following exposure to testimonials by students compared with experts.

RQ3b: Source Credibility
Source credibility fully mediated the influence of students' testimonials on attitudes in comparison to expert testimonials [IG2 versus IG3; indirect effect = −0.103, 95% CI (−0.217, −0.004)]. There was no mediation effect of source credibility on attitudes when comparing IG1 versus IG3 (staff versus experts), nor any mediation effect on intentions.
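The indirect effects reported above are accompanied by bootstrap confidence intervals, as is common in PROCESS-style mediation analyses. A minimal, self-contained sketch of a percentile bootstrap for a simple mediation model (x to m to y) on simulated data follows; the variable names and effect sizes are illustrative, not the study's.

```python
import numpy as np

def bootstrap_indirect_effect(x, m, y, n_boot=2000, seed=1):
    """Percentile-bootstrap CI for the indirect effect a*b in a simple
    mediation model: x -> m (path a), m -> y controlling for x (path b)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                   # resample with replacement
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]                  # path a: x -> m
        X = np.column_stack([np.ones(n), xb, mb])
        b = np.linalg.lstsq(X, yb, rcond=None)[0][2]  # path b: m -> y given x
        estimates[i] = a * b
    return np.percentile(estimates, [2.5, 97.5])

# Illustrative example: a mediator (e.g., credibility) transmits a group
# contrast (e.g., IG2 versus IG3) onto an outcome (e.g., attitudes).
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 300).astype(float)
m = 0.5 * x + rng.normal(0, 1, 300)
y = 0.4 * m + rng.normal(0, 1, 300)
lo, hi = bootstrap_indirect_effect(x, m, y)
```

Full mediation is concluded when the bootstrap CI of the indirect effect excludes zero while the direct effect of x on y becomes non-significant.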

DISCUSSION
This study aimed to explore the influence of testimonials on the acceptance of digi-MHSs among university students, as well as the mechanisms mediating these effects.

RQ1: Added Value of Testimonials
Concerning the efficacy of narrative interventions, our analyses showed that exposure to testimonials in addition to written information was associated with higher intentions to use (d = 0.50) and more positive attitudes toward (d = 0.32) digi-MHSs for stress prevention, compared with mere information. Hence, this study indicated that the acceptance of digi-MHSs for stress management can be improved to a meaningful extent by a simple testimonial intervention. This finding corresponds to prior work on acceptance-facilitating interventions involving multi-component information on digi-MHSs, like iCBT (Ebert et al., 2015; Soucy et al., 2016). In contrast, we found no testimonial effects on attitudes toward online therapies, which is potentially due to the focus on health promotion in the stimulus material. Overall, however, the evidence base for narrative interventions is inconclusive (Shaffer et al., 2018) and particularly scarce for mental health services. Consequently, the identified testimonial effects in digital mental health promotion can be considered one major contribution of this experiment.

RQ2: Differences in Acceptance
Another goal was to compare the influence of different information types on the acceptance of digi-MHSs. We identified higher intentions attributable only to students' testimonials compared with each other information group, before and after adjusting for baseline intention values, with a moderate-to-high effect size. While the influence of students' testimonials appears plausible, it was unexpected to identify no influence of expert statements. Potentially, expert testimonials were processed more analytically than first-person testimonials. Participants may have concluded that these testimonial sources intended to persuade them, which may have led to lower perceived trustworthiness and more reactance (Wang and Shen, 2019). Accordingly, a recent survey indicated that recipients of health advertisements were concerned about the inappropriate use of academic reputation (doctors as expert sources) and found that testimonials should be viewed more critically in healthcare than in consumer contexts (Holden et al., 2021). To date, only a few investigations on the effects of expert versus lay people's testimonials on digi-MHSs exist, and these have yielded indecisive results (Healey et al., 2017).
Here, we confirmed positive influences of first-person testimonials, which have been shown to be more persuasive than third-person narratives in other health promotion experiments (Chen and Bell, 2021). In contrast to intentions, we found no group difference in attitudes toward online interventions for stress coping or therapy. Possibly, attitudes were more easily biased by social desirability than intentions, making it more difficult to induce improvements through source-related differences. In addition, attitudes and intentions represent different stages of adoption, with intention acting as a mediator of the effect of attitude on behavior in line with the TPB (Ajzen, 1991; MacKinnon et al., 2000). According to a meta-analysis, statistical evidence seems more suitable for improving attitudes and beliefs (cognitive elaboration), whereas narrative messages rather influence affective responses, like behavioral intentions (Zebregs et al., 2015). However, none of the reviewed studies focused on mental health or eHealth. Therefore, more research is required to explore source-related effects of testimonials and related factors, not only on the acceptance of digi-MHSs but also on the influence of acceptance on subsequent registration or uptake rates (Healey et al., 2017; Wopperer et al., 2019) as well as successful program completions (Fleming et al., 2018).
Recent research demonstrated an association between prior experience with digi-MHSs and higher acceptance among university students, but also little experience and low uptake rates at the same time (Lavergne and Kennedy, 2021). Additionally, previous experience appears not to be mandatory for forming positive attitudes toward digi-MHSs (Mayer et al., 2019). Overall, the low experience rates regarding digi-MHSs in our sample (5%) correspond to earlier surveys from Germany (Webelhorst et al., 2020; Breil et al., 2021) and international findings across different populations (Toscos et al., 2018; Clough et al., 2019; Richardson et al., 2020). Although research has revealed positive attitudes and a readiness to try stand-alone digi-MHSs among university students (Hadler et al., 2021), in direct comparison, face-to-face support, including blended care, has been shown to be preferred in surveys, including discrete choice experiments (Phillips et al., 2021). Future experiments on acceptance-facilitating interventions may therefore extend the scope to blended interventions.

RQ3: Mediation Effects
Another purpose was to identify mediators of attitudes and intentions. Consistent with prior work, perceived similarity mediated the influence of exposure to testimonials on attitudes toward and intentions to use digi-MHSs for mental health promotion, favoring students' over employees' testimonials (IG1 versus IG2). In addition, there was a full mediation effect of similarity on intentions (students' versus expert testimonials). Thus, the acceptance-facilitating role of similarity seems to be a promising future focus when designing information aimed at promoting the adoption of digi-MHSs. Furthermore, we found a full mediation effect of source credibility on attitudes, with student testimonials being assessed as more credible than those by experts (IG2 versus IG3). In contrast, there were no mediation effects of source credibility on intentions. Interestingly, there were no differences between the IGs in source credibility, while the aCG rated the information as significantly more credible than did participants receiving expert testimonials. Potentially, the expert testimonials were not optimally designed; they could have been presented by recognized experts and could additionally have integrated critical statements. Increasing credibility can be a starting point, which may be promoted by certification and quality seals. Since fall 2020, the Digital Healthcare Act has allowed for the prescription of certified health apps in Germany (Gerke et al., 2020). Yet, many concerns persist among health professionals, especially regarding data security (Heidel and Hagist, 2020). Future studies on acceptance-facilitating interventions could therefore focus on balancing information on the benefits (safety, effectiveness) with the contraindications of quality-approved mental health apps.

Limitations
Limitations of this study include multiple testing (i.e., risk of false-positive findings, "p-hacking"), the use of fictional, positively framed testimonials, and unequal group sizes for testing the effects of testimonials in RQ1.
Out of n = 368 data sets, 137 (37%) were removed, mostly due to withdrawal of consent, dropping out prior to randomization, or unrealistic participation time, as shown in Figure S1 (Supplementary Material 2). Nonetheless, this rate corresponds to other online studies using the virtual lab.
Because the study was integrated into a Master's thesis project, the recruitment period was limited and scheduled in the early winter semester 2020/21. Furthermore, we measured neither the study semester nor mental health status, apart from stress, in order to reduce the amount of identifiable or sensitive data.
In addition, it may have been useful to repeat the health messages to achieve more robust persuasive effects (Suka et al., 2020). Furthermore, we did not include information on the costs of digi-MHSs, both to reduce the number of attributes and given the universally free access to healthcare in Germany.
Finally, it should be considered that about two-thirds of the sample were distance-learning students, who differ from traditional students in demographic background and study conditions, whereas the Covid-19 pandemic contributed to at least comparable distance study conditions in fall 2020. Moreover, online education and the unavailability of face-to-face support may have had an impact on the acceptance of digi-MHSs.

CONCLUSION
Taken together, this experiment identified positive influences of first-person testimonials on the acceptance of digi-MHSs among university students, indicating that even such simple narrative interventions may be an option for information campaigns. Specifically, program information supplemented with students' testimonials could be useful for increasing behavioral intentions. In a next step, the most relevant domains for fostering perceived similarity and credibility could be explored in more detail. Further insights into these mediating effects on acceptance may help develop tailored information on digi-MHSs.

DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found at: https://doi.org/ 10.7802/2287.

ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Ethics committee of the University of Hagen, Germany (EA_278_2020). The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS
JA-H conceived the study idea and study design, initiated the study, wrote the first draft of the manuscript, and coordinated and finalized the article. LF and JW sought ethical approval. CS, JW, LF, MH, FW, DE, and DL made relevant contributions to the study design and interpretation of data. JW programmed the online questionnaire, recruited participants, and collected and analyzed data under supervision of LF and CS within JW's master thesis project. FW and JA-H cross-checked the data underlying the manuscript and prepared the data set for sharing for non-commercial purposes. All authors read the manuscript, provided feedback, and approved the final version.