ORIGINAL RESEARCH article

Front. Psychiatry, 27 January 2022
Sec. Social Psychiatry and Psychiatric Rehabilitation
https://doi.org/10.3389/fpsyt.2022.781726

Needs and Experiences in Psychiatric Treatment (NEPT)- Piloting a Collaboratively Generated, Initial Research Tool to Evaluate Cross-Sectoral Mental Health Services

Sebastian von Peter1*, Helene Krispin1, Rosa Kato Glück1, Jenny Ziegenhagen1,2, Lena Göppert1, Patrick Jänchen1, Christine Schmid1, Anne Neumann3, Fabian Baum3, Bettina Soltmann3, Martin Heinze1, Julian Schwarz1, Timo Beeker1 and Yuriy Ignatyev1
  • 1Department of Psychiatry and Psychotherapy, Brandenburg Medical School Theodor Fontane, Immanuel Klinik Rüdersdorf, Rüdersdorf, Germany
  • 2ExPEERienced- Experience With Mental Health Crises- Registered Non-profit Organization, Berlin, Germany
  • 3Zentrum für Evidenzbasierte Gesundheitsversorgung, Technische Universität Dresden, Dresden, Germany

Background: Research tools to evaluate institutions or interventions in the field of mental health have rarely been constructed by researchers with personal experience of using the mental health system (“experiential expertise”). This paper presents a preliminary tool that was developed within a participatory-collaborative process evaluation, conducted as part of a controlled, multi-center, prospective cohort study (PsychCare), to evaluate flexible and integrative psychiatric treatment (FIT) models in Germany.

Method: The collaborative research team, consisting of researchers with and without experiential expertise, developed 12 experiential program components of FIT models through an iterative research process based on Grounded Theory Methodology. These components were transformed into a preliminary research tool that was evaluated by a participatory expert panel and during a pilot and a validation study, the latter using a random sample of 327 users from 14 mental health departments. Internal consistency of the tool was tested using Cronbach's alpha. Construct validity was evaluated using a Principal Components Analysis (PCA) and a Jonckheere-Terpstra test in relation to different implementation levels of the FIT model. Concurrent validity was tested against a German version of the Client Satisfaction Questionnaire (ZUF-8) using correlation analysis and a linear regression model.

Results: The evaluation by the expert panel reduced 29 initial items to 16, which were further reduced to 11 items during the pilot study, resulting in a research tool (Needs and Experiences in Psychiatric Treatment, NEPT) that demonstrated good internal consistency (Cronbach's alpha of 0.89). PCA yielded a 1-component structure that accounted for 49% of the total variance, supporting the unidimensional structure of the tool. The total NEPT score increased alongside the increasing implementation of the FIT model (p < 0.05). There was evidence (p < 0.001) for convergent validity assessed against the ZUF-8 as criterion measure.

Conclusions: The NEPT tool appears promising for further development to assess users' experiences with psychiatric care models and the fulfillment of their needs. This paper demonstrates that it is possible to use a participatory-collaborative approach within the methodologically rigorous confines of a prospective, controlled research design.

Introduction

Research tools and psychometric scales used to evaluate institutions and interventions in the field of mental health have mostly been constructed by clinical scientists with no personal experience of the psychiatric care system, of mental crises and disabilities, or of recovery from them (in the following designated as “experiential expertise”). Yet, there is a rich tradition of research and knowledge production by scholars with experiential expertise that has been contributing to the mental health field for more than two decades in various countries (1–4). Using different epistemological and theoretical approaches (5–8), these studies frequently articulate valid criticism toward the current medicalized approach to psychiatric care (2), the psy-centrism of contemporary social infrastructures (8), as well as the appropriation of contrasting perspectives and positions (1), resulting in the silencing of possible alternatives—also on the level of knowledge production.

Given this context, only a few research groups in the field of mental health led by, or including, researchers with experiential expertise have been able to establish themselves. One of these exceptions is SURE (Service User Research Enterprise) in the United Kingdom, which hosts exclusively researchers with experiential expertise who investigate and evaluate various health care services using self-developed criteria, standards, and instruments (9). This group and others have created several scales: As early as 1996, Diana Rose's hybrid team created the “CONTINU-UM” scale to evaluate the continuity of psychiatric treatment (10). In the following year, Rogers created the “Empowerment Scale,” using expertise from a participatory board staffed by activists from self-help groups (11). The “Evans VOICE Inpatient Care Scale” was also developed in a participatory way and surveys the aspects of care that users consider to be important (12). The Questionnaire about the Process of Recovery (QPR) was developed by a collaborative research team and may assist users in setting treatment goals (13). The last example is the “CEO-MHS,” for which researchers with experiential expertise created a questionnaire to record user satisfaction (14).

This paper presents the first steps in developing a novel research tool that aims at evaluating the experiences and fulfillment of needs during psychiatric treatment from the perspective of users. This tool was developed during a participatory-collaborative process evaluation as part of a controlled, multi-center, prospective cohort study (PsychCare) aiming at evaluating innovative, flexible, and integrative psychiatric treatment (FIT) models in Germany (15). These FIT model projects are mainly hospital-based and enable a more need-adapted, cross-sectoral service delivery, including complex outpatient forms of psychiatric treatment (16). Our approach involved continuous collaboration between researchers with and without experiential expertise of the psychiatric care system, crises and disabilities, or recovery from them (17). It is based on a cooperation that intends to meet neither the strict and egalitarian criteria of co-production (18, 19) nor the systematic involvement of actors in the field under investigation, as practiced in participatory research projects (20). Instead, our mode of collaboration allowed us to build substantially upon the knowledge of researchers with experiential expertise within the methodologically rather rigorous confines of a prospective, controlled cohort study.

The overall aim of the PsychCare study was to examine the benefits, costs, and efficiency of more flexible, continuous, and setting-integrated treatment models in Germany in comparison to the standard care currently provided. Following the MRC guidance for the evaluation of complex interventions (21), one part of this study included a participatory process evaluation that was realized through the collaborative teamwork described above. The main results of this process evaluation will be presented elsewhere (22). This paper focuses on the collaborative development of a research tool during this process evaluation, aimed at evaluating the experiences and fulfillment of needs during psychiatric treatment from the perspective of users. The construction of this research tool and the initial steps of piloting and validation are described below, followed by a discussion of its value within the context of this study and beyond.

Materials and Methods

The PsychCare study is financed by the German Innovation Fund of the Federal Joint Committee (G-BA) (grant reference no. 01VSF16053), which invests resources from the health care insurance system in researching innovative health programs (23). The study is aimed at evaluating innovative psychiatric treatment models that have gradually been developed following the 2012 introduction of § 64b of the German Social Code Book V (22). Results of previous studies on this topic are published elsewhere (16, 21, 22, 24–29). The above-mentioned law enables the implementation of more flexible and integrative psychiatric treatment models (FIT models) based on a Global Treatment Budget (GTB). Given the rather rigid and fragmented nature of the German health care system, these FIT models allow for more user-oriented and outpatient forms of treatment (30). As a result, users stay mainly in their home environment but can also be treated flexibly in the clinic with fewer bureaucratic hurdles. Ideally, this allows better integration of the treatment into the user's everyday life and gives staff better insight into their reality of life (31).

The aforementioned GTB targets a fixed number of people to be treated per year. How this budget is used, for which treatment, in which settings and for what purpose is decided by the relevant institution. A total of 22 of these FIT models can currently be found in the hospital sector in Germany.

Team Structure and Cooperation

The results presented in this manuscript build on a previous study, Eva-Mod64 (22), in which 13 FIT hospital departments were evaluated between 2016 and 2017, resulting in the development of 11 process- and structure-related program components of FIT models (16, 22, 24). Whereas this precursor study was carried out only by researchers without experiential expertise, the team of the PsychCare participatory process evaluation was staffed by researchers both with (in the following “experiential experts,” EE) and without experiential expertise (“conventional researchers,” CR). This team composition was chosen to direct the evaluative focus on the specific experiences of FIT model users. The three EE involved in the team were researchers with and without academic degrees. The CR group consisted of two medical students, two paid researchers, and the principal investigator, the latter working in psychiatry but not having personal experience as a mental health service user.

The team met as a whole or in subgroups (CR only, EE only, or EE + CR). In between meetings, the team members worked individually, alone or in tandems, consisting of one EE and one CR each. In addition, supervision sessions took place three times per year, covering the whole group or CR and EE as individual groups. During these supervisory sessions, the collaborative approach and its impact on the research results were reflected upon. The results of this work will be published elsewhere (32). The whole team contributed to all phases of this project, and also as authors of this paper.

Construction of the NEPT Research Tool

The construction of the NEPT research tool was carried out in several steps, which are shown in simplified form in Figure 1. Section 2.2 describes the construction of the experiential components and the preliminary items of the NEPT research tool. Ethics approval was granted by the ethics committee of the TU Dresden on 7 September 2017.


Figure 1. Multi-step, collaborative construction of the NEPT research tool.

Construction of the Experiential Components

At the beginning of the study, the 15 core transcripts containing focus group material from the EvaMod64b precursor study (25) were re-coded to familiarize the team with the research topic. An evaluation method based on Grounded Theory Methodology (GTM) (33) was chosen, as GTM allows for the systematic inclusion of various positions and forms of knowledge during a coding process, a high degree of process orientation and flexibility, and the systematic handling of conflicting perspectives and irritations (31). The coding of these transcripts was carried out in tandems of EE and CR, using the computer-assisted qualitative data analysis software NVivo (34) and the 11 process- and structure-related components of the precursor study as deductive categories (26). While the CR coded deductively, the EE were encouraged to add open codes, which enabled them to systematically feed personal experiences and/or collectivized forms of experiential knowledge into the coding process.

This process enabled a “creative chaos” (30), allowing the group to discover and code new aspects and to open up the possibility of systematically enriching the insights from the precursor study through experiential expertise and generalizing them further (22). As a result, a set of 12 so-called experiential program components emerged (Figure 2), aiming at capturing the experience of the FIT model users. As these components emerged from the coding process described above and the underlying experiential knowledge of the EE involved, they were framed as “I-sentences” to highlight their experiential character. They were further defined, repeatedly discussed, and finally agreed upon by the whole group and, in accordance with GTM, systematically linked to each other, as well as to the process- and structure-related components of the precursor study.


Figure 2. The developed 12 experiential components that reflect the experiences and fulfillment of needs of psychiatric treatment from the perspective of service users. To this end, they were framed as I-statements and their definitions were given accordingly.

Construction of the Questionnaire Items

The experiential components served as a theoretical basis to develop the research guidelines for the qualitative part of the process evaluation (35). They were further used to construct items of a standardized research tool that assesses the experiences with, and fulfillment of needs in, the evaluated care models from the perspective of users. This tool was introduced during the 15-month follow-up of the PsychCare study's outcome evaluation to assess the experiences of a larger number of users in a more standardized way (15), so that the results of the study's process evaluation could be triangulated with those of the outcome evaluation. A second aim was to better understand the value of an evaluative construct for assessing user experiences and to what extent such a construct may share similarities with other constructs, for example those evaluating treatment satisfaction. Literature on this question usually targets patient-reported experience measures (PREMs) (36, 37), which, however, are generally constructed without involving experiential expertise.

To this end, the experiential components were transformed in several stages into questionnaire items that ultimately resulted in a questionnaire called “Needs and Experiences in Psychiatric Treatment” (NEPT). The first stage consisted of converting the I-sentences of the components into questions by the EE subgroup. These questions were discussed and further developed into 2 or 3 different questions per component by the whole group. The component “Flexibility” (see Figure 2), for instance, was assigned the questions: “Were you treated overall in the settings that were suitable for you (full-time inpatient, day clinic, at home)?” and “Did the change between the settings take place in a way that was suitable for you?” To facilitate understanding and to do justice to their experiential nature, all selected questions were then re-converted into I-statements that were finally given a five-level Likert scale (strongly disagree, disagree, neither agree nor disagree, agree, strongly agree).

Piloting the NEPT Research Tool

After the questionnaire items had been developed (see Section 2.2), they were piloted and validated. This took place in three surveys: first an expert survey, then a pilot survey, and finally a survey of a larger population during the 15-month follow-up of the PsychCare study's outcome evaluation.

Expert Survey

The content validity of the questionnaire items was assessed using an expert panel (38). The expert survey targeted a group of 10 EE, in contrast to the precursor study, in which only CR had undertaken this phase of the validation process (21). The group consisted of users, mental health activists, patient representatives, recovery companions, and peer and user researchers, with the majority of these experts holding several of these identities. Women predominated in the group (7:3); members ranged in age from 26 to 72 years.

The preliminary items were presented in two rounds to the expert group, which was asked to assess which of the assigned questions best captured the essence of the underlying components. A rating from 1 to 3 was given, 1 standing for “essential,” 2 for “appropriate but not essential,” and 3 for “not essential.” At least half of the experts had to agree that a question should be classified as “essential” to confirm its content validity (3). Based on the results of this expert survey, a five-level scale of “not at all applicable,” “somewhat inapplicable,” “neutral,” “somewhat applicable,” and “fully applicable” was assigned, which served to evaluate the items during the pilot survey.
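
The “at least half rated essential” rule corresponds to Lawshe's content validity ratio (CVR). The following sketch is illustrative only; the vote counts shown are hypothetical, not the study's ratings:

```python
# Illustrative sketch: Lawshe's content validity ratio (CVR) formalizes the
# rule that at least half of the experts must rate an item "essential".
# The vote counts below are hypothetical, not the study's actual ratings.

def content_validity_ratio(n_essential, n_experts):
    """CVR = (n_e - N/2) / (N/2); CVR >= 0 iff at least half rate 'essential'."""
    half = n_experts / 2
    return (n_essential - half) / half

N_EXPERTS = 10                    # panel size used in this study
votes = [8, 5, 3, 10, 4]          # hypothetical 'essential' counts per item
cvrs = [content_validity_ratio(v, N_EXPERTS) for v in votes]
# Retain items whose CVR is non-negative (>= half the panel voted 'essential'):
kept = [v for v in votes if content_validity_ratio(v, N_EXPERTS) >= 0]
```

With 10 experts, an item rated essential by 8 panelists yields CVR = 0.6, while one rated essential by only 3 yields a negative CVR and is dropped.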

Based on the results of the expert assessment, the final questionnaire was developed, which contained a total of 16 items, assigned to the 12 experiential components on which it was based.

Pilot Survey

The pilot survey included 94 users of one of the FIT model departments that was not included in the main study, with the sample drawn from three treatment settings (outpatient clinic, day clinic and hospital ward) according to a quota principle. Respondents were asked to rate the items using the above-mentioned five-point scale. In addition, the respondents' detailed comments on individual items were recorded.

Based on the feedback of the participants, the correlations and reliability of the items were determined. With reference to previous studies (39), in which socially desirable response behavior was shown to occur in the evaluation of health care services, the items were coded as follows: “not at all applicable” to “neutral” = 0, “somewhat applicable” = 1, and “fully applicable” = 2. Furthermore, item selection was based on the principle of excluding items with low internal consistency with the scale, the cut-off for dropping items being set at ≤ 0.7. This strategy, called alpha maximization (38), was used with great caution, as it can lead to the elimination of items with low selectivity that are necessary to distinguish all areas of the dimensional spectrum. In addition, this strategy can reduce content validity, which is why the research team also took content considerations into account when selecting the items (40).
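
The recoding scheme described above can be sketched as follows (illustrative Python; the response labels are those given in the text, and the sample answers are hypothetical):

```python
# Collapsing the five response options as described above, to guard against
# socially desirable answering (labels as given in the text).
RECODE = {
    "not at all applicable": 0,
    "somewhat inapplicable": 0,
    "neutral": 0,
    "somewhat applicable": 1,
    "fully applicable": 2,
}

# Hypothetical answers of one respondent to three items:
responses = ["neutral", "fully applicable", "somewhat applicable"]
coded = [RECODE[r] for r in responses]
```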

Validation of the NEPT Research Tool

The developed NEPT questionnaire was handed out to the users surveyed at the 15-month follow-up of the PsychCare outcome evaluation. For details on inclusion and exclusion criteria, see Soltmann et al. (15). Socio-demographic characteristics of the sample were assessed using descriptive statistics.

Testing Internal Consistency

The internal consistency of the NEPT tool was assessed by estimating Spearman rank item-total correlations, which express the degree to which the items of an instrument measure the same attribute. Additionally, the correlation matrix was checked. The size of the correlations was interpreted according to Cohen (41): 0.10 ≤ r < 0.30, small effect size; 0.30 ≤ r < 0.50, medium effect size; and r ≥ 0.50, large effect size. Internal consistency was also estimated using Cronbach's alpha reliability coefficient. A Cronbach's α > 0.6 and ≤ 0.7 was considered acceptable; a value > 0.7 and < 0.9 good; and a value of 0.9 or higher indicated excellent reliability (42). For the pilot testing, alpha maximization was used as a criterion for item elimination; the cut-off for dropping items was 0.7.
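
As an illustration of these consistency checks (not the SPSS syntax used in the study), Cronbach's alpha and corrected Spearman item-total correlations can be computed as follows; the toy data are simulated:

```python
import numpy as np
from scipy import stats

def cronbach_alpha(X):
    """Cronbach's alpha for an n_respondents x n_items score matrix."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def corrected_item_total(X):
    """Spearman correlation of each item with the sum of the remaining items."""
    X = np.asarray(X, dtype=float)
    total = X.sum(axis=1)
    return [stats.spearmanr(X[:, j], total - X[:, j])[0]
            for j in range(X.shape[1])]

# Simulated toy data: five items driven by one latent attribute plus noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
items = latent + 0.6 * rng.normal(size=(200, 5))
alpha = cronbach_alpha(items)
r_it = corrected_item_total(items)
```

Alpha-if-item-deleted, the quantity inspected during alpha maximization, is simply `cronbach_alpha` applied to the matrix with one column removed.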

Testing Validity

Validity was assessed in several ways. First, an exploratory Principal Components Analysis (PCA) was conducted to evaluate the underlying structure of the instrument (43). To test the adequacy of the data for PCA, we used the Kaiser-Meyer-Olkin measure (should be ≥ 0.5) and the Bartlett test of sphericity (should be significant) (43). A strict factor-loading cut-off of > 0.50, used by other researchers (44), was adopted. This method is primarily used to explore covariation without a prior hypothesis or theory (45). In our case, the number of components to extract was based on three criteria: the Eigenvalue > 1 (Kaiser criterion), the Velicer MAP criterion (polychoric correlations), and the Velicer MAP criterion/4th power (46, 47), using simulated polychoric correlation matrices.
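
A minimal sketch of these adequacy checks and the Kaiser criterion is given below (illustrative Python on simulated data with ordinary Pearson correlations; the study additionally used polychoric correlations and the Velicer MAP criteria in R):

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(X):
    """Bartlett's test of sphericity: chi2 = -(n-1-(2p+5)/6) * ln|R|."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return chi2, df, stats.chi2.sf(chi2, df)

def kmo(X):
    """Kaiser-Meyer-Olkin measure of sampling adequacy (adequate if >= 0.5)."""
    R = np.corrcoef(X, rowvar=False)
    inv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                    # anti-image (partial) correlations
    off = ~np.eye(R.shape[0], dtype=bool)
    r2, a2 = (R[off] ** 2).sum(), (partial[off] ** 2).sum()
    return r2 / (r2 + a2)

def pca_eigenvalues(X):
    """Eigenvalues of the correlation matrix (Kaiser criterion: retain > 1)."""
    return np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]

# Simulated unidimensional toy data (one latent component, six items):
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 1)) + 0.8 * rng.normal(size=(300, 6))
chi2, df, p_value = bartlett_sphericity(X)
eigs = pca_eigenvalues(X)
```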

In a second step, and to support construct validity, known-groups validity was examined by testing hypothesized groupings of the survey outcomes and detecting differences between them (48). A linear trend was tested across participants from three mental health hospital groups with different levels of care provision (centers providing standard health care, centers providing both FIT and standard treatment, and centers exclusively providing FIT treatment), using the Jonckheere-Terpstra test, one-tailed, from a Monte Carlo simulation (10,000 samples) (49). The hypothesis was that these groups were ordered in a specific sequence, expecting that participants from the hospital groups with a higher level of FIT treatment would have higher NEPT total scores.
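
The Jonckheere-Terpstra statistic itself can be sketched as follows (illustrative Python using the large-sample normal approximation rather than the Monte Carlo p-value reported in the study; the group data are hypothetical):

```python
import numpy as np
from scipy import stats

def jonckheere_terpstra(groups):
    """One-tailed Jonckheere-Terpstra test for an increasing trend across
    ordered groups, using the large-sample normal approximation
    (tie correction omitted for brevity)."""
    jt = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            for x in np.asarray(groups[i], dtype=float):
                y = np.asarray(groups[j], dtype=float)
                jt += (y > x).sum() + 0.5 * (y == x).sum()
    n = np.array([len(g) for g in groups])
    N = n.sum()
    mean = (N ** 2 - (n ** 2).sum()) / 4
    var = (N ** 2 * (2 * N + 3) - (n ** 2 * (2 * n + 3)).sum()) / 72
    z = (jt - mean) / np.sqrt(var)
    return z, stats.norm.sf(z)   # one-tailed p for the increasing alternative

# Hypothetical scores for standard-care, mixed, and pure-FIT centers:
z, p = jonckheere_terpstra([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
```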

Third, concurrent validity was assessed by comparing the total NEPT scores with the total ZUF-8 scores (50), a German version of the Client Satisfaction Questionnaire CSQ-8 (51) that was also used at the 15-month follow-up of the PsychCare study. Concurrent validity was analyzed both by calculating a Pearson correlation (38) and by using a multiple linear regression model, adjusted for the influence of the two demographic covariates gender and age. Missing data of the NEPT and ZUF-8 questionnaires were not imputed. The size of the coefficient of determination (R2) was interpreted according to Cohen: R2 < 0.02, very weak; 0.02 ≤ R2 < 0.13, weak; 0.13 ≤ R2 < 0.26, moderate; R2 ≥ 0.26, substantial (52). We expected to find a significant correlation with a large effect size between the ZUF-8 and NEPT total scores, and a significant association with a substantial R2 in a linear regression analysis with the NEPT total score as the dependent and the ZUF-8 total score as the independent variable.
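
The correlation and covariate-adjusted regression can be sketched as follows (illustrative Python on simulated data; the variable names and the simulated association are hypothetical, not the study data):

```python
import numpy as np
from scipy import stats

def regress_with_covariates(y, x, age, gender):
    """OLS of y on x, adjusted for age and gender; returns (coefficients, R^2)."""
    X = np.column_stack([np.ones_like(x), x, age, gender])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return beta, r2

# Simulated data mimicking the setup: a ZUF-8-like score predicting a
# NEPT-like score, adjusted for age and gender (all values hypothetical).
rng = np.random.default_rng(1)
zuf8 = rng.normal(24, 4, 300)
age = rng.normal(47, 13, 300)
gender = rng.integers(0, 2, 300).astype(float)
nept = 0.1 * zuf8 + rng.normal(0, 0.5, 300)
r, p = stats.pearsonr(zuf8, nept)
beta, r2 = regress_with_covariates(nept, zuf8, age, gender)
```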

The significance level was set at p ≤ 0.05. Most analyses were performed with the Statistical Package for the Social Sciences (SPSS), version 23.0 for Windows. The Velicer MAP criterion and the Velicer MAP criterion/4th power were examined with the R package “random.polychor.pa” running in R version 4.0.5 (45).

Results

Cooperation Within the Group

A detailed description of the team's collaborative processes and experiences while conducting this study has been published elsewhere (32). As described above, staffing the team with a mix of researchers with and without experiential expertise and organizing our work in different sub-groups and tandems made it possible to systematically incorporate experiential expertise throughout the whole research process. It opened up an “area of negotiating meaning and representation” (53), enabling new forms of knowledge and the recombination of different forms of knowledge to evaluate and (hopefully) ultimately improve psychiatric treatment.

At the same time, the research group was located within a privileged site of knowledge production (university) and entrenched within the confines of a rather traditional research design (prospective, controlled study). Thus, collaborative knowledge production was subject to various contingencies, emerging from academic rules and parameters that also defined to a certain extent the roles and responsibilities of the researchers involved. This led to a rather disciplined form of experiential expertise coming into play, that stretched the standard criteria of health service research and/or psychiatric discourse but ended up subjugating its emancipatory potential to the authority of scientific knowledge and academic knowledge production. Longstanding, structural inequalities of university knowledge production as well as rather strict (mental) health service research epistemologies remained largely untouched, leading to various frustrations especially on the side of the researchers with experiential expertise [for further details see Beeker et al. (32)].

Construction of the NEPT Research Tool

Construction of the Experiential Components

Over the course of the construction process, 12 experiential components were developed, based primarily on the knowledge of the researchers with experiential expertise. The differences between these experiential components and the set of process- and structure-related components from the precursor project, and the role that experiential knowledge played in producing them, will be described elsewhere (22). At this point, it is sufficient to point out that the collaboration between researchers with and without experiential expertise resulted in (1) a number of new components with new areas of content, (2) the re-definition and/or re-operationalization of the previous components, in some cases considerably, and (3) the further generalization of these experiential components, transcending their original evaluative focus on FIT models to move toward the evaluation of “good psychiatric care” (see Discussion Section). A compilation of the 12 experiential components and their definitions can be found in Figure 2.

Construction of Questionnaire Items

A total of 29 survey items were developed in several steps, with 2–3 items assigned to each of the experiential components. The items are listed in the supplementary material to this manuscript (Supplementary Table A1).

Piloting of the NEPT Research Tool

Results of the Expert Survey

In the expert survey, 16 of the 29 survey items were rated “essential” by at least half of the experts. Thereafter, the following number of items remained in the preliminary questionnaire: Flexibility = 1 item, Activity = 1 item, Avoidance of stigmatization = 1 item, Compatibility = 1 item, Autonomy = 2 items, Safety = 1 item, Continuity = 2 items, Intensity = 1 item, Knowledge = 2 items, Time = 1 item, Solidarity = 1 item, Absence of coercion = 2 items.

The remaining 13 items were eliminated. The main reasons for the low ratings of the eliminated items were that, according to the experts, they did not sufficiently reflect the essential aspects of experience or were redundant. Examples include: “Switching between different settings went so well that it suited me” (eliminated due to redundancy), “I was supported in developing activities that were helpful to me” (eliminated as activity was not sufficiently specified), “The treatment conditions (behavior of personnel, rooms, regulations) allowed me to look at myself benevolently” (eliminated as it does not sufficiently differentiate between self-stigmatization and stigmatization from outside), “During my treatment I was supported in developing skills that I can also use in my life” (eliminated as “life” was too unspecific), “I experienced support and safety during the treatment” (eliminated as it mixes two items), “During my treatment I was able to deal with my own situation” (eliminated as it was too vague), “I was given sufficient time during the treatment” (eliminated due to redundancy), and “The team encouraged users to support one another” (eliminated as it does not address exchange among the users).

Results of the Pilot Survey

Using the alpha maximization method, nine of the remaining 16 items were found to have relatively low discriminatory power. Content considerations of the research team led to the retention of four of these items, relating to the characteristics Compatibility with everyday life, Safety, Time, and Solidarity, and the elimination of five items, relating to the characteristics Autonomy (1 item), Continuity (1 item), Knowledge (1 item), and, unfortunately, Absence of coercion (2 items). The items relating to the last characteristic were eliminated due to comments of the respondents which clearly indicated that they had difficulties answering the corresponding questions. The final version of the scale contained 11 items (Table 2), one item each for Flexibility, Activity, Avoidance of stigmatization, Compatibility with everyday life, Autonomy, Safety, Continuity, Intensity, Knowledge, Time, and Solidarity. The Cronbach's alpha value for the overall scale was 0.82 (0.77–0.88).

Validation of the NEPT Research Tool

A sample of 374 participants was tested during the 15-month follow-up of the PsychCare study. Because of missing data, 47 cases were excluded from further analyses. The final sample included 327 people (140 male, 187 female) who were treated in 14 mental health centers. The mean age was 47 (±13.48) years for the men and 47.9 (±13.94) years for the women. Table 1 shows the mean scores of the NEPT items for both genders. The mean total NEPT score for the entire sample was 4.02 (±1.19). Women [M (SD) = 4.06 (0.71)] had a slightly higher total score than men [M (SD) = 3.95 (0.68)].


Table 1. Mean scores of NEPT items and total score.

Internal Consistency

Table 2 shows the inter-correlations between the remaining 11 items as well as the correlations between the items and the NEPT total score. All correlations were significant at the p < 0.01 level, except for the correlation between the items Compatibility with everyday life and Solidarity. For all other items, corrected item-total correlations ranged from 0.54 to 0.77, indicating adequate homogeneity of the items. The corrected item-total correlations of the items Compatibility with everyday life and Solidarity were 0.45 and 0.44, respectively, indicating that these items contributed relatively less to the tool. According to the inter-item correlation matrix, no correlations were above 0.80, indicating a lack of multicollinearity (41). The Cronbach's alpha coefficient for the summary scale was good (0.89). Cronbach's alpha if item deleted ranged from 0.87 to 0.89, indicating that no items were unreliable. However, the contribution of the items Compatibility with everyday life and Solidarity to the internal consistency of the tool remained questionable.


Table 2. Correlations on NEPT items and Cronbach's alpha (α).

Validity

Structural Validity. Prior to performing the multivariate analysis, the adequacy of the correlation matrix of the scale was checked. The observed values, KMO = 0.91 and Bartlett's test of sphericity, χ2 = 1632.63, df = 55, p < 0.001, supported a multivariate analysis, which was carried out using PCA. Without fixing the number of components to extract, the PCA identified two components with Eigenvalues (Kaiser's criterion) > 1 (5.37 and 1.03), conjointly accounting for 58.16% of the total variance. This solution clearly produced a general unipolar component, with all items loading positively > 0.50, ranging from 0.53 (item Compatibility with everyday life) to 0.84 (item Knowledge). The second component aggregated items with lower component loadings (no item attained the component-loading cut-off) and was therefore initially regarded as dubious. Subsequent application of the other criteria (Velicer MAP criterion and Velicer MAP criterion/4th power) confirmed the 1-component solution, which accounted for 48.65% of the total variance. Based on these results, the unidimensional structure of the tool was acknowledged.

Known-Groups Validity. The Jonckheere-Terpstra test (z = 1.859, one-tailed p = 0.03) showed that the NEPT total score increased across the three independent groups in the hypothesized order (centers providing standard health care, Mdn = 3.9; centers providing both FIT and standard health care, Mdn = 4.0; centers exclusively providing FIT treatment, Mdn = 4.1) and therefore provided known-groups validity evidence for the scale.

Concurrent Validity. The correlation analysis assessing the relationship between the ZUF-8 total score and the NEPT total score in a total of 299 participants preliminarily showed the relationship to be monotonic, as assessed by visual inspection of a scatterplot (see Figures 1, 3). As expected, there was a strong positive correlation between the ZUF-8 total score and the NEPT total score, rs = 0.56, p < 0.001, indicating that the tools measure comparable constructs. Linear regression analysis showed a significant association (p < 0.001) between the total scores of both scales. The R2 for the overall model was 0.33 (adjusted R2 = 0.32), indicating substantial goodness of fit according to Cohen (41).
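The combination used above, a rank correlation for the monotonic association plus a simple linear regression for the overall fit, can be sketched with SciPy. The scores below are simulated (distributions and the link between them are our assumptions, not the study data); note that with a single predictor the model R2 equals the squared linear correlation of the two scores:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated satisfaction (ZUF-8) and experience (NEPT) total scores:
# related, but not identical, constructs
zuf8 = rng.normal(24, 4, size=299)
nept = 0.05 * zuf8 + rng.normal(2.7, 0.25, size=299)

rho, p_rho = stats.spearmanr(zuf8, nept)   # monotonic (rank) association
fit = stats.linregress(zuf8, nept)         # simple linear regression
r_squared = fit.rvalue ** 2                # R2 of the overall model

print(round(float(rho), 2), round(float(r_squared), 2))
```

A scatterplot of `zuf8` against `nept` with the fitted line overlaid reproduces the kind of visual monotonicity check described above.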

FIGURE 3

Figure 3. Scatterplot presenting the linear relationship between the ZUF-8 total score and NEPT total score. ZUF-8, German version of the Client Satisfaction Questionnaire CSQ-8; NEPT, Needs and Experiences in Psychiatric Treatment scale. Solid line represents a hypothetical 1:1 relationship between total scores of both instruments.

Discussion

This paper presents the construction and validation of a collaboratively generated, preliminary research tool to evaluate the experiences and fulfillment of needs during psychiatric treatment from the perspective of service users. The instrument showed good internal consistency, and the examination of its structural validity suggested a unidimensional structure. Regarding known-groups validity, a linearly increasing trend of the total instrument score was observed across three independent mental health hospital groups as the level of flexible and integrative (FIT) psychiatric treatment increased. In addition, there was evidence for convergent validity assessed against the ZUF-8 as the criterion measure.

“Experience” as a Construct

Our research tool operationalizes how psychiatric treatment was experienced by its users in relation to their needs. This focus fits the growing interest in assessing patients' experiences with health care services, which meanwhile represent one of the three pillars of health care quality assessment, alongside clinical effectiveness and patient safety (36, 37, 54). Online platforms for user input as well as internet-based reviews and ratings are increasingly being developed to make room for critical feedback on the health care system and to give more space to the user experience (55). Quality assurance is increasingly focused on the user experience, with the aim of transforming care systems accordingly (56). Users' experiences also play a growing role in research and evaluation, often justified on grounds of their intrinsic value, or by findings that demonstrate associations between positive experiences and patient adherence, safety culture, and service utilization (37, 54).

Yet, a clear definition of what exactly counts as an experience is often lacking, most probably due to the complex nature of this construct (54). In the field of health services research, rather confusingly, various notions are used interchangeably, such as user/patient perspective, -reports, -perception, or -satisfaction (36). In this manuscript, following Ahmed (36) and Price (54), experience refers to users' perceptions of both objective facts and subjective evaluations ("Erleben" in German), also reflecting the evaluation of structures and processes that are not directly observable by them. In this sense, experience is an inherently multi-dimensional construct, encompassing needs, preferences, hopes, and expectations (57). It is deeply value-based, while at the same time opening up the potential to serve as a useful proxy for assessing the quality of a received service.

As widely described, experience resembles the outcome parameter of patient satisfaction (37). This correspondence was confirmed by our results, which demonstrated convergent validity of the two constructs. Yet, the construct of "experience" was chosen, as it focuses more on concrete situations and is less reflexively charged (58). Further, as experiences are always intertwined with how they are evaluated, in our case with the question of what the treatment was or should have been like for the users, our construct also relates to users' needs, understood as the extent to which a lack felt by the user has been met by the services offered by the institution (59). Further, our construct generalizes experiences beyond those merely relating to FIT departments, also building on users' experiences with the control group services. As the FIT models have a broader scope in providing insights for the further development of the German mental health care system, our construct may provide a more general measure of "good mental health care" from the perspective of users, a hypothesis that will have to be confirmed in future studies.

Both experience- and satisfaction-based evaluation instruments are susceptible to subjective bias, being strongly linked to previous expectations, subjective judgment, social expectancy, and divergent perspectives (36, 37). In this context, more facts-based evaluation approaches are needed that increase objectivity in the evaluation of services while diminishing the possibilities for subjective interpretation. Examples can be drawn from the development of fact-based PREMs and PROMs (60, 61) that aim to evaluate key situations. A further development of our preliminary research tool in this direction, building on the qualitative findings of our participatory process evaluation (31), is planned, as well as its validation across various treatment models and care contexts.

Impact of the Collaboration

Our work has shown that participatory-collaborative research undertaken together by CR and EE is indeed possible even within the confines of a rather conventional mental health research project. This collaboration was not free from academic, structural inequalities [see Section Cooperation Within the Group and (32)], while at the same time it opened up space for the researchers with experiential expertise to contribute their specific knowledge, leading to the development of the experiential components and the instrument based on them. In our view, the framework of a process evaluation, as recommended in the MRC guidelines (21), is well suited to host such a form of systematic collaboration. In contrast, the design requirements of other parts of the PsychCare study, e.g., using fixed outcome measures or analyzing routine data or health-economic parameters, would have generated significantly fewer opportunities for such collaborative work. Thus, the largely inductive-qualitative logic of a process evaluation (62) seems to suit research involving a collaborative approach and may enable, as in our project, the step-by-step development of a collaboratively generated instrument to be used as an evaluative research tool.

As described, our collaborative work built upon a previous, non-collaborative evaluation of German FIT models during the EvaMode64b precursor project, which, despite also aiming to evaluate user experiences, resulted in the development of a set of components useful for assessing FIT-specific processes and structures (25). Thus, the ongoing collaboration between researchers with and without experiential expertise within the PsychCare study enabled us to develop a research tool that is now more in line with the needs and experiences of service users, a finding that is also described in the literature: opinions on what is and is not considered good care may differ largely, depending on whether users or practitioners have been questioned and on by whom the related evaluative criteria have been developed and/or established (17, 63-66). In this context, the Basque scientist Joan Trujols introduced the term "user-generated" (versus "user-valued" and "user-centered") to elucidate not only the orientation of a scale but also the way it was generated (67). With regard to our research aim, the evaluation of users' experiences, we affirm that it is essential for researchers with experiential expertise to be included in all steps of a research process and to hold substantial decision-making power.

In contrast to this assertion, and as stated in the introduction, participatory, user-generated, or collaboratively generated scales are still scarce. Our research tool shares features with the VOICE instrument, which is designed to evaluate experiences of and opinions on psychiatric treatment (12). Although the VOICE items aim more at evaluating the structural quality of mental health care provision, similar items can be found in both scales, for example on the continuity of everyday activities or the availability of support from staff. In contrast, the CEO-MHS questionnaire designed by Oades et al., equally based on a participatory construction process, is an instrument to measure satisfaction and therefore refers less directly to the situational and objectifiable experiences of psychiatric treatment (14). Finally, the PREM construct by Wallang (68) resembles our tool in its operationalization (first-person statements) and, to some extent, in its domains ("I feel safe," "I feel supported," "I feel independent," etc.), but unfortunately lacks a clear description of how its development was co-produced.

In contrast, PREM mental health scales that have been developed in conventional, non-participatory ways diverge widely in their domains and operationalization from the research tool developed in our project. As much as we appreciate Fernandes et al. (69) stressing the need for PREM scales for scientific or routine evaluation, the domains of their scale do not seem to sufficiently specify what they mean by "quality" or "good care." As answers to these questions can only be normative, the lack of participatory engagement in their development process seems perilous. Thomas et al. (70) developed a PREM scale for evaluating the experiences of an emergency department and thus depart from our project in their research aim. The DIALOG instrument incorporates both PROM and PREM items, the latter being few and rather broad in scope (71). These few examples, as well as our attempt at comparison, underscore the urgent need to collaborate with researchers with experiential expertise in the construction of PREM scales. As stated by various authors (4, 71), user-oriented services may only develop if the instruments used to evaluate them are better grounded in users' perspectives and experiences in the future.

Limitations

The participants of the general study sample were recruited from very diverse mental health hospital departments and may therefore differ from those in the pilot study sample, which was recruited from only one department, in which some FIT-related aspects, e.g., home treatment, were barely implemented. Further, the limited project resources did not allow for a broader participatory negotiation of the developed experiential components beyond the expert panel and the qualitative part of our process evaluation. Perhaps as a result, the components focus on experiences and needs of a "higher order," rather neglecting more basic aspects of service delivery, such as spaces for privacy, the quality of the food served, or the hygienic conditions of the treatment context. Further, we relied on self-report measures for assessing needs and experiences in psychiatric treatment, which may have introduced both error and bias into their measurement. As stated above, objective measurements of needs and experiences were not used and await further development. The lack of a "gold standard" metric against which to compare needs and experiences limits our understanding of concurrent validity. Finally, this study had a cross-sectional design; additional longitudinal studies in different mental health care settings are needed to establish the psychometric properties of the NEPT research tool over time.

Conclusions

Our project resulted in a psychometrically robust, object-appropriate, preliminary research tool whose orientation corresponds to the interests and knowledge of users and so-called survivors of psychiatric treatment. As such, it may be perceived as a contribution to better aligning mental health care with the inherently value-based experiences and judgments of its users, an endeavor that is urgently needed (4, 72, 73). The greatest methodological strength of our work is the systematic form of collaboration between researchers with and without experiential expertise within the framework of a prospective, controlled mental health services research design. By adopting a participatory process evaluation method, this collaboration took place in each study phase, leading to the results described above. Thus, although constrained by the confines of a mental health services research epistemology, this collaborative knowledge production was possible at the level of the process evaluation and can be reproduced accordingly in other projects.

We conclude by taking a critical look at the inevitable "side effects" of such an approach. There is great debate over the extent to which the specific knowledge and approaches of survivors and researchers with experiential expertise in projects such as ours are appropriated or co-opted by psychiatry without actually improving the conditions of care or services (74). Since the influential text "On Our Own" by the American activist Judi Chamberlin (75), the question remains whether the experiential knowledge of people and researchers with experiential expertise is more useful if primarily incorporated into the conceptual and practical development of alternatives to psychiatry. We, as authors, are not sure how to answer this question, but it is important for us to point out the danger of such appropriation, also to ensure a continuous and fundamental problematization of this topic in similar projects of participatory and collaborative research.

Data Availability Statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s.

Ethics Statement

The studies involving human participants were reviewed and approved by the Ethics Committee of the TU Dresden (vote dated 07.09.2017). The patients/participants provided their written informed consent to participate in this study.

Author Contributions

SP and HK were responsible for the draft of this manuscript. RKG, JZ, LG, PJ, TB, and YI contributed to the research process, interactive reviewing, literature search, interpretation of literature, and helped to draft the final version of the manuscript. CS, AN, FB, BS, MH, and JS revised the article critically. All authors approved the final version to be published.

Funding

We acknowledge funding by the MHB Open Access Publication Fund supported by the German Research Foundation (DFG). The PsychCare study was financed by the German Innovation Fund of the Federal Joint Committee (G-BA; No. 01VSF16053).

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyt.2022.781726/full#supplementary-material

References

1. Carr S. ‘I am not your nutter’: a personal reflection on commodification and comradeship in service user and survivor research. Disabil Soc. (2019). 34:1140–53. doi: 10.1080/09687599.2019.1608424

2. Russo J. Survivor-Controlled Research: A New Foundation for Thinking about Psychiatry and Mental Health (2012).

3. Rose D, Evans J, Sweeney, A. A model for developing outcome measures from the perspectives of mental health service users. Int rev psychiatry. (2011) 23:41–6. doi: 10.3109/09540261.2010.545990

4. Rose D. Service user views and service user research in the journal of mental health. J Ment Health. (2011) 20:423–8. doi: 10.3109/09638237.2011.613959

5. Ingram, R. Doing mad studies: making (non)sense together. Intersectionalities: A Global Journal of Social Work Analysis, Research, Polity, and Practice. (2016) Available online at: https://journals.library.mun.ca/ojs/index.php/IJ/article/view/1680. (accessed: 12.9.2021).

6. von Unger, H. Partizipative Forschung. New York, NY: Springer Fachmedien Wiesbaden. (2014).

7. INVOLVE. Briefing note two: What is public involvement in research?—INVOLVE. (2012). Available online at: www.invo.org.uk. (accessed: 12.9.2021).

8. Slay, J., Stephens, L. Co-Production in Mental Health: A Literature Review. London: New Economics Foundation/Mind (2013).

9. SURE (2001). https://www.kcl.ac.uk/research/sure, (accessed: 12.9.2021).

10. Rose D, Sweeney A, Leese M, Clement S, Jones I, Burns T, et al. Developing a user-generated measure of continuity of care: Brief report. Acta psychiatr Scand. (2008) 119:320–4. doi: 10.1111/j.1600-0447.2008.01296.x

11. Rogers ES, Chamberlin J, Ellison ML, Crean, T. A consumer-constructed scale to measure empowerment among users of mental health services. Psychiatr Serv. (1997) 48:1042–7. doi: 10.1176/ps.48.8.1042

12. Evans, J, Rose D., Flach C., et al. VOICE: developing a new measure of service users' perceptions of inpatient care, using a participatory methodology. J Ment Health. (2012) 21:57–71. doi: 10.3109/09638237.2011.629240

13. Neil ST, Kilbride M, Pitt L, Nothard S, Welford M, Sellwood W, et al. The questionnaire about the process of recovery (QPR): A measurement tool developed in collaboration with service users. Psychosis. (2009) 1:145–55. doi: 10.1080/17522430902913450

14. Viney LL, Oades LG, Strang J, Eman Y, Lambert WG, Malins G, et al. A Framework for Consumers Evaluating Mental Health Services. London: University of Wollongong, Illawarra Institute for Mental Health (2004).

15. Soltmann B, Neumann A, March S, Weinhold I, Häckl D, Kliemt R, et al. Multiperspective and multimethod evaluation of flexible and integrative psychiatric care models in germany: study protocol of a prospective, controlled multicenter observational study (PsychCare). Front Psychiatry. (2021). 12:659773. doi: 10.3389/fpsyt.2021.659773

16. Johne J, von Peter S, Schwarz J, Timm J, Heinze M, Ignatyev, et al. Evaluation of new flexible and integrative psychiatric treatment models in Germany- assessment and preliminary validation of specific program components. BMC Psychiatry. (2018) 18:1. doi: 10.1186/s12888-018-1861-1

17. Gillard S, Borschmann R, Turner K, Goodrich-Purnell N, Lovell K, Chambers M. 'What difference does it make?' Finding evidence of the impact of mental health service user researchers on research into the experiences of detained psychiatric patients. Health Expect. (2010). 13:185–94. doi: 10.1111/j.1369-7625.2010.00596.x

18. Lambert N, Carr, S. 'Outside the Original Remit': Co-production in UK mental health research, lessons from the field. Int J Ment Health Nurs. (2018). 27:1273–81. doi: 10.1111/inm.12499

19. Williams O, Sarre S, Papoulias C, Knowles S, Robert G, Beresford P, et al. Lost in the shadows: reflections on the dark side of co-production. health research policy and systems. Health Res Policy Syst. (2020) 18:43. doi: 10.1186/s12961-020-00558-0

20. International Collaboration for Participatory Health Research (ICPHR). Position Paper 1: What is Participatory Health Research? Version: Mai 2013. Berlin: International Collaboration for Participatory Health Research. (2013)

22. von Peter S, Ignatyev Y, Johne J, Indefrey S, Kankaya O, Rehr B, et al. Evaluation of flexible and integrative psychiatric treatment models in germany- a mixed-method patient and staff-oriented exploratory study. Front Psychiatr. (2019) 9:785. doi: 10.3389/fpsyt.2018.00785

24. von Peter S, Ignatyev Y, Indefrey S, Johne J, Schwarz J, Timm J, et al. Spezifische Merkmale zur Einstufung der Modellversorgung nach § 64b SGB V. Der Nervenarzt. (2018) 89:559–564. doi: 10.1007/s00115-017-0459-z

25. von Peter S, Schwarz J, Bechdolf A, Birker T, Deister A, Ignatyev Y, et al. Implementation of new flexible and integrative psychiatric care models (According to §64b SGB V) in Rural Northern Germany in comparison to federal territory. Das Gesundheitswesen. (2019) 19:2. doi: 10.1055/a-0945-9851

26. Schwarz J, Zeipert M, Ignatyev Y, Indefrey S, Rehr B, Timm J, et al. Implementation and stakeholders' experiences with home treatment in germany's integrative and flexible psychiatric care models—a mixed-methods study. Psychother, Psychosom Medizinische Psychol. (2020) 70:65–71. doi: 10.1055/a-0942-2163

27. Ignatyev Y, Mundt A, von Peter S, Heinze, M. Hospital length of stay among older people treated with flexible and integrative psychiatric service models in Germany. Int J of Geriatr Psychiatry. (2019) 34:1557–64. doi: 10.1002/gps.5165

28. Baum F, Schoffer O, Neumann A, Seifert M, Kliemt R, March S, et al. Effectiveness of global treatment budgets for patients with mental disorders-claims data based meta-analysis of 13 controlled studies from Germany. Front Psychiatry. (2020). 11:131. doi: 10.3389/fpsyt.2020.00131

29. Neumann A, Baum F, Seifert M, Schoffer O, Kliemt R, March S, et al. Reduction of days in inpatient care in psychiatric hospitals with flexible and integrated treatment for patient-centered care with a global budget—results with three-year follow-up from the evaluation study EVA64. Psychiatr Prax. (2021). 48:127–34. doi: 10.1055/a-1274-3731

30. Schwarz J, Bechdolf A, Hirschmeier C, Hochwarter S, Holthoff-Detto V, Mühlensiepen F, et al. “I indeed consider it to be a temporary solution”—a qualitative analysis of the conditions and obstacles to implementation of psychiatric home-treatment in Berlin and Brandenburg. Psychiatr Prax. (2021). 48:193–200. doi: 10.1055/a-1274-3662

31. Glaser B, Strauß, A. Grounded Theory: Strategien qualitativer Sozialforschung. Göttingen: Hrsg.: Hans Huber. (2007).

32. Beeker T, Glück R, Ziegenhagen J, Göppert L, Jänchen, Patrick., et al. Designed to clash? reflecting on the practical, personal, and structural challenges of collaborative research in psychiatry. Front Psychiatry. (2021). 12:701312. doi: 10.3389/fpsyt.2021.701312

33. Schaefer I, Bär G. Die Auswertung qualitativer Daten mit Peerforschenden: Ein Anwendungsbeispiel aus der partizipativen Gesundheitsforschung. Forum Qualitative Sozialforschung/Forum: Qualitative Social Research. (2019).

34. NVivo Qualitative Data Analysis Software. QSR International Pty Ltd. Version 12 (2018).

35. Jänchen P, von Peter S, Göppert L, Beeker T, Ziegenhagen J, Glück R, et al. Erlebensbezogene Merkmale für eine gute psychiatrische Versorgung aus Sicht von Nutzer*innen—Vorstellung eines ersten multivariaten Konstrukts. (forthcoming) (2021).

36. Ahmed F, Burt, J. & Roland, M. Measuring patient experience: concepts and methods. Patient. (2014). 7:235–41. doi: 10.1007/s40271-014-0060-5

37. Doyle C, Lennox L, Bell, D. A systematic review of evidence on the links between patient experience and clinical safety and effectiveness. BMJ Open. (2013). 3:e001570. doi: 10.1136/bmjopen-2012-001570

38. Lawshe, C.H. A quantitative approach to content validity 1. Personnel Psychol. (1975). 28:563–75. doi: 10.1111/j.1744-6570.1975.tb01393.x

39. Visser AP, Breemhaar B, Kleijnen JGVM. Social desirability and program evaluation in health care. Impact Assessment Bulletin. (1989). 7:99–112. doi: 10.1080/07349165.1989.9726015

40. Bühner M. Einführung in die Test- und Fragebogenkonstruktion. Pearson Deutschland GmbH (2011).

41. Cohen J. Statistical Power and Analysis for the Behavioral Sciences. Hillsdale, NJ: Lawrence (1988)

42. Fayers P, Machin, D. Quality of Life: The Assessment, Analysis and Interpretation of Patient-Reported Outcomes. Chichester: John Wiley & Sons (2007).

43. Tabachnick BG, Fidell, L.S. Using multivariate statistics (5th ed.). Boston: Allyn & Bacon (2007).

44. Peterson RA. A meta-analysis of variance accounted for and factor loadings in exploratory factor analysis. Market Lett. (2000). 11:261–75. doi: 10.4236/ojs.2015.56061

45. Kim, J.-O, Mueller, C. Factor Analysis. Statistical Methods and Practical Issues. Newbury Park, CA: Sage Publication (1978).

46. Velicer WF, Eaton CA, Fava JL. “Construct explication through factor or component analysis: a review and evaluation of alternative procedures for determining the number of factors or components,” In: editors Goffin R.D., Helmes E. Problems and Solutions in Human Assessment. Boston, MA: Springer (2000).

47. Presaghi F, Desimoni, M. Random polychor.pa: A Parallel Analysis with Polychoric Correlation Matrices. r package version 1.1.4-4. (2020).

48. Portney L, Watkins, M. Foundations of Clinical Research: Applications to Practice. Stamford (CT): McGraw Hill/ Appleton & Lange. (1993).

49. Conover WJ. Practical Non-parametric Statistics. New York, NY: Auflage, John Wiley & Sons (1999).

50. Schmidt J, Wittmann W. “Fragebogen zur Messung der Patientenzufriedenheit,” In: Diagnostische Verfahren in der Psychotherapie. Göttingen: Hogrefe (2002).

51. Attkisson CC, Zwick, R. The client satisfaction questionnaire. psychometric properties and correlations with service utilization and psychotherapy outcome. Eval Program Plann. (1982). 5:233–7. doi: 10.1016/0149-7189(82)90074-X

52. Cohen J. Statistical Power and Analysis for the Behavioral Sciences. Hillsdale, NJ: Lawrence (1988).

53. Rose DS. Kalathil J. Power, privilege and knowledge: the untenable promise of co-production in mental “health.” Front Sociol. (2019). 4:57. doi: 10.3389/fsoc.2019.00057

54. Anhang Price R, Elliott MN, Zaslavsky AM, Hays RD, Lehrman WG, Rybowski L, et al. Examining the role of patient experience surveys in measuring health care quality. Med Care Res Rev. (2014). 1:522–54. doi: 10.1177/1077558714541480

55. Gao GG, McCullough JS, Agarwal R, Jha AK. A changing landscape of physician quality reporting: analysis of patients' online ratings of their physicians over a 5-year period. J Med Int Res. (2012). 14:e38. doi: 10.2196/jmir.2003

56. Crawford MJ, Rutter D, Manley C, Weaver T, Bhui K, Fulop N, et al. Systematic review of involving patients in the planning and development of health care. BMJ. (2002). 325:1263. doi: 10.1136/bmj.325.7375.1263

57. Fulford K.W. Values-based practice: a new partner to evidence-based practice and a first for psychiatry? Mens Sana Monog. (2008). 6:10–21. doi: 10.4103/0973-1229.40565

58. Dilthey, W. “Die Geistige Welt. Einleitungen in die Philosophie des Lebens Erste Hälfte: Abhandlung zur Grundlegung der Geisteswissenschaften,” In: Gesammelte Schriften, Band V. (1990), p. 4.

59. Arzberger K, Hondrich KO, Murck M, Schumacher J. Was machen die Bedürfnisforscher? Klarstellungen zu einer Kritik. Leviathan. (1978).6:354–73.

60. Cleary P.D. The increasing importance of patient surveys. now that sound methods exist, patient surveys can facilitate improvement. BMJ. (1999). 319:720–1. doi: 10.1136/bmj.319.7212.720

61. Gerteis, M, Edgman-Levitan S, Walker JD, Stoke DM, Cleary PD, Delbanco, et al. L. What patients really want. Health Manage Q. (1993). 15:2–6.

62. Moore GF, Audrey S, Barker M, et al. Process evaluation of complex interventions: medical research council guidance. BMJ. (2015). 350:h1258. doi: 10.1136/bmj.h1258

63. Rose D, Thornicroft G, Slade, M. Who decides what evidence is? developing a multiple perspectives paradigm in mental health. Acta Psychiatr Scand Suppl. (2006). 429:109–14. doi: 10.1111/j.1600-0447.2005.00727.x

64. Russo J, Beresford, P. Between exclusion and colonisation: Seeking a place for mad people's knowledge in academia. Disabil Soc. (2014). 24:25. doi: 10.1080/09687599.2014.957925

65. Oades L, Law J, Marshall, S. Development of a consumer constructed scale to evaluate mental health service provision. J Eval Clin Pract. (2011). 17:1102. doi: 10.1111/j.1365-2753.2010.01474.x

66. Crawford MJ, Robotham D, Thana L, et al. Selecting outcome measures in mental health: the views of service users. J Ment Health. (2011). 20:336–46. doi: 10.3109/09638237.2011.577114

67. Trujols J, Portella MJ, Iraurgi I, Campins MJ, Siñol N, de Los Cobos JP. Patient-reported outcome measures: are they patient-generated, patient-centred or patient-valued? J Ment Health. (2013). 22:555–62. doi: 10.3109/09638237.2012.734653

68. Wallang P, Kamath S, Parshall A, Saridar T, Shah, M. Implementation of outcomes-driven and value-based mental health care in the UK. Br J Hosp Med (Lond). (2018). 79:322–327. doi: 10.12968/hmed.2018.79.6.322

69. Fernandes S, Fond G, Zendjidjian X, Michel P, Lancon C, Berna F, et al. A conceptual framework to develop a patient-reported experience measure of the quality of mental health care: A qualitative study of the PREMIUM project in France. J Mark Access Health Policy. (2021) 9:1885789. doi: 10.1080/20016689.2021.1885789

70. Thomas KC, Owino H, Ansari S, Adams L, Cyr JM, Gaynes BN, et al. Patient-centered values and experiences with emergency department and mental health crisis care. Adm Policy Ment Health. (2018) 45:611–22. doi: 10.1007/s10488-018-0849-y

71. Mosler F, Priebe S, Bird V. Routine measurement of satisfaction with life and treatment aspects in mental health patients: the DIALOG scale in East London. BMC Health Serv Res. (2020) 20:1020. doi: 10.1186/s12913-020-05840-z

72. Fisher CE, Spaeth-Rublee B, Pincus HA, for the IIMHL Clinical Leaders Group. Developing mental health-care quality indicators: toward a common framework. Int J Qual Health Care. (2013) 25:75–80. doi: 10.1093/intqhc/mzs074

73. Baggaley M. Value-based healthcare in mental health services. BJPsych Adv. (2020) 26:198–204. doi: 10.1192/bja.2019.82

74. Penney D, Prescott L. "The co-optation of survivor knowledge: the danger of substituted values and voice." In: Searching for a Rose Garden: Challenging Psychiatry, Fostering Mad Studies. Ross-on-Wye: PCCS Books (2016). p. 35–45.

75. Chamberlin J. On Our Own: Patient-Controlled Alternatives to the Mental Health System. New York, NY: McGraw-Hill (1978).

Keywords: PREM, peer research, coproduction, collaboration, tokenism, experience, user involvement

Citation: von Peter S, Krispin H, Kato Glück R, Ziegenhagen J, Göppert L, Jänchen P, Schmid C, Neumann A, Baum F, Soltmann B, Heinze M, Schwarz J, Beeker T and Ignatyev Y (2022) Needs and Experiences in Psychiatric Treatment (NEPT)- Piloting a Collaboratively Generated, Initial Research Tool to Evaluate Cross-Sectoral Mental Health Services. Front. Psychiatry 13:781726. doi: 10.3389/fpsyt.2022.781726

Received: 23 September 2021; Accepted: 07 January 2022;
Published: 27 January 2022.

Edited by:

Shulamit Ramon, University of Hertfordshire, United Kingdom

Reviewed by:

Emma Kaminskiy, Anglia Ruskin University, United Kingdom
Joanna Fox, Anglia Ruskin University, United Kingdom

Copyright © 2022 von Peter, Krispin, Kato Glück, Ziegenhagen, Göppert, Jänchen, Schmid, Neumann, Baum, Soltmann, Heinze, Schwarz, Beeker and Ignatyev. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Sebastian von Peter, sebastian.vonpeter@mhb-fontane.de

†These authors share first authorship