
SYSTEMATIC REVIEW article

Front. Psychiatry, 15 June 2023
Sec. Forensic Psychiatry

Models, frameworks and theories in the implementation of programs targeted to reduce formal coercion in mental health settings: a systematic review

Tella Lantta1,2*, Joy Duxbury2, Alina Haines-Delmont2, Anna Björkdahl3, Tonje Lossius Husum4,5, Jakub Lickiewicz6, Athanassios Douzenis7, Elaine Craig2, Katie Goodall2, Christina Bora8, Rachel Whyte2 and Richard Whittington9,10,11
  • 1Department of Nursing Science, University of Turku, Turku, Finland
  • 2Department of Nursing, Manchester Metropolitan University, Manchester, United Kingdom
  • 3Centre for Psychiatric Research, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm Health Care Services, Stockholm, Sweden
  • 4Department of Health Sciences, Oslo Metropolitan University, Oslo, Norway
  • 5Centre for Medical Ethics, University of Oslo, Oslo, Norway
  • 6Department of Health Psychology, Jagiellonian University Medical College, Krakow, Poland
  • 7Second Psychiatry Department, Attikon University Hospital, National and Kapodistrian University of Athens, Athens, Greece
  • 8Second Psychiatry Department, Attikon University Hospital, National and Kapodistrian University of Athens Medical School, Athens, Greece
  • 9Centre for Research and Education in Security, Prisons and Forensic Psychiatry, Forensic Department Østmarka, St. Olav's Hospital, Trondheim, Norway
  • 10Department of Mental Health, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
  • 11Department of Public Health, Policy and Systems, University of Liverpool, Liverpool, United Kingdom

Introduction: Implementation models, frameworks and theories (hereafter tools) provide researchers and clinicians with an approach to understanding the processes and mechanisms underlying the successful implementation of healthcare innovations. Previous research in mental health settings has revealed that the implementation of coercion reduction programs presents a number of challenges. However, there is a lack of systematized knowledge about whether the advantages of implementation science have been utilized in this field of research. This systematic review aims to gain a better understanding of which tools studies have used when implementing programs aiming to reduce formal coercion in mental health settings, and what implementation outcomes they have reported.

Methods: A systematic search was conducted using PubMed, CINAHL, PsycINFO, Cochrane, Scopus, and Web of Science, supplemented by a manual search. Quality appraisal of included studies was undertaken using the Mixed Methods Appraisal Tool (MMAT). A descriptive and narrative synthesis was developed from the extracted data. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed in this review.

Results: We identified 5,295 references after duplicates were removed. Four additional references were found through a manual search. In total, eight studies reported in nine papers were included in the review. The coercion reduction programs implemented included those that were holistic and/or used professional judgement, staff training or sensory modulation interventions. Eight different implementation tools were identified in the included studies. None of the studies reported all eight implementation outcomes sought from the papers. The most frequently reported outcomes were acceptability (4/8 studies) and adoption (3/8). No study provided data on implementation costs. Overall, the quality of the studies was assessed as quite low.

Discussion: Systematic implementation tools are seldom used when efforts are made to embed interventions to reduce coercive measures in routine mental health care. More high-quality studies that also involve the perspectives of service users and carers are needed in this research area. In addition, based on our review, it is unclear what costs and resources are needed to implement complex interventions with the guidance of an implementation tool.

Systematic review registration: [Prospero], identifier [CRD42021284959].

1. Introduction

1.1. Implementation theories, models, and frameworks

Implementation science has its origins in the 1990s, with the rise of evidence-based practice in the field of medicine. The evidence-based movement held that research findings and empirically supported practices should be spread and applied more widely to improve the health and welfare of populations. However, it became evident that the implementation of these effective practices and findings faced many challenges. Thus, it was assumed that research into implementation itself as a process could create knowledge to close or reduce the gap between evidence and practice (1).

Implementation theories, models and frameworks are three different types of conceptual tool that provide insights into the mechanisms by which implementation is more likely to succeed (2). Implementation theories are generally specific and predictive: they propose directional relationships between concepts, making them suitable for hypothesis testing, as they may guide what may or may not work (3). Models are often more specific and prescriptive, for example describing steps in the implementation process; they are commonly used to describe and guide the process of translating research into practice (2). Frameworks, in contrast, usually organize, explain or describe information and relationships between concepts (4). A framework gives a “structure, overview, outline, system or plan consisting of various descriptive categories,” as stated by Nilsen (2). In contrast to theories, models and frameworks do not specify the mechanisms of change but are more like checklists of factors relevant to various aspects of implementation (2). Each of these tools has one of the following aims: (1) process models describe or guide the implementation process, (2) determinant frameworks, classic theories and implementation theories aim to understand and/or explain what influences implementation outcomes, and (3) evaluation frameworks evaluate or measure the success of implementation (5).

1.2. Implementation of coercion reduction programs in mental health settings

In mental health settings, coercion can be defined as forceful action, involuntary treatment, or threats undertaken in the course of providing treatment or addressing perceived harm that a person poses to themselves or others (6). Examples of formal coercion include mechanical restraint using belts, manual restraint, seclusion or physically enforced administration of medication. The use of coercion has multiple known negative effects on service users, including psychological (7) and physical harm and even death (8).

Reducing or ending the use of coercion is one of the key health policy issues in mental health services worldwide (6, 9). Despite these shared goals, the use of coercion varies greatly between regions and countries. Sources of this variation include different service configurations, different mental health laws, and different social policies and cultures (10).

Many successful programs have been developed and tested to reduce coercion (9). However, there are issues in implementing these programs in mental health settings, as has been noted with other evidence-based practices in mental health and beyond (1). A recent European survey across 17 countries showed that two forms of coercion, seclusion and physical restraint, were still the techniques most used to manage service users' aggression in mental health settings, although here too there was variation between countries (11). Successful coercion reduction programs therefore do not seem to be adopted into current practice despite good evidence of their efficacy. This is a clear example of an implementation problem that might benefit from an implementation science approach: clinicians do not implement these programs at all; or, if they do, the programs do not have their intended effect (the efficacy-effectiveness gap); and/or clinicians do not accept the implementation outcomes. With new coercion reduction programs, it may be that clinicians are not engaged with using the new practice (12, 13), or that they do not accept the intervention and their negative attitudes affect how the program is implemented and sustained (13). Other potential reasons might be that the intervention is too difficult to use (14) or that the implementation environment has high acuity, leaving insufficient time or resources for adopting new interventions (13).

Previous systematic reviews on this topic have focused on the effectiveness of coercion reduction programs (15) rather than implementation issues, on giving an overview of existing interventions (16, 17), or on single programs, such as Safewards (13). There have been systematic implementation reviews in mental health services, such as one examining effective strategies for implementing trauma-informed care in youth inpatient psychiatric and residential treatment settings (18). But as far as we are aware, there have been no systematic attempts to review implementation theories, models and frameworks in the implementation of coercion reduction programs in mental health settings. Without knowing the extent to which the implementation of coercion reduction programs has utilized implementation science, we cannot fully understand the obstacles to be overcome when translating evidence into practice.

The main aim of this systematic review, therefore, is to gain a better understanding of which models, theories and frameworks (hereafter all referred to as tools) have been used by studies attempting to implement coercion reduction programs in mental health settings, and what implementation outcomes they have reported. We see this as an important step in growing our understanding of the role of implementation science in coercion reduction and in indicating future directions for research and practice in this area. This work is part of COST Action FOSTREN: Fostering and Strengthening Approaches to Reducing Coercion in European Mental Health Services (CA19133).

2. Materials and methods

The review was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) (19) guidance. The protocol is registered with Prospero (CRD42021284959).

2.1. Search strategy

The literature search was carried out from November 19 to 20, 2021, using the following databases: PubMed, CINAHL, PsycINFO, Cochrane, Scopus, and Web of Science. The search strategy was externally validated by a librarian. We also searched Google Scholar and the references of included articles for additional studies. No restrictions were applied in the database searches. The full search strategy for Web of Science is available as Supplementary material. We contacted authors if a full-text version was not available.

We included implementation studies (any design) reporting on the implementation of non-pharmacological interventions in any mental health setting. We defined interventions as any new intervention or practice improvement effort related to patient care. We required interventions to aim at reducing patient coercion and restrictions in care within an explicit implementation science framework. We defined formal coercion as including at least one of the following measures: seclusion, segregation, physical restraint, mechanical restraint, involuntary medication treatment, constant observation, intermittent observation, time out, net bed, open area seclusion, involuntary admission and care, and outpatient commitment, or restrictions in care as defined as coercion by individual studies. We required that included studies reported a referenced tool to guide, analyze or evaluate the implementation. We considered studies published in peer-reviewed journals, in any language and from any year.

We excluded studies using mainly pharmacological treatments (drug studies), as these interventions may involve formal or informal coercion. Studies conducted outside healthcare settings (for example, in schools or non-governmental organizations) were excluded. We also excluded studies in which the implementation consisted of only a single strategy and/or no specific tool had been described. We did not consider letters, opinions, editorials, books, theses, study protocols, systematic reviews, or meta-analyses. The detailed PICO criteria were stated in the protocol as follows:

P (Participants, population) = The setting needed to be inpatient or outpatient care in the field of mental health care. No restrictions on age, diagnosis, or professional group were applied. We included both patients and professionals as participants.

I (Intervention) = The intervention was required to have two components. First, we considered studies utilizing any named and referenced implementation tool. Tools were required to have at least one of the three specific aims proposed by Nilsen (2): (1) describing and/or guiding the process of translating research into practice, (2) understanding and/or explaining what influences implementation outcomes and (3) evaluating implementation. They needed to have a detailed structure described in the included paper or in a cited reference. Second, the intervention being implemented had to be a non-pharmacological approach/technique used in a mental health setting. This included any new intervention or practice improvement effort related to patient care or education of professionals and coercion reduction programs.

C (Comparator, control) = We did not restrict comparators as the review did not focus solely on randomized controlled trials.

O (Outcomes) = We considered the following implementation outcome domains: acceptability, adoption, appropriateness, costs, feasibility, fidelity, penetration, and sustainability (20), as defined by each study. These outcome domains were our primary outcome, irrespective of the differences among the tools considered by the studies.

Titles and abstracts (Stage 1) and full texts (Stage 2) were independently screened by two reviewers (TL, AH-D, AD, TLH, JD, CB, EC, JL, AB, and KG). Any disagreements or conflicts were resolved by a third reviewer (TL, AH-D, AB, or JD) who was not an author of the paper under evaluation.

2.2. Data extraction

Data on implementation processes and other study characteristics were extracted from the included studies independently by two members of the research team (RiW and KG) and then reviewed by a third member (TL). Any disagreements and unclear items were discussed by RiW and KG, and a consensus was sought by validating the decision with TL.

Eight key implementation processes specified by Proctor et al. (20) were searched for in each of the studies. These processes are: acceptability, adoption, appropriateness, feasibility, fidelity, implementation costs, penetration and sustainability. Although these terms are clearly defined by Proctor et al. (20), their meaning as used in the literature is not fixed, and there is much variability in how researchers employ each term.

Therefore, each study was initially examined using automated searching of truncated terms: accept*, adopt*, appropriate*, feasib*, fidelity*, cost* (only when adjacent to “implementation”), penetrat*, and sustain*. This search was restricted to the empirical sections of each paper (i.e., methods and results).
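
To make the detection step concrete, the following is a minimal Python sketch of such truncated-term matching, restricted to a paper's empirical text. The authors do not report which tooling they used, so the function and pattern names here are ours, and the special handling of cost* adjacent to "implementation" is expressed as one assumed regex.

```python
import re

# Truncated outcome terms from Proctor et al. (20); cost* counts only
# when adjacent to "implementation", per the extraction protocol above.
PREFIXES = ["accept", "adopt", "appropriate", "feasib",
            "fidelity", "penetrat", "sustain"]
PATTERNS = {p + "*": re.compile(rf"\b{p}\w*", re.IGNORECASE) for p in PREFIXES}
PATTERNS["cost* (adjacent to 'implementation')"] = re.compile(
    r"\bimplementation\s+cost\w*|\bcost\w*\s+(?:of\s+)?implementation\b",
    re.IGNORECASE)

def detect_terms(empirical_text: str) -> dict:
    """Flag which truncated outcome terms appear in the empirical
    sections (methods and results) of one paper."""
    return {term: bool(rx.search(empirical_text))
            for term, rx in PATTERNS.items()}

# Toy usage on an invented excerpt:
excerpt = "Staff rated acceptability highly; implementation costs were not measured."
print(detect_terms(excerpt))
```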

If a term was detected in this way the following additional aspects were extracted:

1. Are data reported for this term? (yes, quantitative data only; yes, qualitative data only; yes, both types of data; no).

2. What types of respondents provided the data? (staff; patients; both; other).

3. Brief details of the reported data.

In addition, each study was examined with regard to whether its approach or method was based on a specified implementation checklist or tool. If so, the checklist name was extracted and the key implementation process from the eight specified above was noted (including an option for “multiple” processes).
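
The per-term extraction record described above can be pictured as a small structured type. The sketch below is illustrative only: the fields mirror the three extracted aspects and the checklist note, but the authors' actual coding sheet is not published, so these names and the example values are hypothetical.

```python
from dataclasses import dataclass
from typing import Literal, Optional

@dataclass
class OutcomeExtraction:
    """One record per implementation term detected in a paper."""
    term: str                                                      # e.g. "acceptability"
    data_reported: Literal["quantitative", "qualitative", "both", "none"]
    respondents: Literal["staff", "patients", "both", "other"]
    details: str                                                   # brief notes on the reported data
    checklist: Optional[str] = None                                # named implementation checklist/tool, if any

# Hypothetical example record:
record = OutcomeExtraction(
    term="fidelity",
    data_reported="quantitative",
    respondents="staff",
    details="8/10 Safewards interventions fully implemented.",
    checklist="Safewards Fidelity Checklist",
)
```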

Data on the study characteristics were also extracted for contextual purposes.

Both the implementation outcomes and study characteristics coding sheets were piloted by completing the extraction forms on two studies. These were then discussed before data extraction was completed for all included papers. In addition, data extraction forms were cross-referenced and double-checked once completed. Any conflicts were discussed between the two people completing data extraction, and consensus was reached on all of them.

2.3. Study quality

Quality appraisal of included studies was undertaken using the Mixed Methods Appraisal Tool (MMAT) (21). MMAT is designed for systematic reviews that include qualitative, quantitative and mixed-methods studies. It enables appraisal of the methodological quality of five categories of studies: qualitative research, randomized controlled trials, non-randomized studies, quantitative descriptive studies, and mixed-methods studies. Scoring guidance provided by the developers of MMAT was followed (21). Appraisal was done by two authors (RaW and CB) and reviewed by a third (TL).

2.4. Data synthesis and analysis

The extracted data were first descriptively summarized using numbers, dichotomous yes/no categories, and text, as appropriate, followed by a narrative synthesis. We did not identify any relevant studies using a randomized controlled design and therefore could not include a meta-analysis of intervention effectiveness in our study.

2.5. Amendments to information provided at registration

Because of the heterogeneity and small number of studies, we were not able to report their secondary outcomes (effectiveness) but focused only on the implementation outcomes. Study heterogeneity also led us to develop a data extraction sheet that could capture all the important aspects of the included studies, instead of using the JBI data extraction tools, which do not include an implementation-specific extraction tool.

3. Results

3.1. Characteristics of included papers

Our search generated 5,295 references after duplicates were removed. Four additional references were found through a manual search. After application of the inclusion and exclusion criteria, eight studies reported in nine papers were included in the review (see the PRISMA flow diagram, Figure 1). Papers excluded at full-text screening (N = 193, of which 24 were duplicates, leaving n = 169 unique papers) are listed in Supplementary material 2.

Figure 1. PRISMA 2020 flow diagram for new systematic reviews which included searches of databases, registers, and other sources. *TFM, theory, framework or model. Adapted from Page et al. (19) with permission under the Creative Commons CC BY 4.0 license. For more information, visit http://www.prisma-statement.org/.

Studies were conducted in Australia (n = 3, 37.5%), the USA (n = 2, 25.0%), and Finland, the Netherlands and Germany (each n = 1, 12.5%). All were written in English. Implementation was most commonly studied within a non-randomized (22–24) or mixed-methods study design (25–28). All studies were conducted in inpatient settings, either in forensic mental health (23) or general psychiatric wards (22, 24, 28–30), or in mixed settings including both forensic and general mental health wards (25–27). Implementation periods (the active time during which an intervention was implemented, as defined by each study) varied between 1 and 17 months; in one study the length of the period was not stated (30).

3.2. Quality appraisal

Table 1 summarizes the quality appraisal of the included studies against MMAT criteria. Two of the qualitative studies (29, 30) fulfilled all the criteria set (7/7). For the quantitative and mixed-methods studies, many items could not be rated because insufficient information was provided in the papers.

Table 1. Quality assessment using the MMAT tool.

3.3. Implementation tools used by the studies

Eight different tools were identified in the included studies; no two studies used the same approach. Three tools were used to guide the implementation process, and the rest to evaluate or analyze it (Table 2). We also identified additional checklists and tools that the included studies used to collect data related to implementation outcomes; these measured fidelity and included the Organization Fidelity Checklist and the Safewards Fidelity Checklist.

Table 2. Implementation models, frameworks, or theories used to guide, evaluate, or analyze implementation processes.

3.4. The interventions applied by the studies

We identified four types of interventions: holistic, professional judgement, staff training and sensory modulation. As holistic, we classified Safewards (22, 25, 29) and trauma-informed care approaches (24). Under professional judgement, we identified violence risk assessment using either the DASA (26, 27) or the START:AV (23). The only staff training intervention was a recovery-oriented training program (28), and one study used a sensory modulation approach (30).

3.5. The implementation outcomes of the studies

We sought the implementation outcomes defined by Proctor et al. (20) in the eight included studies. A summary of the implementation outcomes is provided in Table 3. None of the studies reported all eight implementation outcomes; the number of outcomes mentioned varied between three and five. Acceptability (seven out of nine papers), appropriateness (8/9) and sustainability (7/9) were most commonly named in the papers, whereas penetration was found in only one of the studies. However, most of the studies only mentioned an outcome by name and did not report any actual data on it.

Table 3. Summary of the implementation outcomes.

In the following sections, we report a narrative, outcome by outcome, based on the data found.

3.5.1. Acceptability

Acceptability is the perception among implementation stakeholders that a given treatment, service, practice, or innovation is agreeable, palatable, or satisfactory (20). Four papers reported data about acceptability (22, 23, 27, 30).

In most of these studies (22, 23, 30), the acceptability of the intervention was evaluated from the staff's viewpoint, with mixed views expressed toward the intervention. Baumgardt et al. (22) evaluated the acceptability of Safewards among staff before the implementation period began. Staff who were not willing to use the intervention were given the option to change their working unit; two out of 40 used this possibility. Staff in the study by De Beuf et al. (23) were increasingly dissatisfied over time (100% of staff at time point two) with the implemented violence risk assessment tool START:AV, although they did find the content of the tool acceptable and not too complex. However, they were unconvinced of the credibility of the intervention, i.e., they did not believe that START:AV would help them prevent violent events. Staff in the study by Wright et al. (30) thought that the intervention, sensory modulation, was acceptable for some patients but too risky to use with highly distressed patients.

In contrast, Lantta et al. (27) evaluated the acceptability of the DASA violence risk assessment tool from the patient's perspective. They found that the acceptability of the tool and the research process, measured by patients' willingness to give written informed consent for the study, varied between the wards where the implementation took place but remained low overall (17% of patients).

3.5.2. Adoption

Adoption is defined as the intention, initial decision, or action to try or employ a program (20). Three papers reported data about adoption (23, 27, 28), all from the staff perspective.

All three studies found scope for improving adoption of the intervention during and after the implementation period. De Beuf et al. (23) evaluated adoption by measuring how frequently the START:AV tool was used to assess patients' violence risk. They found that the percentage increased slowly over time, from 74 to 78%, but did not reach the set goal of an 80% completion rate. The completion rate also varied between individual assessors (range 29–100%). Lantta et al. (27) reported similar findings with another violence risk tool, the DASA: a 64% completion rate was reached, but it varied substantially between the wards involved in the implementation, ranging from 15 to 89%.
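
To make the adoption metric concrete: completion rate here is simply completed assessments divided by assessments due, computed overall and per unit against a preset goal. The sketch below uses invented ward names and numbers purely for illustration; it is not taken from either study's data.

```python
# Hypothetical assessment log: (ward, completed, due). Numbers are invented.
records = [("Ward A", 120, 135), ("Ward B", 30, 200), ("Ward C", 178, 200)]

def completion_rate(completed: int, due: int) -> float:
    """Adoption as the share of due assessments actually completed."""
    return completed / due if due else 0.0

total_completed = sum(c for _, c, _ in records)
total_due = sum(d for _, _, d in records)
print(f"Overall adoption: {completion_rate(total_completed, total_due):.0%} (goal: 80%)")
for ward, c, d in records:
    print(f"  {ward}: {completion_rate(c, d):.0%}")
```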

Repique et al. (28) used staff focus groups to evaluate the adoption of recovery-oriented principles in care after a training program. According to their findings, staff were doing a “decent job” [sic] of incorporating recovery principles. However, participants felt that more buy-in was still needed among staff and that it should start at the leadership level.

3.5.3. Appropriateness

Appropriateness is the perceived fit, relevance, or compatibility of the program for a given practice setting, clinician or service user; and/or perceived fit to address a particular issue or problem (20). Two papers reported data about appropriateness (23, 30) and their respondents were staff members in both cases.

These two studies evaluated appropriateness from slightly different viewpoints. In De Beuf et al.'s (23) study, staff evaluated whether the intervention was useful for treatment. Again, there were diverse views: after the implementation period, 33% disagreed that the START:AV was useful. Still, a substantial proportion of staff agreed with its usefulness as a whole (67%), and there was complete agreement on its usefulness concerning different factors of the tool (100%). In Wright et al.'s (30) study, focus groups with staff revealed both appropriate and inappropriate ways in which sensory modulation approaches had been used in care. For example, a sensory room had been used as a place to play video games, not as a calming area for patients.

3.5.4. Feasibility

Feasibility is defined as the extent to which a new program can be successfully used or carried out within a given setting (20). Two papers reported data about feasibility from the staff viewpoint (23, 27).

De Beuf et al. (23) asked staff about the intervention's practicality. Staff thought that they lacked time to use the intervention and reported that completing the START:AV assessment took much longer than indicated by previous studies. Lantta et al. (27) evaluated how the intervention actually worked, assessing how well the DASA predicted aggression on the wards. This outcome reached the set goal (AUC ≥ 0.70), with AUC values ranging from 0.75 to 0.93 across different forms of aggression.
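
For readers unfamiliar with the criterion, the AUC goal can be checked directly from paired risk ratings and subsequent aggression outcomes. The sketch below is illustrative only: the scores and outcomes are invented, it assumes scikit-learn is available, and it is not the authors' analysis code.

```python
from sklearn.metrics import roc_auc_score

# Invented daily DASA ratings (0-7) and whether aggression followed (1/0).
dasa_scores = [0, 1, 5, 2, 6, 7, 1, 3, 6, 0]
aggression  = [0, 0, 1, 0, 1, 1, 0, 0, 1, 0]

# AUC: probability a randomly chosen aggressive day outranks a non-aggressive one.
auc = roc_auc_score(aggression, dasa_scores)
print(f"AUC = {auc:.2f}; feasibility goal (AUC >= 0.70) met: {auc >= 0.70}")
```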

3.5.5. Fidelity

Fidelity is defined as the degree to which a program was implemented as it was planned in the original protocol or as it was intended by the program developers (20). Two of the included studies provided data on fidelity (22, 25) based on staff implementation activities.

Both studies reported a high level of fidelity when implementing Safewards. Baumgardt et al. (22) reported that wards were able to fully implement eight of the 10 Safewards interventions, indicating high fidelity. Fletcher et al. (25) also found high implementers among their participating wards (8–10 interventions out of 10), but some wards implemented only 1–4 interventions.

3.5.6. Implementation costs

Cost is defined as the cost impact of an implementation effort (20). None of the included studies provided data about implementation costs.

3.5.7. Penetration

Penetration is defined as the integration of a program within a service setting and its subsystems (20). De Beuf et al. (23) provided the only data on this outcome. They asked staff whether the intervention (START:AV) was integrated into the setting's treatment plans and case conferences. On the first question, whether the tool contributed to more effective communication and increased structure during case conferences, 100% of staff disagreed, and this result did not change over time. Despite this negative finding, integration of the tool into the treatment process seemed to improve over time based on the second question evaluating penetration: the proportion of staff who disagreed that the tool was sufficiently integrated decreased from 75 to 17%.

3.5.8. Sustainability

Sustainability is defined as the extent to which a newly implemented program is maintained or institutionalized within a service setting's ongoing, stable operations (20). Only Hale and Wendler (24) reported data on the sustainability of the intervention, and this was from the staff viewpoint. According to their results, there was a 9.3% reduction in physical holding and seclusion 12 months after implementing trauma-informed care in child and adolescent inpatient services.

4. Discussion

Over the past decade, several promising coercion reduction programs that constitute complex interventions in mental health settings have been developed and reported to be successful (9).

However, the use and testing of implementation tools based on the principles of implementation science appears, from our review, to be in its infancy in this context. We screened 204 full-text versions of coercion reduction intervention studies but could find only nine papers (4.4%) that had used a named implementation tool. This indicates that although many reportedly successful reduction studies have been published, the extent and quality of the implementation of the intervention, and its sustainability over time, are often unknown. Consequently, there is a lack of knowledge about which coercion reduction programs are robust enough to succeed under routine, less-than-ideal conditions in clinical settings where interventions may be only partially or poorly implemented.

Using the MMAT scoring system, we found that the quality of the included studies was mostly quite low, with the exception of two qualitative papers (29, 30). The implementation process was generally poorly reported, with significant variation in what was reported and how. Tools were mostly used to evaluate or analyze the implementation, not to guide it. Similarly, all studies mentioned three to five of the implementation outcomes we used, but half of the studies provided no data on them. To make future replications and comparisons between different implementation approaches possible, there is a need for a more standardized and streamlined process for reporting specific implementation aspects when introducing coercion reduction programs into mental health settings. This is an acknowledged need for implementation studies in a wide range of applied settings, not only in this field of research (31).

There may be several reasons for the low prevalence of implementation tools in coercion reduction studies. Greenhalgh argues that real-world challenges to the implementation of evidence-based practice are often characterized by uniqueness, complexity, and incomplete, contradictory and changing requirements (32). Further, Greenhalgh states that, because of the messiness of the real-world context, there will never be a perfectly fitted tool to choose. Instead, theoretical tools should be approached carefully but pragmatically, without any expectation of finding a tool that will completely solve implementation issues; rather, the tool should be seen more modestly as a way of organizing thoughts and ideas about complex challenges.

At the same time, the approach suggested by Greenhalgh would make it difficult to compare the effectiveness of different implementation strategies across studies to enhance the effectiveness of coercion reduction programs if they were all pragmatically set up based on each clinical setting's own local “messiness.” We therefore believe there is a need for some standardized reporting of implementation outcomes. Possibly, a way forward could be a template for reporting key elements of an implementation study, based on an overarching generic terminology. If not too highly detailed, this would cover the use of different implementation tools and still enable useful comparisons to be made. To meet this need, Pinnock et al. (31) developed the guideline and checklist StaRI (Standards for Reporting Implementation Studies Statement). StaRI includes the parallel reporting of both the implementation strategy, regardless of implementation tool used, and the effectiveness of the intervention implemented. To enhance the quality of future intervention studies on coercion reduction, StaRI could be one option to address the challenges of structured reporting on both the intervention effectiveness and the implementation strategy.

The studies in this review each used a different implementation tool (Table 2), which suggests that coercion reduction programs are at different stages of development or implementation. Only three of these studies described a somewhat clear rationale behind the choice of the tool. It is therefore unclear whether the tools were chosen based on an assessment of which would be best suited or for some other reason. To provide clarity on what distinguishes different implementation models, and to aid the choice of a tool, Nilsen (5) has suggested five tool categories: process models, determinant frameworks, classic theories, implementation theories and evaluation frameworks. Of the tools used here, two were process models (OMRU, the Iowa model), three were determinant frameworks (CFIR, TDF, and PARIHS), one was an implementation theory (BCW), one was an evaluation framework (IOF), and one was the tool by Skolarus and Sales. Most authors used their chosen tool to analyze and evaluate the implementation process retrospectively, a purpose that might not be best suited to process models, which aim at describing and guiding the process of translating research into practice (1).

In our review we included only overarching implementation tools and excluded studies that referred only to practices based on improvement science, such as the PDSA (plan-do-study-act) cycle. It is possible that improvement strategies, which tend to focus on very specific ways of introducing interventions, i.e., “how” to implement new practice (quality improvement work), are more commonly used in intervention studies than the more theoretically driven implementation tools that focus on “what” should be included in implementation strategies. At the same time, Leeman et al. (33) argue that it is time for the fields of implementation and improvement science to start to align. An alignment would, according to the authors, benefit both areas of knowledge by reducing the research/practice gap, fostering local ownership of implementation, generating evidence, developing context-specific implementation strategies and building practice-based evidence capacity to improve care (33).

The studies in this review evaluated between three and five of the eight components we extracted. With limited resources in clinical settings, it is not known whether outcome measurement quality improves when multiple components are used. It is more likely that this quality declines as the implementation strategy becomes more complex, and it may be more effective to focus on only one or two components and evaluate them thoroughly. The effectiveness of the chosen implementation strategy could depend more on whether there was an a priori rationale for the strategy, based on an assessment of expected enablers and barriers, than on the number of implementation strategy components (34). This thought is further strengthened by a review of systematic reviews on interventions to change healthcare professionals' behaviors by Squires et al. (35), which concluded that there was no evidence that multifaceted strategies were more effective than single-component strategies. This finding should be further explored for its relevance to coercion reduction studies, as it could offer important guidance for clinical settings with limited resources (35).

Acceptability (4/9) and adoption (3/9) were the components most commonly reported upon (Table 3). Appropriateness and acceptability are conceptually close terms that have been found to overlap and to be used inconsistently in the literature. However, they are not synonyms, and a new intervention can be assessed as appropriate but not acceptable, due to, for example, cost (20). Similarly, appropriateness has been used interchangeably with terms like perceived fit, relevance, compatibility, suitability, usefulness, and practicability (36), highlighting the challenges of conceptual clarity when planning an implementation study and reviewing the literature. To continue developing this field, it would be useful to find a common conceptual framework to promote future implementation work. This is in line with the recommendation by Proctor et al. (20).

Furthermore, given the resources needed in clinical settings to successfully implement a new complex intervention, sustainability could be viewed as one of the most valid measures of implementation itself. It can be objectively evaluated through behavioral observation and inherently indicates the ongoing adoption of an innovation over long periods post-introduction. However, Wiltsey Stirman et al. (37) found, in their review of the empirical literature on sustainability aspects of implementation, that fewer than half of the studies presented data on sustainability outcomes. It was also evident that what was considered a clinically meaningful sustainability measure varied between studies, so again consensus was a problem. Examples of different measures were the proportion of sites that implemented the intervention over time, the proportion of patients receiving the intervention, and the level of desired patient outcomes (37). Sustainability measures are therefore considered a challenge due to unclear definitions of what should be measured and when the level of sustainability should be assessed (38).

Penetration (1/9), implementation costs (0/9) and sustainability (1/9) were the least reported components in our review. According to Proctor et al. (20), penetration as a concept is not commonly used in the literature. Lack of penetration can be an indication that staff are reluctant to change, as found by De Beuf et al. (23). It is probably useful to identify and address resistance to change in organizations to enhance the successful implementation of interventions, for example by using the ORIC (Organizational Readiness for Implementing Change) measure (39).

None of the identified studies provided data about implementation costs. This is in line with the findings of a recent review (40) about implementation of early psychosis services in Latin America where they too did not identify any studies reporting costs of implementation. At a time when there is more focus on the costs of healthcare services, this is also a possible area of improvement in implementation reporting. Economic implementation studies with standard economic costing methods are warranted in mental health areas (41).

To increase the use of implementation tools in coercion reduction projects, we believe the advantages gained from their contribution to implementation processes, such as clinically meaningful outcomes and more rigorous evaluations of what implementation strategy will enhance intervention effectiveness and efficacy, need to be clarified and demonstrated. Moreover, there is a need to identify measures for implementation outcomes that can monitor and evaluate implementation determinants, mechanisms, processes, strategies and outcomes (42). Implementation tools should be evidence based, and when operationalized in clinical research, the measurement tools associated with them should be statistically tested for psychometric properties, including the capacity to discriminate between different tool items (36). Promising psychometric research like this is ongoing. Studies have for example been conducted to develop and psychometrically test measures for acceptability, appropriateness and feasibility (42), as well as to assess the validity and reliability of seven domains of the CFIR implementation tool (43).

It is noteworthy that only one of the studies in this review (27) reported any type of service user involvement or patient perspectives on the implementation process or its evaluation. It could be argued that since staff are responsible for implementing interventions as part of their professional employment, only their point of view is relevant. However, interventions to be implemented in coercion reduction have also lacked user involvement (44), as has research in the area (45). With increasing emphasis on formal collaboration in treatment and care, full implementation requires an understanding of how to engage service users in the process.

4.1. Limitations

A potential limitation of this review is selection bias: it is possible that some articles were not identified at the screening stage because of the different terminologies used. As highlighted above, the fuzziness of the core concepts made it hard to apply the inclusion/exclusion criteria with confidence. The decision to use truncated terms in data extraction and not to include possible synonyms may have led us to overlook some results in the included papers. However, as the implementation outcomes were not fully defined in the papers, we could not include analogous synonyms in our analysis without risking over-interpretation of these terms. Overlapping terms and terminologies used to define or describe the same concepts point to the need to clarify language and definitions, although, as this review suggests, even an existing “umbrella tool” may not be the best way forward, as it can be too complex and not feasible to apply in practice.

One of the authors of this review (TL) was a lead author of two of the included studies. To avoid the potential conflict of interest when screening, the inclusion of the papers involving members of the review team was done by reviewers outside of the particular study. In addition, the main responsibility for the data extraction was in the hands of authors (RiW and KG) not involved in the included studies.

There is also a limitation due to underlying bias within the included studies, as these were only from so-called “Western Europe,” Australia and the United States, missing the cultural diversity, historical backgrounds and approaches to healthcare in other countries, especially African, Asian and Eastern European countries. A limitation inherent to any systematic review is that its quality relies heavily on the methodological rigor of, and biases within, the included papers. Our quality appraisal indicated that only two studies fulfilled all the MMAT criteria (scoring 7/7); many items could not be rated due to insufficient information, especially in the quantitative and mixed-methods papers.

Finally, due to the diversity of the included studies (different tools, methodologies and terminologies) and the gaps in reporting, it was difficult to synthesize across studies and report on any robustly established mechanisms.

5. Conclusion

An impetus to minimize the use of coercion and its negative impact in mental health settings has been gaining momentum globally for some time now. This has resulted in an increase in research in this area, policies to support this drive, and practice-based initiatives to facilitate and support less restrictive environments and relationships. However, the implementation of such approaches has been hindered, or at least poorly reported upon, partly because of the complexities of currently proposed frameworks, leading to lost opportunities for feasible, impactful and sustained interventions to be introduced easily into many practice settings.

From our review it is clear that systematic implementation tools appear to be seldom used when efforts are made to embed interventions aimed at reducing the use of coercive measures in routine mental health care. This is compounded by evidence from clinical experience that it can be difficult to implement an intervention in a new setting and that implementation is often not sustained over time (46).

Most notably, clear descriptions of the underlying stages and principles needed to support the implementation of evidence in a meaningful way are lacking, as demonstrated by the small number of studies we were able to report upon in this review. In particular, matters relating to the costs and resources needed to implement complex interventions in settings with vulnerable populations are poorly considered. Additionally, incorporating the views of all stakeholders in “how” to successfully change practice for the better is crucial, and the service user voice is missing.

To improve implementation efforts and the quality of mental health care services, and indeed to minimize the use of coercion, greater efforts are required to make the world of implementation science more accessible. There is a real need to identify and adequately describe achievable, targeted and well-explained frameworks that allow change to be enacted and maintained. Streamlined, comprehensive and less costly implementation tools should be more freely available, together with the necessary workforce development in how to use them and evaluate their impact in mental health care. Only then can we improve our practices and services in a way that is important to and valued by those using or working in care settings, and achieve positive outcomes that can be replicated elsewhere.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author.

Author contributions

TL, JD, AB, and RWhi: conceptualization. TL, JD, AH-D, AB, TLH, JL, EC, KG, CB, AD, RWhy, and RWhi: design, methodology, and conduction of the study. TL, RWhi, and KG: analysis and interpretation. TL: writing—original draft preparation. AB, TLH, JD, KG, EC, JL, AH-D, CB, RWhy, and RWhi: writing—review and editing. All authors contributed to the article and approved the submitted version.

Funding

This work is done under the COST Action FOSTREN: Fostering and Strengthening Approaches to Reducing Coercion in European Mental Health Services (CA19133). FOSTREN supported publishing this article (open access fee).

Acknowledgments

We would like to thank BNSc, RN Tinja Rautiainen (University of Turku) for help with manuscript preparation and Erica Hateley (Evidence Reviewer, Mersey Care NHS Foundation Trust) for reviewing our search strategy.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyt.2023.1158145/full#supplementary-material

References

1. Nilsen P, Birken SA. Handbook on Implementation Science. Cheltenham: Edward Elgar Publishing (2020).


2. Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. (2015) 10:53. doi: 10.1186/s13012-015-0242-0


3. Rycroft-Malone J, Bucknall T. Models and Frameworks for Implementing Evidence-Based Practice: Linking Evidence to Action. Hoboken, NJ: Wiley (2010).


4. Moullin JC, Dickson KS, Stadnick NA, Albers B, Nilsen P, Broder-Fingert S, et al. Ten recommendations for using implementation frameworks in research and practice. Implement Sci Commun. (2020) 1:42. doi: 10.1186/s43058-020-00023-7


5. Nilsen P. Overview of Theories, Models and Frameworks in Implementation Science. Handbook on Implementation Science. Cheltenham: Edward Elgar Publishing. (2020).


6. Herrman H, Allan J, Galderisi S, Javed A, Rodrigues M. Alternatives to coercion in mental health care: WPA Position Statement and Call to Action. World Psychiatry. (2022) 21:159–60. doi: 10.1002/wps.20950


7. Chieze M, Hurst S, Kaiser S, Sentissi O. Effects of seclusion and restraint in adult psychiatry: A systematic review. Front Psychiatry. (2019) 10:491. doi: 10.3389/fpsyt.2019.00491


8. Kersting XAK, Hirsch S, Steinert T. Physical harm and death in the context of coercive measures in psychiatric patients: A systematic review. Front Psychiatry. (2019) 10:400. doi: 10.3389/fpsyt.2019.00400


9. Gooding P. Compendium Report: Good Practices in the Council of Europe to Promote Voluntary Measures in Mental Health Services. Council of Europe (2021).


10. Szmukler G. Compulsion and “coercion” in mental health care. World Psychiatry. (2015) 14:259–61. doi: 10.1002/wps.20264


11. Cowman S, Björkdahl A, Clarke E, Gethin G, Maguire J. A descriptive survey study of violence management and priorities among psychiatric staff in mental health services, across seventeen European countries. BMC Health Serv Res. (2017) 17:59. doi: 10.1186/s12913-017-1988-7


12. Thornicroft G, Farrelly S, Szmukler G, Birchwood M, Waheed W, Flach C, et al. Clinical outcomes of Joint Crisis Plans to reduce compulsory treatment for people with psychosis: a randomised controlled trial. Lancet. (2013) 381:1634–41. doi: 10.1016/S0140-6736(13)60105-1


13. Mullen A, Browne G, Hamilton B, Skinner S, Happell B. Safewards: An integrative review of the literature within inpatient and forensic mental health units. Int J Ment Health Nurs. (2022) 31:1090–108. doi: 10.1111/inm.13001


14. Henderson C, Farrelly S, Moran P, Borschmann R, Thornicroft G, Birchwood M, et al. Joint crisis planning in mental health care: The challenge of implementation in randomized trials and in routine care. World Psychiatry. (2015) 14:281–3. doi: 10.1002/wps.20256


15. Hirsch S, Steinert T. Measures to avoid coercion in psychiatry and their efficacy. Dtsch Arztebl Int. (2019) 116:336–43. doi: 10.3238/arztebl.2019.0336


16. Baker J, Berzins K, Canvin K, Benson I, Kellar I, Wright J, et al. Health Services and Delivery Research. Non-pharmacological Interventions to Reduce Restrictive Practices in Adult Mental Health Inpatient Settings: The COMPARE Systematic Mapping Review. Southampton: NIHR Journals Library.


17. Väkiparta L, Suominen T, Paavilainen E, Kylmä J. Using interventions to reduce seclusion and mechanical restraint use in adult psychiatric units: An integrative review. Scand J Caring Sci. (2019) 33:765–78. doi: 10.1111/scs.12701


18. Bryson SA, Gauvin E, Jamieson A, Rathgeber M, Faulkner-Gibson L, Bell S, et al. What are effective strategies for implementing trauma-informed care in youth inpatient psychiatric and residential treatment settings? A realist systematic review. Int J Ment Health Syst. (2017) 11:36. doi: 10.1186/s13033-017-0137-3


19. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Br Med J. (2021) 372:n71. doi: 10.1136/bmj.n71


20. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. (2011) 38:65–76. doi: 10.1007/s10488-010-0319-7


21. Hong QN, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, et al. The Mixed Methods Appraisal Tool (MMAT) version 2018 for information professionals and researchers. Educ Inform. (2018) 34:285–91. doi: 10.3233/EFI-180221


22. Baumgardt J, Jäckel D, Helber-Böhlen H, Stiehm N, Morgenstern K, Voigt A, et al. Preventing and reducing coercive measures—An evaluation of the implementation of the safewards model in two locked wards in Germany. Front Psychiatry. (2019) 10:340. doi: 10.3389/fpsyt.2019.00340


23. De Beuf T, Vogel V, De Ruiter C. Implementing the START:AV in a Dutch residential youth facility: Outcomes of success. Transl Iss Psycholog Sci. (2019) 5:193–205. doi: 10.1037/tps0000193


24. Hale R, Wendler MC. Evidence-based practice: Implementing trauma-informed care of children and adolescents in the inpatient psychiatric setting. J Am Psychiatric Nurs Assoc. (2023) 29:161–70. doi: 10.1177/1078390320980045


25. Fletcher J, Brophy L, Pirkis J, Hamilton B. Contextual barriers and enablers to safewards implementation in Victoria, Australia: Application of the consolidated framework for implementation research. Front Psychiatry. (2021) 12:733272. doi: 10.3389/fpsyt.2021.733272


26. Lantta T, Daffern M, Kontio R, Välimäki M. Implementing the dynamic appraisal of situational aggression in mental health units. Clin Nurse Specialist. (2015) 29:230–43. doi: 10.1097/NUR.0000000000000140


27. Lantta T, Kontio R, Daffern M, Adams CE, Välimäki M. Using the Dynamic Appraisal of Situational Aggression with mental health inpatients: A feasibility study. Pat Prefer Adher. (2016) 10:691–701. doi: 10.2147/PPA.S103840


28. Repique RJ, Vernig PM, Lowe J, Thompson JA, Yap TL. Implementation of a recovery-oriented training program for psychiatric nurses in the inpatient setting: A mixed-methods hospital quality improvement study. Arch Psychiatr Nurs. (2016) 30:722–8. doi: 10.1016/j.apnu.2016.06.003


29. Higgins N, Meehan T, Dart N, Kilshaw M, Fawcett L. Implementation of the Safewards model in public mental health facilities: A qualitative evaluation of staff perceptions. Int J Nurs Stud. (2018) 88:114–20. doi: 10.1016/j.ijnurstu.2018.08.008


30. Wright L, Bennett S, Meredith P. ‘Why didn't you just give them PRN?': A qualitative study investigating the factors influencing implementation of sensory modulation approaches in inpatient mental health units. Int J Ment Health Nurs. (2020) 29:608–21. doi: 10.1111/inm.12693


31. Pinnock H, Barwick M, Carpenter CR, Eldridge S, Grandes G, Griffiths CJ, et al. Standards for reporting implementation studies (StaRI) statement. Br Med J. (2017) 356:i6795. doi: 10.1136/bmj.i6795


32. Greenhalgh T. Foreword. Handbook on Implementation Science. Cheltenham: Edward Elgar Publishing. (2020).


33. Leeman J, Rohweder C, Lee M, Brenner A, Dwyer A, Ko LK, et al. Aligning implementation science with improvement practice: A call to action. Implement. Sci. Commun. (2021) 2:99. doi: 10.1186/s43058-021-00201-1


34. Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE. Knowledge translation of research findings. Implement Sci. (2012) 7:50. doi: 10.1186/1748-5908-7-50


35. Squires JE, Sullivan K, Eccles MP, Worswick J, Grimshaw JM. Are multifaceted interventions more effective than single-component interventions in changing health-care professionals' behaviours? An overview of systematic reviews. Implement. Sci. (2014) 9:152. doi: 10.1186/s13012-014-0152-6


36. Martinez RG, Lewis CC, Weiner BJ. Instrumentation issues in implementation science. Implement. Sci. (2014) 9:118. doi: 10.1186/s13012-014-0118-8


37. Wiltsey Stirman S, Kimberly J, Cook N, Calloway A, Castro F, Charns M. The sustainability of new programs and innovations: A review of the empirical literature and recommendations for future research. Implement. Sci. (2012) 7:17. doi: 10.1186/1748-5908-7-17


38. Lennox L. Sustainability. Handbook on Implementation Science. Cheltenham: Edward Elgar Publishing (2020).


39. Shea CM, Jacobs SR, Esserman DA, Bruce K, Weiner BJ. Organizational readiness for implementing change: A psychometric assessment of a new measure. Implement Sci. (2014) 9:7. doi: 10.1186/1748-5908-9-7


40. Aceituno D, Mena C, Vera N, Gonzalez-Valderrama A, Gadelha A, Diniz E, et al. Implementation of early psychosis services in Latin America: A scoping review. Early Interv Psychiatry. (2021) 15:1104–14. doi: 10.1111/eip.13060


41. Bowser DM, Henry BF, McCollister KE. Cost analysis in implementation studies of evidence-based practices for mental health and substance use disorders: A systematic review. Implement Sci. (2021) 16:26. doi: 10.1186/s13012-021-01094-3


42. Weiner BJ, Lewis CC, Stanick C, Powell BJ, Dorsey CN, Clary AS, et al. Psychometric assessment of three newly developed implementation outcome measures. Implement. Sci. (2017) 12:108. doi: 10.1186/s13012-017-0635-3


43. Fernandez ME, Walker TJ, Weiner BJ, Calo WA, Liang S, Risendal B, et al. Developing measures to assess constructs from the Inner Setting domain of the Consolidated Framework for Implementation Research. Implement Sci. (2018) 13:52. doi: 10.1186/s13012-018-0736-7


44. Eidhammer G, Fluttert FA, Bjørkly S. User involvement in structured violence risk management within forensic mental health facilities—A systematic literature review. J Clin Nurs. (2014) 23:2716–24. doi: 10.1111/jocn.12571


45. Brierley-Jones L, Ramsey L, Canvin K, Kendal S, Baker J. To what extent are patients involved in researching safety in acute mental healthcare? Res Involv Engagem. (2022) 8:8. doi: 10.1186/s40900-022-00337-x


46. van Melle AL, van der Ham AJ, Widdershoven GAM, Voskes Y. Implementation of high and intensive care (HIC) in the Netherlands: A process evaluation. Psychiatr Q. (2021) 92:1327–39. doi: 10.1007/s11126-021-09906-x


Keywords: implementation science, mental health, psychiatric care, coercive measures, coercion, intervention, implementation tools

Citation: Lantta T, Duxbury J, Haines-Delmont A, Björkdahl A, Husum TL, Lickiewicz J, Douzenis A, Craig E, Goodall K, Bora C, Whyte R and Whittington R (2023) Models, frameworks and theories in the implementation of programs targeted to reduce formal coercion in mental health settings: a systematic review. Front. Psychiatry 14:1158145. doi: 10.3389/fpsyt.2023.1158145

Received: 03 February 2023; Accepted: 29 May 2023;
Published: 15 June 2023.

Edited by:

Howard Ryland, University of Oxford, United Kingdom

Reviewed by:

Andreas Hoell, University of Heidelberg, Germany
Pierre Pariseau-Legault, University of Quebec in Outaouais, Canada

Copyright © 2023 Lantta, Duxbury, Haines-Delmont, Björkdahl, Husum, Lickiewicz, Douzenis, Craig, Goodall, Bora, Whyte and Whittington. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Tella Lantta, tella.lantta@utu.fi