
ORIGINAL RESEARCH article

Front. Public Health, 04 June 2021
Sec. Planetary Health
This article is part of the Research Topic Health Systems, Health Professionals, and Planetary Health

System Thinking and Citizen Participation Is Still Missing in One Health Initiatives – Lessons From Fifteen Evaluations

  • 1Section of Epidemiology, Vetsuisse Faculty, University of Zürich, Zurich, Switzerland
  • 2Veterinary Public Health Institute, Vetsuisse Faculty, University of Bern, Bern, Switzerland
  • 3Institute of Infection, Veterinary and Ecological Sciences, University of Liverpool, Liverpool, United Kingdom
  • 4International Livestock Research Institute, Nairobi, Kenya
  • 5Department of Epidemiology and Public Health, Swiss Tropical and Public Health Institute, Basel, Switzerland
  • 6Department of Public Health, Medical Faculty, University of Basel, Basel, Switzerland
  • 7Department of Environmental Health and Ecological Sciences, Ifakara Health Institute, Dar es Salaam, Tanzania
  • 8Scientific Center for Risk Assessment and Analysis in Food Safety Area, Yerevan, Armenia
  • 9Centre for One Health Education, Advocacy, Research and Training, Kerala Veterinary and Animal Sciences University, Wayanad, India

Tackling complex public health challenges requires integrated approaches to health, such as One Health (OH). A key element of these approaches is the integration of knowledge across sectors, disciplines and stakeholders. It is not yet clear which elements of knowledge integration need endorsement to achieve best outcomes. This paper assesses 15 OH initiatives in 16 African, Asian and European countries to identify opportunities to improve knowledge integration and to investigate geographic influences on knowledge integration capacities. Two related evaluation tools, both relying on semi-quantitative questionnaires, were applied to two sets of case studies. In one tool, the questions relate to operations and infrastructure, while the other assigns questions to the three phases of “design,” “implementation,” and “evaluation” of the project life cycle. In both, the question scores are aggregated using medians. For analysis, extreme values were identified to highlight strengths and weaknesses. Seven initiatives were assessed by a single evaluator external to the initiative, and the other eight initiatives were jointly assessed by several internal and external evaluators. The knowledge integration capacity was greatest during the project implementation stage, and lowest during the evaluation stage. The main weaknesses pointing towards concrete potential for improvement were identified to be a lack of consideration of systemic characteristics, missing engagement of external stakeholders and poor bridging of knowledge, amplified by the absence of opportunities to learn and evolve in a collective process. Most users were unfamiliar with the systems approach to evaluation and found the use of the tools challenging, but they appreciated the new perspective and saw benefits in the ensuing reflections. 
We conclude that systems thinking and associated practices for OH require not only specific education in OH core competencies, but also methodological and institutional measures to endorse broad participation. To facilitate meta-analyses and generic improvement of integrated approaches to health, we suggest including knowledge integration processes as elements to report according to the COHERE guidelines.

Background

Integrated approaches to health are designed to tackle complex health challenges and exist under different names (1). One Health (OH) is such an approach; it addresses challenges at the interface between people, animals, plants, and their environments, and consequently requires transdisciplinary collaboration among multiple stakeholders from different sectors (2). A key element of the OH approach is the integration of knowledge across sectors, disciplines and stakeholders (3), and at a global scale, health innovation can be seen as a result of successful knowledge integration across continental boundaries (4). However, it is not yet clear what form and degree of knowledge integration is needed to conduct successful OH initiatives. Frankson and colleagues collated skills and competences of relevance to OH implementation (5). Other authors have proposed multi-criteria decision analyses, transdisciplinary and participatory approaches, and systems thinking as rigorous methodological tools to support knowledge integration in policy cycles, and have provided case studies of their application (6–9). The Checklist for One Health Epidemiological Reporting of Evidence (COHERE) established a benchmark of reportable elements intended to promote the integration of knowledge from the domains of humans, animals and their environment (10). In another attempt, the EU COST Action (TD1404) “Network for Evaluation of One Health (NEOH)” characterised OH and proposed a framework to assess the added value of integrated approaches by comparing knowledge integration (assessed with a specific tool) to the outcomes achieved or expected from the theory of change of a given initiative (11, 12). A method for evaluating knowledge integration capacity in multi-stakeholder governance (EVOLvINC) was derived from the NEOH tool in collaboration with experts from transdisciplinary research, sustainability sciences, and international development research (13, 14).

Structural and procedural barriers to knowledge integration have only recently received increasing attention, mostly in the sustainability sciences, where such barriers are analysed in the context of challenges to the implementation of different stages of transdisciplinary collaborations (15, 16). As far as OH is concerned, underlying epistemological, institutional, political and social factors that are associated with the implementation of multi-sectoral and transdisciplinary approaches appear to be neglected (17, 18). Context also matters, as OH is implemented within a global health framework that is characterised by fragmentation of interests, programs and sectors, a lack of societal participation, and professional focus on very limited areas of expertise (19, 20).

In order to identify a trajectory for improvement and to provide data for future benchmarking, this manuscript analyses empirical evidence of 15 evaluations addressing four research questions:

• Are there particularly strong/weak aspects that point towards opportunities to systematically improve knowledge integration in One Health?

• Can we discern patterns of knowledge integration capacities in relation to geography?

• Do evaluation procedures influence the assessment outcomes?

• Do people working on these initiatives perceive the tools to be understandable and beneficial?

Methods

This manuscript follows the Standards for Reporting Qualitative Research (21). The evaluation tools used were co-created by groups of experts in the context of a systems approach, under the assumption of a constructivist epistemology. Three authors contributed to the development of the tools (MH, JZ, SR), while all other authors were members of evaluated initiatives to ensure credibility of the conclusions.

Recruitment of Initiatives and Evaluation Context

The data was drawn from two groups of initiatives that self-declared using a One Health approach: one group was recruited as case studies to develop an evaluation framework in the EU COST Action NEOH; the other was recruited to test the EVOLvINC tool. Both study groups were convenience samples.

The group of NEOH case studies (H-O) comprised eight previously published evaluations of OH initiatives in Europe and Africa (Table 1). Since they were conducted during the development of the NEOH tool for assessment of One Health-ness (see “evaluation tools, procedures and data processing”), processes varied slightly, and different evaluators carried out the evaluations. Four evaluations were formative, conducted during the implementation stage of the assessed initiative, two were retrospective, and one was prospective. Five used interviews for data collection, three relied partially on document analysis, and two administered questionnaire surveys. Each initiative was scored by two to six evaluators, three were self-evaluations by the initiative participants, and all included external or internal review of the evaluation scores.


Table 1. Overview of the eight case studies evaluated with the NEOH tool.

The group of EVOLvINC case studies consisted of six initiatives (A–F) in Africa, Asia, and Europe that responded to a call for collaboration during the 2016 annual conference of the International Society for Disease Surveillance (ISDS). A further study (G) was recruited through personal relations of the authors. Their format varied from individual, small-scale projects to long-term, government-sponsored institutional programs. Six evaluations were formative, and one was prospective. All evaluations were conducted by the same person (MH) using the EVOLvINC tool (see “evaluation tools, procedures, and data processing”). Two initiatives were assessed on-site and five were assessed off-site. Table 2 describes these initiatives and evaluations in more detail.


Table 2. Descriptions of initiatives evaluated with the EVOLvINC tool.

Evaluation Tools, Procedures, and Data Processing

Two evaluation tools were employed. The first, the NEOH tool, assesses the “One Health-ness” of initiatives with a catalogue of ~80 semi-quantitative questions about knowledge integration (12); it was later scrutinised in a more generic context. The second, Evaluating knOwLedge Integration Capacity in multi-stakeholder governance (EVOLvINC), was developed in collaboration with experts from transdisciplinary research, sustainability sciences and international development research (13).

First, the case studies from the NEOH group (H–O) were used to test the NEOH tool. All employed the concept of mapping the system in which the initiative is situated and then using these data to answer 80 questions relating to six aspects: systemic thinking, planning, transdisciplinary working, sharing, learning, and the systemic organisation (14). The framing of these aspects was established in an earlier workshop to characterise OH (11). The semi-quantitative question scores, each between 0 and 1, are aggregated into a median score for each aspect, which is then represented as a spoke in a hexagonal spider diagram. The surface of this diagram is interpreted as the OH index. The surface spanned by the first three aspects, divided by the surface of the latter three, is the OH ratio (14). These evaluations were conducted by various evaluators in parallel to the development of the evaluation method itself.
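The geometry behind the OH index and ratio can be sketched in a few lines. This is an illustrative reconstruction, not the published implementation: the normalisation of the index by the full-score hexagon, and the treatment of each half of the diagram as two adjacent 60° triangles, are our assumptions, as are the example aspect medians.

```python
import math

def hexagon_area(scores):
    """Surface of the hexagonal spider diagram: six spokes 60 degrees
    apart, each adjacent pair enclosing a triangle of area
    0.5 * r_i * r_j * sin(60 deg)."""
    s = math.sin(math.pi / 3)
    return sum(0.5 * s * scores[i] * scores[(i + 1) % 6] for i in range(6))

def half_area(r1, r2, r3):
    """Surface spanned by three adjacent spokes (two 60-degree triangles)."""
    s = math.sin(math.pi / 3)
    return 0.5 * s * (r1 * r2 + r2 * r3)

# Hypothetical aspect medians (operational: thinking, planning, working;
# infrastructural: sharing, learning, organisation).
thinking, planning, working = 0.8, 0.6, 0.7
sharing, learning, organisation = 0.4, 0.5, 0.6

# OH index, normalised here so that a perfect initiative (all scores 1) yields 1.
oh_index = (hexagon_area([thinking, planning, working,
                          sharing, learning, organisation])
            / hexagon_area([1.0] * 6))

# OH ratio: operational surface over infrastructural surface.
oh_ratio = (half_area(thinking, planning, working)
            / half_area(sharing, learning, organisation))
```

With these example medians the index is 0.36 and the ratio 1.8, i.e. an initiative investing markedly more in operations than in infrastructure.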

The EVOLvINC tool arranges the same six aspects used in the NEOH tool along the three stages of the project life cycle (13), but the aspects are operationalized in 3–5 criteria (Figure 1). Each criterion is measured by 3–5 indicators articulated as questions that are scored on a four-level Likert scale. The levels translate into a score between 0 (not conducive to knowledge integration) and 1 (highly conducive to knowledge integration). The median indicator scores correspond to the criterion score, and median criteria scores are aggregated to aspect scores. The comparison of the questions used in the two tools is provided in Table 3.
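The two-step median aggregation of EVOLvINC can be sketched as follows. The equal-spacing mapping of the four Likert levels onto 0, 1/3, 2/3 and 1 is our assumption; the tool itself only specifies that the levels translate into scores between 0 and 1.

```python
from statistics import median

# Assumed mapping of the four Likert levels onto [0, 1]; the tool only
# states that levels translate into scores between 0 (not conducive)
# and 1 (highly conducive to knowledge integration).
LIKERT = {1: 0.0, 2: 1 / 3, 3: 2 / 3, 4: 1.0}

def criterion_score(answers):
    """Median of the 3-5 indicator scores belonging to one criterion."""
    return median(LIKERT[a] for a in answers)

def aspect_score(criteria):
    """Median of the criterion scores belonging to one aspect."""
    return median(criterion_score(answers) for answers in criteria)
```

For example, an aspect with two criteria answered (1, 4, 4) and (2, 2, 3) yields criterion scores of 1.0 and 1/3, and hence an aspect score of 2/3 (the median of two values being their mean).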


Figure 1. Structure of the EVOLvINC tool for Evaluating knOwLedge Integration Capacity in multi-stakeholder governance according to Hitziger et al. (13). The bold black cycle contains the three stages of the project life cycle (formulation, implementation, and evaluation). The knowledge integration capacity in each of these stages is assessed by two aspects (bold, capital letters), and each aspect is measured by 3–5 criteria. The questions attributed to each criterion in the EVOLvINC tool are stated in Table 3.


Table 3. Comparison of the NEOH tool for assessment of “One Health-ness” and the EVOLvINC tool.

The EVOLvINC tool was applied in a three-stage process:

1. To build a common ground, information on the conceptual background of the evaluation and the complete EVOLvINC questionnaire (Supplementary Material) were provided to the initiative leaders and discussed with them. For on-site evaluations, this material was presented during a seminar. In initiative B, this seminar was a certified full-day event attended by 150 stakeholders and students.

2. EVOLvINC was administered in structured interviews with initiative leaders and participants. Each took ~3 h, allowing sufficient time to address problems of understanding, rephrase questions where required, and discuss how the answers translated to the four-level Likert scale (see below). For on-site evaluations, interviews were complemented with discussions with other initiative participants.

3. The results of the analysis were provided to the initiatives in an executive summary and in graphical form. They were discussed and opportunities for improving the initiative's capacity to foster knowledge integration were elaborated with the leadership.

The tools provide two different perspectives using similar data: while the NEOH tool compares operational aspects (working, thinking, planning) and infrastructural aspects (learning, sharing, organisation) in an OH ratio, EVOLvINC links the aspects to the project life cycle. Thus, the NEOH representation facilitates reflection about investments in the system operations or infrastructure, and the EVOLvINC tool conveys the contribution of these aspects to the project life cycle and the capacity to evolve. Compared to the NEOH tool, EVOLvINC deletes some questions and adds others. Also, a number of questions are attributed to different aspects in the two tools (Table 3). Questions related to reflexivity and adaptiveness in the working aspect of the NEOH tool are attributed to the planning aspect in EVOLvINC. The involvement of external stakeholders in the working aspect of the NEOH tool is attributed to the organisation aspect of the EVOLvINC tool. Questions probing for leadership are in the organisation aspect of the NEOH tool, but in the working aspect of EVOLvINC. It is thus not possible to compute the OH index and ratio from the medians of the aspects; instead, the scores of the relevant questions must be aggregated directly. As studies A–G were collected based on the data requirements of the EVOLvINC tool, while studies H–O relied on the NEOH tool, data availability was not identical. However, as this concerned only a small proportion of questions, missing and surplus questions were simply omitted from the aggregating calculations. We deemed this appropriate as the main aggregating operation is the median, which is relatively robust under such conditions. For the NEOH case study from North Macedonia, detailed data to compute the EVOLvINC aggregates were not available.

For both tools, the aggregated numerical results are complemented by a qualitative synthesis and recommendations for improving the initiative's capacity to foster knowledge integration. Due to their different composition, the aspects “working,” “planning,” and “organisation” are discussed differently in the context of either tool.

Since the number of studies does not allow for statistical analysis, the comparisons are presented as coloured tables, with light colours for scores in the lowest quartile and dark colours for scores in the highest quartile. To complement this, we computed the median of the aspects for each initiative and then compared different clusters of initiatives in terms of median, OH index and OH ratio. With these representations we investigated whether there is a difference in scores obtained from internal and external evaluations, and whether a difference could be observed across the geographical regions in which the case studies were situated. Furthermore, this helped us to identify particularly strong or weak aspects that point towards opportunities to systematically improve knowledge integration in OH.
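The quartile-based colour coding amounts to flagging each score against the first and third quartile cut points, for instance as below. This is a sketch of the procedure, not the authors' code; the choice of the exclusive quantile method is our assumption.

```python
from statistics import quantiles

def quartile_flags(scores):
    """Flag each score as 'low' (lowest quartile, light colour),
    'high' (highest quartile, dark colour), or '' (unmarked)."""
    q1, _, q3 = quantiles(scores, n=4)  # quartile cut points
    return ["low" if s <= q1 else "high" if s >= q3 else "" for s in scores]
```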

User Satisfaction

Care was taken to assess the usability and potential adoption of the evaluation tools by OH practitioners. The NEOH case studies contained a critical appraisal of the tool as part of the discussions in the published manuscripts. For the ISDS case studies, a final discussion with the initiative leaders allowed us to gather structured feedback with regard to the lessons they learnt throughout the assessment, and their perspectives on the relevance and applicability of the tool and the evaluation process. In the present manuscript, we combine these qualitative results with observations drawn from the visual analysis of the scoring exercise to determine whether initiatives perceived the tools to be understandable and beneficial, and to determine whether there were indicators on how knowledge integration processes in OH can be systematically enhanced.

Results

The scores of the aspects ranged from 0.18 to 1 (Table 4). The table is colour coded to highlight the top (dark) and lowest (light) quartile of the aspect scores. It illustrates that sharing and learning are the least developed aspects, both linked to the evaluation stage of the policy or project life cycle. The strongest scores were achieved for the thinking and organisation aspects. The OH indices ranged from 0.29 to 0.74 with a median of 0.36, while the OH ratios were between 0.79 and 1.97 with a median of 1.22 (Table 5). The OH ratios were larger than one for all but three initiatives.


Table 4. Aspect scores computed for the 15 case studies using the EVOLvINC tool.


Table 5. One Health index and ratio computed according to the NEOH tool.

Opportunities to Improve Knowledge Integration Capacity in One Health

To report the most significant observations from the evaluations of the ISDS case studies (initiatives A–G), we cite some qualitative notes associated with the criteria that had most scores in the top and bottom 10th percentiles within the respective project life cycle phase (Supplementary Material, Excel: “Criteria EVOLvINC group” sheet).

Knowledge Integration Capacity in Initiative Formulation

The most positively evaluated criteria of the formulation phase were “leverage potential” (B, F), “competences and methods” (E, F) and “resource allocation” (C, F). For example, initiative F was scored with a high leverage potential, since it addresses rabies in Tanzania with a comprehensive approach that encompasses human and animal health alongside societal and economic dimensions, and since it mitigates the impacts caused by the disease by facilitating affordable and accessible rabies vaccinations in epidemiologically affected remote areas. Initiative E received high scores for competences and methods, since the project leaders reported field and research teams in Kenya to be trained in elaborate methods and capable in all required professional competences, including human and animal health, laboratory and clinical services, and data analysis. Initiative C received high scores on allocated resources, since its governmental supervisors assigned sufficient financial and time allocations to ministry officials to carry out all the duties required of them. The weakest criterion was the “consideration of system characteristics” (C–E). For example, initiative E was identified to be working at the level of population patterns of zoonoses in Kenya rather than at the level of transmission structures, and activities during project formulation did not explicitly consider time delays, feedback loops, and causal interactions between different processes.

Knowledge Integration Capacity in Initiative Implementation

The most positively evaluated criteria for the implementation phase were “internal team structure” (A, B, D, E, F), “power distribution” (B, E, F, G), and “conflict resolution” (A, D, E) processes. For example, initiative D received high scores since two fieldwork teams in each of the six communities were clearly defined. Even though inter-team relations in each community ranged between support, competition, and ignorance, each team was working towards explicitly defined objectives. The power distribution was generally assessed as equal, but with a high interquartile range (IQR). The highest levels of inequality were recorded for differences between disciplines and sectors, such as the medical and veterinary professions. For example, initiative B received high scores for power distribution despite a lack of participation of local tribal populations, since it achieved a balanced participation of different disciplines and sectors, societal classes and genders (which could be confirmed by the evaluator (MH) during a 1-day seminar event in Kerala). Initiative A was scored high on conflict resolution, since it considered that collaborators were mostly motivated by the mission to eradicate a zoonotic disease (rabies), while financial conflicts were mostly resolved through imposition and personal conflicts through mediation. Teams reported approaching most conflicts through dialogue (though concealment had possibly occurred as well), and conflicts were usually resolved on factual levels or through team building. The poorest scores were achieved for “external actor and stakeholder network” (C, D) and “bridging knowledge” (C, G). For example, initiative C, a government effort, reported that external stakeholders had never been involved or participated. For the criterion on bridging knowledge, a rich set of integration methods was employed in many initiatives, while scores for integration processes were often low. While using various methods to integrate knowledge (including written information exchange, unstructured dialogues, mediation through bridge persons and boundary institutions, and joint project design), initiative G considered, for example, that its knowledge integration processes were mostly centralised through the project leader and project management.

Knowledge Integration Capacity in Initiative Evaluation

The strongest evaluation scores for the evaluation phase were achieved for the “sharing of methods and results” (C, E, G) and the “direct learning environment” (A, F). For example, initiative G reported sharing its methods and results with the entire initiative, while initiative A reported that workshops demonstrated how involved stakeholders were frequently supportive of adaptive (improving existing procedures), and even generative (questioning existing norms) learning. The weakest criteria were “organisational learning” (A, D) and “general learning environment” (A, B, D). For example, initiative D considered that there was no mechanism for information storage (basic learning) at an organisation level, and while results were discussed within the team in a bi-weekly rhythm, these would never result in generative organisational learning (change in fundamentals and objectives). Initiative B reported that during the 4 years of its existence, its general environment (cultural, economic or political institutions beyond the core initiative stakeholders) had rarely been receptive to either adaptive or generative learning.

Geographic and Procedural Influences on Evaluation Outcomes

The median score of the ISDS initiatives assessed by the first author was 0.71, whereas that of the initiatives evaluated by different assessors in the NEOH group was 0.59. The median of the OH indices of the ISDS studies was 0.44 and that of the NEOH studies 0.36, while the median OH ratios were 1.18 and 1.22, respectively. Aggregation by geographical situation resulted in median scores of 0.71, 0.68, and 0.60 for initiatives situated in Africa, Asia, and Europe, respectively. The median OH indices were 0.44, 0.36, and 0.42, respectively, while the median OH ratio was 1.32 for Africa, 1.18 for Asia and 1.22 for Europe.

User Satisfaction

All initiatives appreciated the evaluation process, theoretical approach, and questionnaires. Each initiative experienced the evaluation as a trigger for important reflections that they intend to apply in the future. The lessons learnt were derived from all phases of the evaluation process, and the conceptual background. Insights that were singled out as particularly relevant or thought-provoking concerned each of the six aspects, and the criteria “inclusive design process,” “identification and engagement of sectors, actors and stakeholders,” “reflectivity,” “internal team structure,” “external stakeholder network,” “power distribution,” “leadership,” and “conflict resolution.” Several initiatives realised the importance of additional systemic and environmental factors of relevance to their OH focus.

Participants from several initiatives mentioned that they required time to understand the process and rationale of the evaluation. Interviews were perceived to be long and challenging, and some concepts were found to be complex and abstract. Questions addressing systems thinking frequently needed additional explanations or adaptation to the concrete context at hand. Some questions required simplification. These findings concur with evaluator (MH) observations and with the conclusions drawn by evaluators of two reviewed studies (I, L). However, the interactive, iterative process and the graphical representations of the EVOLvINC rationale and results were reported to be helpful for participants to attain the necessary levels of abstraction and complexity. The combination of qualitative and quantitative assessments was considered useful for allowing joint scoring by evaluators and participants, based on reflection, richness of detail, and development of personal input by the interviewees (J, L). The two evaluations (A, B) conducted during field visits of the first author highly valued the in-person collaboration.

Discussion

Our study includes a rich data set with 15 initiatives located in 16 African, Asian, and European countries from two different study groups. Despite being a convenience sample and thus not representative of the total body of activities that self-declare as “One Health initiatives,” they appear to underpin general concerns in regard to such initiatives and also delineate a path for improvement.

Opportunities to Improve Knowledge Integration in One Health

Knowledge integration seems to be emphasised by most OH initiatives primarily during the implementation phase, but there are opportunities to enhance participation and knowledge integration in all three phases of the project life cycle. In the formulation phase, a strong baseline was provided through the leverage potential and a diversity of competencies. This was reinforced by the attention given to power distribution and conflict resolution during implementation, and the willingness to share data. The main challenges in adopting a systems perspective were rooted in a lack of consideration of systemic characteristics. This is amplified by the lack of external stakeholder engagement, the poor bridging of knowledge in the implementation phase, and the limited attention given to an initiative or policy as a learning organisation with evolutionary features.

These findings support the call for specific education in OH core competencies, and emphasise that systems thinking and associated practices are crucial skills to develop in the OH community (5, 22). This necessity is well-illustrated by Prieto et al. (23), who reported that initiative N did not engage with stakeholders or in any intersectoral collaborative processes. On the contrary, Buttigieg et al. (24) describe the evolutionary process through one century of trial and error to eradicate brucellosis in Malta, culminating in a systemic and inclusive approach. These examples also emphasise the high dependency of OH initiatives on their work environment. Low scores for conduciveness of the general environment to learning reflect the extremely challenging institutional and societal contexts of many such initiatives. Notably, higher learning scores for initiatives that are close to national or regional governments highlight the value of involving high-ranking decision makers. However, the relative ease of mobilising these stakeholders compared to engaging citizens may add to the imbalance and emphasises the necessity to actively mobilise the tacit knowledge held by citizens when embarking on an OH initiative. In addition to involving the appropriate diversity of stakeholders, transforming different bodies of observations into joint narratives of how situations emerge and might unfold in the future requires true collaboration, with dedicated processes for information sharing and bridging, and a context that enables learning. While many methods are employed to bridge and integrate knowledge, these are predominantly focused on a small set of experts or project leaders. Processes that facilitate common group learning and reduce inequality between participating disciplines and sectors are needed to further enhance knowledge integration and strengthen networks for collective action. This concurs with Léger et al. (25), who suggest that a lack of true collaboration results in low scores for sharing and learning: “Although the […] theory of change was highly relevant, reasonably well-planned and highly integrated with many disciplines and with relevant stakeholders involved from the beginning, it proved difficult to carry out the OH approach in practise, and many of the actors went back to unisectorial and disciplinary work in their daily tasks. This […] potentially reduced the societal impact of the initiative.”

Sharing and learning contribute to the evaluation phase which, on average, received lower scores than the other phases. This observation resonates with the claim that evaluation is a deficit in the current practice of OH (26). Consequently, the low scores for generative learning (addressing and revising deep, complex beliefs, assumptions, paradigms, or objectives) provide some insight into why the envisaged paradigm shift through OH has not yet occurred (27).

Geographic and Procedural Influences on Evaluation Outcomes

The small size of the sample and the geographical bias in the groups make it difficult to attribute further procedural effects to the scoring results with certainty. Yet, a median OH ratio above one is consistent across initiatives from all three continents, which suggests that more emphasis is given to operational aspects than to setting up infrastructure. This finding probably reflects a selection bias, as initiatives setting up infrastructures to share, learn and distribute leadership may not self-identify as “OH initiatives.” Secondly, high scores in “sharing methods and results” within overall low scores in sharing and learning reflect the scientific community through which the studies were selected, and the importance given to publishing methods and results in research.

From a geographical standpoint, case studies located in Africa had a higher median score and IQR (0.71, 0.18) than the European studies (0.60, 0.10). On closer inspection, they also have higher scores and less variance in the evaluation phase than the European studies. The median OH indices did not differ much (0.44 vs. 0.42), but the median OH ratios (1.32 vs. 1.22) suggest that the African studies put notably more emphasis on operational aspects and less on infrastructure. We are thus inclined to think that the funding context of the studies located in Africa (primarily international development funding) requires more effort for evaluation, and that these studies have less impact on the local infrastructure for knowledge integration than those in the European context, where domestic funding requires less accountability and promotes local infrastructure investments.

When comparing the two study groups, the median score and the IQR were higher in the ISDS group (0.71, 0.085) than in the NEOH study group (0.59, 0.067). These differences were most pronounced for the implementation phase and were also reflected in analogous differences in the OH indices. Because of the different evaluation procedures in the two groups, we cannot discern whether this indicates stricter scrutiny in the NEOH study group or whether it is due to the characteristics of the two samples.
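The group comparisons in this section rest on simple robust summaries of the semi-quantitative question scores: medians, interquartile ranges (IQR), and the OH ratio of operational to infrastructural scores. The following sketch is illustrative only; the score values are invented, and the OH ratio is simplified here to a ratio of medians rather than the full aggregation used by the published tools.

```python
from statistics import median

def iqr(scores):
    """Interquartile range via linear interpolation (illustrative)."""
    s = sorted(scores)
    n = len(s)
    def q(p):
        idx = p * (n - 1)
        lo, hi = int(idx), min(int(idx) + 1, n - 1)
        return s[lo] + (s[hi] - s[lo]) * (idx - lo)
    return q(0.75) - q(0.25)

# Hypothetical question scores (0-1) for one initiative
operations = [0.8, 0.6, 0.7, 0.9, 0.5]       # operational aspects
infrastructure = [0.4, 0.6, 0.5, 0.3]        # infrastructure aspects

# A ratio above one indicates more emphasis on operations
oh_ratio = median(operations) / median(infrastructure)
print(round(median(operations), 2), round(iqr(operations), 2),
      round(oh_ratio, 2))  # prints: 0.7 0.2 1.56
```

Under this simplification, an OH ratio above one, as observed for all three continents, means an initiative scores higher on operational aspects than on the infrastructure it sets up.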

User Satisfaction

All initiatives in this study appreciated the evaluation approach as an opportunity for a learning process and identified specific indicators or elements that they intend to improve or apply in the future. The combination of qualitative and quantitative assessments was deemed beneficial. Acquiring the knowledge necessary to understand and interpret the systems approach was found to be time-consuming and challenging, and some concepts proved too abstract for straightforward application. In these instances, the iterative process and graphical support were considered helpful, and the in-person collaboration was highly appreciated. Several initiatives (D, E, G) suggested that further collaborative evaluation workshops were needed to deepen understanding of, and responses to, intellectually challenging or politically sensitive concepts such as systems thinking, project design, and power distribution. This aligns with other authors who have advocated the use of focus groups to investigate sensitive issues (28).

A final note concerns the timing of an evaluation within an initiative's life cycle. Both tools are easily adapted to prospective, formative or retrospective evaluations with only minor rephrasing. Fonseca et al. (29) recommend prospective and repeated formative evaluations at early stages of an initiative, since these allow subsequent aspects to be anticipated and knowledge integration capacity to be enhanced in the future. This would require a short, concise questionnaire that can be used in self-evaluations without much expertise. In the present study, prospective evaluation of the learning aspect was found challenging (G, I), since many learning opportunities arise spontaneously during implementation, and many questions relate to actual lessons learnt rather than mere opportunities to learn. At the other end of the cycle, retrospective evaluations may be limited by data availability if collaborators are hard to contact (K), and because process-oriented information is usually scarce in academic or grey literature. To address this gap, we believe that such data should be required in the Checklist for One Health Epidemiological Reporting of Evidence (COHERE) guidelines (10).

Conclusion

We conclude that systems thinking and associated practices for OH require not only specific education in OH core competencies, but also methodological and institutional measures to endorse broad participation (3, 5, 22, 30). Particular attention should be given to conceiving initiatives as cyclical, iterative processes and to including evaluation as a collective learning opportunity. In this spirit, reporting on knowledge integration processes should be included in the COHERE guidelines (10).

The two tools employed in this study succeeded in triggering reflexive dialogues on the facilitation of knowledge integration and in enabling the co-production of improvements. A key factor was the social, didactic, and emotional competence of the evaluator(s). A further challenge is the scalability and comparability of the results obtained with the tools; more work should be invested in refining the indicator scales and establishing benchmarks. How the size of initiatives affects their ability to integrate knowledge also remains to be investigated: large-scale initiatives have obvious advantages in affecting broad systems, identifying and engaging stakeholders sustainably, and mobilising significant resources and competencies, while smaller initiatives favour the personal contact that supports common group learning, internal sharing, and the deep reflection required for bridging processes (31).

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author Contributions

MH, JB, and SR conceived the research and the manuscript. MH and SR analysed the data and drafted the manuscript. MH, JB, and JZ established contact with the evaluated research initiatives. NP and JZ coordinated field visits, seminars, facilitated in kind support, and established local contacts. MH coordinated the research and conducted the fieldwork. SD, LF, ML, KL, TM, CM, KM, RÖ, NP, and JZ discussed the methodology, contributed to fieldwork, and commented on the manuscript. All authors read and approved the final manuscript.

Funding

The work reported in this article was funded by the Swiss National Science Foundation (grant IZCNZ0-174587). It was conducted in association with the Action TD1404 (Network for Evaluation of One Health, NEOH) funded by the European Cooperation in Science and Technology (COST). LF and KM were supported by the Biotechnology and Biological Sciences Research Council, the Department for International Development, the Economic & Social Research Council, the Medical Research Council, the Natural Environment Research Council and the Defence Science & Technology Laboratory, under the Zoonoses and Emerging Livestock Systems (ZELS) programme, grant reference BB/L019019/1. RÖ was supported by the Federal Food Safety and Veterinary Office and the Wolfermann-Nägeli Foundation. We also received financial support from University of Zurich's Graduate Campus, the Swiss Institute for Tropical and Public Health, and in-kind support from the Kerala Veterinary and Animal Sciences University in India.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

We are grateful to the International Society for Disease Surveillance for facilitating the initial contact with the evaluated research initiatives. We thank Dr. Meghan Davis, Dr. Victor del Rio, and Dr. Cassidy Rist for valuable input and support. Furthermore, we would like to thank the four reviewers who scrutinised previous versions of the manuscript and helped to improve it.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpubh.2021.653398/full#supplementary-material

The EVOLvINC questionnaire

The Excel workbook with data aggregation and analysis, i.e., the sheets:

• EVOLvINC aggregation (Table 4)

• NEOH aggregation

• Criteria EVOLvINC group

• Indicators EVOLvINC group

References

1. Assmuth T, Chen X, Degeling C, Haahtela T, Irvine KN, Keune H, et al. Integrative concepts and practices of health in transdisciplinary social ecology. Socio-Ecological Pract Res. (2020) 2:71–90. doi: 10.1007/s42532-019-00038-y

2. Cork SC, Geale DW, Hall DC. One health in policy development: an integrated approach to translating science into policy. In: Zinsstag J, Schelling E, Waltner-Toews D, Whittaker M, Tanner M, editors. One Health: The Theory and Practice of Integrated Health Approaches. Wallingford: CAB International. p. 304–16. doi: 10.1079/9781780643410.0304

3. Hitziger M, Esposito R, Canali M, Aragrande M, Häsler B, Rüegg SRR. Knowledge integration in One Health policy formulation, implementation and evaluation. Bull World Health Organ. (2018) 96:211–8. doi: 10.2471/BLT.17.202705

4. Zinsstag J, Pelikan K, Hammel T, Tischler J, Flahault A, Utzinger J, et al. Reverse innovation in global health. J Public Health Emerg. (2019) 3:2. doi: 10.21037/jphe.2018.12.05

5. Frankson R, Hueston W, Christian K, Olson D, Lee M, Valeri L, et al. One health core competency domains. Front Public Health. (2016) 4:192. doi: 10.3389/fpubh.2016.00192

6. Hitziger M, Berger Gonzalez M, Gharzouzi E, Ochaíta Santizo D, Solis Miranda R, Aguilar Ferro A, et al. Patient-centered boundary mechanisms to foster intercultural partnerships in health care: a case study in Guatemala. J Ethnobiol Ethnomed. (2017) 13:44. doi: 10.1186/s13002-017-0170-y

7. Lane DC, Munro E, Husemann E. Blending systems thinking approaches for organisational analysis: reviewing child protection in England. Eur J Oper Res. (2016) 251:613–23. doi: 10.1016/j.ejor.2015.10.041

8. Aenishaenslin C, Hongoh V, Cisse HD, Hoen AG, Samoura K, Michel P, et al. Multi-criteria decision analysis as an innovative approach to managing zoonoses: results from a study on Lyme disease in Canada. BMC Public Health. (2013) 13:897. doi: 10.1186/1471-2458-13-897

9. Duboz R, Echaubard P, Promburom P, Kilvington M, Ross H, Allen W, et al. Systems thinking in practice: participatory modeling as a foundation for integrated approaches to health. Front Vet Sci. (2018) 5:303. doi: 10.3389/fvets.2018.00303

10. Davis MF, Rankin SC, Schurer JM, Cole S, Conti L, Rabinowitz P, et al. Checklist for one health epidemiological reporting of evidence (COHERE). One Health. (2017) 4:14–21. doi: 10.1016/j.onehlt.2017.07.001

11. Rüegg SR, McMahon BJ, Häsler B, Esposito R, Nielsen LR, Ifejika Speranza C, et al. A blueprint to evaluate One Health. Front Public Health. (2017) 5:20. doi: 10.3389/fpubh.2017.00020

12. Rüegg SR, Häsler B, Zinsstag J editors. Integrated Approaches to Health: A Handbook for the Evaluation of One Health. 1st ed. Wageningen: Wageningen Academic Publishers (2018). doi: 10.3920/978-90-8686-875-9

13. Hitziger M, Aragrande M, Berezowski JA, Canali M, Del Rio Vilas V, Hoffmann S, et al. EVOLvINC: EValuating knOwLedge INtegration Capacity in multistakeholder governance. Ecol Soc. (2019) 24:art36. doi: 10.5751/ES-10935-240236

14. Rüegg SR, Rosenbaum Nielsen L, Buttigieg SC, Santa M, Aragrande M, Canali M, et al. A systems approach to evaluate One Health initiatives. Front Vet Sci. (2018) 5:23. doi: 10.3389/fvets.2018.00023

15. Scholz RW, Steiner G. The real type and ideal type of transdisciplinary processes: part II—what constraints and obstacles do we meet in practice? Sustain Sci. (2015) 10:653–71. doi: 10.1007/s11625-015-0327-3

16. Steelman T, Nichols EG, James A, Bradford L, Ebersöhn L, Scherman V, et al. Practicing the science of sustainability: the challenges of transdisciplinarity in a developing world context. Sustain Sci. (2015) 10:581–99. doi: 10.1007/s11625-015-0334-4

17. Lebov J, Grieger K, Womack D, Zaccaro D, Whitehead N, Kowalcyk B, et al. A framework for One Health research. One Health. (2017) 3:44–50. doi: 10.1016/j.onehlt.2017.03.004

18. Woods A, Bresalier M. One health, many histories. Vet Rec. (2014) 174:650–4. doi: 10.1136/vr.g3678

19. Lee K, Brumme ZL. Operationalizing the One Health approach: the global governance challenges. Health Policy Plan. (2013) 28:778–85. doi: 10.1093/heapol/czs127

20. Galaz V, Leach M, Scoones I, Stein C. The political economy of One Health research and policy. STEPS Working Paper 81. Brighton: STEPS Centre (2015).

21. O'Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med. (2014) 89:1245–51. doi: 10.1097/ACM.0000000000000388

22. Togami E, Gardy JL, Hansen GR, Poste GH, et al. Core competencies in one health education: what are we missing? Natl Acad Med. (2018). Available online at: https://nam.edu/core-competencies-in-one-health-education-what-are-we-missing/ (accessed June 6, 2018). doi: 10.31478/201806a

23. Muñoz-Prieto A, Nielsen LR, Martinez-Subiela S, Mazeikiene J, Lopez-Jornet P, Savić S, et al. Application of the NEOH framework for self-evaluation of One Health elements of a case-study on obesity in European dogs and dog-owners. Front Vet Sci. (2018) 5:163. doi: 10.3389/fvets.2018.00163

24. Buttigieg SC, Savic S, Cauchi D, Lautier E, Canali M, Aragrande M. Brucellosis control in Malta and Serbia: a One Health evaluation. Front Vet Sci. (2018) 5:147. doi: 10.3389/fvets.2018.00147

25. Léger AL, Stärk K, Rushton J, Nielsen LR. A One Health evaluation of the University of Copenhagen Research Centre for Control of Antibiotic Resistance. Front Vet Sci. (2018) 5:194. doi: 10.3389/fvets.2018.00194

26. Baum SE, Machalaba C, Daszak P, Salerno RH, Karesh WB. Evaluating one health: are we demonstrating effectiveness? One Health. (2016) 3:5–10. doi: 10.1016/j.onehlt.2016.10.004

27. Harrison S, Kivuti-Bitok L, Macmillan A, Priest P. EcoHealth and one health: a theory-focused review in response to calls for convergence. Environ Int. (2019) 132:105058. doi: 10.1016/j.envint.2019.105058

28. Jordan J, Lynch U, Moutray M, O'Hagan M-T, Orr J, Peake S, et al. Using focus groups to research sensitive issues: insights from group interviews on nursing in the Northern Ireland “Troubles.” Int J Qual Methods. (2007) 6:1–19. doi: 10.1177/160940690700600401

29. Fonseca AG, Torgal J, de Meneghi D, Gabriël S, Coelho AC, Vilhena M. One Health-ness evaluation of cysticercosis surveillance design in Portugal. Front Public Health. (2018) 6:74. doi: 10.3389/fpubh.2018.00074

30. Queenan K, Garnier J, Nielsen LR, Buttigieg S, De Meneghi D, Holmberg M, et al. Roadmap to a One Health agenda 2030. CAB Rev Perspect Agric Vet Sci Nutr Nat Resour. (2017) 12:17. doi: 10.1079/PAVSNNR201712014

31. Hanin MCE, Queenan K, Savic S, Rüegg SR, Häsler B. A One Health evaluation of the Southern African Centre for Infectious Disease Surveillance. Front Vet Sci. (2018) 5:33. doi: 10.3389/fvets.2018.00033

Keywords: evaluation, social determinants of health, One Health, disease surveillance, governance, project life cycle, knowledge integration, transdisciplinarity

Citation: Hitziger M, Berezowski J, Dürr S, Falzon LC, Léchenne M, Lushasi K, Markosyan T, Mbilo C, Momanyi KN, Özçelik R, Prejit N, Zinsstag J and Rüegg SR (2021) System Thinking and Citizen Participation Is Still Missing in One Health Initiatives – Lessons From Fifteen Evaluations. Front. Public Health 9:653398. doi: 10.3389/fpubh.2021.653398

Received: 14 January 2021; Accepted: 29 April 2021;
Published: 04 June 2021.

Edited by:

Pierre Echaubard, SOAS University of London, United Kingdom

Reviewed by:

Will Allen, Independent Researcher, Christchurch, New Zealand
Daniela Patricia Figueroa, Adolfo Ibáñez University, Chile

Copyright © 2021 Hitziger, Berezowski, Dürr, Falzon, Léchenne, Lushasi, Markosyan, Mbilo, Momanyi, Özçelik, Prejit, Zinsstag and Rüegg. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Simon R. Rüegg, srueegg@vetclinics.uzh.ch

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.