BRIEF RESEARCH REPORT article

Front. Vet. Sci., 24 March 2023
Sec. Veterinary Epidemiology and Economics
Volume 10 - 2023 | https://doi.org/10.3389/fvets.2023.1107122

Capturing systematically users' experience of evaluation tools for integrated AMU and AMR surveillance

  • 1Department for Food Safety, Veterinary Issues and Risk Analysis, Danish Agriculture and Food Council, Copenhagen, Denmark
  • 2Department of Veterinary and Animal Sciences, University of Copenhagen, Frederiksberg, Denmark
  • 3ASTRE, Université de Montpellier, CIRAD, INRAE, Montpellier, France
  • 4Laboratoire National de l'Elevage et de Recherches Vétérinaires, Institut Sénégalais de Recherches Agricoles, Dakar, Senegal
  • 5CIRAD, UMR ASTRE, Dakar, Senegal
  • 6Veterinary Epidemiology Economics and Public Health Group, Department of Pathobiology and Population Sciences, Royal Veterinary College, London, United Kingdom
  • 7University of Lyon, French Agency for Food, Environmental and Occupational Health & Safety (ANSES), Laboratory of Lyon, Epidemiology and Surveillance Support Unit, Lyon, France
  • 8Department of Veterinary Sciences, University of Turin, Grugliasco (Turin), Italy
  • 9Département de pathologie et microbiologie, Université de Montréal, Saint-Hyacinthe, QC, Canada
  • 10Department of Animal Health, Welfare and Food Safety, Norwegian Veterinary Institute, Ås, Norway
  • 11Department of Agricultural and Food Sciences, University of Bologna, Bologna, Italy
  • 12Laboratory of Animal Health Economics, Aristotle University of Thessaloniki, Thessaloniki, Greece
  • 13Veterinary Epidemiology Unit, Sciensano, Brussels, Belgium
  • 14National Food Institute, Technical University of Denmark, Lyngby, Denmark

Tackling antimicrobial resistance (AMR) is a goal for many countries. Integrated surveillance of antimicrobial use (AMU) and resistance is a prerequisite for effective risk mitigation. Regular evaluation of any surveillance is needed to ensure its effectiveness and efficiency. The question is how to evaluate integrated surveillance of AMU and AMR specifically. In an international network called CoEvalAMR, we have developed guidelines for selecting the most appropriate tools for such an evaluation. Moreover, we have assessed different evaluation tools as examples, using a country case format and a methodology focused on the user's experience. This paper describes the updated methodology, which consists of separate brief descriptions of the case and of the tool. Moreover, there are 12 functional aspects and nine content themes, which should be scored using a 4-tiered scale. Additionally, four Strengths, Weaknesses, Opportunities, Threats (SWOT) questions should be addressed. Results are illustrated using radar diagrams. An example of the application of the updated methodology is given using the ECoSur evaluation tool. No tool can cover all evaluation aspects comprehensively in a user-friendly manner, so the choice of tool must be based upon the specific evaluation purpose. Moreover, adequate resources, time and training are needed to obtain useful outputs from the evaluation. Our updated methodology can be used by tool users to share their experience with available tools and thereby assist other users in identifying the tool best suited to their evaluation purpose. Additionally, tool developers can obtain valuable information for further improvement of their tools.

Introduction

It is a common goal of society to keep antimicrobials effective for the coming generations. One way of supporting this goal is to have surveillance in place for antimicrobial use (AMU) and resistance in different domains and sectors. This should preferably be done in an integrated manner, because genes coding for antimicrobial resistance (AMR) are spread within and among the human, animal and environmental domains. To ensure surveillance effectiveness and efficiency, there is a need to evaluate existing surveillance systems or components at regular intervals (1). This will help to reach the objectives of surveillance, which include determining why and where action is needed to modify AMU and thereby reduce AMR.

Several tools have been developed to assist in such evaluations. Evaluation may be done by different types of professionals, who may be acting internally or externally to the surveillance system under evaluation. The users will have varying levels of experience in surveillance evaluation, access to detailed data and time to dedicate to the evaluation. Moreover, evaluation may be pursued for different purposes. This makes it necessary to choose the right tool for a given evaluation context, team and question.

During 2019–2020, an international network of scientists called “Convergence in evaluation frameworks for integrated surveillance of AMU and AMR” (CoEvalAMR) developed guidance for the evaluation of integrated surveillance for AMU and AMR (2). In this network, we defined integrated surveillance of AMU and AMR in the context of One Health as surveillance that is based on a systemic, cross-sectoral, multi-stakeholder perspective to inform mitigation decisions with the aim of keeping antimicrobials effective for future generations (https://guidance.fp7-risksur.eu/welcome/evaluation-of-surveillance/). In Phase 1 of the CoEvalAMR project, a methodology was developed to gather user feedback on evaluation tools for integrated surveillance for AMU and AMR in an easy and standardized way. The focus was on compiling users' subjective experience of applying the tools; the approach was partly inspired by websites that use user feedback and scoring to inform the decision-making of other users. The methodology consisted of four complementary approaches. The first was a brief description of the case study, whereas the second covered the assessment of 11 pre-defined functional aspects of the tool, including workability regarding the need for data, time and people (Table 1). The third approach covered an assessment of seven pre-defined content themes related to the tools' scope (Table 2). The functional aspects and content themes were scored semi-quantitatively on a scale from 1 to 4, and a comment explaining each score was requested. The fourth approach captured the subjective perception of the tool assessors through an assessment of strengths, weaknesses, opportunities, and threats (SWOT) (Table 3).

Table 1. Description of the updated list of 12 functional aspects, sorted into five groups—text in bold reflects changes to the original methodology.

Table 2. Updated description of the nine content themes.

Table 3. Description of the four questions used for the SWOT-like approach, divided into the original and updated wording.

During Phase 1, six tools were assessed using the described methodology, by applying them to eight national surveillance systems as country cases. The tools were: ATLASS (the Assessment Tool for Laboratories and AMR Surveillance Systems, developed by the Food and Agriculture Organization (FAO) of the United Nations), ECoSur (Evaluation of Collaboration for Surveillance tool), ISSEP (Integrated Surveillance System Evaluation Project, now called ISSE), NEOH (developed by the EU COST Action “Network for Evaluation of One Health”), PMP-AMR (the Progressive Management Pathway tool on AMR, developed by the FAO), and SURVTOOLS (developed in the EU FP7 project RISKSUR). An overall description of this work can be found in (2), whereas (3) describes the Danish case study in detail. Moreover, a description of users' experience for each country case study can be found on the website of CoEvalAMR (https://guidance.fp7-risksur.eu/welcome/case-studies/). Some of these case studies consisted of full evaluations based on the tools used. In others, the focus was mostly on the tool, and therefore the case study only included a superficial evaluation of the surveillance system.

We learned that some tools can be directly used to evaluate a given question, a surveillance component or a system. Such tools have a pre-defined set of steps that need to be conducted. Other tools are better described as frameworks, which provide a theoretical background and explanation as to how the evaluation should be designed. These frameworks guide users toward the most appropriate evaluation method based on the evaluation question and context. According to Calba et al. (4), a framework acts as a skeletal support for something being constructed. Hence, it is an organization of concepts that provides a focus for inquiry. In contrast, Calba et al. (4) define a tool as a process with a specific purpose. Therefore, a tool is used as a means of performing an operation or achieving an end. The ISSE is an example of a framework (5), whereas PMP-AMR is an example of a tool (6).

Among the tools and frameworks investigated, only the ISSE framework is dedicated specifically to the evaluation of integrated surveillance of AMU and AMR, outlining a logic model that can be used to conceptualize surveillance evaluations. Other tools, such as ATLASS and PMP-AMR, are designed specifically for AMU and AMR surveillance and management, NEOH for One Health initiatives in general, SURVTOOLS for surveillance in general, and ECoSur for integrated collaboration (2).

It was concluded that all tools investigated were suitable to evaluate relevant—but not necessarily all—aspects of integrated surveillance for AMU and AMR. Moreover, each tool has its specific purpose and consequently distinct advantages and drawbacks. This makes it important to define a clear evaluation question and objective to choose the right tool. We also learned that the complexity of the tool application appeared to be proportional to the comprehensiveness of the evaluation results. Moreover, governance and impacts of integrated surveillance for AMU and AMR were not fully covered by the assessment of the tools in Phase 1.

Hence, ample experience was collected regarding assessment of the tools and the developed methodology. It was concluded that the methodology worked, but the wording and definitions could be clearer, the evaluation coverage could be broadened, and the scoring system could be more standardized. It was also of interest to understand better the expectations of tool users. Moreover, we wanted to compare the CoEvalAMR methodology with the assessment process used in the newly published Surveillance and Information Sharing Operational Tool (SISOT) (7), developed by the Tripartite (FAO/WHO/WOAH) of the United Nations (UN). These aspects have been dealt with in Phase 2 of the CoEvalAMR project, which runs from 2021 to 2023. The objective of this paper is to present the updated methodology, including an example showing the changes, as well as the considerations behind the update.

Materials and methods

In spring 2021, monthly virtual meetings began in the network, allowing members to convene and discuss how to update the methodology. A common document was set up enabling all members to provide comments and suggestions, which were subsequently discussed with the aim of reaching consensus. This process continued until autumn 2022. Three elements were discussed: (1) lessons learned from using the initially developed methodology, (2) an analysis of the expectations of tool users, and (3) the assessment process used in SISOT. The lessons learned were gathered through brainstorming in the group's monthly meetings.

Regarding the expectations of the tool users, we considered the results of a survey by Rüegg et al. (8). The survey was conducted in Phase 1 of CoEvalAMR to gather information on the evaluation of existing or planned AMU and AMR surveillance systems, on people's use of available evaluation tools, and on their expectations of such tools. An analysis of the 23 answers received was undertaken. We studied and discussed how best to use these results to update the CoEvalAMR methodology.

Further, we looked at the list of functional aspects and content themes in SISOT and assessed whether any of these would be of value for the update of the methodology. We also studied the definitions, use of scales, and visual appearance. Based on discussions in the CoEvalAMR network group, we aimed to identify additional functional aspects that would make the description of the individual tools more complete.

Finally, the updated methodology was tested using a case study undertaken as part of our network. Here, ECoSur was applied to the French surveillance system for AMU, AMR and antimicrobial residues in humans, animals and the environment (9). The overall objective was to evaluate the degree and quality of multisectoral collaboration within the surveillance system. In accordance with the aim of ECoSur, the focus was on evaluating the organization, functioning and functionalities of collaboration taking place in the French multi-sectoral surveillance system. The tool is available online (https://survtools.org/wiki/surveillance-evaluation/doku.php?id=quality_of_the_collaboration); for more information about ECoSur, please see Bordier et al. (10).

Results

Lessons learned from use of the initially developed evaluation methodology

The lessons learned on the methodology in Phase 1 of the network were the following:

• It takes time to make an assessment, as the assessor must first become acquainted with the tool, then collect the necessary information, and finally apply the tool.

• Inevitably, there is a high level of subjectivity in the assessment process, especially when developers assess their own tools, but also when users are not acquainted with the tool.

• Clear definitions for all functional aspects and content themes—including the individual scores—are needed to ensure common understanding and harmonized scoring across future assessors.

• A justification is required along with the semi-quantitative scores to ensure meaningful interpretation because a specific score can be given for different reasons.

• To illustrate variation between assessors, an approach should be developed to combine the scores from different assessors and different case studies.

• Regarding the SWOT-analysis (Table 3), the question related to opportunities was misinterpreted by some of the tool evaluators, who referred to negative aspects of the tool instead of positive aspects.

Analysis of expectations of tool users

The analysis of the 23 answers to the questionnaire undertaken by Rüegg et al. (8) showed that the respondents emphasized the following:

• The tools should provide clear results and evidence of data integration quality that can be used with confidence in research or to inform decision making.

• Standardized guidance should be available regarding which tool to use, depending on the evaluation needs.

• There should be an increased awareness of the different integrated evaluation tools available to stakeholders and in which contexts each tool could be used.

• It should be possible to undertake different levels of evaluation, from superficial to deep, to enable, e.g., a rapid “general overview” evaluation combined with a more detailed evaluation of selected components.

• Standardized evaluation attributes and measurements across all evaluation tools would enable comparisons to be made between evaluations that use different tools.

• Standardized evaluation methods should enable evaluations that are comparable between different components.

• All tools should be free and easy to use with services available to guide users.

• Clear and easy-to-use tools would help to minimize the bias and subjectivity of the person evaluating the system.

• There should be an opportunity to get assistance from an expert to discuss the different tools available and how and when to use them.

Essentially, people would like to see a one-stop shop with standardized tools that are flexible and easy to use. This may not be realistic, but it draws attention to the need for an approach that is simple, transparent, and clearly defined. It also means that there should be a balance between the more detailed parts of the evaluation and the general overview.

Comparison between the CoEvalAMR methodology and SISOT

The SISOT has recently been developed by the Tripartite of the UN to support national authorities in establishing or strengthening their coordinated multisectoral surveillance and information sharing for zoonotic diseases (7). SISOT can be used to identify useful tools and resources for creating, implementing, and/or maintaining coordinated surveillance capacity and information-sharing platforms. The intention is to build a repository of tools and resources that helps users identify the most suitable ones. Hence, the objective is similar to that of CoEvalAMR, which focuses on AMU and AMR surveillance, but in a wider context, as SISOT targets all zoonotic diseases and health threats shared between different domains.

The SISOT Evaluation Matrix describes a tool or resource using a standardized set of criteria that can be used to evaluate whether it is fit for a given purpose. The matrix can be applied to any tool or resource that can assist in completing a step toward the creation of a coordinated zoonotic surveillance system. The criteria are used to identify strengths and weaknesses in an objective and unbiased way. There are nine categories of criteria: (1) accessibility, (2) language, (3) data needs and management, (4) data analysis and interpretation, (5) ease of use, (6) flexibility, (7) acceptability, (8) One Health, and (9) tool impact. For each category, the evaluation must address a series of pre-defined questions. There are between 3 and 10 questions per category, and each question is scored on a scale of 1–5 depending on the situation observed. Radar diagrams provide a graphical presentation of the scoring results, illustrating the scores on nine axes corresponding to the categories. An overall evaluation criteria score is expressed as a percentage, up to a maximum of 100%. FAO has been undertaking country pilots using the SISOT Evaluation Matrix (7).
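To make the mechanics concrete, the sketch below shows how question-level scores could be rolled up into the nine category axes of a radar diagram and into an overall percentage. The exact aggregation rule used in SISOT is not described in this paper, so the mean-based calculation, the number of questions per category and all score values below are illustrative assumptions only.

```python
from statistics import mean

# Illustrative question-level scores (scale 1-5), grouped by the nine SISOT
# criteria categories; real categories contain between 3 and 10 questions.
answers = {
    "Accessibility": [5, 4, 4],
    "Language": [3, 3, 4],
    "Data needs and management": [4, 2, 3, 3],
    "Data analysis and interpretation": [3, 4, 3],
    "Ease of use": [2, 3, 3],
    "Flexibility": [4, 4, 5],
    "Acceptability": [5, 4, 4],
    "One Health": [4, 3, 3, 4],
    "Tool impact": [2, 3, 3],
}

# One value per radar axis (assumed: the mean of the question scores).
category_scores = {cat: mean(vals) for cat, vals in answers.items()}

# Assumed overall "evaluation criteria score": mean category score relative to the maximum of 5.
overall_pct = 100 * mean(category_scores.values()) / 5

for cat, score in category_scores.items():
    print(f"{cat}: {score:.1f} / 5")
print(f"Overall evaluation criteria score: {overall_pct:.0f}%")
```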

Based on the investigation of the SISOT Evaluation Matrix and discussions in the CoEvalAMR network, we identified that the addition of the following functional aspects would make the CoEvalAMR methodology more complete:

• Type of approach: framework or tool,

• Scoring-system method (quantitative, semi-quantitative or qualitative),

• Required level of knowledge of users regarding surveillance, epidemiology, etc.

• Required training to be acquainted with the tool,

  • Coverage of the tool: human, animal, environmental and food domains, and combinations thereof,

• Coverage of gender aspects,

• Accessibility, and

• Languages in which the tool is available.

The updated CoEvalAMR methodology

The following updates were made to the existing CoEvalAMR methodology: First, the description of the case study was updated (Supplementary Table S1). Then, a general description of the tool, based on 10 functional aspects, was added (Supplementary Table S2). One of these aspects was gender equity. The list of functional aspects to be scored is presented in Table 1, along with the scoring system, which is now defined in more detail than before. The functional aspects are now classified into five groups. Similarly, the updated content themes used to describe the scope of the tool are presented in Table 2, along with the original definition and the updated definition applied in Phase 2. Two new themes were included: governance and impact. The scoring system for the content themes was maintained, implying a four-tiered scale, where 1 = not covered, 2 = not well covered, 3 = more or less covered, 4 = well covered, in line with Sandberg et al. (2). The challenges related to the four SWOT questions were solved by using the words “strengths,” “weaknesses,” “opportunities,” and “threats” (Table 3).
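Since every score must be accompanied by a justification, a scored item can be thought of as a small record coupling the theme, the four-tiered score and the explanatory comment. The sketch below is a minimal illustration of that pairing, assuming a simple in-code representation; the actual CoEvalAMR templates are distributed as an Excel matrix, and the example entry is invented rather than taken from any case study.

```python
from dataclasses import dataclass

# Four-tiered scale for content themes, as defined in the updated methodology.
SCALE = {1: "not covered", 2: "not well covered", 3: "more or less covered", 4: "well covered"}

@dataclass
class ThemeScore:
    theme: str          # one of the nine content themes in Table 2
    score: int          # 1-4, interpreted via SCALE
    justification: str  # free-text comment explaining why this score was given

    def __post_init__(self) -> None:
        if self.score not in SCALE:
            raise ValueError(f"score must be between 1 and 4, got {self.score}")

# Invented example entry, for illustration only.
entry = ThemeScore(
    theme="Governance",
    score=3,
    justification="Collaboration governance is addressed, but funding structures are not.",
)
print(f"{entry.theme}: {entry.score} ({SCALE[entry.score]}) - {entry.justification}")
```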

Visualization of the results was improved by developing radar diagrams as a way of presenting the scoring of functional aspects and content themes. An example is given in Figure 1A for the functional aspects and in Figure 1B for the content themes. Nine axes were judged to be the maximum number that could be used while keeping the graphical output readable. Therefore, some of the functional aspects were combined. Table 3 contains the original four questions used for the SWOT-like analysis along with the revised questions. The templates are now combined in an Excel matrix, which can be found on the website of CoEvalAMR (https://guidance.fp7-risksur.eu/case-studies/).
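As an illustration of how such a nine-axis diagram can be produced, the sketch below draws a radar plot with matplotlib from a set of content-theme scores. This is not the consortium's plotting code, and the theme labels and score values are placeholders rather than the results shown in Figure 1.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder labels and scores for the nine axes (four-tiered scale, 1-4).
themes = [f"Theme {i}" for i in range(1, 10)]
scores = [4, 3, 2, 4, 3, 1, 4, 3, 2]

# One angle per axis; repeat the first point to close the polygon.
angles = np.linspace(0, 2 * np.pi, len(themes), endpoint=False)
angles = np.concatenate([angles, angles[:1]])
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, color="tab:blue")
ax.fill(angles, values, color="tab:blue", alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(themes)
ax.set_yticks([1, 2, 3, 4])
ax.set_ylim(0, 4)
ax.set_title("Content themes (illustrative scores)")
plt.show()
```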

Figure 1. Radar diagrams depicting graphically the scoring of the functional aspects (A) and the content themes (B), based upon a French case study using ECoSur. Source: (https://guidance.fp7-risksur.eu/case-studies/).

The Excel matrix using the revised methodology was pilot tested as part of the French case study on the evaluation of collaboration within the French surveillance system for AMR, AMU and antimicrobial residues using ECoSur (11). The completed matrix can be consulted in the CoEvalAMR case studies repository (see Case 9 at https://guidance.fp7-risksur.eu/case-studies/). Briefly, the assessment demonstrated that although ECoSur is somewhat difficult to use (collection of complex data and need for prior knowledge/training before use), it covered a large part of the One Health aspects and generated actionable outputs (Figure 1A). In addition, most content themes identified by the CoEvalAMR consortium as relevant to the evaluation of integrated surveillance of AMU and AMR were covered by ECoSur, with the exception of AMU/AMR-specific aspects (ECoSur being a generic tool) and impacts (Figure 1B).

Discussion

In Phase 1 of the CoEvalAMR network project, it was found that the users scored the individual functional aspects and content themes in a slightly subjective way. As the project progressed, a higher degree of consensus arose regarding interpretation of the methodology, including the way of scoring (2). Moreover, we discovered that the third question in the SWOT analysis was misunderstood by some of the users. We expect that with the update of the methodology, subjectivity will be reduced. Similarly, the likelihood of misunderstanding the questions will be lower.

The importance of considering gender and equity to tackle AMR has been underlined by the WHO (12, 13) but is currently rarely integrated into surveillance system evaluation. As explained by WHO, unless we think about how AMU and AMR affect men and women and different groups in society in their day-to-day lives at home, at work and in their communities, we may inadvertently design programs that fail to address what matters. As a result, effectiveness may be reduced and impacts lost, and we may even contribute to gaps and inequities (12). As a first step toward enhancing the inclusion of this aspect, we have added consideration of gender to the list providing a general description of the tools (Supplementary Table S2). Still, we foresee a discussion on how to assess and evaluate gender aspects and other equity issues of importance for AMU and AMR. These issues may become part of a future Phase 3 of our network. Here, chapter 4 in the Handbook for Evaluation of One Health may provide inspiration for the next steps to take (14).

The respondents of the questionnaire survey undertaken as part of Phase 1 of CoEvalAMR pointed to the need for standardization of tools (8). In response, we have focused on standardizing our methodology by introducing clearer definitions and scales. The question arises as to what extent further standardization of our methodology is needed. It may be argued that standardization is an essential requirement in academia, but a less important issue for persons involved with the human health and veterinary authorities, where the process initiated by the tool would be more important than the tool itself. Moreover, the intention is not to compare tools, but to describe them to such an extent that future users will be guided in choosing the right tool for their purpose.

According to the survey, users prefer tools that are easy to use, without much need for preparation or training (8). The question is how this can be operationalized. Grants usually target the development of tools, whereas limited resources are available for supporting their uptake and long-term maintenance. Moreover, the results of simple evaluations may not be sufficiently valuable. Still, it is relevant to discuss the balance between required training, allocated resources, detail and overview. To strike this balance, the intended outcome of the evaluation becomes crucial. This reiterates the need for a careful description of the evaluation purpose before choosing the evaluation tool.

In our updating of the methodology, we have been inspired by the SISOT matrix developed by the Tripartite. The SISOT matrix is very detailed and can be used for evaluating different kinds of tools and resources for any zoonotic risk-reducing activity. The questions and possible ways of answering show how well-developed the SISOT matrix is. Our revised CoEvalAMR tool targets integrated surveillance for AMU and AMR. Based upon our own experience as well as the French case study (11), the CoEvalAMR methodology appears simpler and quicker to use than the SISOT matrix, while still containing most of the elements that form part of the SISOT matrix. In conclusion, each approach was developed for its own objectives and has its own value.

The case studies reported by Sandberg et al. (2) and Nielsen et al. (3) and the French case study (9, 11) covered both multi-component and single component surveillance systems. Multisectoral means that more than one sector is working together in a joint program or response to an event. Similarly, multidisciplinary means collaboration across several disciplines. Taking a One Health approach means that all relevant sectors and disciplines are involved (15). However, it does not imply that all sectors must work together and at all stages of surveillance. The key regarding the degree of integration is relevance. For example, the Competent Authority may need AMU and AMR data in animals and humans to evaluate the effect of a ban on use of a specific kind of antimicrobial in agriculture. However, data on AMR from the environment may not be needed. In contrast, if we are trying to understand the spread of AMR in the environment, data about AMU and AMR are needed from all three sectors. The methodology we have developed is useful to provide an overview of the advantages and disadvantages of the tool investigated, irrespective of whether the tool was used for evaluation of an integrated or non-integrated surveillance system.

Evaluation of One Health surveillance is an active field, and a growing number of evaluation tools is becoming available. The Canadian One Health Evaluation of Antimicrobial Use and Resistance Surveillance (OHE-AMURS) tool is an example of such a new tool. It was created to evaluate progress toward integrated, One Health surveillance of AMU and AMR, focusing among other things on policy and programme sustainment (16). In Sandberg et al. (2), six tools were retained for evaluation. The ambition in Phase 2 of CoEvalAMR is to apply the updated evaluation methodology to other tools, in accordance with the needs or interests of the network members. The French case study is an example of this. It showed that there is a diversity of individual surveillance programs in France (9). This makes it difficult to get an overview of the surveillance system and its level of integration (11). The ECoSur evaluation provided this overview and helped to identify recommendations, which were shared with policy makers to improve One Health collaborations within the French system for surveillance of AMR, AMU, and AM residues (11).

An ongoing common activity in WG4 of CoEvalAMR is an evaluation of the OH-EpiCap tool, which is under development by the MATRIX consortium, funded by the One Health European Joint Programme (17). In a joint paper about OH-EpiCap, we will investigate how to combine the scores of the different assessors and case studies in a way that ensures the variation is reported.

Other persons involved in surveillance evaluation are welcome to make use of our methodology. Moreover, the tool developers can get valuable information from our case studies for further improvements of their tools.

Conclusion

The CoEvalAMR evaluation methodology was developed with a focus on the users' experience. It is free to use, simple and easy to work with. It has been updated to improve clarity, broaden the evaluation coverage, increase standardization, and improve the visual appearance. The update was based upon the CoEvalAMR network group's experience from applying the methodology in country case studies, a questionnaire focused on users' needs, and a comparison with the SISOT Evaluation Matrix developed by the Tripartite.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author contributions

LA took the initiative to update the methodology and headed the revision process through a series of meetings in WG4 of the CoEvalAMR network. Based upon the inputs from the group, LA drafted the first version of the paper, which was commented on by all authors. The Excel version of the CoEvalAMR tool was developed by PM. LC is responsible for the French case study. All authors read and approved the final version of the manuscript.

Funding

The project was funded by the Canadian Institutes for Health Research through the Joint Programming Initiative on Antimicrobial Resistance.

Acknowledgments

Mélanie Colomb-Cotinat from Santé Publique France and Clémence Bourély from the French Ministry of Agriculture and Food are acknowledged for their contributions to the French case study. The SISOT working group within the Tripartite is acknowledged for providing access to the SISOT work. The entire CoEvalAMR network is acknowledged for inspiration and comments.

Conflict of interest

LA works for an organization that gives advice to farmers and meat producing companies.

The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fvets.2023.1107122/full#supplementary-material

References

1. Aenishaenslin C, Häsler B, Ravel A, Parmley J, Stärk K, Buckeridge D, et al. Evidence needed for antimicrobial resistance surveillance systems. Bull World Health Organ. (2019) 97:283–9. doi: 10.2471/BLT.18.218917

2. Sandberg M, Hesp A, Aenishaenslin C, Bordier M, Bennani H, Bergwerff U, et al. Assessment of evaluation tools for integrated surveillance of antimicrobial use and resistance based on selected case studies. Front Vet Sci. (2021) 8:620998. doi: 10.3389/fvets.2021.620998

3. Nielsen LR, Alban L, Ellis-Iversen J, Mintiens K, Sandberg M. Evaluating integrated surveillance of antimicrobial resistance: experiences from use of three evaluation tools. Clin Microbiol Infect. (2020) 26:1606–11. doi: 10.1016/j.cmi.2020.03.015

4. Calba C, Goutard FL, Hoinville L, Hendrikx P, Lindberg A, Saegerman C, et al. Surveillance systems evaluation: a systematic review of the existing approaches. BMC Public Health. (2015) 15:448. doi: 10.1186/s12889-015-1791-5

5. Aenishaenslin C, Häsler B, Ravel A, Parmley EJ, Mediouni S, Bennani H, et al. Evaluating the integration of one health in surveillance systems for antimicrobial use and resistance: a conceptual framework. Front Vet Sci. (2021) 8:611931. doi: 10.3389/fvets.2021.611931

6. FAO. FAO Progressive Management Pathway for Antimicrobial Resistance (FAO-PMP-AMR). (2023). Available online at: https://www.fao.org/antimicrobial-resistance/resources/tools/fao-pmp-amr/en/ (accessed February 20, 2023)

7. Tripartite. Surveillance and Information Sharing Operational Tool – An Operational Tool of the Tripartite Zoonoses Guide. Geneva: World Health Organization (WHO), Food and Agriculture Organization of the United Nations (FAO) and World Organisation for Animal Health (2022). Available online at: https://www.who.int/publications/i/item/9789240053250 (accessed October 31, 2022).

8. Rüegg SR, Antoine-Moussiaux N, Aenishaenslin C, Alban L, Bordier M, Bennani H, et al. Guidance for evaluating integrated surveillance of antimicrobial use and resistance. CABI One Health. (2022). doi: 10.1079/cabionehealth.2022.0007

9. Collineau L, Bourély C, Rousset L, Berger-Carbonne A, Ploy MC, Pulcini C, et al. Towards One Health surveillance of antibiotic resistance, use and residues of antibiotics in France: characterisation and mapping of existing programmes in humans, animals and the environment. medRxiv [Preprint] (2022). doi: 10.1101/2022.11.14.22281639

10. Bordier M, Delavenne C, Thuy Thi Nguyen D, Goutard FL, Hendrikx P. One health surveillance: a matrix to evaluate multisectoral collaboration. Front Vet Sci. (2019) 6:109. doi: 10.3389/fvets.2019.00109

11. Collineau L, Rousset L, Colomb-Cotinat M, Bordier M, Bourély C. Moving towards One Health surveillance of antimicrobial resistance in France: an evaluation of the level of collaboration within the surveillance system. In: Proceedings of the ESCAIDE Conference; Stockholm and online, 23–25 November 2022. Available online at: https://www.medrxiv.org/content/10.1101/2022.11.14.22281639v1.full (accessed February 11, 2023).

12. WHO. Tackling Antimicrobial Resistance (AMR) Together. Working Paper 5.0: Enhancing the focus on gender and equity (2018). WHO: Geneva (WHO/HWSI/AMR/2018.3). Available online at: WHO-WSI-AMR-2018.3-eng.pdf (accessed November 10, 2022).

13. WHO. The Fight Against Antimicrobial Resistance Requires a Focus on Gender. (2021). Document Number: WHO/EURO:2021-3896-43655-61363. Available online at: WHO-EURO-2021-3896-43655-61363-eng.pdf (accessed November 10, 2022).

14. WHO, FAO, OIE, UNEP. Strategic Framework for Collaboration on Antimicrobial Resistance – Together for One Health. Geneva: World Health Organization, Food and Agriculture Organization of the United Nations, World Organisation for Animal Health and United Nations Environment Programme (2022). Licence: CC BY-NC-SA 3.0 IGO. Available online at: https://www.who.int/publications/i/item/9789240045408 (accessed October 31, 2022).

15. Rüegg S, Häsler B, Zinsstag J. Integrated Approaches to Health – A Handbook for the Evaluation of One Health. Wageningen: Wageningen Academic Publishers (2018).

16. Haworth-Brockman M, Saxinger LM, Miazga-Rodriguez M, Wierzbowski A, Otto SJG. One health evaluation of antimicrobial use and resistance surveillance: a novel tool for evaluating integrated, one health antimicrobial resistance and antimicrobial use surveillance programs. Front Public Health. (2021) 9:693703. doi: 10.3389/fpubh.2021.693703

17. Tegene HA, Bogaardt C, Collineau L, Cazeau G, Lailler R, Reinhardt J, et al. OH-EpiCap: a semi-quantitative tool for the evaluation of One Health epidemiological surveillance capacities and capabilities. medRxiv [Preprint] (2023). doi: 10.1101/2023.01.04.23284159

Keywords: One Health assessment, integrated surveillance, evaluation, antimicrobial resistance, antimicrobial use

Citation: Alban L, Bordier M, Häsler B, Collineau L, Tomassone L, Bennani H, Aenishaenslin C, Norström M, Aragrande M, Filippitzi ME, Moura P and Sandberg M (2023) Capturing systematically users' experience of evaluation tools for integrated AMU and AMR surveillance. Front. Vet. Sci. 10:1107122. doi: 10.3389/fvets.2023.1107122

Received: 24 November 2022; Accepted: 06 March 2023;
Published: 24 March 2023.

Edited by:

Moh A. Alkhamis, Kuwait University, Kuwait

Reviewed by:

Margaret Haworth-Brockman, Max Rady College of Medicine, University of Manitoba, Canada
Simon J. G. Otto, University of Alberta, Canada

Copyright © 2023 Alban, Bordier, Häsler, Collineau, Tomassone, Bennani, Aenishaenslin, Norström, Aragrande, Filippitzi, Moura and Sandberg. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Lis Alban, lia@lf.dk
