- 1School of Management and Marketing, University of Waikato, Hamilton, New Zealand
- 2College of Nursing, University of New Mexico, Albuquerque, NM, United States
- 3Regional EEO Office, Indian Health Service, Oklahoma City, OK, United States
- 4National Indian Child Welfare Association, Portland, OR, United States
- 5Community-Campus Partnerships for Health, Raleigh, NC, United States
- 6Department of Psychology, San Jose State University, San Jose, CA, United States
- 7Department of Epidemiology and Population Health, Stanford University, Palo Alto, CA, United States
- 8College of Population Health, Center for Participatory Research, University of New Mexico, Albuquerque, NM, United States
Background: As community-engaged research (CEnR), community-based participatory research (CBPR), and patient-engaged research (PEnR) have become increasingly recognized as valued research approaches over the last several decades, there is a need for pragmatic and validated tools to assess the effective partnering practices that contribute to health and health equity outcomes. This article reports on the co-creation of an actionable pragmatic survey, shortened from validated metrics of partnership practices and outcomes.
Methods: We pursued a triple aim of preserving content validity, psychometric properties, and importance to stakeholders of items, scales, and constructs from a previously validated measure of CBPR/CEnR processes and outcomes. There were six steps in the methods: (a) established validity and shortening objectives; (b) used a conceptual model to guide decisions; (c) preserved content validity and importance; (d) preserved psychometric properties; (e) justified the selection of items and scales; and (f) validated the short-form version. Twenty-one CBPR/CEnR experts (13 academic and 8 community partners) completed a survey and participated in two focus groups to identify the content validity and importance of the original 93 items.
Results: The survey and focus group process resulted in the creation of the 30-item Partnering for Health Improvement and Research Equity (PHIRE) survey. Confirmatory factor analysis and a structural equation model of the original data set resulted in the validation of eight higher-order scales with good internal consistency and structural relationships (TLI > 0.98 and SRMR < 0.02). A reworded version of the PHIRE was administered to an additional sample, demonstrating good reliability and construct validity.
Conclusion: This study demonstrates that the PHIRE is a reliable instrument with construct validity compared to the larger version from which it was derived. The PHIRE is a straightforward, easy-to-use tool for a range of CBPR/CEnR projects that can benefit partnerships by identifying actionable changes to their partnering practices to reach their desired research and practical outcomes.
Background
As community-engaged research (CEnR), community-based participatory research (CBPR), and patient-engaged research (PEnR) have become increasingly recognized as valued research approaches over the last several decades, there is a need for models and validated tools to assess effective partnering practices that contribute to health and health equity outcomes. While many systematic reviews have identified a range of impacts of engagement practices on outcomes (1–4), the science of creating strong, reliable, and valid measurements has lagged. With increasing National Institutes of Health (NIH) and foundation funding mandates for community-academic partnerships, it is even more important to have pragmatic instruments that can serve as evaluation or collective reflection opportunities to strengthen partnership capacities to achieve desired outcomes.
Early on, building from the coalition (5), empowerment (6), organizational development and capacity (7), and team science (8) literatures, individual research projects developed their own mixed-methods evaluations, such as the Detroit Urban Research Center (9), or relied on existing tools such as the Wilder Collaboration instrument (10). More recently, multiple research projects and task forces have produced validated instruments derived from collaborative research projects with diverse populations, regions, and health issues, along with a few systematic reviews (11–13). Noteworthy among these are three national efforts that have created an engagement model and developed or identified validated tools.
The earliest effort, now called Engage for Equity (E2), was launched in 2006 with the goal of developing an assessment tool of measures and metrics of partnering processes and outcomes, and of identifying promising or best practices that contribute to health and health equity outcomes (14). In the first funding from the NIH, the University of New Mexico Center for Participatory Research team, with partner organizations and a community-academic “Think Tank” as a national advisory group and much community consultation, first reviewed the literature and created a CBPR conceptual model, composed of four domains: the political-economic and social contexts under which partnerships operate; partnering practices, including who is involved and how well partners interact in their relationships and their formal agreements; collaborative implementation of research and intervention actions; and projected outcomes, including intermediate and long-term health and social justice outcomes (15, 16). With the second NIH funding, the national team developed the Community Engagement Survey (CES) with questions from each of the CBPR model domains, conducted internet surveys of academic and community partner teams from 200 federally-funded diverse CBPR projects, and produced a first set of promising practices that contribute to outcomes (17) and psychometrically-validated scales (18). A companion E2 Key Informant Survey for principal investigators asked about the facts of each project. In the third NIH funding, the team refined the CES instrument, translated it into Spanish (19) and surveyed another 210 federally-funded partnerships, enabling refinement of psychometrics (20) and analysis of pathways of which practices contribute to outcomes, with a focus on collective empowerment and shared governance (21, 22). 
In this third stage, Partnership Data Reports1 were created to return their own data to partnered teams who attended E2 intervention workshops, based on Paulo Freire’s praxis of promoting collective reflection to strengthen team actions (23, 24). The 93 CES questions were divided into eight higher-order constructs, which also facilitated team reflections: partnership capacity, structural governance, commitment to collective empowerment, relationships, community engagement in research, synergy, intermediate outcomes, and long-term outcomes.
Several related measurement projects are relevant to this work. The NIH-funded Measurement Approaches to Partnership Success (MAPS) project was launched in 2019 by the Detroit Community-Academic Urban Research Center, with the goal of developing and validating an instrument focused on partnership success (25, 26). The MAPS model was developed through a CBPR process with their national advisory board, basing constructs on their conceptual framework of partnership evaluation refined over twenty years. Internet surveys were conducted with 55 successful CBPR partnerships, defined as having existed for over 6 years, typically with multiple funding cycles or projects. The validated questionnaire organized 81 questions into seven dimensions: equity in the partnership, reciprocity, competence enhancement, synergy, sustainability, realization of benefits over time, and achievement of long-term partnership goals/outcomes (27–29).
The “Assessing Meaningful Community Engagement” model is the product of a National Academy of Medicine (NAM) taskforce, which brought together academic and community experts to review the field of CEnR, develop a broader engagement model, and compile a summary of all validated instruments to date (30). The literature review identified 28 validated instruments that cover constructs within the NAM model, with the E2 Community Engagement Survey (CES) being one of only three in the nation that cover all NAM domains. The review also included the Key Informant Survey (KIS), the E2 companion instrument, and the Spanish translations of the CES and KIS, which are two of only three Spanish surveys (19). A formal guide provides an overview of each instrument, illustrating how each aligns with the model, its psychometrics, and its potential uses (31).
While these instruments have served to identify appropriate constructs of promising or best partnering practices and their impact on outcomes, many of them have high response burden due to the large number of items and are limited in their pragmatic use as partnership evaluation tools. According to Glasgow and Riley (32), pragmatic measures should, at minimum, have the properties of being important to stakeholders, low burden, actionable, and sensitive to change. This paper presents one effort by the UNM-Center for Participatory Research E2 team to create a pragmatic reduced-item tool from the CES that can be used by CBPR/CEnR community members, practitioners and researchers as an annual evaluation and planning opportunity. By using a short tool, partners can collectively reflect on their own data, assess their current capacities and identify the areas they want to strengthen over the next year to become more effective at reaching their outcomes.
This paper provides an overview of the measure shortening process from the E2 CES to a new shorter tool, called Partnering for Health Improvement and Research Equity (PHIRE). CES higher-order constructs were maintained, and quantitative and qualitative methods were used to reduce items. The discussion includes initiatives that have piloted the PHIRE, its Spanish translation, and the importance of cultural and community context. Implications for use as a pragmatic evaluation and collective reflection tool by CBPR/CEnR projects and other community collaborations are presented, with future directions for further translation and validation efforts.
Methods
The original survey and psychometric properties are discussed elsewhere (20). To develop a pragmatic instrument of CBPR processes and outcomes, we pursued a triple aim of preserving content validity, psychometric properties, and importance to stakeholders of items, scales, and constructs from the E2 CES by adapting six steps outlined in Goetz et al. (33). We embed the participants, data collection, and analysis within these six steps as detailed in the following subheadings. The study involved human participants and was reviewed and approved by the University of New Mexico’s Human Research Protections Office, UNM Health Sciences Center (#16–098). The study was conducted in accordance with the local legislation and institutional requirements. Participants provided direct consent to participate after they reviewed an information sheet.
Establish validity and shortening objectives
The objective of this project was to shorten the measurement portion of the E2 CES from 93 items to approximately 25–30 items to provide actionable data to partnerships in a more flexible and practical manner. Further, these efforts can enhance uptake of the CES in other projects due to reduction in participant burden. We utilized the most recently distributed E2 CES instrument with 93 scale items (20) organized across eight higher-order constructs within the four domains of the CBPR model (34). For example, an item for collective empowerment is “Our partnership evaluates together what we have done well and how we can improve our collaboration” and an item for synergy is “We work together well as a partnership.” Scales and constructs in the E2 CES have demonstrated strong factorial validity, strong convergent validity, and strong internal consistency. The shortened measure preserves these elements (20).
Use a conceptual model
The development of the E2 CES was guided by the CBPR model (35) and is limited in scope relative to the entire model due to the model’s complexity (36). To stay consistent with the original purpose of designing the E2 CES, we used the CBPR model to guide the organization of our surveys and focus groups, as well as our psychometric analyses of convergent validity in creating the shortened measure. That is, choices to remove or retain items ensured that the four domains (context, partnering practices, research-intervention actions, and outcomes) were represented in the final survey.
Preserve content validity and importance
We invited 40 experts from the E2 Think Tank members and E2 community partners to participate in a Content Expert and Stakeholder Survey (see Supplementary material 1). The experts included academic and community partners who had been associated with the E2 project and thus had familiarity with the original survey and conceptual measure. They also had their own community-academic partnership experience that they drew on to evaluate survey items. Following Gideon et al. (37), we asked respondents to categorize the content of CES items and scales as “least important,” “very important: might be good to include,” or “most important: needs to be included.” Additionally, we asked the participants to rate the items as actionable: “least actionable,” “very actionable: include if there is room,” or “most actionable: need to be included.” We allowed respondents to choose to focus on one or more of the domains of the CBPR model. We also included descriptions of these domains and the definition of the content of each scale. The responses to the items were converted to a 10-point scale (0–10) by taking the proportion of respondents who rated the items as most important or most actionable and multiplying by 10.
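To make this conversion concrete, here is a minimal sketch of the scoring step (the function name and sample ratings are hypothetical illustrations, not the study's actual code or data):

```python
# Illustrative sketch of the 10-point conversion described above:
# the proportion of respondents selecting the top category, times 10.
# Function name and sample data are hypothetical.
def importance_score(ratings, top_category="most important: needs to be included"):
    """Return the proportion of respondents choosing the top rating, scaled to 0-10."""
    n_top = sum(1 for r in ratings if r == top_category)
    return 10 * n_top / len(ratings)

ratings = [
    "most important: needs to be included",
    "most important: needs to be included",
    "very important: might be good to include",
    "least important",
]
score = importance_score(ratings)  # 2 of 4 respondents -> 5.0
```

An analogous call with the “most actionable” category as `top_category` would yield the actionability score.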
Preserve psychometric properties
We preserved psychometric properties by applying analytic techniques used in classical test theory. Scale construction in this approach is a traditional quantitative approach used to test the reliability and validity of scales and their items, with the assumption that all observed scores include true and error scores (38, 39). Each CES item and scale was evaluated and ranked in four areas: consistency within scale or construct, convergent validity, ceiling effects, and responsiveness to change (for some items). The original survey was used as part of a post-intervention survey, but since only a subset of scales and items were included, it was not possible to measure responsiveness to change for all scales and items.
A composite ranking for each item and scale was calculated as the mean of the non-missing ranks across the four ranked areas. This ranking was converted to a 10-point scale by multiplying the average item ranking by the proportion of respondents ranking the item content as most important. Rankings for consistency within scale or construct, ceiling effects, and responsiveness to change were based on a single statistic. In contrast, since the CBPR model has a reciprocal feedback structure across multiple domains, ranking convergent validity involved calculating a composite rank across multiple individual statistics measuring convergent validity. To reduce the sensitivity of composite rankings to trivial differences, “coarse” composite rankings were calculated based on the number of times the statistic for an item or scale exceeded the statistics for related items within a ranked area by at least a small effect size.
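The composite and “coarse” ranking logic can be sketched as follows (a hypothetical illustration under the description above; the rank values, function names, and effect-size threshold are our own assumptions, not the study's data):

```python
# Hypothetical sketch of the composite ranking steps described above.
def composite_rank(ranks):
    """Mean of the non-missing ranks across the four ranked areas."""
    observed = [r for r in ranks if r is not None]
    return sum(observed) / len(observed)

def statistical_score(avg_item_rank, prop_most_important):
    """Convert an average item ranking to the 10-point statistical score by
    weighting it by the proportion rating the content as most important."""
    return avg_item_rank * prop_most_important

def coarse_rank(item_stat, related_stats, min_effect=0.10):
    """Count how often this item's statistic exceeds related items'
    statistics by at least a small effect size (e.g., Cohen's q >= 0.10)."""
    return sum(1 for s in related_stats if item_stat - s >= min_effect)

# Responsiveness rank is missing (None) for an item absent from the post-survey:
avg = composite_rank([2, None, 4, 6])  # mean of 2, 4, 6 -> 4.0
score = statistical_score(avg, 0.8)    # 4.0 * 0.8 -> 3.2
```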
The statistics for each of the ranked areas at the item level are as follows:
a. Consistency within scale — “item-rest” correlations between each item score and the average score of the remaining items within a scale were calculated. Following Cohen (40), two correlations were considered to differ by no more than a small amount if q < 0.10.
b. Convergent validity — correlations between scores of E2 CES items and scale scores of E2 SEM constructs in adjacent domains of the CBPR model were calculated. Correlations for which Cohen’s q < 0.10 were considered to differ by no more than a small amount.
c. Ceiling effects — the percent of respondents who selected the highest value for an item.
d. Responsiveness to change — responsiveness indicates the ability of a measure to detect change over time commensurate with the change (and amount) that occurred in the construct (41). We used Cohen’s dz to calculate and rank the extent of pre/post differences in means of item scores.
Justify the selection of items and scale
The results from the Content Expert and Stakeholder Survey and the statistical information from the psychometric property testing were used to create a 30-point score for each item (10 on actionable, 10 on content importance, and 10 on statistical importance). We invited participants who completed the survey to participate in one of two Zoom focus groups to review the results and make final recommendations for retention. A summary table of the results was provided to the participants; it included overall rankings and initial recommendations for the disposition of each item and scale. Supplementary material 2 includes the summary table and Supplementary material 3 includes the questions for the participants.
Validate the short-form version
We tested the PHIRE with the original data set collected with the larger survey (20). We tested the internal consistency, factor structure of the measurement model, and the structural model. Specifically, confirmatory factor analysis and structural equation modeling were performed using Stata 15.0 (42). Fit indices included the Tucker-Lewis Index (TLI) and Standardized Root Mean Square Residual (SRMR).
We completed further validity testing with a new sample because the focus group participants suggested some wording changes to items. We administered the shortened version to a network of community health councils in New Mexico. The survey supported the New Mexico Department of Health and its Health Promotion unit in assessing their own capacities to collaborate with community partners throughout the state. We examined reliability and correlations among the constructs using SPSS 29.0. Further psychometric testing was not conducted due to the relatively small sample size.
Results
Content validity and psychometric data
Of the 40 invitations, 21 CBPR experts (13 academic and 8 community partners) completed the Content Expert and Stakeholder Survey. There were 11 men and 10 women. Each item received between 14 and 18 responses in the rating of content importance and actionability. The results from the survey are presented in Supplementary material 2. This table also includes the results of statistical importance for the psychometric data.
Each item was scored on a 0–10 scale in three areas, leading to a total possible score of 30. Items scoring 15 or higher were recommended for inclusion in the shortened survey and are highlighted in green in Supplementary material 2. Items scoring 14 were highlighted in purple for possible inclusion; items scoring 13 that were the only item within a subscale were also highlighted in purple for possible inclusion.
Focus groups and final survey creation
Of the 21 CBPR/CEnR experts, 15 also participated in one of the two focus groups: 9 in one and 6 in the other. There were 6 community and 9 academic experts: 9 male and 6 female. The focus groups lasted approximately 60 min. The participants were sent the summary table and explanations of the rankings. The first focus group concentrated on the first two domains of the CBPR model: context and partnering processes. The second group examined intervention-research synergy and outcomes, the latter two domains of the CBPR model.
The focus group participants reviewed the items in each scale and discussed whether to retain or drop each item or to consider alternative wording. The groups recommended retaining a few items that scored below 14 in the composite ratings. In that process, the groups also recommended creating items that combined elements of several items into one. Notes from each focus group were captured, and these notes, along with the transcripts, were reviewed by the research team.
The research team met to review the recommendations of the focus groups alongside the data in Supplementary material 2. Through a series of meetings, the team created the 30-item Partnering for Health Improvement and Research Equity (PHIRE) survey. These included 14 items recommended for inclusion (green) and five items marginal for inclusion (purple) from the Supplementary material. Three items that were not originally recommended were retained based on the focus group feedback. Eight items were revised and used elements from either the Key Informant Survey (KIS, the survey for principal investigators/directors about project characteristics) related to structural governance (n = 3) or a combination or proxy from the original CES survey (n = 8). Many of the items were slightly reworded from the original, focusing on the collective partnership (e.g., “we”) rather than individual reflection (e.g., “I”).
Final confirmation
The initial testing of the original scale was to determine whether the shortened version would still fit the original data. While wording changes and combinations were recommended by the focus group participants, we initially fit the best representative items to the original data set. The details of the original data set are presented elsewhere (20). In brief, the study had 210 projects represented, with a principal investigator or designate completing a KIS regarding information about the project and partnership. The respondent also nominated up to three community partners and one academic partner to complete the E2 CES. There were 457 total respondents with the following demographics: 246 White, 71 Black or African American, 66 American Indian/Alaska Native, 45 Hispanic or Latino, 36 Asian, 11 Native Hawaiian or other Pacific Islander; 36 LGBTQ, 48 low socio-economic status, 26 persons with disabilities, 49 immigrants, and 4 refugees; 99 male and 325 female; 185 community partners and 265 academic partners.
Table 1 displays the psychometric properties of the survey organized around the eight higher-order constructs. Cronbach’s alphas were generally high, with only one higher-order construct having an alpha below 0.70 (structural governance; likely because the items came from the CES and KIS as two levels of data). The factor loadings for items to the higher-order constructs were statistically significant, although four covariances between items were included to improve model fit. TLI and SRMR for the subscales of the higher-order constructs were 0.98 or above for TLI and 0.02 or below for SRMR (the subscales for structural governance and future outcomes were saturated models, so these fit indices were not available).
The structural model for the higher-order constructs is consistent with the CBPR conceptual model (21, 35), further supporting the conceptual fit of the pragmatic measure with the original conceptual model that guided the development of the longer version. Specifically, these results are consistent with the two paths of the conceptual model. In path one, structural governance is positively associated with community participation, which is positively associated with benefits and then with future outcomes. In the second path, partnership capacity is positively associated with collective empowerment, which is positively associated with partnering and then with synergy, benefits, and future outcomes.
The second testing included the administration of the survey to 63 members of community health councils in New Mexico: (a) 45 women, 11 men, and 7 not reporting; (b) 16 were 40 or under, 29 were 41–60, and 16 were older than 60; and (c) 37 White, 18 Latinx, 4 American Indian, and 8 others (numbers do not add up to 63 as participants could select more than one ethnicity).
Table 2 presents the PHIRE with final wording changes. The governance items were not included in the survey because they did not apply to the structure of the health councils. Cronbach’s alphas were high for the remaining subscales, with all but one at 0.90 or above. Partnering was at 0.73; we think this is because the scale included two negatively phrased items, and upon further review we recommended that these be rephrased in a positive format. Table 3 presents the correlations among the scales. These are in the expected direction and magnitude given the paths identified in Table 1 and the original structural equation model. Thus, these data support the construct validity of the revised wording changes for the PHIRE.
Discussion
This study aimed to develop a pragmatic tool for measuring community-academic partnerships in CBPR/CEnR projects related to the CBPR conceptual model that reflects best practices contributing to outcomes. A rigorous six-stage process produced a 30-item instrument across eight higher-order constructs that has strong psychometric properties and construct validity with the original 93-item instrument. This section discusses this measure in the context of the literature of CBPR/CEnR processes and describes the benefits of this novel instrument. We also discuss additional developments in the use of the PHIRE and implications for future research and practices.
CEnR, CBPR, and PEnR have increased in usage and acceptance, and as a result, so has the need for instruments to assess and evaluate these approaches. Systematic reviews of instruments (11, 12) have identified relatively few comprehensive instruments that measure a range of domains in CBPR/CEnR processes and/or that have established psychometric properties. The E2 CES is one such tool. In addition, it is only one of three tools that cover all the domains identified by the NAM model (30). The current PHIRE study has further solidified these key domains alongside the original E2 CES measures and demonstrates good construct validity and reliability.
Longer instruments, even when they have strong psychometrics, are limited due to practical reasons in many settings. Thus, pragmatic instruments that provide short and effective measurement for practitioners are needed (32). This study established evidence for the PHIRE to support the key criteria for a pragmatic instrument. First, community and academic partners were involved in adapting the instrument to ensure there was support. Second, the shorter version has a relatively low implementation burden for partnerships to assess their CBPR/CEnR contexts, partnering practices, and outcomes in annual evaluations or collective reflection retreats. Third, based on Freirian praxis of cycles of reflection and action, the instrument provides information that can be used by partner teams to assess their strengths and achievements, as well as challenges to make actionable changes in their partnering practices to contribute to desired outcomes (14). Finally, the instrument can be adapted and added to work within a particular community and context. CEnR and CBPR researchers have long recognized the importance of being able to adapt research processes and instruments to fit local cultural contexts (43).
The PHIRE instrument thus demonstrates key characteristics from the diffusion of innovation perspective (44). Its primary relative advantage is its brevity, which makes it attractive to partnerships that do not want to be burdened with a measurement instrument yet want a tool that is valid and reliable. Similarly, the instrument is not complex and is relatively easy to use. A companion self-administered webapp will also soon be available to support partnerships in using the instrument as a tool to strengthen their practice (See text footnote 1). With this paper, the instrument is trialable, as it can be downloaded and applied in a pilot setting or within early partnership meetings. Finally, the instrument has the flexibility to be compatible with a specific context. We encourage partnerships to add additional scales or delete ones that are not important to their context and project. Some may want to add full scales in areas of special interest, such as trust, from the CES or another validated instrument to the higher-order shorter PHIRE instrument to provide more depth. As an instrument demonstrating strong innovative properties, the PHIRE is thus a useful and adaptable tool to enhance the practice of CBPR/CEnR.
We are aware of several adaptations that illustrate this benefit. First, engaging partners from the USA and Latin America, our team has developed a Spanish translation of both the CES (Encuesta Comunitaria) and the PHIRE instrument, Fortaleciendo y Uniendo EsfueRzos Transdisciplinarios para Equidad de Salud (FUERTES), for use among Spanish-speaking partnerships (19). In so doing, recommendations were made to simplify, and rephrase positively, a few of the items in the English version; these changes are also reflected in our revised items. The FUERTES version also included a set of items to assess the use of community advisory boards. Second, our project team worked in partnership with two Navajo communities in a pilot study to explore “their perspectives about community-engaged research and community well-being from a Diné lens” (45, p.1). The research partners administered the CES but soon recognized the need to translate it, along with an interest in developing a shortened instrument, which resulted in the PHIRE (46). Third, the long-term NIH-funded Family Listening Program of UNM’s Center for Participatory Research is testing the PHIRE longitudinally with a new partnership of three long-term tribal research teams and three new tribal community advisory boards (47). Finally, other opportunities are in process. Among them is the adaptation of the PHIRE for the new NIH ComPASS awardees to assess their partnerships in the design and implementation of structural interventions over time. Further, there is an emerging integration of the PHIRE with Community Campus Partnerships for Health’s assessment of their partnership principles. Future research can test these adaptations and translations as well as explore the further use of the PHIRE in research and practice.
It is important to note that while two of these applications included Navajo communities in piloting the PHIRE, the diversity across tribal nations lends itself to CBPR/CEnR under very different contexts and conditions and likely warrants further adaptation across local tribal and urban Native contexts. This caveat is important for other cultural communities as well.
There are several limitations to our study. First, our shortening process is primarily based on the original data set that was used to create the current version of the CES (20). The second data collection helps to alleviate some of this concern. Second, we had some challenges with model convergence that limited the tests that could be run. Nonetheless, we were able to provide a robust test to support the validity of the PHIRE. Finally, our community and academic testing was based on members of our own Think Tank, who may have some bias toward supporting the CBPR conceptual model. However, these individuals have vast experience in CBPR/CEnR and have developed their own instruments, so this bias is somewhat mitigated.
Conclusion
In conclusion, the aim of this study was to develop and test a pragmatic measure for assessing CEnR, CBPR, and PEnR research projects. This study demonstrates that a shorter version of the E2 CES is a reliable instrument with construct validity compared to the larger version from which it was derived. The science of CBPR/CEnR and related research depends on the development of instruments with strong psychometric properties. The practice of CBPR/CEnR, however, depends on pragmatic measures that are not a burden to administer and that benefit partnerships interested in strengthening their partnering practices to more effectively reach their outcomes. The PHIRE is a straightforward and easy-to-use tool for a range of CBPR/CEnR projects, in both research and practical applications. It has a sound conceptual and research foundation, and it can be adapted and extended to fit local cultural contexts.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The study involved human participants and was reviewed and approved by the University of New Mexico’s Human Research Protections Office, UNM Health Sciences Center (#16-098). The study was conducted in accordance with the local legislation and institutional requirements. Participants provided direct consent to participate after they reviewed an information sheet.
Author contributions
JO: Conceptualization, Methodology, Supervision, Writing – original draft. BB: Conceptualization, Formal analysis, Methodology, Writing – original draft. LL: Conceptualization, Formal Analysis, Methodology, Writing – review & editing. SK: Conceptualization, Writing – review & editing. PC-R: Conceptualization, Writing – review & editing. JP: Conceptualization, Methodology, Writing – review & editing. PR: Conceptualization, Methodology, Writing – review & editing. SS-Y: Conceptualization, Methodology, Project administration, Writing – review & editing. LB: Conceptualization, Methodology, Writing – review & editing. NW: Conceptualization, Funding acquisition, Methodology, Project administration, Writing – original draft.
Funding
The author(s) declare that financial support was received for the research and/or publication of this article. This manuscript was supported by funding from the National Institute of Nursing Research (grant number: 1R01NR015241-01A1; NW, PI). The funder had no role in the methods of the study.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The authors declare that no Gen AI was used in the creation of this manuscript.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Supplementary material
The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpubh.2025.1539864/full#supplementary-material
Abbreviations
CBPR, community-based participatory research; CEnR, community-engaged research; CES, Community Engagement Survey; E2, Engage for Equity; FUERTES, Fortaleciendo y Uniendo EsfueRzos Transdisciplinarios para Equidad de Salud; KIS, Key Informant Survey; MAPS, Measurement Approaches to Partnership Success; NAM, National Academy of Medicine; NIH, National Institutes of Health; PHIRE, Partnering for Health Improvement and Research Equity; PEnR, patient-engaged research; SRMR, Standardized Root Mean Square Residual; TLI, Tucker-Lewis Index.
References
1. Cyril, S, Smith, B, Possamai-Inesedy, A, and Renzaho, A. Exploring the role of community engagement in improving the health of disadvantaged populations: a systematic review. Glob Health Action. (2015) 8:29842–12. doi: 10.3402/gha.v8.29842
2. O'Mara-Eves, A, Brunton, G, McDaid, D, Oliver, S, Kavanagh, J, Jamal, F, et al. Community engagement to reduce inequalities in health: a systematic review, meta-analysis and economic analysis. Public Health Res. (2013) 1:1–526. doi: 10.3310/phr01040
3. O'Mara-Eves, A, Brunton, G, Oliver, S, Kavanagh, J, Jamal, F, and Thomas, J. The effectiveness of community engagement in public health interventions for disadvantaged groups: a meta-analysis. BMC Public Health. (2015) 15:1352. doi: 10.1186/s12889-015-1352-y
4. Ortiz, K, Nash, J, Shea, L, Oetzel, J, Garoutte, J, Sanchez-Youngman, S, et al. Partnerships, processes, and outcomes: a health equity-focused scoping meta-review of community-engaged scholarship. Annu Rev Public Health. (2020) 41:177–99. doi: 10.1146/annurev-publhealth-040119-094220
5. Sandoval, JA, Iglesias Rios, L, Wallerstein, N, Lucero, J, Oetzel, J, Avila, M, et al. Process and outcome constructs for evaluating community-based participatory research projects: a matrix of existing measures. Health Educ Res. (2012) 27:680–90. doi: 10.1093/her/cyr087
6. Perkins, D, and Zimmerman, M. Empowerment theory, research, and application. Am J Community Psychol. (1995) 23:569–79. doi: 10.1007/BF02506982
7. Goodman, R, Speers, M, McLeroy, K, Fawcett, S, Kegler, MC, Parker, E, et al. An initial attempt at identifying and defining the dimensions of community capacity to provide a basis for measurement. Health Educ Behav. (1998) 25:258–78. doi: 10.1177/109019819802500303
8. Stokols, D, Hall, K, Taylor, B, and Moser, R. The science of team science: overview of the field and introduction to the supplement. Am J Prev Med. (2008) 35:S77–89. doi: 10.1016/j.amepre.2008.05.002
9. Israel, BA, Eng, E, Schulz, AJ, and Parker, E, editors. Methods in community-based participatory research for health. 2nd ed. San Francisco, CA: Jossey-Bass (2013).
10. Mattessich, P, Murray-Close, M, and Monsey, B. Wilder collaboration factors inventory. St. Paul, MN: Wilder Research (2001).
11. Luger, T, Hamilton, A, and True, G. Measuring community-engaged research contexts, processes, and outcomes: a mapping review. Milbank Q. (2020) 98:493–553. doi: 10.1111/1468-0009.12458
12. Tigges, B, Miller, D, Dudding, K, Balls-Berry, J, Borawski, E, Dave, G, et al. Measuring quality and outcomes of research collaborations: an integrative review. J Clin Transl Sci. (2019) 3:261–89. doi: 10.1017/cts.2019.402
13. Wells, A. Principles of partnership: Advancing the apha code of ethics domain to guide partnerships in inclusive community-engaged research. Presented at the annual meeting of the American Public Health Association. Minneapolis, MN (2024).
14. Wallerstein, N, Oetzel, JG, Sanchez-Youngman, S, Boursaw, B, Dickson, E, Kastelic, S, et al. Engage for equity: a long-term study of community-based participatory research and community-engaged research practices and outcomes. Health Educ Behav. (2020) 47:380–90. doi: 10.1177/1090198119897075
15. Belone, L, Lucero, J, Duran, B, Tafoya, G, Baker, E, Chan, D, et al. Community-based participatory research conceptual model: community partner consultation and face validity. Qual Health Res. (2016) 26:117–35. doi: 10.1177/1049732314557084
16. Kastelic, S, Wallerstein, N, Duran, B, and Oetzel, J. Socio-ecologic framework for CBPR: development and testing of a model In: N Wallerstein, B Duran, J Oetzel, and M Minkler, editors. Community-based participatory research for health. 3rd ed. San Francisco: Jossey-Bass (2018). 77–94.
17. Duran, B, Oetzel, J, Magarati, M, Parker, M, Zhou, C, Roubideaux, Y, et al. Towards health equity: a national study of promising practices in community-based participatory research. Prog Community Health Partnersh. (2019) 13:337–52. doi: 10.1353/cpr.2019.0067
18. Oetzel, J, Zhou, C, Duran, B, Pearson, C, Magarati, M, Lucero, J, et al. Establishing the psychometric properties of constructs in a community-based participatory research conceptual model. Am J Health Promot. (2015) 29:e188–202. doi: 10.4278/ajhp.130731-QUAN-398
19. Rodriguez Espinosa, P, Peña, JM, Devia, C, Boursaw, B, Avila, M, Rudametkin, D, et al. The Spanish translation, adaptation, and validation of a community-engaged research survey and a pragmatic short version: Encuesta Comunitaria and FUERTES. J Clin Transl Sci. (2024) 8:e165. doi: 10.1017/cts.2024.613
20. Boursaw, B, Oetzel, J, Dickson, E, Thein, T, Sanchez-Youngman, S, Peña, J, et al. Scales of practices and outcomes for community-engaged research. Am J Community Psychol. (2021) 67:256–70. doi: 10.1002/ajcp.12503
21. Oetzel, J, Boursaw, B, Magarati, M, Dickson, E, Sanchez-Youngman, S, Morales, L, et al. Exploring theoretical mechanisms of community-engaged research: a multilevel cross-sectional national study of structural and relational practices in community-academic partnerships. Int J Equity Health. (2022) 21:59. doi: 10.1186/s12939-022-01663-y
22. Sanchez-Youngman, S, Boursaw, B, Oetzel, JG, Kastelic, S, Scarpetta, M, Devia, C, et al. Structural community governance: importance for community-academic research partnerships. Am J Community Psychol. (2021) 67:271–83. doi: 10.1002/ajcp.12505
23. Parker, M, Wallerstein, N, Duran, B, Magarati, M, Burgess, E, Sanchez-Youngman, S, et al. Engage for equity: development of community-based participatory research tools. Health Educ Behav. (2020) 47:359–71. doi: 10.1177/1090198120921188
25. Brush, B, Mentz, G, Jensen, M, Jacobs, B, Saylor, K, Rowe, Z, et al. Success in long-standing community-based participatory research (CBPR) partnerships: a scoping literature review. Health Educ Behav. (2020) 47:556–68. doi: 10.1177/1090198119882989
26. Israel, B, Lachance, L, Coombe, C, Lees, S, Jensen, M, Wilson-Powers, E, et al. Measurement approach to partnership success: theory and methods for measuring success in long-standing community-based participatory research partnerships. Prog Community Health Partnersh. (2020) 14:129–40. doi: 10.1353/cpr.2020.0015
27. Coombe, C, Chandanabhumma, P, Bhardwaj, P, Brush, B, Greene-Moton, E, Jensen, J, et al. A participatory, mixed methods approach to define and measure partnership synergy in long-standing equity-focused CBPR partnerships. Am J Community Psychol. (2020) 66:427–38. doi: 10.1002/ajcp.12447
28. Lachance, L, Brush, B, Mentz, G, Lee, SYD, Chandanabhumma, P, Coombe, C, et al. Validation of the measurement approaches to partnership success (MAPS) questionnaire. Health Educ Behav. (2024) 51:218–28. doi: 10.1177/10901981231213352
29. Lachance, L, Coombe, C, Brush, B, Lee, S, Jensen, M, Taffe, B, et al. Understanding the benefit–cost relationship in long-standing community-based participatory research (CBPR) partnerships: findings from the measurement approaches to partnership success (MAPS) study. J Appl Behav Sci. (2022) 58:513–36. doi: 10.1177/0021886320972193
30. Aguilar-Gaxiola, S, Ahmed, S, Anise, A, Azzahir, A, Baker, K, Cupito, A, et al. Assessing meaningful community engagement: a conceptual model to advance health equity through transformed systems for health: organizing committee for assessing meaningful community engagement in health & health care programs & policies. NAM Perspect. (2022) 22. doi: 10.31478/202202c
31. Assessment instruments for measuring community engagement. Available online at: https://nam.edu/programs/value-science-driven-health-care/assessing-meaningful-community-engagement/introduction-to-assessment-instrument-summaries/
32. Glasgow, R, and Riley, W. Pragmatic measures. Am J Prev Med. (2013) 45:237–43. doi: 10.1016/j.amepre.2013.03.010
33. Goetz, C, Coste, J, Lemetayer, F, Rat, A, Montel, S, Recchia, S, et al. Item reduction based on rigorous methodological guidelines is necessary to maintain validity when shortening composite measurement scales. J Clin Epidemiol. (2013) 66:710–8. doi: 10.1016/j.jclinepi.2012.12.015
34. Wallerstein, N, Duran, B, Oetzel, JG, and Minkler, M. Community-based participatory research for health: Advancing social and health equity. 3rd ed. San Francisco: Jossey-Bass (2018).
35. Wallerstein, N, Oetzel, J, Duran, B, Tafoya, G, Belone, L, and Rae, R. CBPR: what predicts outcomes? In: M Minkler and N Wallerstein, editors. Community-based participatory research for health. 2nd ed. San Francisco: Jossey-Bass (2008). 371–92.
36. Oetzel, J, Wallerstein, N, Duran, B, Sanchez-Youngman, S, Nguyen, T, Woo, K, et al. Impact of participatory health research: a test of the community-based participatory research conceptual model. Biomed Res Int. (2018) 2018:1–12. doi: 10.1155/2018/7281405
37. Gideon, N, Hawkes, N, Mond, J, Saunders, R, Tchanturia, K, and Serpell, L. Development and psychometric validation of the EDE-QS, a 12 item short form of the eating disorder examination questionnaire (EDE-Q). PLoS One. (2016) 11:e0152744. doi: 10.1371/journal.pone.0152744
38. Cappelleri, J, Lundy, J, and Hays, R. Overview of classical test theory and item response theory for quantitative assessment of items in developing patient reported outcome measures. Clin Ther. (2014) 36:648–62. doi: 10.1016/j.clinthera.2014.04.006
39. Streiner, D, Norman, G, and Cairney, J. Health measurement scales: A practical guide to their development and use. 5th ed. London: Oxford University Press (2015).
40. Cohen, J. Statistical power analysis for the behavioral sciences. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum (1988).
41. Polit, DF, and Yang, F. Measurement and the measurement of change. Philadelphia: Wolters Kluwer (2016).
43. Israel, B, Schulz, A, Parker, E, Becker, A, Allen, A, Guzman, R, et al. Critical issues in developing and following CBPR principles In: N Wallerstein, B Duran, J Oetzel, and M Minkler, editors. Community-based participatory research for health: Advancing social and health equity. 3rd ed. San Francisco: Jossey-Bass (2018). 31–44.
45. Werito, V, and Belone, L. Research from a Diné-centered perspective and the development of a community-based partnership. Health Educ Behav. (2021) 48:361–70. doi: 10.1177/10901981211011926
46. Werito, V, Belone, L, and Boursaw, B. Community engaged research from a Diné-centered perspective: Advancing mentorship and findings from a community engagement survey (2025).
47. Belone, L, Rae, R, Hirchak, K, Cohoe-Belone, B, Orosco, A, Shendo, K, et al. Dissemination of an American Indian culturally centered community-based participatory research family listening program: implications for global indigenous well-being. Genealogy. (2020) 4:99. doi: 10.3390/genealogy4040099
Keywords: community-based participatory research, community-engaged research, pragmatic measurement, patient-engaged research, CBPR conceptual model
Citation: Oetzel JG, Boursaw B, Littledeer L, Kastelic S, Castro-Reyes P, Peña JM, Rodriguez Espinosa P, Sanchez-Youngman S, Belone L and Wallerstein N (2025) A short pragmatic tool for evaluating community engagement: Partnering for Health Improvement and Research Equity. Front. Public Health. 13:1539864. doi: 10.3389/fpubh.2025.1539864
Edited by:
Tilicia Mayo-Gamble, Georgia Southern University, United States
Reviewed by:
Debbie L. Humphries, Yale University, United States
Per-Anders Tengland, Malmö University, Sweden
Elias Samuels, University of Michigan, United States
Copyright © 2025 Oetzel, Boursaw, Littledeer, Kastelic, Castro-Reyes, Peña, Rodriguez Espinosa, Sanchez-Youngman, Belone and Wallerstein. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Nina Wallerstein, nwallerstein@salud.unm.edu