ORIGINAL RESEARCH article

Front. Public Health, 06 December 2016
Sec. Public Health Policy

Evaluating and Using Observational Evidence: The Contrasting Views of Policy Makers and Epidemiologists

  • 1Centre for Research on Ageing, Health and Wellbeing, Australian National University, Canberra, ACT, Australia
  • 2ACT Health Directorate, Canberra, ACT, Australia
  • 3Australian National University Medical School, Canberra, ACT, Australia

Background: Currently, little is known about the types of evidence used by policy makers. This study aimed to investigate how policy makers in the health domain use and evaluate evidence and how this differs from academic epidemiologists. By having a better understanding of how policy makers select, evaluate, and use evidence, academics can tailor the way in which that evidence is produced, potentially leading to more effective knowledge translation.

Methods: An exploratory mixed-methods study design was used. Quantitative measures were collected via an anonymous online survey (n = 28), with sampling from three health-related government and non-government organizations. Semi-structured interviews with policy makers (n = 20) and epidemiologists (n = 6) were conducted to gather qualitative data.

Results: Policy makers indicated systematic reviews were the preferred research resource (19%), followed closely by qualitative research (16%). Neither policy makers nor epidemiologists used grading instruments to evaluate evidence. In the web survey, policy makers reported that consistency and strength of evidence (93%), the quality of data (93%), bias in the evidence (79%), and recency of evidence (79%) were the most important factors taken into consideration when evaluating the available evidence. The same results were found in the qualitative interviews. Epidemiologists focused on the methodology used in the study. The most cited barriers to using robust evidence, according to policy makers, were political considerations (60%), time limitations (55%), funding (50%), and research not being applicable to current policies (50%).

Conclusion: Policy makers did not report a systematic approach to evaluating evidence. Although there was some overlap between what policy makers and epidemiologists identified as high-quality evidence, there were also important differences. This suggests that the best scientific evidence may not routinely be used in the development of policy. In essence, the policy-making process relied on other jurisdictions’ policies and the opinions of internal staff members as primary evidence sources to inform policy decisions. Findings of this study suggest that efforts should be directed toward making scientific information more systematically available to policy makers.

Introduction

There has been increasing discussion that, in order to improve public health outcomes, quality scientific research should be used throughout the development of health policies (1). The process of disseminating academic research to policy makers is referred to as knowledge translation (KT) or knowledge exchange (2). The process of KT involves many activities and specific practices, including producing synthesized research aimed at informing policy, writing plain language summaries of findings, and spending time with users to understand their context and research needs (3). It is believed that if KT is done effectively, the use of scientific evidence in policy and practice decisions will increase (4).

In the “real world” of policy making, scientific research is just one of many types of information used (5). Policy makers interpret and “use” evidence in a broad sense (e.g., non-research data such as public health surveillance data and strategic needs assessments) (6). There is also a range of political, economic, and social drivers which affect decisions during policy development. In order to support a particular policy agenda, while also managing the competing interests of diverse stakeholders, policy makers may use specific information without giving consideration to all the available evidence (4, 7) or may not be able to directly translate the findings, or recommendations, from epidemiological research into action within their particular context (8).

Previous research has focused on the apparently low uptake of academic research by policy makers, with particular attention given to understanding how and under what circumstances policy makers access and use academic evidence (9). However, the needs and practices of policy makers are rarely the subject of rigorous study and are likely to be more complex and nuanced than can be captured in surveys (6). For example, three systematic reviews (10–12) discussed the facilitators and barriers to the use of evidence in policy making and identified that policy makers use a broad range of evidence. These studies could not find reliable evidence of how much policy makers use academic research in the policy-making process or how policy makers’ definition of evidence differs from what researchers classify as evidence. As such, we require a clearer understanding of how policy makers define and use evidence (13).

Currently, little is known about what types of information and evidence are normally used as part of the policy development process or the extent to which political agendas and budgetary constraints influence the design and choice of policy options (6, 9, 14). In one of the few studies investigating the sources of research evidence that policy makers in government accessed when making a decision, academic literature was one of the least frequently used sources, along with internal expertise, policy documents, and employing a consultant (15). A study by Head et al. (9) found that the most valued source was the knowledge of immediate colleagues (93%). Their study also found that over 40% of policy makers reported that academic research was used in informing policy and legitimizing policy choices. However, the majority also stated that policy making was overwhelmingly driven by budgetary considerations (83%), the political acceptability of decisions (80%), and responding to urgent day-to-day issues rather than “long-term” thinking (75%) (9).

By investigating the type of research that policy makers use to inform policy decisions, how they identify evidence, and what other factors may influence policy decisions, we can identify what information is viewed as more relevant and timely (6). This may help researchers better tailor their research to policy makers’ needs, thus improving KT processes and the uptake of scientific evidence in the policy development process.

There is also a need to investigate how policy makers select and evaluate the quality of evidence. One way of selecting and evaluating evidence is by using an “evidence hierarchy.” Such a hierarchy may rank certain types of experimental research, for example randomized controlled trials (RCTs) and systematic reviews of RCTs, as highest in methodological quality (16). Researchers and clinicians use particular grading instruments to grade the quality of evidence. An example of such an instrument is the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) approach, which evaluates biomedical evidence based on risk, burden, and cost of intervention (17).
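To make the idea concrete, an evidence hierarchy is essentially an ordered ranking of study designs that can be applied mechanically to a set of candidate studies. The toy sketch below illustrates only this ranking logic; the levels listed are a common textbook simplification, not the GRADE criteria or any specific published hierarchy, and the study names are invented.

```python
# Toy illustration of an evidence hierarchy: an ordered ranking of study
# designs used to sort candidate studies by methodological level. The levels
# below are a common simplification for illustration only (not GRADE).
EVIDENCE_HIERARCHY = {
    "systematic review of RCTs": 1,   # conventionally the highest level
    "randomized controlled trial": 2,
    "cohort study": 3,
    "case-control study": 4,
    "cross-sectional study": 5,
    "expert opinion": 6,              # conventionally the lowest level
}

# Hypothetical candidate studies for a policy question.
studies = [
    ("Study A", "cohort study"),
    ("Study B", "randomized controlled trial"),
    ("Study C", "expert opinion"),
]

# Rank the studies from strongest to weakest design.
for name, design in sorted(studies, key=lambda s: EVIDENCE_HIERARCHY[s[1]]):
    print(f"level {EVIDENCE_HIERARCHY[design]}: {name} ({design})")
```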

Although the use of evidence hierarchies and grading systems may provide an easier, or at least more streamlined, way of identifying high-quality evidence, in many situations RCTs may not be the most appropriate research methodology to answer specific policy questions, particularly in the sphere of public health. For example, findings from RCTs do not usually take into account the political, social, or economic context (18–20). RCTs may also not be a practical, or ethical, research option (e.g., research in smoking, HIV, or dementia) (21). Finally, the results of RCTs may not be easily applied to the general population or to specific individuals (22). Due to these factors, policy makers often use a different hierarchy of evidence than researchers (23). For example, policy makers may consider the strongest evidence to be that from systematic reviews, as these provide an overview of scientific studies meeting explicit criteria. Yet, single studies and evaluations are more commonly used to support policy than systematic reviews, possibly because systematic reviews are not available due to time constraints or lack of sufficient evidence (23).

Epidemiological data and research are typically valued highly as “objective” or “hard” data compared to qualitative data or case studies (24). Findings derived from epidemiological research are perceived to be the most relevant indicator of adverse effects in humans (25) and inform public health activities such as health promotion and health policy and planning (26). Public health practice is mostly based on observational epidemiological research, such as cohort, case-control, and cross-sectional studies, rather than on RCTs (27). Observational epidemiological research has multiple advantages, for example large sample sizes and longer follow-up periods. It can also provide a powerful argument for change by using local data and can shape policy to address emerging public health problems (28). However, epidemiological findings may not be in a form that is useful or easily understood by policy makers, for example lengthy research reports with data at a state or country level (20), or policy makers may be hesitant to use them because of the chance of bias and confounding (29).

Given the importance of promptly incorporating new and robust scientific evidence into policy, and the barriers to KT identified above, there is an urgent need to better understand how policy makers evaluate and use evidence. Therefore, the current study had two intersecting aims. The first was to gain an understanding of the role of research evidence in policy making. The second was to investigate how policy makers in the health domain select this evidence, whether they systematically assess evidence quality, and how this differs from academic epidemiologists.

Materials and Methods

An exploratory mixed-methods study design was used in order to provide a deeper understanding of how policy makers use and evaluate evidence. The design involved the collection and analysis of two sets of qualitative interviews (n = 20 and 6) and one quantitative survey (n = 28). Both the interview questions and the survey are included in the Supplementary Material. Written informed consent was obtained from all participants prior to involvement in the study. The Australian National University Human Research Ethics Committee (HREC) and the Australian Capital Territory (ACT) Health HREC Survey Resource Group approved the study.

Qualitative Interviews with Policy Makers

Participants and Recruitment

The first set of interviews focused on a purposive heterogeneous sample of 20 people who worked in policy. Thirteen participants were from the ACT Government Health Directorate, four participants were from non-governmental organizations, one from a national Australian government department, and two from Australia’s peak research funding body, the National Health and Medical Research Council. Individuals were invited to participate in the study if they had any previous experience contributing to the development and implementation of health policy or programs relating to risk factors for chronic disease, mental health, or aging. Executive Directors from ACT Health identified participants and invited them via email to participate. Individuals who responded and consented to participating were then contacted by the ANU researchers. Participants were selected irrespective of policy background, time spent in organizational roles, or seniority. Participants from ACT Health were from a wide range of policy units, including Women’s Youth and Child Health; Aboriginal and Torres Strait Islander; Alcohol and Other Drugs; Rehabilitation, Aged and Community Care; and Population Health.

Measures

One-on-one semi-structured interviews were conducted with participants, focusing on understanding: (1) how policy makers locate and use evidence from observational and other research; (2) factors influencing their choice of evidence sources; (3) how policy makers deal with conflicting evidence on specific topics; (4) how policy makers evaluate the quality of research; and (5) how policy makers view researchers. The interviews also sought information on perceived barriers to KT. The interview questions were developed in consultation with research experts and senior staff from the health department, Alzheimer’s Australia, and the NHMRC. Interviews took approximately 1 h and were recorded and transcribed.

Analysis

The transcribed interviews were uploaded into NVivo (30) and analyzed thematically. The themes identified included evidence sources, choice of evidence sources, confusion about policy, evaluating policy, grading evidence, policy drivers, policy process, policy maker concerns, and barriers affecting KT. Averages and percentages were also calculated.
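To make the supplementary counts concrete, percentages such as those reported in the Results (e.g., 14 of 20 participants mentioning a theme = 70%) can be computed from a participant-by-theme coding matrix. The following is a minimal sketch, not the authors’ actual workflow: it assumes the NVivo codes have been exported to a CSV with one row per participant and one 0/1 column per theme, and the file and column names are hypothetical.

```python
# Minimal sketch: tally how many of the interviewed policy makers mention each
# theme and express the counts as percentages (e.g., 14/20 -> 70%). Assumes a
# hypothetical CSV exported from NVivo with one row per participant and one
# 0/1 column per theme; file and column names are illustrative only.
import csv

THEMES = ["evidence_sources", "choice_of_sources", "grading_evidence",
          "policy_drivers", "barriers_to_kt"]

with open("policy_maker_codes.csv", newline="") as f:
    rows = list(csv.DictReader(f))

n = len(rows)  # number of interviewed policy makers (20 in this study)
for theme in THEMES:
    count = sum(int(row[theme]) for row in rows)
    print(f"{theme}: {count}/{n} ({100 * count / n:.0f}%)")
```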

Qualitative Interviews with Epidemiologists

Participants and Recruitment

The second set of interviews focused on a purposive sample of chronic disease experts, known to the authors, who were approached to provide their views regarding the characteristics of high-quality observational research, their opinion about the currently available evidence rating systems, and the implications of grading observational research.

Measures

Participants were asked to answer seven open-ended questions, in paper form, seeking their views on: (1) what constitutes high-quality observational research and how it compares with other types of research; (2) their understanding and use of current rating systems for grading evidence; and (3) the consequences of inappropriately rating observational research. Participants were provided with a one-page guide to grading instruments to clarify what the authors defined as a grading instrument (included in the Supplementary Material).

Analysis

A step-by-step thematic analysis of the interviews was conducted. The transcripts were repeatedly screened to identify recurring themes that appeared critical to evaluating evidence. These themes included evidence sources, choice of evidence sources, evaluating evidence, grading evidence, and observational research. Averages and percentages were also calculated.

Quantitative Survey

Participants and Recruitment

An anonymous survey was compiled using the online survey tool Qualtrics (31). Senior staff from three health-related organizations, two government and one non-government, invited all policy makers via email to complete the survey. Selection was not restricted by age, gender, or policy experience; however, the survey was only distributed to staff who had not been involved in the qualitative interviews. Participants were not offered any incentives for completing the survey. The survey was accessible to participants for 6 months, during which 58 participants began the survey but only 28 provided responses to all questions.

Measures

The survey focused on barriers to knowledge uptake, knowledge needs at the time of the survey, and the accessibility of information. It comprised 18 questions: six multiple-choice, five rating scales, six open-ended, and one combining multiple-choice and open-ended formats (included in the Supplementary Material). The questions were developed after examining surveys used in related studies, considering the results of the qualitative interviews, and consulting collaborators from the university, the health department, and Alzheimer’s Australia.

Analysis

Participants with missing data were excluded at the item level. Data were analyzed using Microsoft Excel. Descriptive statistics and bar graphs were used to illustrate response patterns to survey items.
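The study performed these steps in Microsoft Excel; as a minimal sketch under that description, the same pipeline (item-level exclusion of missing responses, descriptive statistics, and a bar graph of response frequencies) could be expressed as follows. The file and column names are hypothetical, and pandas/matplotlib stand in for Excel.

```python
# Minimal sketch of the survey analysis described above: exclude missing
# responses at the item level, compute descriptive statistics, and plot a
# bar graph of response frequencies. The study used Microsoft Excel; this
# pandas/matplotlib version is an illustrative equivalent, and the file and
# column names are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

survey = pd.read_csv("policy_survey.csv")  # hypothetical Qualtrics export

item = survey["preferred_research_method"].dropna()  # item-level exclusion
freqs = item.value_counts(normalize=True).mul(100).round(0)  # % per option

print(freqs)  # descriptive summary of the response pattern

ax = freqs.plot(kind="bar", title="Preferred research methods")
ax.set_ylabel("% of respondents")
plt.tight_layout()
plt.show()
```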

Results

Qualitative Interviews with Policy Makers

All but one participant (95%) reported that policies and program decisions were often based on work, including current programs or policies, which had been done in other jurisdictions or by other organizations that were presumed to have better resources for seeking evidence (e.g., work tendered to university researchers, larger organizations). In this context, respondents reported that greater emphasis was placed on the experience of running the program or implementing the policy than on the evidence base behind it, which was typically not systematically checked. As an example, one participant noted a program implemented in another state that was “taken up” and resulted in a lot of problems. Subsequent contact with those who had set up the original program revealed that they too had had a lot of problems but had not reported them.

No respondents identified a systematic approach to gathering evidence for policy. Fourteen participants (70%) mentioned that part of their research strategy included talking to experts, including academics and consultants. Eleven participants (55%) gained most of their information from consumer input and subscribed to publications by the Australian Healthcare and Hospitals Association. Academic journals and institutional research/library services were used by only five participants (25%).

Twelve participants (60%) mentioned politics or the political agenda as a significant contributor to the policy formation process. The political agenda may drive what research is and is not used, regardless of its quality. For example, one participant said “… the politicians are wanting to say ‘we’ve made a decision, this is what we’re going to do’ and if there are votes in it [the politicians] will do it regardless of the evidence” (Government Health Middle Manager). Eleven participants (55%) also cited consumer or community views as another policy driver.

Eight participants (40%) discussed that there was not a great understanding of what constitutes good or strong evidence. For example, one participant said “I think it’s a bit of an issue that we’ve seen in terms of being able to identify well what is good evidence, what’s real evidence, what evidence should you use for a policy … what evidence should you be using to back that up. Don’t just go to a website and copy something – that happens, you know, which is not very good but it happens” (Government Health Project Officer). Policy makers identified the following as the most common factors affecting evidence choice: the type of evidence (60%), the reputation of the evidence source (55%), the quality of the evidence (45%), and local applicability (40%).

Only three participants (15%) knew of grading systems, and none used them to evaluate evidence. Two of these participants discussed the mismatch between grading systems and policy: RCTs are not necessarily applicable in policy decision-making, whereas social research is more likely to inform a policy decision. One participant highlighted this mismatch and the use of systematic reviews, stating: “it’s hard to find any RCTs for the issues we’re after and whether they’re appropriate anyway in some contexts … in terms of policy what’s really good is a Cochrane review or something that’s looked at a bunch of things across everywhere and synthesized it and so then you can look at what the general opinion or picture looks like” (Government Health Middle Manager).

The most cited barriers to using robust evidence were the political agenda (60%), time limits (55%), funding (50%), and research not being applicable to current policies (50%). For example, one participant stated “research takes time, as well as money and effort. Policies have a different timeframe. So if a government is going to move in a particular area, or feels inclined or compelled that it needs to come up with something, it might not be able to wait for research” (Government Health Senior Manager). Two participants also stated that government department employees were risk averse and so would “perpetuate current practice” rather than suggesting and evaluating “original ideas” based on new research.

When policy makers were asked what could improve the use of evidence in developing policy, six participants (30%) stated that there should be more “links” or collaborations between government staff and researchers. According to one policy maker these linkages “would make policy development a lot easier because you would have shown quite clearly due to the collaborative nature of the research that you’ve considered a large number of things and it would seem to provide a very solid finding because of that” (NGO Manager). Two participants (10%) stated that being able to access collated information would be helpful as it would reduce the amount of time spent looking for applicable research.

Qualitative Interviews with Epidemiologists

Seven epidemiologists were invited to participate; six agreed and completed the interview. All interviewees held a doctoral degree, and all but one was a researcher at an Australian university. There were equal numbers of male and female respondents.

All respondents said that they had heard of grading systems but tended not to use them to evaluate research evidence; rather, they had their own ways of evaluating evidence. One participant stated that they evaluated studies from first principles (clearly defined research question, clear and appropriate methods, high participation rates, and appropriate analysis and conclusions), and another admitted to giving more credibility to studies published in prestigious journals, as these tended to undergo more rigorous peer review and methodological editing.

All respondents noted that although RCTs are considered the top of the hierarchy of evidence, with observational research lower, RCTs are not necessarily the most efficient or applicable evidence. Respondents identified several problems with using RCTs, including unsuitable research questions (e.g., environmental and health-related research questions), limited generalizability, and bias. All respondents argued that it is more important to look at the design and conduct of a study – for example, cohort size, duration, and evaluation of relevant covariates/confounders – than at the rating the evidence has been given.

Responses as to what constitutes high-quality observational research all focused on the rigor of the methodology. All respondents agreed that high-quality observational research should address bias and ensure that the data are valid. Four respondents also argued that the sample had to be large and representative of the target population.

Quantitative Survey with Policy Makers

The majority of respondents were aged between 35 and 44 years (32%) followed closely by 45–54 (29%) and 55–64 years (25%). Respondents were mostly female (71%) and had completed a postgraduate qualification (82%). Of the 28 participants who responded to all questions, 13 (46%) described their level within the organization as “middle management or project/policy officer with some management responsibilities.”

When asked to indicate their preferred research methods, respondents most often selected systematic reviews (19%), followed by qualitative research (16%) and RCTs (13%). Only 7% of respondents indicated a preference for observational research.

The most easily understood sources of evidence were trusted organizations (96%), other internal staff (92%), consumer views (85%), policies from other jurisdictions (81%), and expert opinions (73%). The most difficult evidence sources to understand were researchers and existing academic research (42%) and internal statistical data (35%).

The most important factors taken into consideration when evaluating evidence are shown in Figure 1. When asked to identify how often evidence sources were utilized in the policy process, the subset of policy makers (40%) who responded to this question indicated that the most often used policy sources were: existing academic research (92%), other staff within the organization (92%), similar policy experience from other jurisdictions (85%), publications from trusted organizations (73%), and guidelines (58%). The majority of policy makers from the government health department (61%) indicated that they had not used their own departmental epidemiological reports in formulating new population health-relevant policy.

Figure 1. The most important factors taken into consideration when evaluating evidence.

The relative ranking of specific research methods and data synthesis techniques, as indicated by policy makers, is shown in Figure 2. Policy makers’ responses to the open-ended question of what (in their opinion) constitutes high-quality evidence varied. Responses included: articles published in reputable peer-reviewed journals, RCTs that can be related to and translated into practical clinical guidelines, systematic reviews, case studies (depending upon the research question), and research with sound methodology that is clearly articulated and peer reviewed.

Figure 2. The relative quality of specific research methods and data synthesis techniques.

Discussion

The aims of this study were to gain an understanding of the role of research evidence in policy making, to investigate how policy makers in the health domain select this evidence, and to determine whether they systematically assess evidence quality. While the use of evidence differs somewhat across policy makers, reliance on direct scientific evidence in the policy development process appears to be low. Policy makers did not report a methodical approach to evaluating evidence. Although there was some overlap between what policy makers and epidemiologists identified as high-quality evidence, there were also important differences, which suggests that the best scientific evidence is not frequently used in the development of policy. Differences between epidemiologists and policy makers included the way evidence was evaluated and the importance placed on a study’s methodology.

What Evidence Do Policy Makers Use in the Policy Process?

Systematic reviews were the preferred research method in the policy-making process. Observational research came last and was ranked as the lowest quality. Previous research has found that policy makers perceived systematic reviews as better suited to identifying gaps in existing research than to providing answers to policy questions (32). This research also found that systematic reviews were useful only when they had been commissioned to support policy decisions that had already been made, rather than to inform the decision about which policy option is most effective. Systematic reviews may be favored by policy actors because of their potential to save time and other resources and because they are seen as a credible source of information.

The information provided by policy makers about the use of academic resources in the policy process is inconsistent. In the web survey, all responding participants indicated that academic research was the most often utilized evidence source in the policy process. However, in the interviews, only one-quarter of participants stated that they referred to academic journals when gathering evidence for policy. Furthermore, participants in the web survey stated that academics and existing academic research were the most difficult evidence sources to understand. This difference between responses may indicate that what policy makers think they are using, or what they should ideally be using, is not what they actually use, and that they may not fully understand the academic research that they do use. Studies with similar results found that respondents did not use academic literature because they lacked access to libraries or online journals, were not trained in how to use academic search engines, and found academic literature complex and frequently contradictory (15).

Results from both the web survey and interviews showed that the majority of policy makers used work done by other jurisdictions or organizations as a base for policy and program decisions. Using other jurisdictions’ programs and policies may be a feasible option, as it fits the policy environment and provides a sense of security that the intended outcomes will be achieved within the decided timelines. However, as participants pointed out, such programs may not transfer to the adopting jurisdiction, or key information and supporting evidence may not be provided by the other jurisdiction. Given that respondents stated they usually did not check the quality of the evidence behind these programs, or the applicability of this evidence to their own situation, the policy or program objective may not be met.

Respondents from both the interviews and web survey also mentioned that other internal staff members were one of the most frequently utilized sources of evidence, and that part of their research strategy included talking to others, such as experts or consultants. This has been found in previous studies and may be a way of gaining accurate information quickly (9, 14, 33).

Policy makers’ reliance on peers and other jurisdictions, rather than on evidence, could indicate several possible characteristics of policy makers. First, it might suggest that respondents assume someone else has checked and evaluated the evidence. Second, policy makers may lack the skills to evaluate the evidence themselves, or lack confidence in their own skills. A third possibility, cited by two participants, is that individuals within the government health department are risk averse. Individuals may feel that the culture within the public service discourages innovative programs and policies, as such initiatives may fail and damage the government’s and individuals’ reputations.

Previous research has found that political opinion or targets influenced the adoption of particular policies or programs (13). Within this study, most policy makers mentioned politics or the political agenda as a significant driver in the policy formation process, followed closely by consumer or community views. Ritter’s study (15) found that the internet, notably “Google,” and statistical data were the third and fourth most frequently mentioned sources used by policy makers. Policy makers did not mention use of the internet in our quantitative survey, and only one participant mentioned it in the qualitative interviews. The majority of respondents in our study indicated that they did not use their own departmental epidemiological reports. Our results may differ from Ritter’s because we did not explicitly ask about internet or statistical data use, or because participants were hesitant to discuss their usage of these sources.

How Do Policy Makers Evaluate the Quality of Evidence and How Does This Compare to Epidemiologists’ Evaluations?

Just under half of the interview participants reported that there was not a great understanding among policy makers of what makes good-quality evidence, consistent with previous studies (23). Although some respondents had heard of grading systems, neither the policy makers nor the epidemiologists whom we interviewed used them. Rather, policy makers and epidemiologists had their own ways of evaluating evidence. Although grading systems may not identify the most appropriate research methodology, their use enables a standardized, comprehensive, transparent, and easily communicated way of rating the quality of evidence for different policy decisions and the strength of recommendations, and could improve decision-making processes (34).

Both parties agreed that RCTs, followed by systematic reviews, provided the highest quality evidence and that observational research was ranked the lowest. However, both policy makers and epidemiologists cited problems with using RCTs in their respective fields. For policy makers, RCTs were not applicable in the policy decision-making process, whereas epidemiologists had methodological issues with RCT designs (e.g., limited generalizability and bias). These findings are similar to previous research (20, 23).

Barriers to Use of Evidence in Policy Making

The most cited barriers to using robust evidence were the political agenda and time limits. Previous research has also found that the short time periods, or need for action, within the policy-making sphere meant that decisions were often made whether or not “strong” evidence was available (13, 35). Research can take up to 3 years to be published following data collection, so by the time it becomes available the information may be out of date or less useful to policy makers (4).

Half of our policy-maker participants stated that barriers to using research were lack of funding and research not being applicable to current policy, as found in previous research (35, 36). It has been suggested that, to overcome these barriers, there should be a dialog between researchers and policy makers before a study is designed and carried out. As policy decisions may be influenced by pragmatic considerations, such as cost (13), researchers should be made aware of these considerations and build research and recommendations that accommodate them.

Enablers to Use of Evidence in Policy Making

The establishment of more links, or collaborations, between policy makers and researchers was cited by one-third of policy makers as a way to improve the use of evidence in the policy-making process. This strategy has been frequently discussed in previous research (7, 35), which has also identified that policy makers use sources that are highly accessible and prefer summative information that uses plain language and clear data (7, 15). Two policy makers in our study discussed the value of having access to evidence collated within a single source. We think this type of information source would not only reduce the amount of time policy makers spend gathering evidence but could also help policy makers identify strong evidence, based on the methodological considerations discussed by the epidemiologists in this study. The authors have developed a web-based tool designed to help policy makers find and evaluate evidence (37). This tool will integrate the views of epidemiologists and policy makers on observational evidence and provide policy makers with the skills needed to understand and critically appraise research, which is a specific practice of KT (3).

This study has some limitations. First, the sample size was small and only a few organizations within a single Australian provincial-level jurisdiction were surveyed, so the findings may not generalize more widely. Second, due to the survey design, we could not analyze how policy makers’ level of research training affected their use of scientific research. For example, it is possible that those with a specific health-related master’s or postgraduate degree are more likely to use peer-reviewed literature. Despite these limitations, this study gathered data on a process about which little is known or understood. Furthermore, it used different methodologies in order to gain a more comprehensive understanding of the issue, and it involved organizations from both government and non-government sectors.

Recommendations

For Policy Makers

To facilitate the use and assessment of academic research in the policy-making process, we have four recommendations. The first is to build policy makers’ capacity to appraise evidence through strategies such as training and participation in internships. This recommendation is based on our finding that policy makers did not have a great understanding of what makes good-quality evidence, nor did they use a standardized way of evaluating evidence. As only a small number of policy makers in this study referred to academic sources, the second recommendation is to ensure that policy makers can access robust sources of scientific evidence, for example online peer-reviewed journals. Third, because the policy and scientific processes occur on different time scales, which policy makers in this study cited as a barrier to using robust evidence, the sharing of evidence between researchers and public servants should be facilitated through new channels and ways of conducting business. This is particularly important for health issues for which the scientific data may vary substantially over time. Finally, we recommend developing mechanisms through which scientists with specific expertise are invited into a particular department for a “scientific chat” to openly discuss planned policies. This would be particularly useful in cases where commissioning new research would take too long but where substantial “soft” evidence is already available in the scientific field.

For Researchers

Based on our findings that policy makers cited researchers and existing academic research as among the most difficult evidence sources to understand, and that a barrier to using robust evidence was research not being applicable to current policies, we have three recommendations for researchers. The first is to build awareness among researchers producing policy-relevant material that this information cannot be communicated exclusively through typical scientific dissemination processes (e.g., conference presentations, peer-reviewed publications). Furthermore, academic research with policy-relevant material should include a clearly flagged policy-relevant section that policy makers can easily locate, with language and statistics tailored to make them usable by policy makers. Second, researchers should be trained in producing and effectively communicating policy-relevant material in scientific research. Finally, forums where scientists and policy makers can interact should be established, and their viability and effectiveness demonstrated.

Conclusion

This study has found that neither policy makers nor epidemiologists use grading systems to evaluate evidence; rather, each has their own way of assessing it. Both policy makers and epidemiologists recognized that RCTs were usually at the top of evidence hierarchies, but that RCTs were not always the most efficient or applicable evidence upon which to base population health policies, and that there were some problems with RCT designs. Policy makers in this study demonstrated a good understanding that they need an evidence base, that it is an important part of the process, and that it justifies the policy. However, the time and resources to form that evidence base, as well as an understanding of what constitutes good evidence and how to evaluate it, were lacking. This study is limited by its small sample size; however, combining in-depth interviews with the web survey provided richer, and often conflicting, information than previous research based on survey data alone. Finally, this study focused on the use of observational evidence and interviewed only one type of public health researcher, academic epidemiologists. Consequently, the authors have not examined the use of intervention research, which provides direct evidence on how to produce change and which may be more relevant to policy makers (38). Findings from this study demonstrate that scientific information needs to be more systematically available to policy makers and that efforts should be directed toward increasing communication between researchers and policy makers.

Author Contributions

The study concept and design were developed by KA, NC, and PK. All authors contributed to the analysis and interpretation of data and to drafting the manuscript. LJ conducted all statistical analyses. All authors have read the final paper and have agreed to be listed as authors.

Conflict of Interest Statement

PK is employed by ACT Health Directorate as a policy maker, a co-investigator in the study as well as being an author. The other authors declare no conflict of interest.

Acknowledgments

The authors wish to acknowledge the project partner from Alzheimer’s Australia, Dr Chris Hatherly, and Professor Gabriele Bammer for their contribution to the survey and interview questions. The authors also thank the study interviewers, Michelle Irving and Karemah Francois, and the study participants.

Funding

This study is supported by the Australian Research Council Centre of Excellence in Population Ageing Research (project number CE110001029) and the Australian Research Council Linkage Project (Project ID LP120200609). NC is funded by ARC Fellowship (#FT120100227). KA is funded by NHMRC Fellowship (#1002560).

Supplementary Material

The Supplementary Material for this article can be found online at http://journal.frontiersin.org/article/10.3389/fpubh.2016.00267/full#supplementary-material.

Abbreviations

ACT, Australian Capital Territory; EBM, evidence-based medicine; HREC, Human Research Ethics Committee; KT, knowledge translation; RCT, randomized controlled trial; NGO, non-government organization.

References

1. Zardo P, Collie A. Measuring use of research evidence in public health policy: a policy content analysis. BMC Public Health (2014) 14(1):496. doi:10.1186/1471-2458-14-496

2. Graham ID, Logan J, Harrison MB, Straus SE, Tetroe J, Caswell W, et al. Lost in knowledge translation: time for a map? J Contin Educ Health Prof (2006) 26(1):13–24. doi:10.1002/chp.47

3. Jacobson N, Butterill D, Goering P. Organizational factors that influence university-based researchers’ engagement in knowledge transfer activities. Sci Commun (2004) 25(3):246–59. doi:10.1177/1075547003262038

4. Cvitanovic C, Hobday AJ, van Kerkhoff L, Wilson SK, Dobbs K, Marshall NA. Improving knowledge exchange among scientists and decision-makers to facilitate the adaptive governance of marine resources: a review of knowledge and research needs. Ocean Coastal Manag (2015) 112:25–35. doi:10.1016/j.ocecoaman.2015.05.002

5. Bowen S, Zwi A, Sainsbury P. What evidence informs government population health policy? Lessons from early childhood intervention policy in Australia. N S W Public Health Bull (2005) 16(11–12):180. doi:10.1071/NB05050

6. Oliver K, Lorenc T, Innvaer S. New directions in evidence-based policy research: a critical analysis of the literature. Health Res Policy Syst (2014) 12(1):34. doi:10.1186/1478-4505-12-34

7. Haynes AS, Derrick GE, Chapman S, Redman S, Hall WD, Gillespie J, et al. From “our world” to the “real world”: exploring the views and behaviour of policy-influential Australian public health researchers. Soc Sci Med (2011) 72(7):1047–55. doi:10.1016/j.socscimed.2011.02.004

8. de Goede J, van Bon-Martens MJ, Putters K, van Oers HA. Looking for interaction: quantitative measurement of research utilization by Dutch local health officials. Health Res Policy Syst (2012) 10(1):9. doi:10.1186/1478-4505-10-9

9. Head B, Ferguson M, Cherney A, Boreham P. Are policy-makers interested in social research? Exploring the sources and uses of valued information among public servants in Australia. Policy Soc (2014) 33:89–101. doi:10.1016/j.polsoc.2014.04.004

10. Innvaer S, Vist G, Trommald M, Oxman A. Health policy-makers’ perceptions of their use of evidence: a systematic review. J Health Serv Res Policy (2002) 7(4):239–44. doi:10.1258/135581902320432778

11. Oliver K, Innvar S, Lorenc T, Woodman J, Thomas J. A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res (2014) 14(1):2. doi:10.1186/1472-6963-14-2

12. Orton L, Lloyd-Williams F, Taylor-Robinson D, O’Flaherty M, Capewell S. The use of research evidence in public health decision making processes: systematic review. PLoS One (2011) 6(7):e21704. doi:10.1371/journal.pone.0021704

13. Petticrew M, Whitehead M, Macintyre SJ, Graham H, Egan M. Evidence for public health policy on inequalities: 1: the reality according to policymakers. J Epidemiol Community Health (2004) 58(10):811–6. doi:10.1136/jech.2003.015289

14. Oliver KA, de Vocht F. Defining ‘evidence’ in public health: a survey of policymakers’ uses and preferences. Eur J Public Health (2015) 1–6. doi:10.1093/eurpub/ckv082

15. Ritter A. How do drug policy makers access research evidence? Int J Drug Policy (2009) 20:70–5. doi:10.1016/j.drugpo.2007.11.017

16. Leigh A. What evidence should social policymakers use? Econ Round-up (2009) 1:27–43. doi:10.2139/ssrn.1415462

17. Atkins D, Best D, Briss PA, Eccles M, Falck-Ytter Y, Flottorp S, et al. Grading quality of evidence and strength of recommendations. BMJ (2004) 328(7454):1490–4. doi:10.1136/bmj.328.7454.1490

18. Bowen S, Zwi AB. Pathways to “evidence-informed” policy and practice: a framework for action. PLoS Med (2005) 2(7):e166. doi:10.1371/journal.pmed.0020166

19. Petticrew M, Roberts H. Evidence, hierarchies, and typologies: horses for courses. J Epidemiol Community Health (2003) 57(7):527–9. doi:10.1136/jech.57.7.527

20. Brownson RC, Royer C, Ewing R, McBride TD. Researchers and policymakers: travelers in parallel universes. Am J Prev Med (2006) 30(2):164. doi:10.1016/j.amepre.2005.10.004

21. West SG, Duan N, Pequegnat W, Gaist P, Des Jarlais DC, Holtgrave D, et al. Alternatives to the randomized controlled trial. Am J Public Health (2008) 98(8):1359–66. doi:10.2105/AJPH.2007.124446

22. Williams B. Perils of evidence-based medicine. Perspect Biol Med (2010) 53(1):106–20. doi:10.1353/pbm.0.0132

23. Brownson RC, Chriqui JF, Stamatakis KA. Understanding evidence-based public health policy. Am J Public Health (2009) 99(9):1576–83. doi:10.2105/AJPH.2008.156224

24. Marston G, Watts R. Tampering with the evidence: a critical appraisal of evidence-based policy-making. Drawing Board: Aust Rev Public Aff (2003) 3(3):143–63.

25. Matanoski GM. Conflicts between two cultures: implications for epidemiologic researchers in communicating with policy-makers. Am J Epidemiol (2001) 154(12 Suppl):S36–42. doi:10.1093/aje/154.12.S36

26. DiPietro NA. Methods in epidemiology: observational study designs. Pharmacotherapy (2010) 30(10):973–84. doi:10.1592/phco.30.10.973

27. Tang JL, Griffiths S. Review paper: epidemiology, evidence-based medicine, and public health. Asia Pac J Public Health (2009) 21(3):244–51. doi:10.1177/1010539509335516

28. Davis FG, Peterson CE, Bandiera F, Carter-Pokras O, Brownson RC. How do we more effectively move epidemiology into policy action? Ann Epidemiol (2012) 22(6):413–6. doi:10.1016/j.annepidem.2012.04.004

29. Dreyer NA, Tunis SR, Berger M, Ollendorf D, Mattox P, Gliklich R. Why observational studies should be among the tools used in comparative effectiveness research. Health Aff (2010) 29(10):1818–25. doi:10.1377/hlthaff.2010.0666

30. QSR International Pty Ltd. NVivo Qualitative Data Analysis Software. 10 ed. Melbourne: QSR International (2012).

31. Qualtrics. Qualtrics Software. November 2013 Ed. Provo, UT: Qualtrics (2013).

32. Smith KE, Stewart E. ‘Black magic’ and ‘gold dust’: the epistemic and political uses of evidence tools in public health policy making. Evid Policy (2015) 11(3):415–38. doi:10.1332/174426415X14381786400158

33. Haynes AS, Gillespie JA, Derrick GE, Hall WD, Redman S, Chapman S, et al. Galvanizers, guides, champions, and shields: the many ways that policymakers use public health researchers. Milbank Q (2011) 89(4):564–98. doi:10.1111/j.1468-0009.2011.00643.x

34. Vogel JP, Oxman AD, Glenton C, Rosenbaum S, Lewin S, Gülmezoglu AM, et al. Policymakers’ and other stakeholders’ perceptions of key considerations for health system decisions and the presentation of evidence to inform those considerations: an international survey. Health Res Policy Syst (2013) 11(1):19. doi:10.1186/1478-4505-11-19

35. Campbell DM, Redman S, Jorm L, Cooke M, Zwi AB, Rychetnik L. Increasing the use of evidence in health policy: practice and views of policy makers and researchers. Aust New Zealand Health Policy (2009) 6(1):21. doi:10.1186/1743-8462-6-21

36. Samet JM, Lee NL. Bridging the gap: perspectives on translating epidemiologic evidence into policy. Am J Epidemiol (2001) 154(12 Suppl):S1–3. doi:10.1093/aje/154.12.S1

37. Centre for Research on Ageing Health and Wellbeing. Learning to Evaluate Evidence for Policy. Canberra: Australian National University (2015). Available from: http://leep.anu.edu.au

38. Sanson-Fisher RW, Campbell EM, Perkins JJ, Blunden SV, Davis BB. Indigenous health research: a critical review of outputs over time. Med J Aust (2006) 184(10):502–5.

Keywords: policy making, knowledge translation, evidence-based practice, government, mixed-methods research

Citation: O’Donoughue Jenkins L, Kelly PM, Cherbuin N and Anstey KJ (2016) Evaluating and Using Observational Evidence: The Contrasting Views of Policy Makers and Epidemiologists. Front. Public Health 4:267. doi: 10.3389/fpubh.2016.00267

Received: 09 September 2016; Accepted: 14 November 2016;
Published: 06 December 2016

Edited by:

Alastair James Fischer, National Institute for Health and Care Excellence (NICE), UK

Reviewed by:

Emmanuel D. Jadhav, Ferris State University, USA
Janya McCalman, Central Queensland University, Australia

Copyright: © 2016 O’Donoughue Jenkins, Kelly, Cherbuin and Anstey. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Lily O’Donoughue Jenkins, lily.jenkins@anu.edu.au

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.