
BRIEF RESEARCH REPORT

Front. Public Health, 27 November 2025

Sec. Digital Public Health

Volume 13 - 2025 | https://doi.org/10.3389/fpubh.2025.1699312

This article is part of the Research Topic: Artificial Intelligence in Public Health: Advancing Multidisciplinary Applications for Population Health.

Usability and usefulness of U.S. federal and state public health data dashboards: implications for improving data access and use

  • 1School of Communication & Information, Rutgers University New Brunswick, New Brunswick, NJ, United States
  • 2Florida State University School of Information, Tallahassee, FL, United States
  • 3School of Information and Library Science, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States

Introduction: Dashboards that afford timely access to credible, relevant, and actionable data can significantly improve public health decision-making at all levels. As dashboards become more ubiquitous, it is imperative to proactively consider how they may be optimally designed to be usable and useful to their users.

Methods: A cluster probability sample of U.S. federal and state public health dashboards (N = 210) was used to describe and compare common design elements and data characteristics of dashboards. A standardized, valid, and reliable instrument was used to extract data for assessing dashboards’ usability and usefulness.

Results: Dashboards are primarily designed for epidemiological surveillance and assessing disparities. Both federal and state dashboards rely heavily on data collected by federal agencies, but many state dashboards also draw on local data. Vulnerable subpopulations are underrepresented in the data used in dashboards. Federal dashboards score higher than state dashboards on usability but are comparable in usefulness. About one-third of state dashboards are hosted on third-party platforms and are prone to access disruptions.

Conclusion: The usability and usefulness of public health dashboards can be significantly enhanced by streamlining users’ experience and incorporating additional customization and analytical affordances. A uniform set of best practices and standards for optimizing dashboard design and implementation does not yet exist, as research on this topic is lagging. Policy implications: Additional federal and state investments are needed to build and maintain a robust infrastructure for developing, improving, and sustaining public health dashboards and to incentivize rigorous, theory-grounded research to optimize the usability and usefulness of dashboards.

1 Introduction

Timely access to credible, relevant, and actionable data is essential to making evidence-informed decisions in public health (1, 2). This requires a robust public health data infrastructure that supports critical public health functions such as epidemiological surveillance, coordinated response to health risks, designing policies that improve population health, building a skilled public health workforce, and communicating health information effectively to diverse audiences (3, 4). In this context, data dashboards are increasingly touted as a cost-effective means of supporting evidence-informed public health decision-making (5, 6). In theory, dashboards can afford timely, convenient, and near-universal access to public health data, transform complex data into intuitive information displays, and allow users the flexibility of exploring data on their own to answer questions of relevance to them (7, 8). They are also recognized by some for their evidence-democratizing potential, namely supporting a process of widening access to the creation, use, and interpretation of research and data to include diverse and underrepresented stakeholders (7, 9, 10).

The current landscape of public health data dashboards is rapidly expanding to a broad range of domains, applications, and stakeholders (11–14), prompting interest in the potential utility and public health impact of these tools (5, 6, 8). As use of dashboards in public health decision-making becomes more ubiquitous, it is imperative to map and analyze the current landscape of public health dashboards and to proactively consider how they may be optimally designed to be usable and useful to a range of users (8, 15). The available research on this topic is limited, drawing primarily on findings of selective case studies of dashboards designed by researchers in academic settings, which therefore inadequately represent the significantly larger population of dashboards created and deployed by national, state, and local public health agencies (16, 17). Local and state public health agencies may not have the same access to the resources and expertise needed to create, deploy, and sustain dashboards as federal agencies do and may also diverge in terms of the public health issues and types of data prioritized in dashboards. In addition, systematic and rigorous assessments of the usability and usefulness of public health dashboards are scarce, lacking a standard, theory-grounded conceptualization of usability and usefulness as well as empirically tested, valid, and reliable measures of both constructs (17, 18). To begin addressing this gap, this study systematically mapped and analyzed the current landscape of public health data dashboards in the U.S. using a probability sample of dashboards created by federal and state health agencies. Responding to repeated calls in the field (16, 19, 20), it was intentionally designed to assess and compare the usability and usefulness of federal and state dashboards using theory-grounded, valid, and reliable measures that were tested and refined iteratively to improve reproducibility. The purpose of the analysis is to identify critical gaps in current practice and propose modifications and enhancements that advance and normalize the use of dashboards as a means for public health decision makers at all levels to acquire and use evidence-based, actionable knowledge.

2 Method

2.1 Population and sampling

A ‘public health data dashboard’ was operationalized as a publicly accessible, active, and interactive web-based data visualization tool for presenting and analyzing public health data. This definition includes dashboards designed for presenting population-level health data (e.g., vital statistics, epidemiological and risk surveillance, environmental hazards, access or utilization of health services, health disparities, health policy, and health-related public opinion data), but excludes dashboards used in clinical settings and patient care.

Previous studies used a range of strategies to sample public health dashboards, including Google searches (21), purposive sampling (22), expert nomination (15), or combinations of these strategies (23). Since random sampling of dashboards is not feasible (24), the alternative is to conduct targeted web searches for dashboards using a two-step cluster sampling strategy that treats top-level web domains used by official federal and state public health agencies (e.g., ‘cdc.gov’) as homogenous clusters and then randomly samples a fixed number of data dashboards within each cluster (25). Applying this strategy, we compiled a list of top-level web domains used by federal public health agencies and the health departments of all 50 states and five territories. We then conducted a manual web search and screening of URLs of dashboards in each top-level domain (e.g., “site:cancer.gov AND dashboard*”). This yielded a sampling frame of active federal (n = 358) and state (n = 2,158) dashboards retrieved in July 2024. Next, we employed a previously validated two-stage cluster sampling strategy to draw a probability sample of dashboards (26). This procedure involved first randomly selecting 30 clusters (top-level web domains) and then randomly selecting seven dashboards within each cluster to generate a probability sample of 210 federal and state public health dashboards (see Supplementary material for the list). All sampled dashboards were initially accessed in July 2024, reaccessed in March 2025, and again in July 2025 to verify that they remained publicly accessible and active, given concerns raised in the research literature regarding the potential adverse impact of limited resources and funding on dashboards’ sustainability (6, 7, 17).
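For illustration, the two-stage cluster sampling procedure described above can be sketched in a few lines of Python. This is a minimal sketch over a hypothetical sampling frame: the domain and URL names below are invented placeholders, not the study’s actual frame of 358 federal and 2,158 state dashboards.

```python
import random

# Hypothetical sampling frame: {top-level domain: [dashboard URLs]}.
# The real frame was compiled from manual web searches of agency domains.
frame = {
    f"agency{i}.gov": [f"https://agency{i}.gov/dash/{j}" for j in range(40)]
    for i in range(60)
}

random.seed(2024)  # fixed seed so the draw is reproducible

# Stage 1: randomly select 30 clusters (top-level web domains).
clusters = random.sample(sorted(frame), k=30)

# Stage 2: randomly select seven dashboards within each selected cluster.
sample = [url for c in clusters for url in random.sample(frame[c], k=7)]

print(len(sample))  # 30 clusters x 7 dashboards = 210
```

Because clusters are drawn first and a fixed number of units is drawn within each, every dashboard’s selection probability can be computed from its cluster’s size, which is what makes the resulting sample a probability sample.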

2.2 Data extraction

The dashboard coding instrument was developed, tested, and refined iteratively based on variables and measures adopted from similar studies that analyzed public health data dashboards (15, 16, 20, 27) and a recent scoping review of dashboard usability and usefulness measures (17). The instrument was reviewed by members of the study’s advisory group, all leading experts in the design and use of public health data dashboards, and further refinements were made based on their feedback to improve validity. Next, five coders trained on the application of the coding instrument independently coded a random subsample of 20 dashboards to assess the degree of agreement across all coding decisions (N = 260), and intercoder reliability was assessed using Krippendorff’s alpha, which corrects for chance agreement (28). The calculated agreement coefficient (α = 0.87) exceeded the threshold of acceptable reliability (α = 0.75). Each coder was then assigned a random subsample of dashboards to code independently, and a quality control procedure was implemented by senior members of the research team to detect and correct potential errors. The coding instrument was designed to extract information regarding observable characteristics of dashboards (e.g., topics, sources and types of data used, and populations represented in the data). Usability and usefulness affordances were assessed heuristically by having coders rate adherence to recommended usability and usefulness principles (29, 30). Usability affordances coded included accessibility, ease of navigation, interactivity, data visualization, and availability of technical assistance. Key usefulness affordances assessed included (a) credibility-related information or cues (e.g., trust certificates); (b) explicit identification of the target audience or intended users; (c) data customization affordances (e.g., data disaggregation or layering options); (d) data affordances (i.e., the types of different questions that can be answered by the data); (e) analytical affordances (the range and complexity of data analyses users can conduct); and (f) translational or interpretation affordances (features such as data storytelling that can facilitate user comprehension or ability to generate useful insights) (13, 19). A copy of the coding instrument is available in the Supplementary material.
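The chance-corrected agreement statistic used above can be computed directly from the coded data. Below is a minimal pure-Python sketch of Krippendorff’s alpha for nominal data via the standard coincidence-matrix formulation; the coder judgments shown are hypothetical, not the study’s actual reliability data.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.
    units: one list of coder judgments per coded item
    (items with fewer than two judgments are skipped)."""
    o = Counter()  # coincidence matrix: o[(c, k)] = weighted count of c-k value pairs
    for vals in units:
        m = len(vals)
        if m < 2:
            continue
        for i, j in permutations(range(m), 2):
            o[(vals[i], vals[j])] += 1.0 / (m - 1)
    n_c = Counter()  # marginal totals per category
    for (c, _), w in o.items():
        n_c[c] += w
    n = sum(n_c.values())
    d_o = sum(w for (c, k), w in o.items() if c != k) / n  # observed disagreement
    d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n * (n - 1))
    return 1.0 - d_o / d_e if d_e else 1.0

# Hypothetical judgments from five coders on six items (codes A/B/C).
items = [list("AAAAA"), list("BBBBB"), list("AABAA"),
         list("CCCCB"), list("AAAAB"), list("BBBBB")]
alpha = krippendorff_alpha_nominal(items)
print(round(alpha, 3))  # 0.681
```

Unlike simple percent agreement, the statistic subtracts the disagreement expected by chance (d_e), which is why near-unanimous coding can still yield an alpha well below 1.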

2.3 Statistical analyses

Given the study’s aims, data analyses focused on describing and comparing key characteristics of national and state public health dashboards to assess similarities and differences in the use, usability, and usefulness of these tools that may be due to variations in the resources available to design, implement, and sustain dashboards as well as variations between national and local public health priorities. Variables were converted to multiple-response items prior to generating descriptive statistics. Categories of multiple-response items are not mutually exclusive and therefore violate the assumptions of standard statistical tests of differences between federal and state dashboards. For this reason, each response option on these variables was first recoded into a dummy variable, and a Pearson’s chi-square test was used to assess the statistical significance of differences between federal and state dashboards on that variable. All statistical analyses were performed using IBM SPSS version 29.
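Each dummy-variable comparison reduces to a Pearson chi-square test on a 2×2 table (federal vs. state × feature present vs. absent). The stdlib-only sketch below illustrates the computation; the cell counts are illustrative reconstructions from reported percentages, not the study’s raw data, and the analyses themselves were run in SPSS rather than Python.

```python
from math import erfc, sqrt

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square test (df = 1) on a 2x2 table laid out as:
                 feature present   feature absent
        federal        a                 b
        state          c                 d
    Returns (statistic, p_value)."""
    n = a + b + c + d
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    stat = sum((o - e) ** 2 / e for o, e in zip((a, b, c, d), expected))
    p = erfc(sqrt(stat / 2))  # survival function of chi-square with 1 df
    return stat, p

# Illustrative counts only: roughly 55% of 58 federal vs. 9% of 152 state
# dashboards claiming ADA compliance (exact cell counts are not reported).
stat, p = chi_square_2x2(32, 26, 14, 138)
print(round(stat, 2), p < 0.001)  # 51.84 True
```

For a single degree of freedom, the chi-square survival function has the closed form erfc(√(x/2)), so no statistics library is needed to obtain the p-value.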

3 Results

Of the 210 dashboards sampled, 58 (28%) are federal and 152 (72%) are state dashboards. Eighty-five percent of all dashboards were created between 2017 and 2024, and 50% since 2022, demonstrating that the use of public health dashboards is proliferating. The dashboards sampled are used for visualizing data on diverse health topics, which we group according to major public health focus areas. As shown in Figure 1, about one-third of all dashboards sampled are disease- or condition-specific (e.g., cancer, diabetes, influenza), and 13% are focused on risky health behaviors (e.g., drug use and tobacco consumption). Beyond surveillance, dashboards are also commonly used for sharing health services data regarding child and maternal care, emergency care, injury prevention, preventive care, and indicators of access to or quality of care. By comparison, a very small percentage of public health dashboards are designed to track trends in health policy, workforce development, and social determinants of health. Most dashboards (80%) do not include explicit information regarding the identity of the dashboard’s creator or source of funding, an omission that has been repeatedly shown to undermine user trust in these tools (10).

Figure 1
Bar chart showing percentages of various health topics. Disease/Condition leads with 29.5%, followed by Risky Behavior at 13.3%. Child/Maternal Health and System Performance/Preparedness are both at 10%. Other categories include Emergency Care at 8.1%, Injury Prevention at 7.6%, Access/Quality of Care at 7.1%, and several others with lower percentages, such as Health Workforce at 3.3% and Social Determinants of Health at 1.9%. Health Policy and Healthy Living Indicators are both at 0.5%.

Figure 1. Distribution of health topics in U.S. federal and state public health data dashboards, 2024–2025 (N = 210).

Regarding access, most dashboards sampled (73%) are hosted on official federal or state health departments’ websites; however, 36% of state dashboards are hosted on third-party platforms (e.g., Tableau or Microsoft Power BI), which raises a potential concern regarding the accessibility and sustainability of these dashboards if financial or other circumstances change. Indeed, 31 (20%) of the state dashboards in the sample were temporarily inaccessible when we attempted to reaccess them in March or July 2025, either due to technical issues or because the original URL had changed (see Supplementary material). However, similar concerns can be raised regarding reliable access to federal government dashboards. When reaccessed in March 2025, 17 of 58 (30%) federally maintained dashboards were inaccessible (i.e., removed or archived to comply with President Trump’s executive orders) (31), specifically dashboards visualizing COVID-19, HIV/AIDS, and health inequities-related data. Access to eight (47%) of these dashboards had been restored by July 2025, although some were altered to restrict data analyses by sex and race/ethnicity and others now require users to register before gaining access to the dashboard.

3.1 Characteristics of data used in national and state public health dashboards

Table 1 compares the characteristics of data used in federal and state public health dashboards. Data collected by the federal government are used in over half of all dashboards sampled, including 98% of all national dashboards and 37% of all state dashboards. Most state dashboards (91%) additionally or exclusively use data collected by state health departments. Only a fraction of all dashboards (4%) incorporated local (i.e., county or municipal) health data, as such data may not be readily available. Publicly available data from other sources (e.g., health maintenance organizations and philanthropic foundations) are rarely used in either federal or state dashboards.

Table 1. Characteristics of data used in U.S. federal and state public health data dashboards, 2024–2025.

Epidemiological and health services data are the most common types of data used in public health dashboards (44% and 33% of all dashboards, respectively) since both are routinely collected via existing data collection systems. Dashboards also commonly utilize risk behavior and health outcomes surveillance data (12.4% and 11.4% of all dashboards, respectively), with the latter primarily used in so-called “disparity dashboards,” or dashboards that track indicators of health disparities (32). Federal dashboards are significantly more likely to use performance or preparedness-related data (21% compared to 9% of state dashboards) given stronger mandates concerning the collection and reporting of such data at the federal level. In contrast, state dashboards are significantly more likely to utilize emergency care-related data (21% compared to 7% of federal dashboards), although this difference may be artificially inflated due to states’ increased use of dashboards to track opioid-related overdoses and emergency room visits to guide local response efforts. State dashboards are also more likely to include environmental risk data (35.5% compared to 25.4% of federal dashboards), which may reflect differences in the priorities of federal and state governments. Lastly, very few public health dashboards (less than 4%) utilize public attitudes or social determinants of health data, despite their relevance to public health. This may be due to the cost and complexity of collecting such data, but also because it is not always clear how such data may be actionable from a public health perspective (33).

The findings in Table 1 also reveal important insights regarding the representation of different populations in data used by federal and state dashboards. Federal and state dashboards routinely use general population and/or patient data. However, data collected from providers are more frequently used in federal dashboards than in state dashboards (12% compared to 0.7% of dashboards, respectively), perhaps because states’ regulatory authority to collect and use provider data varies significantly. Federal dashboards are also slightly more likely than state dashboards to incorporate administrative and other data collected on healthcare organizations, health services, and health records or claims. Still, the most notable pattern of findings emerges regarding the representation of vulnerable populations. Data absenteeism, i.e., the chronic absence or limits of data collected on groups experiencing social vulnerabilities, is a major obstacle to both accurately documenting and addressing health inequities (34), and only 22% of all dashboards sampled utilized data specific to vulnerable populations. State dashboards were significantly more likely than federal dashboards to incorporate such data (27.6% vs. 8.6%, respectively), with about a third focusing exclusively on infant and child health indicators and 5% on data collected on Medicaid recipients. By comparison, dashboards that include data specific to other vulnerable groups, such as older adults, people experiencing poverty and/or homelessness, persons who are incarcerated, and persons identifying as LGBTQ+, are scarce (less than 3% of dashboards).

3.2 Usability of national and state public health dashboards

Table 2 summarizes findings regarding the usability of public health data dashboards, specifically accessibility, ease of navigation, interactivity, data visualization, and availability of technical assistance. Only 22% of the dashboards sampled (55% of federal dashboards and 9% of state dashboards) claim to be ADA-compliant, raising concerns about the degree to which many existing dashboards are accessible to individuals with disabilities. Ease of navigation also improves when dashboards are accessible via a central (parent) webpage that allows users to access multiple dashboards on the same or different health topics (27). Our analysis reveals that most current dashboards (57%) are only accessible via a standalone webpage, although state dashboards are more likely to utilize multi-dashboard landing pages (47% compared to 31% of federal dashboards), potentially making them easier for users to access and navigate. This difference appears to be primarily due to federal dashboards being independently created by federal health agencies for topics that fall under their specific purview, which is not the case for dashboards designed and maintained by state health departments.

Table 2. Usability of U.S. federal and state public health data dashboards, 2024–2025.

Regarding interactivity, the findings in Table 2 suggest that federal dashboards are generally more interactive than state dashboards on most benchmarks assessed, including options to download data, download a visualization, and provide user feedback. However, virtually none of the federal and state dashboards sampled offers users the option to generate visualizations tailored to their specific questions or information needs, despite such options being feasible to implement using modern dashboard design technology.

Lastly, whereas virtually all federal and state dashboards appropriately utilize a range of data visualization tools (e.g., tables, graphs, and maps), they are often lacking in technical support. One-quarter of the dashboards analyzed do not include any technical support information, and only about a third include contact information for inquiries. In addition, whereas most federal and state dashboards (86% and 66%, respectively) include use instructions on the website, only a handful, mostly federal dashboards, include links to user manuals, training resources, or examples of how the dashboard may be used.

3.3 Usefulness of national and state public health dashboards

Table 3 compares usefulness-related affordances of federal and state public health dashboards. Dashboards are presumed to be useful if they provide users with timely, relevant, credible, and actionable information. As shown in Table 3, trust certificates are included in 97% of all federal dashboards compared to 42% of state dashboards, as dashboards created by federal agencies are required to carry an official trust certificate. Still, most federal and state dashboards (85%) do not explicitly identify the intended users of a dashboard or explain how and for what purpose they may use this tool, which may discourage some potential users.

Table 3. Usefulness of U.S. federal and state public health data dashboards, 2024–2025.

Common data affordances of public health dashboards include epidemiological surveillance (60% of all dashboards), risk surveillance (16.7%), detection of gaps or disparities in the utilization of health services such as screening and vaccination (14.8%), and monitoring the performance of health systems on indicators such as hospitalizations (13%). State dashboards skew toward epidemiological and risk surveillance, whereas federal dashboards are additionally used to track and assess health systems performance. Other potential data affordances, such as tracking health policy, social determinants of health data, and projecting trends, are less common, although relevant pools of data are increasingly available (14). Customization affordances of dashboards follow a similar pattern: most federal and state dashboards allow users to compare distributions of variables or indicators across demographic groups (65% of all dashboards), geographical areas (78.5%), and time (75%), but significantly fewer offer users the option to disaggregate data by other relevant factors such as health insurance status (6.2%), risk profile (12.4%), or neighborhood characteristics (17%). Consequently, all dashboards analyzed can support descriptive data analyses, but only a handful can support more advanced data analyses, such as multivariate analysis (11%), choice analysis (8%), or predictive modeling (4%). This significantly limits the ability of users, particularly those with advanced data literacy and analysis skills, to analyze data in context or conduct causal analyses. Given this, it is reassuring that virtually all dashboards analyzed include a brief explanation or a disclaimer regarding data limitations and many (58%) use visual techniques to highlight major insights for users. Most (75%) also include links to external sources of additional information about relevant health topics or data for users who seek deeper insights, although the use of data storytelling techniques remains scarce.

4 Discussion

Public health data dashboards have a significant potential to improve data access and use in health decision-making at the national, state, and local levels (5, 6, 8). Our findings demonstrate that dashboards are already being used extensively by federal and state public health agencies to equip health decision makers and the public with timely, relevant, credible, and useful data across a wide spectrum of public health issues and applications. However, our mapping and analysis of the current landscape of public health dashboards reveals that several critical challenges remain to realizing the full potential of these tools. A primary challenge is the lack of a standard approach to the design and implementation of usable and useful dashboards. Our findings point to significant differences in how, for whom, and for what purpose public health dashboards are created as well as in their relative ease of access and use for different users. Dashboards created by federal agencies are more likely to conform to uniform design standards than dashboards created by state public health departments, but the process of designing and implementing dashboards does not appear to follow a systematic strategy focused on maximizing usability and usefulness. This likely reflects the current state of research and evidence to guide optimal design and implementation of dashboards, which remains mostly disjointed, lacking firm grounding in theories or frameworks that logically link usability and usefulness affordances to user experience and learning, and is limited to insights obtained from descriptive case studies as opposed to rigorous evaluations (13, 16–18).

A second challenge involves ensuring routine, reliable, and sustained access to these tools. We find that access to both federal and state public health data dashboards may be disrupted and become unreliable, but for different reasons: state dashboards rely heavily on third-party platforms for designing and hosting dashboards and are susceptible to disruptions caused by licensing or technical issues, whereas federal dashboards are susceptible to disruptions due to changes in federal policies and mandates regarding data collection and sharing. Public health data are a critical public good, and federal government dashboards are especially susceptible to the impacts of disappearing or altered public health data (35). Overall, robust access to data via dashboards is necessary to ensure these tools are usable, useful, and used consistently for supporting evidence-informed public health decisions at all levels.

Third, our findings highlight important gaps regarding the types, scope, and nature of data used in public health dashboards. To begin with, the data and indicators most frequently used in dashboards emphasize infectious disease, chronic conditions, mortality, and risk factor exposures, with less attention to upstream factors that impact population health such as social determinants of health, a changing climate, and public health workforce development and training. Further, the use of unstructured data from alternative sources (e.g., social media or sensors) is scarce despite their potential to provide critical insights regarding lived experiences and the flow of health information. Moreover, most public health data dashboards may be perpetuating the chronic underrepresentation of members of socially vulnerable populations in public health data by not explicitly acknowledging inequitable representation and cautioning readers about the limitations of inference from such data. Finally, data collected by federal agencies are generally less granular than data collected by state health departments, which has implications for using national data for public health decision-making at the local level. Dashboards that pull and aggregate comparable data across state dashboards have the potential to overcome this issue.

Lastly, dashboards are most useful if they present data in context, yet most of the federal and state dashboards analyzed do not include features such as data storytelling or annotations that offer important insights regarding context, although there are positive signs of dashboard designers moving in this direction. We are not suggesting that such features ought or need to be incorporated into all dashboards—for example, context may matter more for strategic decisions than for operational ones—but rather propose that the potential value of including such features be considered in the design process.

4.1 Study limitations

This study utilized a cluster probability sample of federal and state public health dashboards in the U.S. and therefore provides a more complete and nuanced representation of this landscape than that generated by similar studies that relied on purposive samples. In addition, this study employed a theory-grounded, valid, and reliable data extraction instrument with a particular focus on assessing usability and usefulness. The most notable limitations of this study are the exclusion of public health dashboards available from sources other than federal and state health agencies (e.g., foundations, health insurers, universities, and news media) and the fact that the usability and usefulness of dashboards were not directly assessed based on actual users’ feedback but rather indirectly, by examining usability and usefulness affordances. Despite these limitations, the study offers valuable insights regarding gaps and opportunities for realizing the full potential of data dashboards in improving and advancing evidence-informed public health decision-making.

4.2 Policy and practice implications

Federal and state agencies have a critical role in the collection, curation, analysis, and dissemination of data that public health stakeholders rely on for making decisions, and data dashboards are increasingly used for connecting users with such data. Realizing the full potential of dashboards to provide timely, relevant, credible, and actionable insights for informing sound policy and practice requires convergence on a common set of standards and best practices at the federal and state levels for guiding the design and implementation of usable and useful dashboards. It is critical that such standards and best practices be evidence-based and emerge from collaborations between designers, users, and intermediaries, which calls for additional investments in theoretically grounded and methodologically rigorous research to clarify and advance the science underlying dashboard design. Additional investments are also needed to support all users of these tools by providing adequate training, technical assistance, and improved customer support. Beyond that, greater consideration ought to be paid in the design and implementation process to ensuring data equity, interoperability, transparency, and governance—all of which are critical for building trust in data and evidence—as well as to instituting quality control and performance benchmarks for guiding improvements.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors upon request, without undue reservation.

Author contributions

IY: Writing – original draft, Formal analysis, Supervision, Validation, Funding acquisition, Methodology, Investigation, Writing – review & editing, Conceptualization. GS: Formal analysis, Writing – original draft, Supervision, Funding acquisition, Methodology, Investigation, Validation, Writing – review & editing, Conceptualization. MK: Data curation, Conceptualization, Investigation, Writing – review & editing, Writing – original draft, Project administration.

Funding

The author(s) declare that financial support was received for the research and/or publication of this article. This study was funded by a grant from the Robert Wood Johnson Foundation (#805640).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The authors declare that no Gen AI was used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Author disclaimer

The views expressed here do not necessarily reflect the views of the Foundation, which had no role in the study design, decision to publish, or drafting of the manuscript.

Supplementary material

The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpubh.2025.1699312/full#supplementary-material

References

1. Benjamin, GC. The future of public health: ensuring an adequate infrastructure. Milbank Q. (2023) 101:637–52. doi: 10.1111/1468-0009.12637

2. Dixon, BE, Staes, C, Acharya, J, Allen, KS, Hartsell, J, Cullen, T, et al. Enhancing the nation’s public health information infrastructure: a report from the ACMI symposium. J Am Med Inform Assoc. (2023) 30:1000–5. doi: 10.1093/jamia/ocad033

3. Nogueira, PJ, Costa, AS, and Farinha, CSS. Public health and health research data: availability, needs and challenges. Front Public Health. (2023) 11:1340663. doi: 10.3389/fpubh.2023.1340663

4. Christopher, G, Zimmerman, E, and Chandra, MT. Charting a course for an equity-centered data system: Recommendations from the National Commission to transform public health data systems. (2021). Available online at: https://www.nationalcollaborative.org/wp-content/uploads/2021/10/RWJF-Transforming-Public-Health-Data-Systems.pdf (Accessed July 21, 2025).

5. Dasgupta, N, and Kapadia, F. The future of the public health data dashboard. Am J Public Health. (2022) 112:886–8. doi: 10.2105/ajph.2022.306871

6. Dixon, BE, Dearth, S, Duszynski, TJ, and Grannis, SJ. Dashboards are trendy, visible components of data management in public health: sustaining their use after the pandemic requires a broader view. Am J Public Health. (2022) 112:900–3. doi: 10.2105/ajph.2022.306849

7. Matheus, R, Janssen, M, and Maheshwari, D. Data science empowering the public: data-driven dashboards for transparent and accountable decision-making in smart cities. Gov Inf Q. (2020) 37:101284. doi: 10.1016/j.giq.2018.01.006

8. Thorpe, LE, and Gourevitch, MN. Data dashboards for advancing health and equity: proving their promise? Am J Public Health. (2022) 112:889–92. doi: 10.2105/ajph.2022.306847

9. Concannon, D, Herbst, K, and Manley, E. Developing a data dashboard framework for population health surveillance: widening access to clinical trial findings. JMIR Form Res. (2019) 3:e11342. doi: 10.2196/11342

10. D'Agostino, EM, Feger, BJ, Pinzon, MF, Bailey, R, and Kibbe, WA. Democratizing research with data dashboards: data visualization and support to promote community partner engagement. Am J Public Health. (2022) 112:S850–3. doi: 10.2105/Ajph.2022.307103

11. Afrin Ive, R, and Shahriar, A. Developing AI-driven community health dashboards for real-time disparity analysis and intervention. J Multidiscip Res. (2025) 1:1–17. doi: 10.71408/jz4t4f74

12. Snyder, M, and Zhou, W. Big data and health. Lancet Digit Health. (2019) 1:e252–4. doi: 10.1016/S2589-7500(19)30109-8

13. Chishtie, JA, Marchand, JS, Turcotte, LA, Bielska, IA, Babineau, J, Cepoiu-Martin, M, et al. Visual analytic tools and techniques in population health and health services research: scoping review. J Med Internet Res. (2020) 22:e17892. doi: 10.2196/17892

14. Acosta, JD, Chandra, A, Yeung, D, Nelson, C, Qureshi, N, Blagg, T, et al. What data should be included in a modern public health data system. Big Data. (2022) 10:S9–S14. doi: 10.1089/big.2022.0205

15. Ivanković, D, Barbazza, E, Bos, V, Brito Fernandes, Ó, Jamieson Gilmore, K, Jansen, T, et al. Features constituting actionable COVID-19 dashboards: descriptive assessment and expert appraisal of 158 public web-based COVID-19 dashboards. J Med Internet Res. (2021) 23:e25682. doi: 10.2196/25682

16. Schulze, A, Brand, F, Geppert, J, and Boel, GF. Digital dashboards visualizing public health data: a systematic review. Front Public Health. (2023) 11:999958. doi: 10.3389/fpubh.2023.999958

17. Yanovitzky, I, Stahlman, G, Quow, J, Ackerman, M, Perry, Y, and Kim, M. National Public Health Dashboards: protocol for a scoping review. JMIR Res Protoc. (2024) 13:e52843. doi: 10.2196/52843

18. Verhulsdonck, G, and Shah, V. Making actionable metrics “actionable”: the role of affordances and behavioral design in data dashboards. J Bus Tech Commun. (2022) 36:114–9. doi: 10.1177/10506519211044502

19. Barbazza, E, Ivankovic, D, Wang, S, Gilmore, KJ, Poldrugovac, M, Willmington, C, et al. Exploring changes to the actionability of COVID-19 dashboards over the course of 2020 in the Canadian context: descriptive assessment and expert appraisal study. J Med Internet Res. (2021) 23:e30200. doi: 10.2196/30200

20. Bos, CV, Jansen, T, Klazinga, SN, and Kringos, SD. Development and actionability of the Dutch COVID-19 dashboard: descriptive assessment and expert appraisal study. JMIR Public Health Surveill. (2021) 7:e31161. doi: 10.2196/31161

21. Clarkson, MD. Web-based COVID-19 dashboards and trackers in the United States: survey study. JMIR Hum Factors. (2023) 10:e43819. doi: 10.2196/43819

22. Momenipour, A, Rojas-Murillo, S, Murphy, B, Pennathur, P, and Pennathur, A. Usability of state public health department websites for communication during a pandemic: a heuristic evaluation. Int J Ind Ergon. (2021) 86:103216. doi: 10.1016/j.ergon.2021.103216

23. Tang, R, Hu, Z, and Zhang, Y. Data use policies on state COVID-19 dashboards in the United States: key characteristics, topical focus, and identifiable gaps. Data Inf Manag. (2025) 9:100050. doi: 10.1016/j.dim.2023.100050

24. Mooney, SJ, and Garber, MD. Sampling and sampling frames in big data epidemiology. Curr Epidemiol Rep. (2019) 6:14–22. doi: 10.1007/s40471-019-0179-y

25. Featherstone, M, Adam, S, and Borstorff, P. A methodology for extracting quasi-random samples from world wide web domains. J Int Bus Res. (2009) 8:63–7. Available online at: http://hdl.handle.net/10536/DRO/DU:30016660

26. Henderson, RH, and Sundaresan, T. Cluster sampling to assess immunization coverage: a review of experience with a simplified sampling method. Bull World Health Organ. (1982) 60:253–60.

27. Ansari, B, and Martin, EG. Development of a usability checklist for public health dashboards to identify violations of usability principles. J Am Med Inform Assoc. (2022) 29:1847–58. doi: 10.1093/jamia/ocac140

28. Hayes, AF, and Krippendorff, K. Answering the call for a standard reliability measure for coding data. Commun Methods Meas. (2007) 1:77–89. doi: 10.1080/19312450709336664

29. Otaiza, R, Rusu, C, and Roncagliolo, S. Evaluating the usability of transactional web sites. In: Third International Conference on Advances in Computer-Human Interactions. (2010). p. 32–37.

30. Almasi, S, Bahaadinbeigy, K, Ahmadi, H, Sohrabei, S, and Rabiei, R. Usability evaluation of dashboards: a systematic literature review of tools. Biomed Res Int. (2023) 2023:9990933. doi: 10.1155/2023/9990933

31. Freilich, J, Price, WN, and Kesselheim, AS. Disappearing data at the U.S. federal government. N Engl J Med. (2025) 392:e55. doi: 10.1056/NEJMp2502567

32. Gallifant, J, Kistler, EA, Nakayama, LF, Zera, C, Kripalani, S, Ntatin, A, et al. Disparity dashboards: an evaluation of the literature and framework for health equity improvement. Lancet Digit Health. (2023) 5:e831–9. doi: 10.1016/S2589-7500(23)00150-4

33. He, Z, Pfaff, E, Guo, SJ, Guo, Y, Wu, Y, Tao, C, et al. Enriching real-world data with social determinants of health for health outcomes and health equity: successes, challenges, and opportunities. Yearb Med Inform. (2023) 32:253–63. doi: 10.1055/s-0043-1768732

34. Viswanath, K, McCloud, RF, Lee, EWJ, and Bekalu, MA. Measuring what matters: data absenteeism, science communication, and the perpetuation of inequities. Ann Am Acad Pol Soc Sci. (2022) 700:208–19. doi: 10.1177/00027162221093268

35. McAndrew, T, Lover, AA, Hoyt, G, and Majumder, MS. When data disappear: public health pays as US policy strays. Lancet Digit Health. (2025) 7:100874. doi: 10.1016/j.landig.2025.100874

Keywords: data dashboards, public health, actionability, usability, usefulness

Citation: Yanovitzky I, Stahlman G and Kim M (2025) Usability and usefulness of U. S. federal and state public health data dashboards: implications for improving data access and use. Front. Public Health. 13:1699312. doi: 10.3389/fpubh.2025.1699312

Received: 04 September 2025; Revised: 14 October 2025; Accepted: 10 November 2025;
Published: 27 November 2025.

Edited by:

Matteo Delsanto, University of Turin, Italy

Reviewed by:

Joy D. Doll, Creighton University, United States
Elwin Wu, Columbia University, United States

Copyright © 2025 Yanovitzky, Stahlman and Kim. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Itzhak Yanovitzky, itzhak@rutgers.edu
