- Center for Applied Ethics, Emory University, Atlanta, GA, United States
Public trust in science remains below pre-pandemic levels, underscoring an urgent need to reevaluate conventional science communication practices. This paper identifies a crucial vulnerability inherent in the common practice of relying on single, authoritative spokespersons. Drawing upon interdisciplinary insights, I propose replacing lone figureheads with community-based expert panels, transparently highlighting areas of consensus as well as legitimate disagreements. This approach fosters greater accountability, reduces personal risks for individual communicators, and portrays science authentically as the dynamic, collaborative, and iterative enterprise that it is. By openly conveying the collective nature of scientific inquiry, the proposed community-panel model can more effectively restore and sustain public trust.
1 Introduction
Though gauging public trust in science is inherently challenging due to varied definitions and measurement approaches (Besley and Tiffany, 2023; Besley et al., 2021), there is evidence that trust in science suffered significantly during the pandemic and has yet to recover to pre-pandemic levels (Kennedy and Tyson, 2023, 2024). While deliberate misinformation has clearly exacerbated mistrust (Oreskes and Conway, 2011; O’Connor and Murphy, 2020), I argue that conventional messaging practices may also be significant contributors. In moments of urgency—such as a rapidly spreading virus, an environmental catastrophe, or a contentious policy debate—people understandably seek clear, authoritative guidance. Yet by relying excessively on single spokespersons to represent scientific authority, we inadvertently distort both the nature and the public perception of science. Individual experts become symbolic stand-ins for inherently collaborative, often multidisciplinary inquiries. When uncertainties or mistakes emerge, which is inevitable, trust fractures, prompting people to ask: “Who should I listen to when even the experts can be uncertain or wrong?”
Might different modes of messaging worsen the situation? I think so. Monolithic messaging plays directly into the hands of bad actors. Conspiracy theorists and interest groups can latch onto isolated statements to sow doubt, weaponize nuance, and portray scientists as divided and unreliable. Worse, genuine scholarly disagreements, presented without clear framing, can easily be mischaracterized as signs of chaos or fraud, especially in politically polarized contexts (Hall Jamieson and Hardy, 2014). This highlights the critical need to clearly differentiate fundamental debates from narrower, technical disputes within broader consensus frameworks. Absent such explicit boundaries, simplified or ambiguous messaging can inadvertently perpetuate misinformation and mistrust, with deadly consequences (Lewandowsky et al., 2012). To rebuild and sustain public trust, we should follow insights from a growing body of literature advocating a shift away from a singular, monolithic voice toward a community model (Canfield et al., 2020; Orthia et al., 2021)—one that transparently conveys how large a community of experts is working on the issue, clearly distinguishes consensus, and accurately portrays the scope and character of any dissensus.
2 Why community voices outperform lone authorities
Science is inherently a distributed, iterative process (Kuhn, 1970), carried out by many scientists, increasingly in teams (Wuchty et al., 2007), attacking a problem from many different angles. Yet in moments of crisis or high stakes, communication often collapses into a single, elevated voice: one person speaking on behalf of an entire field. This may allow for concise, consistent messaging, but it also concentrates accountability, magnifying every course correction into a perceived contradiction and handing bad actors a single target to discredit. This compression of nuance and responsibility undermines, not buttresses, public trust in science.
When diverse expert opinions are systematically aggregated, the collective judgment reliably outperforms randomly assembled groups or typical individual experts. Philip Tetlock’s research in Superforecasting: The Art and Science of Prediction (Tetlock and Gardner, 2016) shows that carefully structured panels of forecasters—each bringing unique perspectives, heuristics, and biases—generally produce predictions superior to those of any single expert. Although the best individual “superforecasters” can sometimes match or surpass aggregated judgments, groups consistently outperform individual predictions over time by reducing errors and biases through diversity.
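The arithmetic behind this effect is easy to demonstrate. The following minimal simulation, written for this discussion with invented parameters rather than drawn from Tetlock’s data, shows the general statistical point: when each expert’s error is independent and idiosyncratic, the simple average of a panel’s forecasts carries only a fraction of the typical individual’s error.

```python
import random
import statistics

# A minimal simulation of the aggregation effect described above. The
# true value, error spread, and panel size are hypothetical choices for
# illustration; this is the general statistical point, not Tetlock's data.

random.seed(42)

TRUE_VALUE = 0.30      # the probability being forecast (hypothetical)
PANEL_SIZE = 6         # number of experts whose forecasts are averaged
N_OCCASIONS = 10_000   # repeated forecasting occasions

panel_sq_errors = []
individual_sq_errors = []

for _ in range(N_OCCASIONS):
    # Each expert's forecast is the truth plus an independent error,
    # standing in for diverse heuristics, perspectives, and biases.
    forecasts = [TRUE_VALUE + random.gauss(0, 0.10) for _ in range(PANEL_SIZE)]
    panel_forecast = statistics.mean(forecasts)
    panel_sq_errors.append((panel_forecast - TRUE_VALUE) ** 2)
    individual_sq_errors.extend((f - TRUE_VALUE) ** 2 for f in forecasts)

print(f"panel MSE:      {statistics.mean(panel_sq_errors):.5f}")       # ~0.0017
print(f"individual MSE: {statistics.mean(individual_sq_errors):.5f}")  # ~0.0100
# With independent errors, averaging cuts mean squared error by roughly
# a factor of PANEL_SIZE: the statistical core of the "wisdom of crowds."
```

In practice experts’ errors are correlated, so the real gains are smaller than this idealized case; the point is directional, and it is why structured panels with genuinely diverse perspectives matter.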
Likewise, the very architecture of peer review depends on multiple reviewers scrutinizing a manuscript from different angles. As Helen Longino argued in Science as Social Knowledge (Longino, 2020), the social process of critique, response, and revision brings in varied perspectives that correct biases and strengthen conclusions.
Many credible voices can also undermine fringe narratives (Keeley, 2006). Keeley’s research on conspiracy thinking finds that isolated groups thrive in echo chambers, but when mainstream experts and institutions flood the discourse with consistent, evidence-based rebuttals, conspiracy theories become untenable.
Consider the case of restorative justice, where the legal process deliberately brings together victims, offenders, and community members to share perspectives and build understanding. Including multiple stakeholders in a dialogic, human-centered process often produces more meaningful reconciliation (Levin, 2005). The same holds for science communication: when multiple experts, each with their own expertise, experiences, and humanity, participate openly, they foster mutual respect and reduce the sense that science is an inaccessible monolith.
3 The dangers of monolithic messaging
Monolithic messaging is not just less effective and less accurate; it’s also potentially more dangerous. During the COVID-19 pandemic, Dr. Anthony Fauci emerged as the de facto voice of U.S. public health. Mask use was initially discouraged, owing to limited supplies and scant early evidence, and later encouraged as availability improved and accumulating evidence demonstrated masks’ effectiveness when used properly (Howard et al., 2021). These shifts were not interpreted as inevitable and healthy components of a self-correcting scientific process, but as personal backtracking, readily exploited by political actors and conspiracy theorists who decontextualized initial statements (e.g., “masks aren’t necessary”) to portray public health guidance as erratic or deceitful. This misinterpretation was a predictable byproduct of positioning a single authoritative spokesperson, introduced to most of the national audience at precisely the moment the available evidence was weakest, to debut major policy recommendations. Consequently, Fauci became a frequent target of intense criticism (Horton, 2024), coinciding notably with decreased public trust in science. Evans and Hargittai (2020) conclude their analysis by highlighting this vulnerability, noting, “for those who believe in the value of prioritizing lives, scientists such as Fauci are the only prominent federal-level advocates of that value, which puts them in a bind.”
It’s also worth remembering that CDC guidance is not crafted by any single individual, even someone in Dr. Fauci’s position, but emerges from collaborative federal advisory committees and rigorous evidence-review processes (Carande-Kulis et al., 2022). Committees such as the Advisory Committee on Immunization Practices (ACIP) convene dozens of publicly identified external experts who deliberate on the literature, weigh risks and benefits, and vote on recommendations. Furthermore, CDC’s own guideline standards mandate systematic scoping, external input, and transparent grading of evidence. However, while the members of these key decision-making committees are typically public, other expert consultations or surveys may anonymize respondents. This anonymity is often standard procedure, intended to comply with Institutional Review Board (IRB) requirements, encourage candid feedback, or protect experts from potential harassment or professional repercussions on sensitive topics. While these are valid considerations, such practice creates a potential tension with the goals of transparency and public trust. By obscuring the identities of contributing experts, even for understandable reasons, it can inadvertently limit the visible breadth of scientific input. The byproduct is a process that seems less open and potentially reinforces the public perception of guidance stemming from a smaller, less accountable group rather than a wide expert community.
In the wake of the massacre at the offices of Charlie Hebdo in 2015, the magazine decided to print a caricature of the prophet Mohammad on the cover of the ensuing issue. Within the publishing community there was a push to republish the magazine’s cover. It was undoubtedly a difficult decision for many outlets; some, such as CNN, declined to republish the image (Tandoc et al., 2019). However, many European and US papers chose to reprint it under the rallying cry of the hashtag #SpreadTheRisk. The idea was that printing such a controversial image was too dangerous a burden for any one publication to bear; by choosing to republish it, newspapers and other outlets could dilute the risk to fellow journalists. As in publishing, so too with science. By diversifying and transparently highlighting more authoritative voices in public health messaging, the risks of targeted harassment, politicization, or loss of public trust can similarly be diluted. In doing so, science communication not only becomes safer for individual experts but also more robust and credible, aligning closely with Keeley’s insight that collaborative messaging increases effectiveness and public trust.
4 The need for contextualized disagreement
Even when multiple experts speak, raw disagreement can backfire if it is not carefully framed. It becomes essential to clearly distinguish between questions that remain genuinely open (“live hypotheses”) and those resting on overwhelming evidence (“dead hypotheses”), and to explicitly show where each expert’s viewpoint falls within that spectrum.
James (1896) coined the notion of “live” versus “dead” hypotheses to distinguish questions that genuinely engage us from those already settled. In our context:
• Live hypotheses are active debates: areas where data remain sparse or interpretations diverge (e.g., mechanisms driving long COVID).
• Dead hypotheses rest on overwhelming evidence and form the bedrock consensus (e.g., that HIV causes AIDS).
By explicitly framing discussions to clearly differentiate live from dead hypotheses, we can immediately signal to the public where genuine scientific uncertainty lies and where it does not. This distinction should inform how we engage in public discourse. Disagreement should be welcomed, not only because it forms the bedrock of scientific progress, but because it is honest. During truly novel phenomena, such as we experienced with COVID-19, many hypotheses must remain live because insufficient time has elapsed to generate robust, high-quality evidence for ruling them out. Nevertheless, overarching principles from biochemistry, virology, or epidemiology, such as established mechanisms of viral mutation and known constraints on transmission, provide stable points of agreement that anchor discussions within a common framework. This shared framework allows disagreements to be clearly contextualized, ensuring that debate occurs within a space grounded in established science, thereby enhancing clarity and fostering public trust. Explicit acknowledgment of inherent uncertainty early in a crisis can prevent premature policy overconfidence and minimize public confusion when inevitable revisions occur. Embracing this reality supports a growing movement toward “uncertainty-normalization” (Han et al., 2021), a theoretical approach explicitly aimed at communicating the inherent uncertainties and limitations in scientific knowledge as a normal, expected aspect of the scientific process, rather than as weaknesses or failures (Lewandowsky et al., 2015). However, the manner in which disagreement is presented to the public critically impacts its reception (Gustafson and Rice, 2020), thus necessitating careful calibration.
The debate surrounding punctuated equilibrium, famously articulated by Gould and Eldredge (1993), illustrates this danger well. Their theory proposed that the history of life, as reflected in the fossil record and phylogenetic patterns, is often characterized by long periods of morphological stasis within species, punctuated by geologically rapid bursts of speciation and change (cladogenesis). This challenged the traditional emphasis on phyletic gradualism (slow, continuous, incremental change within lineages) as the sole or dominant tempo and mode of macroevolution. While representing a significant and vibrant debate within evolutionary biology concerning the relative frequency and importance of different evolutionary patterns, it operated entirely within the established framework of common descent. Nevertheless, “evolution deniers,” particularly young-earth creationists, exploited the vigorous scientific exchanges, selectively quoting arguments to falsely portray this internal debate about evolutionary mechanisms as a fundamental crisis challenging the validity of evolution itself. Had the discourse consistently framed the punctuated equilibrium debate as a “live hypothesis” concerning the patterns and processes of evolution through common descent (e.g., stating upfront, “This discussion concerns the tempo and mode of evolutionary change, building upon the established fact of common descent”), the opportunity for such deliberate misrepresentation would have been diminished.
Just as failing to clearly distinguish between live and dead hypotheses obscures essential context, current methods used to measure public and expert views often strip away necessary nuance. Standard methodological rigor demands that social scientists pose simple, unambiguous questions, deliberately avoiding so-called “double-barreled” items that simultaneously address multiple conditions or contingencies (Krosnick and Presser, 2010). Although methodologically justified, this practice severely limits our ability to capture opinions on inherently complex, conditional propositions (Tourangeau et al., 2000)—a limitation equally relevant whether polling scientists or the general public. For instance, typical polling might simplistically ask, “Do masks reduce COVID-19 transmission?” but cannot readily include essential qualifiers such as “if compliance remains above 80%” or “assuming new variants aren’t more transmissible.” Researchers attempt to approximate nuanced viewpoints through multiple simplified questions, inevitably fragmenting complex attitudes into disjointed data points. Consequently, responses must later be coded, categorized, and aggregated, constructing conclusions post hoc rather than capturing integrated perspectives directly.
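To make the fragmentation problem concrete, consider the following schematic sketch. The survey items, the expert’s position, and the coding rule are all invented for illustration: the expert’s actual judgment is conditional, but the instrument can only record, and the coder can only aggregate, flattened yes/no answers.

```python
# Schematic illustration of the fragmentation problem described above.
# The survey items and the expert's position are hypothetical examples.

# What the expert actually believes: a conditional, multi-part judgment.
expert_position = {
    "claim": "masks reduce COVID-19 transmission",
    "conditions": {
        "community compliance above 80%": "strong support",
        "compliance below 80%": "modest support",
        "substantially more transmissible variant": "uncertain",
    },
}

# What a standard, non-double-barreled instrument can ask.
poll_items = [
    "Do masks reduce COVID-19 transmission? (yes/no)",
    "Is compliance in your community above 80%? (yes/no)",
]

def code_response(answers: list[str]) -> str:
    """Post hoc coding: collapse fragmented yes/no answers into one
    category, discarding the dependency between them."""
    return "supports masking" if answers[0] == "yes" else "opposes masking"

print(code_response(["yes", "no"]))  # -> "supports masking"
# The "if compliance > 80%" contingency in expert_position never survives
# the coding step, even though it is central to the expert's actual view.
```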
The result is polling that is increasingly disconnected from respondents: temporally, due to publishing delays (Björk and Solomon, 2013), and relationally, due to anonymization. As the restorative justice analogy suggested, detached objectivity unintentionally erects barriers that undermine trust. Especially in conveying scientific consensus, complexity obscures understanding, leaving the public reliant on institutional messaging. To build genuine trust and adherence, science communication must prioritize transparency, comprehensibility, and authenticity.
5 Contextualizing non-fringe experts
In Merchants of Doubt (Oreskes and Conway, 2011) Naomi Oreskes and Erik Conway illustrate how tobacco corporations strategically amplified the voices of a small group of contrarian scientists to create an exaggerated appearance of scientific disagreement about the health risks of smoking. By presenting these marginal views as representative of the broader scientific discourse, the tobacco industry successfully distorted public perceptions, leading people to significantly overestimate the uncertainty surrounding smoking’s dangers. Had the public and policymakers clearly understood the overwhelming scientific consensus about the harmful effects of smoking, this knowledge would likely have spread more rapidly, enabling earlier interventions and substantially reducing the eventual loss of life.
Guarding against undue amplification should not lead to the complete ostracization of credible dissenting voices within the mainstream. Such exclusion risks not only suppressing legitimate scientific inquiry but also alienating these experts, potentially driving them toward more radical positions or toward audiences outside the scientific community. Proper contextualization, therefore, involves presenting minority views proportionally, not silencing them entirely.
6 What we can do to improve trust
While addressing the intentional spread of misinformation remains crucial (O’Connor and Weatherall, 2019), we can also improve our existing science communication infrastructure to better reflect the communal nature of science and the importance of context for appreciating the relative power of scientific explanations and science-based policy proposals.
One strategy involves shifting from elevating single figureheads to utilizing expert panels. Convening diverse groups of perhaps 4–6 subject-matter specialists for joint briefings allows for the presentation of multiple facets of an issue and distributes accountability. By rotating panel members over time, institutions can showcase the breadth of expertise within a field and reframe necessary course corrections not as individual flip-flops, but as the evolution of collective understanding based on new evidence. This approach is scalable; at the local level, universities and research institutions can proactively identify pools of experts on topics relevant to their communities (e.g., public health, environmental science). By fostering ongoing relationships between these local expert groups and regional news outlets, they can establish familiar, trusted voices who reappear as issues evolve, providing consistent and contextualized information rather than relying on isolated interviews with potentially unfamiliar figures.
Furthermore, enhancing transparency in how expert judgment is solicited and presented can address other critical challenges. Building upon evidence demonstrating that transparency fosters greater accountability and can significantly enhance public trust (Grimmelikhuijsen and Meijer, 2014; Schnackenberg and Tomlinson, 2016), we can envision more dynamic systems.
Consider a platform where credentialed experts could voluntarily register their views on specific, well-defined scientific questions or policy options relevant to their field. Critically, this system would allow experts to publicly update their position as new evidence emerges, perhaps even linking their change in judgment to specific publications or datasets. These evolving judgments, aggregated and displayed transparently, could offer a near real-time view of the state of expert understanding, something that already exists in rough, unstructured form on social media. Such a dynamic, opt-in system directly tackles several core issues: it enhances transparency by showing who believes what and potentially why; it addresses the speed and evolution of science by making changes visible and evidence-based, countering narratives of arbitrary shifts; and it could demonstrate the community aspect by aggregating and contextualizing a potentially broad spectrum of expert views over time.
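A minimal sketch of how such a registry’s records might be structured appears below. Every name, field, and the aggregation rule are assumptions made for this sketch, not a specification; a production system would also need identity verification, moderation, and far richer provenance.

```python
# A minimal sketch of the opt-in expert-judgment registry imagined above.
# All identifiers, fields, and the aggregation rule are hypothetical.

from collections import Counter
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Judgment:
    position: str        # e.g., "support", "oppose", "uncertain"
    evidence: list[str]  # DOIs or dataset links motivating the view
    timestamp: datetime

@dataclass
class ExpertRecord:
    expert_id: str  # public, verified credential (hypothetical format)
    history: list[Judgment] = field(default_factory=list)

    def update(self, position: str, evidence: list[str]) -> None:
        """Record a new position; prior ones are kept, so change stays visible."""
        self.history.append(
            Judgment(position, evidence, datetime.now(timezone.utc))
        )

    @property
    def current(self) -> str:
        return self.history[-1].position

def aggregate(records: list[ExpertRecord]) -> Counter:
    """Near real-time snapshot: tally each expert's most recent position."""
    return Counter(r.current for r in records)

# Usage: an expert registers a view, then publicly revises it as evidence
# accrues. The second DOI is the mask evidence review cited in this paper;
# the first is a placeholder standing in for an early preprint.
alice = ExpertRecord("epi-0001")
alice.update("uncertain", ["doi:10.xxxx/placeholder-preprint"])
alice.update("support", ["doi:10.1073/pnas.2014564118"])
print(aggregate([alice]))  # Counter({'support': 1})
print(len(alice.history))  # 2 -- the revision trail remains public
```

The design choice of appending to a public history, rather than overwriting, is what reframes a changed position as evidence-driven evolution instead of a flip-flop.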
7 Discussion
These measures are by no means a panacea. A significant portion of the public remains disengaged from primary sources, preferring sound-bite media or social feeds over detailed dashboards. Some will cherry-pick disagreements just as readily from a panel as from an individual spokesperson. Nonetheless, shifting, even incrementally, from monolithic messaging to a distributed, transparent ecosystem of expert voices, including contrarian positions, builds critical scaffolding for science communication in public discourse.
By seeing experts debate live vs. settled questions, the public can better appreciate the provisional nature of science. By tying names and faces to judgments, we foster accountability and trust. And by offering a real-time window into shifting judgments, not locked in print but animated online, we invite ongoing engagement rather than one-off pronouncements.
Rebuilding public trust will not happen overnight. Yet by adopting distributed expert panels, clearly framing live versus settled debates, and opening real-time windows into evolving evidence, we can leave behind the myth of the lone oracle and embrace a genuine community of inquiry: collectively responsible, self-correcting, and unmistakably credible. After all, like trust itself, science thrives not on solitary authority but on a harmonious chorus of informed, accountable voices.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.
Author contributions
JY: Writing – review & editing, Writing – original draft.
Funding
The author(s) declare that no financial support was received for the research and/or publication of this article.
Acknowledgments
I would like to thank Dr. John Lysaker, whose thoughtful feedback and guidance greatly enhanced this manuscript. I am also grateful to the scientists, doctors, and communication professionals who shared their insights and experiences in conversations that helped shape my thinking on these issues. Their openness and expertise significantly enriched my perspective. A large language model (ChatGPT version 4.5, developed by OpenAI) was used during the final stages of editing and formatting this manuscript. The author remains fully responsible for the content and accuracy of the article.
Conflict of interest
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declare that Gen AI was used in the creation of this manuscript. Generative AI was used to perform a final check for grammatical and typographical errors in the manuscript. Additionally, generative AI assisted with formatting and ensuring references were compliant with journal guidelines.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Besley, J. C., Lee, N. M., and Pressgrove, G. (2021). Reassessing the variables used to measure public perceptions of scientists. Sci. Commun. 43, 3–32. doi: 10.1177/1075547020949547
Besley, J. C., and Tiffany, L. A. (2023). What are you assessing when you measure “trust” in scientists with a direct measure? Public Underst. Sci. 32, 709–726. doi: 10.1177/09636625231161302
Björk, B. C., and Solomon, D. (2013). The publishing delay in scholarly peer-reviewed journals. J. Informetr. 7, 914–923. doi: 10.1016/j.joi.2013.09.001
Canfield, K. N., Menezes, S., Matsuda, S. B., Moore, A., Mosley Austin, A. N., Dewsbury, B. M., et al. (2020). Science communication demands a critical approach that centers inclusion, equity, and intersectionality. Front. Commun. 5:2. doi: 10.3389/fcomm.2020.00002
Carande-Kulis, V., Elder, R. W., and Koffman, D. M. (2022). Standards required for the development of CDC evidence-based guidelines. MMWR Suppl. 71, 1–6. doi: 10.15585/mmwr.su7101a1
Evans, J. H., and Hargittai, E. (2020). Who doesn’t trust Fauci? The public’s belief in the expertise and shared values of scientists in the COVID-19 pandemic. Socius 6, 1–13. doi: 10.1177/2378023120947337
Gould, S. J., and Eldredge, N. (1993). Punctuated equilibrium comes of age. Nature 366, 223–227. doi: 10.1038/366223a0
Grimmelikhuijsen, S. G., and Meijer, A. J. (2014). Effects of transparency on the perceived trustworthiness of a government organization: evidence from an online experiment. J. Public Adm. Res. Theory 24, 137–157. doi: 10.1093/jopart/mus048
Gustafson, A., and Rice, R. E. (2020). The effects of uncertainty on scientific trust and perceived risk: a review of experimental findings. Public Understand. Sci. 29, 614–633. doi: 10.1177/0963662520942122
Hall Jamieson, K., and Hardy, B. W. (2014). Leveraging scientific credibility about Arctic Sea ice trends in a polarized political environment. Proc. Natl. Acad. Sci. USA 111, 13598–13605. doi: 10.1073/pnas.1320868111
Han, P. K., Scharnetzki, E., Scherer, A. M., Thorpe, A., Lary, C., Waterston, L. B., et al. (2021). Communicating scientific uncertainty about the COVID-19 pandemic: online experimental study of an uncertainty-normalizing strategy. J. Med. Internet Res. 23:e27832. doi: 10.2196/27832
Horton, R. (2024). Offline: in defence of Dr Fauci. Lancet 403:2768. doi: 10.1016/S0140-6736(24)01338-2
Howard, J., Huang, A., Li, Z., Tufekci, Z., Zdimal, V., Van Der Westhuizen, H. M., et al. (2021). An evidence review of face masks against COVID-19. Proc. Natl. Acad. Sci. USA 118:e2014564118. doi: 10.1073/pnas.2014564118
James, W. (1896). The will to believe. The New World: A Quarterly Review of Religion, Ethics and Theology 5, 327–347.
Keeley, B. L. (2006). “Of conspiracy theories,” in Conspiracy theories: the philosophical debate. ed. D. Coady (New York, NY: Routledge), 45–60.
Kennedy, B., and Tyson, A. (2023). Americans’ trust in scientists, positive views of science continue to decline. Pew Research Center. Available online at: https://www.pewresearch.org/science/2023/11/14/americans-trust-in-scientists-positive-views-of-science-continue-to-decline/ (Accessed April 19, 2025).
Kennedy, B., and Tyson, A. (2024). Public trust in scientists and views on their role in policymaking. Pew Research Center. Available online at: https://www.pewresearch.org/science/2024/11/14/public-trust-in-scientists-and-views-on-their-role-in-policymaking/ (Accessed April 19, 2025).
Krosnick, J. A., and Presser, S. (2010). “Question and questionnaire design,” in Handbook of survey research (2nd edn.). eds. P. V. Marsden and J. D. Wright (Bingley, UK: Emerald Group Publishing), 263–313.
Levin, M. (2005). Restorative justice in Texas: Past, present & future. Austin, TX: Texas Public Policy Foundation.
Lewandowsky, S., Ballard, T., and Pancost, R. D. (2015). Uncertainty as knowledge. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 373:20140462. doi: 10.1098/rsta.2014.0462
Lewandowsky, S., Ecker, U. K., Seifert, C. M., Schwarz, N., and Cook, J. (2012). Misinformation and its correction: continued influence and successful debiasing. Psychol. Sci. Public Interest 13, 106–131. doi: 10.1177/1529100612451018
Longino, H. E. (2020). Science as social knowledge: Values and objectivity in scientific inquiry. Princeton, NJ: Princeton University Press.
O’Connor, C., and Murphy, M. (2020). Going viral: doctors must tackle fake news in the COVID-19 pandemic. BMJ 369:m1587. doi: 10.1136/bmj.m1587
O’Connor, C., and Weatherall, J. O. (2019). The misinformation age: How false beliefs spread. New Haven, CT: Yale University Press.
Oreskes, N., and Conway, E. M. (2011). Merchants of doubt: How a handful of scientists obscured the truth on issues from tobacco smoke to global warming. New York, NY: Bloomsbury Publishing.
Orthia, L. A., McKinnon, M., Viana, J. N., and Walker, G. (2021). Reorienting science communication towards communities. J. Sci. Commun. 20:A12. doi: 10.22323/2.20030212
Schnackenberg, A. K., and Tomlinson, E. C. (2016). Organizational transparency: a new perspective on managing trust in organization-stakeholder relationships. J. Manage. 42, 1784–1810. doi: 10.1177/0149206314525202
Tandoc, E. C., Jenkins, J., and Craft, S. (2019). Fake news as a critical incident in journalism. Journal. Pract. 13, 673–689. doi: 10.1080/17512786.2018.1562958
Tetlock, P. E., and Gardner, D. (2016). Superforecasting: The art and science of prediction. New York, NY: Random House.
Tourangeau, R., Rips, L. J., and Rasinski, K. (2000). The psychology of survey response. Cambridge, UK: Cambridge University Press.
Keywords: trust in science, science communication, open science, public engagement, scientific transparency, scientific consensus, community panels
Citation: Yudin J (2025) Beyond the lone voice: how community-based communication restores trust. Front. Commun. 10:1634016. doi: 10.3389/fcomm.2025.1634016
Edited by: Odaro J. Huckstep, US Air Force Academy, United States
Reviewed by: Nicole Kelp, Colorado State University, United States
Copyright © 2025 Yudin. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Jacob Yudin, jacob.yudin@alumni.emory.edu