PERSPECTIVE article

Front. Commun., 07 March 2022
Sec. Science and Environmental Communication
Volume 7 - 2022 | https://doi.org/10.3389/fcomm.2022.767557

Defining a Flexible Notion of “Good” STEM Writing Across Contexts: Lessons Learned From a Cross-Institutional Conversation

Sara M. Grady1, Jenna Morton-Aiken2, Caroline Gottschalk Druschke3, Ingrid E. Lofgren4*, Nancy E. Karraker5, Scott R. McWilliams5, Nedra Reynolds6, Elaine Finan7, Patti L. Wolter8, Donna R. Leff8, Michael Kennedy9
  • 1Department of Communication, Michigan State University, East Lansing, MI, United States
  • 2Department of Humanities, Massachusetts Maritime Academy, Bourne, MA, United States
  • 3Department of English, University of Wisconsin-Madison, Madison, WI, United States
  • 4Department of Nutrition and Food Sciences, University of Rhode Island, Kingston, RI, United States
  • 5Department of Natural Resources Science, University of Rhode Island, Kingston, RI, United States
  • 6College of Arts and Sciences, University of Rhode Island, Kingston, RI, United States
  • 7Office of the Advancement of Teaching and Learning, University of Rhode Island, Kingston, RI, United States
  • 8Medill School of Journalism, Northwestern University, Evanston, IL, United States
  • 9Science in Society, Northwestern University, Evanston, IL, United States

We respond to a surging interest in science communication training for graduate scientists by advocating for a focus on rhetorically informed approaches to STEM writing and its assessment. We argue that STEM communication initiatives would benefit by shifting from a strategic focus on products to a flexible understanding of writing as a practice worthy of attention and study. To do that, we draw on our experience training STEM graduate students in writing and communication across two universities and two distinct programmatic contexts. We draw from cross-disciplinary conversations to identify four facets of “good” STEM writing: (1) connecting to the big picture; (2) explaining science; (3) adhering to genre conventions; and (4) choosing context-appropriate language. We then describe our ongoing conversations across contexts to develop and implement flexible rubrics that capture and foster conversations around “good” writing. In doing so, we argue for a notion of writing rubrics as boundary objects, capable of fostering cross-disciplinary, integrative conversations and collaborations that strengthen student writing, shift STEM students toward a rhetorically informed sense of “good” writing, and offer the kinds of assessment data that make for persuasive evidence of the power of writing-centric approaches for STEM administrators and funders.

Introduction

Scientists and educators increasingly recognize the demand for improved STEM (science, technology, engineering, and math) training in communication, including writing, and its importance for facilitating wider dissemination of research results, improved policy outcomes, and richer engagement with public audiences (Fischhoff, 2013; Kuehne and Olden, 2015). This paper discusses two separate but complementary programs at Northwestern University and the University of Rhode Island that responded to that call. Each institution developed a focused training program and related tools, including the rubrics discussed here, to equip STEM graduate students to communicate their science to broad audiences. Central to the philosophy of each program is situating STEM writing and its assessment as a social, contextual, iterative, and public practice.

This authorship team—faculty and staff collaborating across different institutional, disciplinary, and programmatic contexts—had to grapple with defining a flexible notion of “good” writing applicable across a variety of STEM disciplines, taught to STEM faculty, practiced by STEM students, and ultimately supported by STEM administrators and funding agencies. Because we were in communication throughout that process, we share our experiences in this collaborative piece to build on continued calls for the development of STEM writing and communication skills as part of the education and professionalization of STEM undergraduate and graduate students (Fuhrmann et al., 2011; Denecke et al., 2017). We use the terms writing and communication here to encompass all modes of building, sharing, and reinforcing knowledge. We use rhetoric, an often politically loaded term, in reference to the ancient tradition of communication with purpose for an audience within a specific set of circumstances. Rhetorical moves refer to the intentional decisions a writer or speaker makes in order to meet the needs of those circumstances most effectively.

Here we draw from interdisciplinary literatures in science, science communication, and writing studies. We define four facets of effective communication that we argue constitute a flexible and capacious definition of “good” STEM writing across a range of genres and audiences: (1) connecting to the big picture; (2) explaining science; (3) adhering to genre conventions; and (4) choosing context-appropriate language. We then describe our work to capture these facets of “good” STEM writing in the development of two rubrics that serve distinct, contextually situated training programs for STEM writers. In doing so, we build from a flexible understanding of writing rubrics (Henningsen et al., 2010; Nolen et al., 2011), conceiving of writing rubrics that formalize the expectations and definitions of good STEM writing as boundary objects: “a rhetorical construct that can foster cooperation and communication among the diverse members of heterogeneous working groups” (Wilson and Herndl, 2007). Here, writing rubrics that articulate teaching and learning goals for STEM students are an opportunity to span communities and build bridges between diverse stakeholders interested and invested in science communication outcomes. We argue that the development and implementation of writing rubrics can facilitate conversations across disciplines about good STEM writing. This process can foster collective investment in and understanding of STEM writing practices, while offering a valuable opportunity to generate data on the impacts of programs in increasingly competitive funding environments in higher education.

We deploy rubrics as rhetorical boundary objects (Wilson and Herndl, 2007) to connect knowledge-making in science with good STEM writing practice and pedagogy and to develop locally situated thinking at our two institutions. This approach helped us leverage outside perspectives and empirical evidence to create resilient and flexible resources and instruments to meet local needs as part of a recursive assessment loop (Rutz and Lauer-Glebov, 2005). Our focus on writing as a practice, not a product, and on rubrics as a shared articulation of learning goals and essential rhetorical moves allowed us to accommodate the broader shift from a deficit model to a contextual model (Gross, 1994; Perrault, n.d.). It also allowed us to emphasize rhetoric as a critical component in science communication (Gross, 1994; Druschke and McGreavy, 2016) and the importance of a user-centered paradigm for designing effective communication artifacts (Rothwell and Cloud, 2017).

We began working together several years ago as cross-institutional collaborators looking for tools to facilitate shared approaches to the training and assessment of STEM writing. While our processes and products have converged and diverged through the years, the shared development of these rubrics enabled nuanced conversations about what defines good STEM writing across our many disciplines, encouraging us to clarify to ourselves and each other which rhetorical approaches and goals were specific to our individual program aims and which were broader, more universal elements of good practice. We found that developing these tools was a profoundly helpful opportunity to open cross-disciplinary dialogue on the key ingredients of “good” writing and how those ingredients might be taught, explicated, and assessed. This is especially important in light of recent research highlighting both the lack of consensus on what constitutes good science communication and the limited evidence that current training programs improve students' capacity in these areas (Rubega et al., 2021).

Of course, once a rubric is created, there are next steps to test its reliability and validity in the field, particularly as an instrument to assess skill gain among students. We acknowledge this process is not yet complete for our tools. However, we are not advocating here for the broad adoption of our specific instruments. Rather, we want to shed light on their development, including discussions about the diverse but often siloed literatures that informed them, and their deployment for assessment as important conceptual steps in developing a shared understanding across faculty and students of good STEM writing, its best practices, and eventually its meaningful assessment. In particular, we hope to contribute to conversations that facilitate a shift toward treating science communication as a messy, iterative practice, bringing the insights of writing studies and rhetorical studies to bear on broad science communication initiatives and training in ways that can inform guiding principles implemented at the local level.

Our Programmatic Contexts

Northwestern University's program, Skills and Careers in Science Writing, is a partnership between two academic units: Science in Society, a community-engaged research center, and Medill, a world-renowned journalism school. This semester-long graduate-level course is for STEM doctoral students across all disciplines, including microbiology, materials science, environmental engineering, and developmental psychology. The course is led by journalism faculty and practicing writing professionals and covers best practices in writing, public science communication, and science reporting, including principles of structure, narrative, and voice. Students produce an original magazine-style article about their own research. Critically discussing lay audience-friendly science stories also enables students to recognize and grapple with the immense shift of moving from traditional academic writing to an accessible style (Crossley et al., 2014). The course also focuses on science writing career pathways and provides exposure to science communication and journalism professionals, given the likelihood that many STEM PhDs will pursue non-academic careers (Cyranoski et al., 2011; Powell, 2012).

University of Rhode Island's (URI) program, SciWrite, focuses on equipping science graduate students to move between academic and public-facing writing in two ways: (1) layering rhetorical training into graduate student curricula and (2) training faculty to support writing pedagogy in classrooms and laboratories. SciWrite is a cross-disciplinary training program funded by the National Science Foundation for STEM graduate students and faculty at URI and was collaboratively developed by faculty from Writing and Rhetoric, Nutrition and Food Sciences, and Natural Resources Science. The 2-year program includes internships and workshops alongside a four-course sequence in which students gain a rhetorical foundation for writing through a series of academic and public writing projects. Full programmatic and assessment details are offered elsewhere (Druschke et al., 2018, n.d.; Harrington et al., 2021).

Interdisciplinary and Inter-Institutional Collaboration

Our programs initially developed independently. But our joint discussions about assessment helped us realize that rubrics were productive mechanisms for pushing back against the widespread notion of writing (and communication more broadly) as a strategic endpoint and for reframing writing as an intentional, situated, and messy practice. We argue that rubrics, particularly when integrated into multi-modal assessment portfolios, can serve three separate but interrelated purposes: (1) assessing STEM writing with flexible and locally informed instruments; (2) empowering STEM faculty to engage more heartily with a rhetorical approach to writing training; and (3) communicating with students about important aspects of rhetorically savvy writing. Rather than treating rubrics—and the good writing they are meant to assess—as static, stringent structures, both programs deployed rubrics as unique opportunities for dialogue and collaboration with diverse faculty tasked with teaching (and grading) trainee writing.

During rubric development, we considered interdisciplinary sources such as impact measures in science communication and engagement (Coppola, 1999; Bucchi, 2013; Fischhoff, 2013; Denecke et al., 2017; National Academies of Sciences, Engineering, and Medicine, 2017), specialist assessment work in engineering undergraduate writing (Boettger, 2010), researcher oral presentations (Dunbar et al., 2006), and public science communication rubrics (Mercer-Mapstone and Kuchel, 2017; Murdock, n.d.), as well as best practices in writing assessment (Rutz and Lauer-Glebov, 2005; Huot and O'Neill, 2009; Adler-Kassner and O'Neill, 2010). This diverse list of sources points to the disjointed and siloed nature of discussions taking place in science writing, science communication, rhetoric, and teaching and learning practices more broadly. Drawing from these various disciplines allowed us to map their commonalities and begin to stitch together a shared framework with four distinct, but overlapping, features.

Connecting to the Big Picture

Good writers and communicators position themselves in the wider discourse; draw from existing understandings; make a compelling, structured articulation of their goals, purpose, or main point; and vary their deployment of these elements depending on purpose and intended audience. This facet builds from perspectives present in writing studies since at least John Swales' Create a Research Space (CARS) model (Swales, 1981, 1984, 1990), with its emphasis on establishing a territory. This contextualizing is picked up in popular scicomm trainings like the Compass message box (Compass Science Communication Inc., 2017), and the SciWrite@URI program relied on it extensively (Druschke et al., 2018, n.d.; Harrington et al., 2021).

Explaining Science

Good writers and communicators understand the highly academic ways scientists conventionally describe their research to peers, and can identify where those conventions are likely to be difficult or unfamiliar for novice readers. This facet includes understanding how the organization and technical detail provided in an explanation are critical components of effective science communication. Understanding these hurdles requires that communicators grapple with the specific challenges of communicating to novices (Wolfe and Mienko, 2007; Rottman et al., 2012) and with the subject-specific vocabulary, or jargon, that impedes communication between scientists and non-scientists (Bullock et al., 2019). Bullock et al. (2019) found that the presence of jargon impairs people's ability to process scientific information, and suggested that the use of jargon undermines efforts to inform and persuade the public. At the same time, jargon serves an important function within specific discourse communities (Porter, 1986), peer groups accustomed to specific ways of exchanging information. It is essential that good STEM writers recognize jargon as a community-specific vocabulary and make conscious choices about when and how to include it to explain complex scientific concepts to a variety of audiences with accuracy and clarity.

Adhering to Genre Conventions

Good writers and communicators understand and can appropriately navigate genre-specific expectations, which vary from community to community and piece to piece. Both programs emphasize the importance of genre, but teach different genres to students, and the two programs' rubrics reflect these genre-specific differences.

Choosing Context-Appropriate Language

Good writers and communicators have a solid grasp of the rhetorical moves at their disposal, such as style, tone, and register, as well as grammar, semantic and linguistic complexity, and scientific conventions such as hedging and citations. Importantly, this facet includes but moves well beyond word choice. This facet is most directly aligned with other quantifications of contextually good writing (Crossley et al., 2014) and broader discourse around stylistics and language (Pinker, 2015; Zinsser, 2016).

Implementing Rubrics as Contextually Situated Tools

While our collective conversations coalesced around these shared facets of good writing, the rubrics we developed to articulate them were tailored to our unique programmatic goals and needs. For example, the “genre conventions” our programs were designed to address were vastly different. So, while our shared goal was to articulate and teach these conventions, the ways in which our rubrics could reflect that would differ substantially.

Northwestern's program focuses specifically on lay-friendly magazine writing and science storytelling approaches (Leslie et al., 2013; Dahlstrom, 2014), and therefore this rubric deliberately defines some narrative conventions (Zinsser, 2016; Hart, n.d.) that connect with research on recall and processing of narrative elements (Speer et al., 2009; Zak, 2015), as well as metaphors and analogies (Wolff and Gentner, 2011).

For example, the Science in Society rubric defined "Relevance (shows how this work is connected to real world experience in meaningful ways and why it matters)" as "Clearly defines the context and/or application of this work; Reader perspective and real world connections meaningfully articulate the purpose/promise of this work." "Order and Structure (builds scaffolded scientific explanations)" was articulated as, "Effectively connect to reader's context and prior knowledge; Well structured and scaffolded explanations building bridges from existing understanding; Clearly walks through steps of processes and explains phenomena in a logical and coherent order; Consistently and clearly builds bridges from existing knowledge" (see Supplementary Material for more information).

URI's SciWrite, on the other hand, reinforces the idea of STEM writing as a rhetorical act in and among specific discourse communities (Penrose and Katz, 2010; Kuhn, 2012), and encompasses a range of formats including visual representation. Perhaps uniquely, this rubric is intended to span both academic- and public-facing artifacts in order to reinforce the public as a valuable partner in larger conversations about science (Collins and Evans, 2002; Rowe and Frewer, 2005) and citizen science (Druschke and Seltzer, 2012; Shirk et al., 2012; Bonney et al., 2016). This rubric is therefore made up of 12 categories divided into subsections, some of which apply to all artifacts, and some of which are specific to certain modalities and formats. In both cases, the role of genre conventions is central, but how this is articulated is in conversation with broader programmatic goals and models.

In the SciWrite rubric, the category "Is the text appropriate for the target audience?" is articulated as, "The text consistently incorporates appropriate definitions and explanations of all key terms and concepts that makes the research/text fully comprehensible, accessible, and engaging to the primary intended audience." The category "Is there an appropriate depth of content given genre and subject matter?" is articulated as, "The text includes a sufficient depth of content about the subject matter for the genre and primary intended audience." And the category "Does the text demonstrate its significance in a wider context, and build on the existing knowledge base by using literary elements appropriate to the genre (e.g., analogies, metaphors, similes, visual examples, case studies, etc.) to support deeper levels of understanding of complex ideas and phenomena?" is defined as, "The text explicitly demonstrates its significance in a wider context, and consistently builds on the existing knowledge base by using highly effective literary elements appropriate to the genre to support deeper levels of understanding of complex ideas and phenomena" (see Supplementary Material for more information).
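To make this organization concrete, the following minimal sketch in Python shows one way a rubric structured like SciWrite's might be represented for scoring: each category carries a rater-facing prompt, a top-level descriptor, and a tag for the modalities it applies to. The category prompts echo the examples above, but the field names, the "applies_to" tags, and the 1-4 scale are our own illustrative assumptions rather than features of either program's instrument.

    from dataclasses import dataclass

    # Hypothetical sketch only: rubric categories as data, loosely modeled on the
    # SciWrite structure described above (12 categories, some applying to all
    # artifacts, some specific to particular modalities or formats).

    @dataclass
    class RubricCategory:
        prompt: str            # the question posed to the rater
        top_descriptor: str    # what the strongest performance looks like
        applies_to: frozenset = frozenset({"all"})  # e.g., {"all"} or {"poster", "talk"}

        def applicable(self, modality: str) -> bool:
            return "all" in self.applies_to or modality in self.applies_to

    SCIWRITE_SKETCH = [
        RubricCategory(
            prompt="Is the text appropriate for the target audience?",
            top_descriptor=("Consistently incorporates appropriate definitions and "
                            "explanations of all key terms and concepts."),
        ),
        RubricCategory(
            prompt="Is there an appropriate depth of content given genre and subject matter?",
            top_descriptor=("Includes a sufficient depth of content for the genre "
                            "and primary intended audience."),
        ),
        # ...the remaining categories would follow the same pattern.
    ]

    def score_artifact(ratings: dict, modality: str) -> float:
        """Average the 1-4 ratings for the categories applicable to this modality."""
        relevant = [c for c in SCIWRITE_SKETCH if c.applicable(modality)]
        return sum(ratings[c.prompt] for c in relevant) / len(relevant)

In practice, of course, each program's instrument encodes far richer descriptors and performance levels than this sketch suggests; the point is simply that some categories apply universally while others are keyed to particular modalities.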

As we mentioned above, this paper is not intended to report a validated instrument, but to call out how our processes and ultimate products converge and diverge in important ways. This transparency is intended to contribute to wider conversations about how science communication and writing programs should be developed, delivered and evaluated. We are certainly not done, and hope that sharing our process of developing rubrics as boundary objects within our own programs—and with each other across programs—helps others see how to incorporate rhetoric into STEM communication training conversations going forward.

Moving Forward Toward “Good” STEM Writing

As we well know, assessment is essential to STEM writing training and teaching. Well-structured, meaningful assessment also offers datasets and analyses that can be used to argue for funding and build a sustainable enterprise for this vital professional training. Such metrics are increasingly necessary to support and advocate for sustainable, rhetorically informed, writing-focused practice within higher education (Rutz and Lauer-Glebov, 2005; Adler-Kassner and O'Neill, 2010).

In particular, embedded, rhetorically grounded frameworks provide a unique opportunity to create deeper interdisciplinary conversations about the values and definitions of good writing—and they make disciplinary and genre conventions and practices visible. Including colleagues from a range of fields in this process is one step toward making those nebulous, frustrating guidelines for science writing more explicit.

We believe that conversations about the practice and pedagogy of good STEM writing vitally contribute to conversations about science and scientist training. An analysis of over 700,000 biomedical journal abstracts spanning the past 150 years clearly demonstrates that the readability of scientific abstracts is decreasing over time (Plavén-Sigray et al., 2017), and Rubega et al. (2021) recently demonstrated that current science communication training programs provide little evidence of improved practice. Even further, the need for scientists to communicate across genres and audiences seems particularly apparent in a cultural moment of political division and policy-making challenges, where cynicism and science skepticism (Charney, 2003) inform highly motivated interpretations of science and research (Washburn and Skitka, 2018). The need for cross-disciplinary conversations about good and great science writing, dissemination, and public engagement—and how to convey and assess these goals—has never been more obvious or more necessary.

Author Contributions

SG, JM-A, CD, and IL organized the data and results. SG and JM-A wrote the first draft of the manuscript and revised the manuscript after receiving feedback from the rest of the authors. All authors contributed to the conception and design of the study and to manuscript revision, and all authors read and approved the submitted version.

Funding

The authors gratefully acknowledge the support of the National Science Foundation (Award #1545275) and of the Graduate School and the Vice President for Research at the University of Rhode Island.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Acknowledgments

This collaborative work in both teaching and assessment has only been possible through intellectual partnerships with both assessment specialists and stellar teaching collaborators, resulting in stimulating discussion with faculty and staff at and across our respective institutions.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fcomm.2022.767557/full#supplementary-material

References

Adler-Kassner, L., and O'Neill, P. (2010). Reframing Writing Assessment to Improve Teaching and Learning. Utah State University Press. Available online at: https://muse.jhu.edu/book/12749 (accessed October 29, 2021).

Boettger, R. K. (2010). Rubric use in technical communication: exploring the process of creating valid and reliable assessment tools. IEEE Trans. Prof. Commun. 53, 4–17. doi: 10.1109/TPC.2009.2038733

Bonney, R., Phillips, T. B., Ballard, H. L., and Enck, J. W. (2016). Can citizen science enhance public understanding of science? Public Understand. Sci. 25, 2–16. doi: 10.1177/0963662515607406

Bucchi, M. (2013). Style in science communication. Public Understand. Sci. 22, 904–915. doi: 10.1177/0963662513498202

Bullock, O. M., Colón Amill, D., Shulman, H. C., and Dixon, G. N. (2019). Jargon as a barrier to effective science communication: evidence from metacognition. Public Understand. Sci. 28, 845–853. doi: 10.1177/0963662519865687

Charney, D. (2003). Lone geniuses in popular science: the devaluation of scientific consensus. Written Commun. 20, 215–241. doi: 10.1177/0741088303257505

Collins, H. M., and Evans, R. (2002). The third wave of science studies: studies of expertise and experience. Soc. Stud. Sci. 32, 235–296. doi: 10.1177/0306312702032002003

Compass Science Communication Inc. (2017). The Message Box Workshop. Available online at: https://www.COMPASSscicomm.org/ (accessed December 12, 2021).

Coppola, N. W. (1999). Setting the discourse community: tasks and assessment for the new technical communication service course. Tech. Commun. Q. 8, 249–267. doi: 10.1080/10572259909364666

Crossley, S. A., Roscoe, R., and McNamara, D. S. (2014). What is successful writing? An investigation into the multiple ways writers can write successful essays. Written Commun. 31, 184–214. doi: 10.1177/0741088314526354

Cyranoski, D., Gilbert, N., Ledford, H., Nayar, A., and Yahia, M. (2011). Education: the PhD factory. Nature 472, 276–279. doi: 10.1038/472276a

Dahlstrom, M. F. (2014). Using narratives and storytelling to communicate science with nonexpert audiences. Proc. Nat. Acad. Sci. 111, 13614–13620. doi: 10.1073/pnas.1320645111

Denecke, D., Feaster, K., and Stone, K. (2017). Professional Development: Shaping Effective Programs for STEM Graduate Students. Washington, DC: Council of Graduate Schools.

Druschke, C. G., and McGreavy, B. (2016). Why rhetoric matters for ecology. Front. Ecol. Environ. 14, 46–52. doi: 10.1002/16-0113.1

Druschke, C. G., Reynolds, N., Morton-Aiken, J., Lofgren, I. E., Karraker, N. E., and McWilliams, S. R. (2018). Better science through rhetoric: a new model and pilot program for training graduate student science writers. Tech. Commun. Q. 27, 175–190. doi: 10.1080/10572252.2018.1425735

Druschke, C. G., and Seltzer, C. E. (2012). Failures of Engagement: lessons learned from a citizen science pilot study. Appl. Environ. Educ. Commun. 11, 178–188. doi: 10.1080/1533015X.2012.777224

Druschke, C. G., Karraker, N., McWilliams, S. R., Scott, A., Morton-Aiken, J., Reynolds, N., et al. (n.d.). A low-investment, high-impact approach for training stronger more confident graduate student science writers. Conserv. Sci. Pract. e573. doi: 10.1111/csp2.573

Dunbar, N. E., Brooks, C. F., and Kubicka-Miller, T. (2006). Oral communication skills in higher education: using a performance-based evaluation rubric to assess communication skills. Innov. Higher Educ. 31, 115. doi: 10.1007/s10755-006-9012-x

Fischhoff, B. (2013). The sciences of science communication. Proc. Natl. Acad. Sci. U. S. A. 110, 14033–14039. doi: 10.1073/pnas.1213273110

Fuhrmann, C. N., Halme, D. G., O'Sullivan, P. S., and Lindstaedt, B. (2011). Improving graduate education to support a branching career pipeline: recommendations based on a survey of doctoral students in the basic biomedical sciences. CBE Life Sci. Educ. 10, 239–249. doi: 10.1187/cbe.11-02-0013

Gross, A. G. (1994). The roles of rhetoric in the public understanding of science. Public Understand. Sci. 3, 3–23. doi: 10.1088/0963-6625/3/1/001

Harrington, E. R., Lofgren, I. E., Gottschalk Druschke, C., Karraker, N. E., Reynolds, N., and McWilliams, S. R. (2021). Training graduate students in multiple genres of public and academic science writing: an assessment using an adaptable, interdisciplinary rubric. Front. Environ. Sci. 9, 358. doi: 10.3389/fenvs.2021.715409

Hart, J. (n.d.). A Writer's Coach: The Complete Guide to Writing Strategies that Work. New York, NY: Anchor.

Henningsen, T., Chin, D., Feldman, A., Druschke, C., Moss, T., Pittendrigh, N., et al. (2010). “A hybrid genre supports hybrid roles in community-university collaboration,” in Going Public: What Writing Programs Learn From Engagement, eds S. K. Rose, and I. Weiser (Logan, UT: Utah State University Press), 85–109.

Huot, B., and O'Neill, P. (2009). Assessing Writing: A Critical Sourcebook. Boston, MA: Bedford/St. Martin's.

Kuehne, L. M., and Olden, J. D. (2015). Opinion: lay summaries needed to enhance science communication. Proc. Natl. Acad. Sci. U. S. A. 112, 3585–3586. doi: 10.1073/pnas.1500882112

Kuhn, T. (2012). The Structure of Scientific Revolutions: 50th Anniversary Edition. 4th Edn. Chicago, IL: University of Chicago.

Leslie, H. M., Goldman, E., McLeod, K. L., Sievanen, L., Balasubramanian, C., Cudney-Bueno, R., et al. (2013). How good science and stories can go hand-in-hand. Conserv. Biol. 27, 1126–1129. doi: 10.1111/cobi.12080

Mercer-Mapstone, L., and Kuchel, L. (2017). Core skills for effective science communication: a teaching resource for undergraduate science education. Int. J. Sci. Educ. Part B 7, 181–201. doi: 10.1080/21548455.2015.1113573

Murdock, R. (n.d.). An Instrument for Assessing the Public Communication of Scientists. ProQuest. Available online at: https://www.proquest.com/openview/4680840e414ecc0523535ab6601c0b3b/1?pq-origsite=gscholar&cbl=18750 (accessed November 1, 2021).

National Academies of Sciences, Engineering, and Medicine (2017). Communicating Science Effectively: A Research Agenda. Washington, DC: National Academies Press. Available online at: http://www.ncbi.nlm.nih.gov/books/NBK425710/ (accessed October 29, 2021).

Nolen, S. B., Horn, I. S., Ward, C. J., and Childers, S. A. (2011). Novice teacher learning and motivation across contexts: assessment tools as boundary objects. Cogn. Instr. 29, 88–122. doi: 10.1080/07370008.2010.533221

Penrose, A., and Katz, S. (2010). Writing in the Sciences: Exploring Conventions of Scientific Discourse (3rd: Open Access Version). Anderson, SC.

Perrault, S. T. (n.d.). Communicating Popular Science: From Deficit to Democracy. London: Palgrave MacMillan.

Pinker, S. (2015). The Sense of Style: The Thinking Person's Guide to Writing in the 21st Century. London: Penguin Publishing Group.

Plavén-Sigray, P., Matheson, G. J., Schiffler, B. C., and Thompson, W. H. (2017). The readability of scientific texts is decreasing over time. eLife 6:e27725. doi: 10.7554/eLife.27725

Porter, J. E. (1986). Intertextuality and the discourse community. Rhetoric Rev. 5, 34–47. doi: 10.1080/07350198609359131

Powell, K. (2012). The postdoc experience: High expectations, grounded in reality. Sci. Careers. doi: 10.1126/science.opms.r1200121

Rothwell, E. J., and Cloud, M. J. (2017). Engineering Speaking by Design: Delivering Technical Presentations with Real Impact. New York, NY: CRC Press.

Rottman, B. M., Gentner, D., and Goldwater, M. B. (2012). Causal systems categories: differences in novice and expert categorization of causal phenomena. Cogn. Sci. 36, 919–932. doi: 10.1111/j.1551-6709.2012.01253.x

Rowe, G., and Frewer, L. J. (2005). A typology of public engagement mechanisms. Sci. Technol. Hum. Values 30, 251–290. doi: 10.1177/0162243904271724

Rubega, M. A., Burgio, K. R., MacDonald, A. A. M., Oeldorf-Hirsch, A., Capers, R. S., and Wyss, R. (2021). Assessment by audiences shows little effect of science communication training. Sci. Commun. 43, 139–169. doi: 10.1177/1075547020971639

Rutz, C., and Lauer-Glebov, J. (2005). Assessment and innovation: one darn thing leads to another. Assessing Writing 10, 80–99. doi: 10.1016/j.asw.2005.03.001

Shirk, J. L., Ballard, H. L., Wilderman, C. C., Phillips, T., Wiggins, A., Jordan, R., et al. (2012). Public participation in scientific research: a framework for deliberate design. Ecol. Soc. 17, 29–48. doi: 10.5751/ES-04705-170229

Speer, N. K., Reynolds, J. R., Swallow, K. M., and Zacks, J. M. (2009). Reading stories activates neural representations of visual and motor experiences. Psychol. Sci. 20, 989–999. doi: 10.1111/j.1467-9280.2009.02397.x

Swales, J. (1981). Aspects of Article Introductions. Birmingham: Language Studies Unit, University of Aston in Birmingham.

Swales, J. (1984). Common Ground: Shared Interests in ESP and Communication Studies. New York, NY: Pergamon Press.

Swales, J. (1990). Genre Analysis: English in Academic and Research Settings. Cambridge: Cambridge University Press.

Washburn, A. N., and Skitka, L. J. (2018). Science denial across the political divide: liberals and conservatives are similarly motivated to deny attitude-inconsistent science. Soc. Psychol. Personal. Sci. 9, 972–980. doi: 10.1177/1948550617731500

Wilson, G., and Herndl, C. G. (2007). Boundary objects as rhetorical exigence: knowledge mapping and interdisciplinary cooperation at the los alamos national laboratory. J. Bus. Tech. Commun. 21, 129–154. doi: 10.1177/1050651906297164

Wolfe, M. B. W., and Mienko, J. A. (2007). Learning and memory of factual content from narrative and expository text. Br. J. Educ. Psychol. 77, 541–564. doi: 10.1348/000709906X143902

Wolff, P., and Gentner, D. (2011). Structure-mapping in metaphor comprehension. Cogn. Sci. 35, 1456–1488. doi: 10.1111/j.1551-6709.2011.01194.x

Zak, P. J. (2015). Why inspiring stories make us react: the neuroscience of narrative. Cerebrum Dana Forum Brain Sci. 2015, 2.

Zinsser, W. (2016). On Writing Well: The Classic Guide to Writing Nonfiction (30th Anniversary Edition). New York, NY: HarperCollins Publishers.

Keywords: STEM, science communication, rhetoric, graduate student training, collaboration

Citation: Grady SM, Morton-Aiken J, Druschke CG, Lofgren IE, Karraker NE, McWilliams SR, Reynolds N, Finan E, Wolter PL, Leff DR and Kennedy M (2022) Defining a Flexible Notion of “Good” STEM Writing Across Contexts: Lessons Learned From a Cross-Institutional Conversation. Front. Commun. 7:767557. doi: 10.3389/fcomm.2022.767557

Received: 31 August 2021; Accepted: 27 January 2022;
Published: 07 March 2022.

Edited by:

Jen Schneider, Boise State University, United States

Reviewed by:

Julie Zilles, University of Illinois at Urbana-Champaign, United States
Susanne Pelger, Lund University, Sweden

Copyright © 2022 Grady, Morton-Aiken, Druschke, Lofgren, Karraker, McWilliams, Reynolds, Finan, Wolter, Leff and Kennedy. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Ingrid E. Lofgren, ingridlofgren@uri.edu

These authors share first authorship
