Selecting for impact: new data debunks old beliefs
One of the strongest beliefs in scholarly publishing is that journals seeking a high impact factor (IF) should be highly selective, accepting only papers predicted to become highly significant and novel, and hence likely to attract a large number of citations. The result is that so-called top journals reject as many as 90-95% of the manuscripts they receive, forcing the authors of these papers to resubmit to more “specialized”, lower-impact-factor journals, where they may find a more receptive home.
Unfortunately, most of the 20,000 or so journals in the scholarly publishing world follow the example of these top journals. Which raises the question: does the strategy work? The evidence says it doesn’t.
In Figure 1, we plotted the impact factors of 570 randomly selected journals indexed in the 2014 Journal Citation Reports (Thomson Reuters, 2015), against their publicly stated rejection rates.
Figure 1: 570 journals with publicly stated rejection rates (for sources, see below; complete data are available). Impact factors from Thomson Reuters Journal Citation Reports (2014). (Y-axis is on a log scale.)
As Figure 1 shows, there is virtually no correlation between rejection rates and impact factor (r2 = 0.0023; we assume the sample of 570 journals, which spans fields and publishers, is sufficiently random to represent the full dataset). In fact, many journals with high rejection rates have low impact factors, and many journals with low rejection rates have impact factors higher than the bulk of journals rejecting 70-80% of submissions. Clearly, selecting “winners” is hard, and the belief that a high impact factor simply requires a high rejection rate is false.
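For readers who want to probe the claim themselves, here is a minimal sketch of how such an r2 is computed: a Pearson coefficient of determination between rejection rate and log impact factor (log-scaled to match Figure 1’s axis). The journal values below are made up for illustration; the analysis in the post used the real dataset of 570 journals.

```python
import math

# Hypothetical (rejection rate %, impact factor) pairs -- NOT the real data.
journals = [(90, 0.8), (30, 2.5), (75, 1.1), (95, 12.0), (50, 0.6),
            (85, 0.9), (20, 3.2), (70, 4.5), (60, 0.7), (80, 1.0)]

def r_squared(pairs):
    """Coefficient of determination for a simple linear fit of
    log10(impact factor) against rejection rate."""
    xs = [x for x, _ in pairs]
    ys = [math.log10(y) for _, y in pairs]  # log scale, as in Figure 1
    n = len(pairs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

print(round(r_squared(journals), 4))
```

An r2 near 0 means rejection rate explains almost none of the variance in impact factor, which is what the 570-journal sample shows.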
Of course, some journals with 90-95% rejection rates do achieve very high impact factors – they are depicted in the top right of the graph. Critics believe they may achieve this by giving priority to well-established authors and to reports likely to win broad acceptance (i.e. safely within the dogmas of science) – an assurance of immediate citations from the community. As a result, many breakthrough papers are rejected by the high impact factor journals to which they are first submitted (see refs 1-3). Another possible reason is that a specialized journal cannot reach a very high impact factor because its papers are visible only within one disciplinary silo. Indeed, the highest impact factor journals are those that pre-select papers for their “general interest”.
But regardless of these considerations, the hard facts remain. A vast number of high quality papers are being sacrificed to engineer high impact factors, yet the strategy fails for the vast majority of journals: some of the lowest impact factors belong to journals rejecting 60-70% of papers, and 80% of the 11,149 journals indexed in the Journal Citation Reports (JCR) have impact factors below 1 (read our Summary Impact Blog). More importantly, some journals achieve impact factors in the top percentiles without trying to pre-select the most impactful papers and without high rejection rates, showing that impact-neutral peer review can work.
A recent ranking of journals provides further evidence that impact-neutral peer review can work. The journals published by Frontiers, the youngest digital-age OA publisher, have risen rapidly to the top percentiles in impact factor (see our Summary Impact Blog). More importantly, the total citations generated by these journals have started to outpace those of decade-old and even century-old journals. Total citations to the articles in a journal reflect the amount of new research built partly on the knowledge in those articles. For example, in the JCR’s Neurosciences category, the Frontiers in Neuroscience field journals generated more citations in 2014 (reported in the 2015 JCR) than all other open-access journals in the category combined, and the third-highest number of total citations of all journals in the category (including all subscription journals). Another example is Frontiers in Psychology: in only four years it has become not only the largest psychology journal in the world, but also the generator of the second-highest number of citations in the discipline of psychology (second to Frontiers in Human Neuroscience). The other Frontiers journals (in Pharmacology, Physiology, Microbiology and Plant Science) follow a similar pattern (see Summary Impact Blog).
At Frontiers, our “impact-neutral” peer review is a rigorous specialist review. The main difference from “impact-selective” peer review is that editors and reviewers are not asked to predict a paper’s significance. Frontiers uses its Collaborative Peer Review and its online interactive forum to intensify the interaction between authors and specialist reviewers, and its high quality editorial boards (see our Editorial Board Demographics) help match the most specialized reviewers to submitted papers.
Based on our experience of conducting impact-neutral peer review for the last eight years, rejection rates of up to 30% are justifiable to ensure that only sound research is published. We also conclude that the specialized collaborative peer-review process provided by Frontiers is a highly effective strategy for building high quality, highly influential journals across disciplines. We are excited to see how high total citations can go when peer review focuses on enhancing quality rather than on rejecting papers.
This doesn’t end here…
What happens when we remove the field component by normalizing impact factors by field? Does the absence of a correlation still hold true? Find out in Part 2.
J. M. Campanario, “Rejecting and resisting Nobel class discoveries: accounts by Nobel Laureates,” Scientometrics, vol. 81, pp. 549-565, 2009.
R. Walker and P. Rocha da Silva, “Emerging trends in peer review—a survey,” Frontiers in Neuroscience, vol. 9, 2015.
A. Eyre-Walker and N. Stoletzki, “The assessment of science: the relative merits of post-publication review, the impact factor, and the number of citations,” PLoS Biology, vol. 11, p. e1001675, 2013.
Sources for rejection rates (by publisher):
American Psychological Association: http://www.apa.org/pubs/journals/statistics.aspx (2013 data)
American Medical Association: http://www.the-scientist.com/?articles.view/articleNo/23672/title/Is-Peer-Review-Broken-/
Elsevier: http://journalfinder.elsevier.com/#results (this is a journal finder)
Frontiers: Internal data for 2013 spontaneous submissions (Original research articles)
Hindawi: Data taken from individual journal websites, for all Hindawi journals that have an impact factor.
NPG: Various sources. Nature Materials: http://www.nature.com/nmat/journal/v11/n9/full/nmat3424.html;
Nature Communications: oaspa.org/wp-content/uploads/2012/11/James-Butcher-NPG.pptx;
Nature Neuroscience: http://www.nature.com/neuro/journal/v11/n5/full/nn0508-521.html;
Scientific Reports: http://occamstypewriter.org/trading-knowledge/2012/07/09/megajournals/
Taylor and Francis: http://explore.tandfonline.com/content/beh/ntcn-call-for-editor-in-chief