
Front. Comput. Neurosci., 19 August 2014
Volume 8 - 2014

Using science and psychology to improve the dissemination and evaluation of scientific work

  • Department of Methodology and Statistics, Tilburg University, Tilburg, Noord-Brabant, Netherlands

Here I outline some of what science can tell us about the problems in psychological publishing and how best to address them. First, the motivation behind questionable research practices is examined (the desire to get ahead or, at least, not fall behind). Next, behavior modification strategies are discussed, pointing out that reward works better than punishment. Humans are utility seekers, and the implementation of current change initiatives is hindered by high initial buy-in costs and insufficient expected utility. Developers of open science tools interested in improving science should team up to increase utility while lowering the cost and risk associated with engagement. The best way to realign individual and group motives will probably be to create one centralized, easy-to-use platform with a profile, a feed of targeted science stories based upon previous system interaction, a sophisticated (public) discussion section, and impact metrics that use the associated data. These measures encourage high-quality review and other prosocial activities while inhibiting self-serving behavior. Some advantages of centrally digitizing communications are outlined, including ways the data could be used to improve the peer review process. Most generally, decisions about change design and implementation should be theory and data driven.

Previous research has outlined the problems with the current publishing system and made suggestions about how to improve the system (Gottfredson, 1978; Rosenthal, 1979; Ioannidis, 2005, 2012a,b; Benos et al., 2007; Björk, 2007; Birukou et al., 2011; Simmons et al., 2011; Bekkers, 2012; Giner-Sorolla, 2012; John et al., 2012; Kriegeskorte et al., 2012; Lee, 2012; Nosek and Bar-Anan, 2012; Asendorpf et al., 2013). Instead of outlining this work, here I examine how the scientific literature (especially Psychology) can help us understand the problems and develop/implement more effective solutions.

What is the Real Problem Science Faces?

The real problem for scientific communication, and society more generally, is the desire for success and power (or the desire to avoid failure), which prods researchers to put their own interests above the interests of the group (Hardin, 1968; Skinner, 1972; Fehr and Fischbacher, 2003; Elliot, 2006; Thaler and Sunstein, 2008); the publishing system is only the obstacle this drive must overcome. The dilemma is that in order to advance, or at least keep, our careers, we must publish in high-impact journals. There is competition to publish in these journals, and people naturally look for a way to get an edge on the competition (Bentham and Mill, 2004). Because those who bent the rules had better outcomes, the practices became normalized over generations, resulting in widespread “questionable research practices” (QRPs; Darwin, 1859; Skinner, 1972; John et al., 2012).

This motivation to get ahead is (probably) not a bad thing; it is what drives Science and human progress in the first place. The problem is an ineffective reward system which makes doing the prosocial action (e.g., no QRPs, open data, no file drawer, open methods) bad for the individual, because it less efficiently achieves high-impact work and thus promotion. The goal here is to recast the system, the “game” the individual plays, such that working toward individual success is also working toward the group’s success, or at least that individual success is not achieved at the expense of the group (Skinner and Hayes, 1976; Thaler and Sunstein, 2008).

Designing Successful Change

There are many ways to institute behavioral change, but history and the psychological literature suggest that motivating change with reward is more effective than motivating change with punishment, which basically creates better cheaters and even encourages the behavior (e.g., prohibition, war on drugs, war on terror; Skinner, 1972; Nadelmann, 1989; Sherman, 1993; Higgins, 1997; Bijvank et al., 2009; Branson, 2012). Instead of focusing on creating tools to go back, catch, and thus punish (through reputation costs) previous scientific wrongdoers (Francis, 2012; Klein et al., 2014; Simonsohn et al., 2014), it would be better to focus forward on creating a system, incentive structure, and zeitgeist where the behavior is not continued (Gibbs et al., 2009); this is the goal below.

This is not a new goal, and many initiatives are attempting to stimulate prosocial behavior using rewards (Hartgerink, 2014). Unfortunately, without coordination, the effort to buy in quickly outweighs the expected utility, limiting engagement (Kahneman and Tversky, 1979). Many competitors divide the manpower, and no tool has either all of the features that the scientist wants or the widespread acceptance which ensures it will be useful in the future. Initial step costs are also quite high, as for each new system the researcher must invest hours to set up their profile, learn the interface, and build up their network. These issues (e.g., high initial buy-in cost, divided utility/market, uncertainty of the payoff) help to explain why psychologists, despite verbally endorsing change, are not meaningfully engaging with current change initiatives (Buttliere and Wicherts, in preparation; Kahneman and Tversky, 1979). Research has demonstrated that too many options, especially for important choices like a retirement savings account, paradoxically lead to less participation (Iyengar et al., 2004).

In order to surmount these problems, open science tools should work together, putting aside individual interests and combining utilities in order to make the prize larger and lower the cost of achieving that prize. The most successful technologies are those that are so useful that people make time to learn and utilize the tool on their own (e.g., the printing press, the telephone, the internet, or Facebook, which is accessed more than 20 billion minutes per day; Deci, 1971; Skinner, 1972; Legris et al., 2003; Smith, 2014).

A Psychologically Designed System

The goal here is to make a tool so useful that researchers make time to learn and utilize it on their own (like the microscope, the Likert scale, or QRPs; Legris et al., 2003). The tool should also endorse group-centered behavior while inhibiting self-centered behavior (Skinner, 1972). While there is much discussion about the specifics of this ideal tool, it probably involves the internet and emulates the most successful social media technologies in utilizing: an attractive, easy-to-navigate profile; a feed of targeted science stories based upon prior clicking behavior (e.g., RSS feeds; Lee, 2012; Nentwich and König, 2014); a sophisticated rating/comment mechanism for content (Birukou et al., 2011; Hunter, 2012); and a new set of impact metrics which make use of the information available within the system (Walther and Van den Bosch, 2012).

The basic reinforcements for the system are probably also the same as Facebook and Twitter, namely: the high quality, targeted, content provided in the newsfeed (Bian et al., 2009) and the good feelings we receive when notified that others have interacted with our content (Berne, 1964). These immediate reinforcements, paired with an easy to navigate user interface, are powerful enough to make Facebook users log in an average of 14 times per day and have researchers talking about Facebook addictions (Andreassen et al., 2012; Cheung et al., 2013; Taylor, 2013).

When an individual posts a paper, dataset, general comment, or new protocol to their profile, it shows up in the newsfeed of those the system believes will find utility in that content (e.g., collaborators, colleagues, researchers who click similar stories), and these people can view and publicly comment on the work. When an individual interacts with a post, the system notifies the original poster (providing utility) and is more likely to display content from the same source again. The feed can also contain low-key targeted notifications for professional organizations, conferences, special issues, and other services, which notify the researcher of upcoming opportunities (again, utility) while also helping to pay for the system, potentially outright.
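
The engagement-weighted feed described above could be sketched, very roughly, as follows. This is a toy illustration only: the function name, the base score of 1, and the simple interaction count are my invented placeholders, not part of the proposal, and a real system would need far more sophisticated ranking.

```python
from collections import defaultdict

def rank_feed(posts, interactions):
    """Rank candidate posts, boosting authors the reader has engaged with before.

    posts: list of (post_id, author) tuples awaiting display.
    interactions: list of authors whose content the reader previously
    clicked, liked, or commented on (one entry per interaction).
    """
    # Count prior engagement per author: more interactions -> higher boost.
    engagement = defaultdict(int)
    for author in interactions:
        engagement[author] += 1

    # Score each post as a base score of 1 plus the reader's history
    # with its author, then show the highest-scoring posts first.
    scored = [(1 + engagement[author], post_id, author)
              for post_id, author in posts]
    scored.sort(reverse=True)
    return [post_id for _, post_id, _ in scored]
```

For example, a reader who clicked Bob's posts twice and Alice's once would see Bob's new post first, then Alice's, then a stranger's.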

Centralizing and digitizing the discussion of a post is probably the best part, as it provides the data upon which to generate the feed and saves readers much time otherwise spent thinking about things which have already been thought about (researchers are rewarded for providing links and information which is “liked” by others). For instance, one could go to a paper or subfield and see if anyone has mentioned Cognitive Dissonance Theory, join the conversation, or start their own discussion with the authors/community. While some may worry that reading this information adds extra work, protestations of data overload can be dealt with by pointing out that we only need to “read it if we need it,” and that the system will include sophisticated methods for discovering and readily presenting the highest quality content (e.g., Reddit, Facebook Lookback; Yarkoni, 2012).

When a researcher has a question whose answer they cannot find in the discussion of a paper or (sub)field, the system could suggest a list of experts who are likely to have the answer (the expert is rewarded for answering these questions). This system could be keyword driven, pulling theoretical, methodological, and analytical keywords out of the researcher’s papers to create profiles (Pennebaker et al., 2001). These profiles can, in addition to matching experts with questions and improving the feed, speed along article/researcher processing for meta(science) analyses and create network maps, similar to social media maps, for summarizing literatures and fields more efficiently (Gilbert and Karahalios, 2009; Hansen et al., 2010).
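
In its simplest form, the keyword-driven expert matching could look something like the sketch below. The bag-of-words profiles and cosine similarity are my illustrative assumptions (a real system would use richer text analysis along the lines of LIWC or modern NLP), and all names are invented.

```python
import math
import re
from collections import Counter

def keyword_profile(text):
    """Bag-of-words profile of a researcher's papers: lowercased word counts."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(p, q):
    """Cosine similarity between two word-count profiles (0 when disjoint)."""
    dot = sum(p[w] * q[w] for w in set(p) & set(q))
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

def suggest_experts(question, profiles, k=2):
    """Return the k researchers whose paper keywords best match the question."""
    q = keyword_profile(question)
    ranked = sorted(profiles, key=lambda name: cosine(q, profiles[name]),
                    reverse=True)
    return ranked[:k]
```

A question mentioning "bayesian priors" would then be routed to the researcher whose papers share those keywords rather than to one working on, say, network analysis.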

Good for the Group

Information contained in the system can also be used to reward group-based behaviors that are currently underperformed (e.g., making datasets/stimuli available, reanalyzing data, writing quality reviews). Impact metrics, instead of using only citations, can utilize all of the data in the system, including: the impact of the individual’s work (e.g., shares, comments, ratings, who made those ratings), the impact of their comments on others’ work, whether data and syntax are uploaded, how well their interactions predict the general community’s, how they answer questions they are asked, and much more (Florian, 2012; Kriegeskorte, 2012).
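
One crude way to picture such a composite metric is as a weighted sum of the signals the system records. The signal names and weights below are invented placeholders; choosing real weights (and whether a linear combination is even appropriate) would itself be an empirical question for the community.

```python
def impact_score(record, weights=None):
    """Weighted composite of activity signals tracked by the system.

    record: dict of raw counts for one researcher, e.g. citations received,
    shares, comments received, datasets uploaded, questions answered.
    Signals missing from the record count as zero.
    """
    # Illustrative default weights only -- real values would need to be
    # calibrated and debated, not asserted.
    weights = weights or {
        "citations": 1.0,
        "shares": 0.2,
        "comments_received": 0.1,
        "datasets_uploaded": 2.0,  # deliberately large: reward open data
        "questions_answered": 0.5,
    }
    return sum(w * record.get(signal, 0) for signal, w in weights.items())
```

The design choice worth noting is that prosocial acts (uploading data, answering questions) enter the score directly, so they stop being unrewarded overhead.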

The publicity of the comment section also means that the individual can develop a reputation and accrue an audience, driving impact. For instance, if one knows that certain researchers check the methods and statistics of new papers, replicate them, or just make good comments, one may look for their comments when reading/citing a new paper (though the system itself could also have a built-in statistics checker; Wicherts et al., 2012). The original author wants those researchers’ helpful comments and thus uploads the materials, data, and syntax for them to check (besides being rewarded directly for it). The methods checkers and replicators are motivated to do a good job, as their reputation is at stake, and the reader benefits enormously because they can trust that the effects are replicable and as reported (Yarkoni, 2012). Even if the reader doesn’t explicitly endorse a comment (e.g., like, sub-comment), reward can be administered by searching for the author’s name in the comments or viewing the (statistical) replication page. Because the individual can become impactful by engaging in these prosocial activities, the need for QRPs is alleviated, while QRPs also become harder to engage in, because people are rewarded for checking.

The system outlined above could be implemented without changing the fundamental peer review system. The proposed changes are expected to improve the system by encouraging, through quality impact metrics (Priem et al., 2010; Kreiman and Maunsell, 2011; Yarkoni, 2012), open practices and group-centered behavior. Unfortunately, only adding this to the current system still looks backwards and does not deal with the competition to be published, the time papers spend waiting for reviews (Peters and Ceci, 1982), or the excess cost of the current system (Rennie, 2003; Edlin and Rubinfeld, 2004). It is time to examine how the data within this system could help improve the peer review mechanism.

More Impactful (Read: Important) Changes

The changes suggested here are especially sensitive to small design flaws, which, over the decades, will grow as the current issues have. For this reason, it is imperative that we have a spirited debate about the specifics outlined below and not believe that our decisions are set in stone once we make them. Only continual maintenance of the system will ensure fidelity over time (Blanchard and Fabrycky, 1990).

Others have already outlined several alternative mechanisms by which to evaluate research, including open review, review conducted by specialized services, and various levels of pre- and post-publication review (Kravitz and Baker, 2011; Hunter, 2012; Kriegeskorte, 2012; Nosek and Bar-Anan, 2012). While it is still unclear how to keep bias out of the review services or reviews in general, I suggest that the data within the current system can be utilized to facilitate review (Kreiman and Maunsell, 2011; Lee, 2012; Zimmermann et al., 2012).

When a researcher wants to publish a paper, the system could automatically send the paper to field experts, “rival” field experts, non-experts, methods experts, and statistical experts, based upon the data in the system (Kravitz and Baker, 2011). Reviewers can be asked to write brief reviews and make quantitative ratings of the paper or they can simply be presented with the paper and the system can see how they react (as ignoring the piece is also informative; Birukou et al., 2011; Kriegeskorte, 2012; Lee, 2012). These reviews can be done “pre-publication,” where reviewers privately provide feedback (while being rewarded through a counter on their profile), or the reviews could become immediately public and serve as the basis of discussion after a certain number of comments have been accrued (in order to avoid the anchoring effect of a bad first comment; Bachmann, 2011). If the paper is received well, it can be suggested to more individuals and groups that might also find utility in the paper.

Professional organizations would maintain their role as disseminators of content (what they were originally designed to do; Benos et al., 2007), but would no longer be responsible for evaluating, reviewing, and publishing these works. Dissemination decisions can be made by editors, or the professional organization could use a computer and stipulate that, in order for a paper to be considered for dissemination, it has to have certain keywords and have had a certain number of members comment on or like it, including some with higher impact metrics. Each organization can have several “journals,” each with their own reputation (e.g., finding the most cutting-edge work, only promoting the future classics, only promoting those that are preregistered). When a group promotes a work, the system sends it to those who are most likely to find utility in it, similar to the individual but on a much larger scale. The paper also earns a stamp of approval which grows (e.g., Bronze, Silver, Gold badges; Nosek and Bar-Anan, 2012) if the paper is received well and it is suggested to more users in the group; in this way the paper can “go viral.”

One further addition I would like to make to proposals that emphasize purely online review systems is the ability, at the end of the year (or decade), to award extra badges to the top 10 (or 100) papers published in a particular (sub)domain. These collections could be put together for any aspect of the paper (e.g., theory, methods, statistics), could be printed, and would provide something to aim for in the creation of content besides high impact.

Motivating Change

Another aspect where Science can help is in getting people to adopt the system. Though open and post-publication review are popular among experts, a recent survey of 2,300 psychologists conducted by this author found that changes related to opening review were the three lowest rated potential changes to the publication system, with post-publication review being rated 10th of 15 (Buttliere and Wicherts, in preparation). Change initiatives would benefit from empirically demonstrating the utility of the proposed changes, as has been done with opening review in the biomedical field (Godlee et al., 1998; Walsh et al., 2000; Pulverer, 2010).

We also know that scarcity increases the value of a good (Cialdini, 1993). When Facebook came out, it was only for Harvard students and for several years was invite only. Similarly, it may benefit a fledgling open science platform to first be by invitation only, perhaps limiting access to those who supported the systems which combined to make it, and then only opening by invitation from those already in the system (as Facebook did). It should also be pointed out to field leaders and professors (who will get invited to the system earlier than others) that they serve as examples to others (especially students; Phillips, 1974) and that by not pursuing change for the better, they signal that nothing needs to be done and become a bad example (Darley and Latane, 1968). Obviously, marketing and advertising should also guide naming and implementation strategy.

Concerns about Science using behavioral engineering (Huxley, 1935; Rand, 1937; Orwell, 1949) can be brushed aside by reminding ourselves that advertisers have been engineering us for their own profit since before Skinner outlined the methods in 1972. Behaviorally engineering a well-functioning system for ourselves would go a long way toward showing the public what the use of this technology for good looks like (Skinner, 1972; Thaler and Sunstein, 2008) and would very likely garner more trust and financial support in the future.

In Sum

There are many problems with the current academic publishing system, and many have suggested courses of action to solve those problems. Here I highlight science that can inform the discussion and decisions being made about these issues. Most importantly, humans are utility seekers and use whatever tools (e.g., QRPs) most efficiently help them achieve their goals. Psychologists are not engaging with change initiatives because those initiatives have high initial step costs and uncertain outcomes due to fragmentation of the market. I propose that open science tools put individual interests aside and work together to raise the utility and lower the cost of using a common tool. I next examined how the data from one centralized, online system could be used to improve scientific communication by being immediately rewarding to the individual while also encouraging group-centered behavior and inhibiting self-centered behavior. There is much more conversation to be had, but I hope this essay will help focus that conversation on using science to guide decision making.

Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.


Acknowledgments

The author would like to thank the following individuals for their insightful and useful comments on the paper (they are in no way responsible if you disagree): the anonymous reviewers, Jelte Wicherts, Erika Salomon, Willem Sleegers, Mark Verschoor, Diogo Seco, and Merve Karacaoglu.


References

Andreassen, C. S., Torsheim, T., Brunborg, G. S., and Pallesen, S. (2012). Development of a Facebook addiction scale. Psychol. Rep. 110, 501–517. doi: 10.2466/02.09.18.pr0.110.2.501-517

Asendorpf, J. B., Conner, M., De Fruyt, F., De Houwer, J., Denissen, J. J., Fiedler, K., et al. (2013). Recommendations for increasing replicability in psychology. Eur. J. Pers. 27, 108–119. doi: 10.1002/per.1919

Bachmann, T. (2011). Fair and open evaluation may call for temporarily hidden authorship, caution when counting the votes and transparency of the full pre-publication procedure. Front. Comput. Neurosci. 5:61. doi: 10.3389/fncom.2011.00061

Bekkers, R. (2012). Risk factors for fraud and academic misconduct in the social sciences. Academia.edu. Available online at:

Benos, D. J., Bashari, E., Chaves, J. M., Gaggar, A., Kapoor, N., LaFrance, M., et al. (2007). The ups and downs of peer review. Adv. Physiol. Educ. 31, 145–152. doi: 10.1152/advan.00104.2006

Bentham, J., and Mill, J. S. (2004). Utilitarianism and Other Essays. UK: Penguin.

Berne, E. (1964). Games People Play: The Psychology of Human Relationships. New York: Tantor eBooks.

Bian, J., Liu, Y., Zhou, D., Agichtein, E., and Zha, H. (2009). “Learning to recognize reliable users and content in social media with coupled mutual reinforcement,” in Proceedings of the 18th International Conference on World Wide Web (Madrid: ACM), 51–60.

Bijvank, M. N., Konijn, E. A., Bushman, B. J., and Roelofsma, P. H. (2009). Age and violent-content labels make video games forbidden fruits for youth. Pediatrics 123, 870–876. doi: 10.1542/peds.2008-0601

Birukou, A., Wakeling, J. R., Bartolini, C., Casati, F., Marchese, M., Mirylenka, K., et al. (2011). Alternatives to peer review: novel approaches for research evaluation. Front. Comput. Neurosci. 5:56. doi: 10.3389/fncom.2011.00056

Björk, B.-C. (2007). A model of scientific communication as a global distributed information system. Inf. Res. 12:307. Available online at:

Blanchard, B. S., and Fabrycky, W. J. (1990). Systems Engineering and Analysis (Vol. 4). Englewood Cliffs, New Jersey: Prentice Hall.

Branson, R. (2012). War on drugs a trillion-dollar failure. Available online at: Accessed on 1-4-2013.

Cheung, C. M., Lee, Z. W., and Lee, M. K. (2013). “Understanding compulsive use of Facebook through the reinforcement processes,” in Proceedings of the 21st European Conference on Information Systems (Utrecht, The Netherlands).

Cialdini, R. B. (1993). Influence: The Psychology of Persuasion. New York: HarperCollins Publishers.

Darley, J. M., and Latane, B. (1968). Bystander intervention in emergencies: diffusion of responsibility. J. Pers. Soc. Psychol. 8, 377–383. doi: 10.1037/h0025589

Darwin, C. (1859). The Origin of Species by Means of Natural Selection: Or, the Preservation of Favored Races in the Struggle for Life. London: John Murray.

Deci, E. L. (1971). Effects of externally mediated rewards on intrinsic motivation. J. Pers. Soc. Psychol. 18, 105–115. doi: 10.1037/h0030644

Edlin, A. S., and Rubinfeld, D. L. (2004). Exclusion or efficient pricing: the ‘big deal’ bundling of academic journals. Antitrust Law J. 72, 119–157.

Elliot, A. J. (2006). The hierarchical model of approach-avoidance motivation. Motiv. Emot. 30, 111–116. doi: 10.1007/s11031-006-9028-7

Fehr, E., and Fischbacher, U. (2003). The nature of human altruism. Nature 425, 785–791. doi: 10.1038/nature02043

Florian, R. V. (2012). Aggregating post-publication peer reviews and ratings. Front. Comput. Neurosci. 6:31. doi: 10.3389/fncom.2012.00031

Francis, G. (2012). Too good to be true: publication bias in two prominent studies from experimental psychology. Psychon. Bull. Rev. 19, 151–156. doi: 10.3758/s13423-012-0227-9

Gibbs, M. J., Merchant, K. A., Van der Stede, W. A., and Vargus, M. E. (2009). Performance measure properties and incentive system design. Ind. Relat. J. Econ. Soc. 48, 237–264. doi: 10.1111/j.1468-232x.2009.00556.x

Gilbert, E., and Karahalios, K. (2009). “Predicting tie strength with social media,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Boston, Massachusetts, US: ACM), 211–220.

Giner-Sorolla, R. (2012). Will we march to utopia, or be dragged there? Past failures and future hopes for publishing our science. Psychol. Inq. 23, 263–266. doi: 10.1080/1047840x.2012.706506

Godlee, F., Gale, C. R., and Martyn, C. N. (1998). Effect on quality of peer review of blinding reviewers and asking them to sign their reports. JAMA 280, 237–240. doi: 10.1001/jama.280.3.237

Gottfredson, S. D. (1978). Evaluating psychological research reports: dimensions, reliability and correlates of quality judgments. Am. Psychol. 33, 920–934. doi: 10.1037//0003-066x.33.10.920

Hansen, D., Shneiderman, B., and Smith, M. A. (2010). Analyzing Social Media Networks with NodeXL: Insights from a Connected World. Burlington, MA: Morgan Kaufmann.

Hardin, G. (1968). The tragedy of the commons. Science 162, 1243–1248. doi: 10.1126/science.162.3859.1243

Hartgerink, C. (2014). “Open science protocol,” in Poster Presented at the Annual Meeting of the Society for Personality and Social Psychology (San Antonio, Texas, US)

Higgins, E. T. (1997). Beyond pleasure and pain. Am. Psychol. 52, 1280–1300. doi: 10.1037/0003-066x.52.12.1280

Hunter, J. (2012). Post-publication peer review: opening up scientific conversation. Front. Comput. Neurosci. 6:63. doi: 10.3389/fncom.2012.00063

Huxley, A. (1935). Brave New World. London: HarperCollins.

Ioannidis, J. P. (2005). Why most published research findings are false. PLoS Med. 2:e124. doi: 10.1371/journal.pmed.0020124

Ioannidis, J. P. (2012a). Why science is not necessarily self-correcting. Perspect. Psychol. Sci. 7, 645–654. doi: 10.1177/1745691612464056

Ioannidis, J. P. (2012b). Scientific communication is down at the moment, please check again later. Psychol. Inq. 23, 267–270. doi: 10.1080/1047840x.2012.699427

Iyengar, S. S., Huberman, G., and Jiang, W. (2004). “How much choice is too much? Contributions to 401 (k) retirement plans,” in Pension Design and Structure: New Lessons from Behavioral Finance, eds O. Mitchell and S. Utkus (Oxford, UK: Oxford University Press), 83–95.

John, L. K., Loewenstein, G., and Prelec, D. (2012). Measuring questionable research practices with incentives for truth telling. Psychol. Sci. 23, 524–532. doi: 10.1177/0956797611430953

Kahneman, D., and Tversky, A. (1979). Prospect theory: an analysis of decision under risk. Econometrica 47, 263–292. doi: 10.2307/1914185

Klein, R. A., Ratliff, K. A., Vianello, M., Adams, R. B. Jr., Bahník, Š., Bernstein, M. J., et al. (2014). Investigating variation in replicability. Soc. Psychol. 45, 142–152. doi: 10.1027/1864-9335/a000178

Kravitz, D. J., and Baker, C. I. (2011). Toward a new model of scientific publishing: discussion and a proposal. Front. Comput. Neurosci. 5:55. doi: 10.3389/fncom.2011.00055

Kreiman, G., and Maunsell, J. H. (2011). Nine criteria for a measure of scientific output. Front. Comput. Neurosci. 5:48. doi: 10.3389/fncom.2011.00048

Kriegeskorte, N. (2012). Open evaluation: a vision for entirely transparent post-publication peer review and rating for science. Front. Comput. Neurosci. 6:79. doi: 10.3389/fncom.2012.00079

Kriegeskorte, N., Walther, A., and Deca, D. (2012). An emerging consensus for open evaluation: 18 visions for the future of scientific publishing. Front. Comput. Neurosci. 6:94. doi: 10.3389/fncom.2012.00094

Lee, C. (2012). Open peer review by a selected-papers network. Front. Comput. Neurosci. 6:1. doi: 10.3389/fncom.2012.00001

Legris, P., Ingham, J., and Collerette, P. (2003). Why do people use information technology? A critical review of the technology acceptance model. Inf. Manag. 40, 191–204. doi: 10.1016/s0378-7206(01)00143-4

Nadelmann, E. A. (1989). Drug prohibition in the United States: costs, consequences and alternatives. Science 245, 939–947. doi: 10.1126/science.2772647

Nentwich, M., and König, R. (2014). “Academia goes facebook? The potential of social network sites in the scholarly realm,” in Opening Science (Heidelberg, Berlin: Springer International Publishing), 107–124.

Nosek, B., and Bar-Anan, Y. (2012). Scientific utopia: 1. Opening scientific communication. Psychol. Inq. 23, 217–243. doi: 10.1080/1047840x.2012.692215

Orwell, G. (1949). Nineteen Eighty-Four. London: Editions Underbahn Ltd.

Pennebaker, J. W., Francis, M. E., and Booth, R. J. (2001). Linguistic Inquiry and Word Count: LIWC 2001. Mahway: Lawrence Erlbaum Associates. 71.

Peters, D. P., and Ceci, S. J. (1982). Peer-review practices of psychological journals: the fate of published articles, submitted again. Behav. Brain Sci. 5, 187–195. doi: 10.1017/s0140525x00011183

Phillips, D. P. (1974). The influence of suggestion on suicide: substantive and theoretical implications of the Werther effect. Am. Sociol. Rev. 39, 340–354. doi: 10.2307/2094294

Priem, J., Taraborelli, D., Groth, P., and Neylon, C. (2010). Altmetrics: a manifesto.

Pulverer, B. (2010). A transparent black box. EMBO J. 29, 3891–3892. doi: 10.1038/emboj.2010.307. Available online at:

Rand, A. (1937). Anthem. London, England: MobileReference.

Rennie, D. (2003). “Editorial peer review: its development and rationale,” in Peer Review in Health Sciences, eds F. Godlee and T. Jefferson (London: BMJ Books), 3–13.

Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychol. Bull. 86, 638–641. doi: 10.1037//0033-2909.86.3.638

Sherman, L. W. (1993). Defiance, deterrence and irrelevance: a theory of the criminal sanction. J. Res. Crime Delinq. 30, 445–473. doi: 10.1177/0022427893030004006

Simmons, J. P., Nelson, L. D., and Simonsohn, U. (2011). False-positive psychology undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol. Sci. 22, 1359–1366. doi: 10.1177/0956797611417632

Simonsohn, U., Nelson, L. D., and Simmons, J. P. (2014). P-curve: a key to the file-drawer. J. Exp. Psychol. Gen. 143, 534–547. doi: 10.1037/a0033242

Skinner, B. F. (1972). Beyond Freedom and Dignity. New York: Bantam Books (p. 142).

Skinner, B. F., and Hayes, J. (1976). Walden Two. New York: Macmillan.

Smith, C. (2014). By the numbers: 98 amazing facebook user statistics. Available online at: Accessed on April, 6, 2014.

Taylor, C. (2013). Smartphone users check facebook 14 times a day. Available online at: Accessed on April, 6, 2014.

Thaler, R. H., and Sunstein, C. R. (2008). Nudge: Improving Decisions About Health, Wealth and Happiness. New Haven: Yale University Press.

Walsh, E., Rooney, M., Appleby, L., and Wilkinson, G. (2000). Open peer review: a randomised controlled trial. Br. J. Psychiatry 176, 47–51. doi: 10.1192/bjp.176.1.47

Walther, A., and Van den Bosch, J. J. (2012). FOSE: a framework for open science evaluation. Front. Comput. Neurosci. 6:32. doi: 10.3389/fncom.2012.00032

Wicherts, J. M., Kievit, R. A., Bakker, M., and Borsboom, D. (2012). Letting the daylight in: reviewing the reviewers and other ways to maximize transparency in science. Front. Comput. Neurosci. 6:20. doi: 10.3389/fncom.2012.00020

Yarkoni, T. (2012). Designing next-generation platforms for evaluating scientific output: what scientists can learn from the social web. Front. Comput. Neurosci. 6:72. doi: 10.3389/fncom.2012.00072

Zimmermann, J., Roebroeck, A., Uludag, K., Sack, A. T., Formisano, E., Jansma, B., et al. (2012). Network-based statistics for a community driven transparent publication process. Front. Comput. Neurosci. 6:11. doi: 10.3389/fncom.2012.00011

Keywords: open science, publication process, scientific communication, choice design

Citation: Buttliere BT (2014) Using science and psychology to improve the dissemination and evaluation of scientific work. Front. Comput. Neurosci. 8:82. doi: 10.3389/fncom.2014.00082

Received: 11 January 2014; Accepted: 12 July 2014;
Published online: 19 August 2014.

Edited by:

Diana Deca, Technical University of Munich, Germany

Reviewed by:

Daoyun Ji, Baylor College of Medicine, USA
Dwight Kravitz, National Institutes of Health, USA

Copyright © 2014 Buttliere. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Brett T. Buttliere, Department of Methodology and Statistics, Tilburg University, P2.206 Warandelaan 2, Tilburg, Noord-Brabant, Netherlands e-mail: