%A Altoè, Gianmarco
%A Bertoldo, Giulia
%A Zandonella Callegher, Claudio
%A Toffalini, Enrico
%A Calcagnì, Antonio
%A Finos, Livio
%A Pastore, Massimiliano
%D 2020
%J Frontiers in Psychology
%G English
%K R functions,statistical reasoning,statistical inference,psychological research,power,effect size,Type M and Type S errors,prospective and retrospective design analysis
%R 10.3389/fpsyg.2019.02893
%8 2020-January-14
%9 Original Research
%! Design analysis in psychological research
%T Enhancing Statistical Inference in Psychological Research via Prospective and Retrospective Design Analysis
%U https://www.frontiersin.org/article/10.3389/fpsyg.2019.02893
%V 10
%0 JOURNAL ARTICLE
%@ 1664-1078
%X In the past two decades, psychological science has experienced an unprecedented replicability crisis, which has uncovered several issues. Among others, the use and misuse of statistical inference plays a key role in this crisis. Indeed, statistical inference is too often viewed as an isolated procedure limited to the analysis of data that have already been collected. Instead, statistical reasoning is necessary both at the planning stage and when interpreting the results of a research project. Based on these considerations, we build on and further develop an idea proposed by Gelman and Carlin (2014) termed “prospective and retrospective design analysis.” Rather than focusing only on the statistical significance of a result and on the classical control of type I and type II errors, a comprehensive design analysis involves reasoning about what can be considered a plausible effect size. Furthermore, it introduces two relevant inferential risks: the exaggeration ratio or Type M error (i.e., the predictable average overestimation of an effect that emerges as statistically significant) and the sign error or Type S error (i.e., the risk that a statistically significant effect is estimated in the wrong direction). Another important aspect of design analysis is that it can be usefully carried out both in the planning phase of a study and for the evaluation of studies that have already been conducted, thus increasing researchers' awareness during all phases of a research project. To illustrate the benefits of a design analysis to the widest possible audience, we use a familiar example in psychology where the researcher is interested in analyzing the differences between two independent groups considering Cohen's d as an effect size measure. We examine the case in which the plausible effect size is formalized as a single value, and we propose a method in which uncertainty concerning the magnitude of the effect is formalized via probability distributions. Through several examples and an application to a real case study, we show that, even though a design analysis requires significant effort, it has the potential to contribute to planning more robust and replicable studies. Finally, future developments in the Bayesian framework are discussed.