Research Topic

Reliability, Robustness, and Replicability in Human Neuroscience Studies

About this Research Topic

Reliability, robustness, and replicability are three pillars of the scientific method. Establishing the reliability of a quantitative assessment method should be the first step in any experimental design: it quantifies the measurement error in the data, which can then be contrasted against experimentally relevant effect sizes to determine the sample size needed for a sufficiently powered study. Robustness refers to arriving at the same conclusions from the same data using different analysis methodologies. Finally, replication is achieved when the same findings are obtained from new data. These three features are fundamental to establishing new evidence and advancing current theories.
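One classical way to see how measurement error bears on effect sizes is Spearman's correction for attenuation: an unreliable measure systematically shrinks observed correlations relative to the latent ones. The sketch below uses only the Python standard library; the numbers are purely illustrative, not drawn from any particular study.

```python
from math import sqrt

def disattenuate(r_observed: float, rel_x: float, rel_y: float) -> float:
    """Spearman's correction for attenuation: estimate the latent
    correlation between two constructs from the observed correlation
    and the (e.g. test-retest) reliabilities of the two measures."""
    return r_observed / sqrt(rel_x * rel_y)

# Hypothetical numbers: an observed brain-behaviour correlation of
# 0.30, measured with reliabilities of 0.70 and 0.80, corresponds
# to a latent correlation of about 0.40.
print(round(disattenuate(0.30, 0.70, 0.80), 2))
```

The same arithmetic run in reverse shows why reliability matters for power: a true effect is observed attenuated, so an unreliable measure demands a larger sample to detect it.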

A large proportion of studies in human neuroscience are carried out with insufficient statistical power, which leads to false negatives, inflated effect size estimates, and lower positive predictive values. Power depends mainly on the effect size and the variability of the assessment measure, but these are often unknown or poorly estimated. Furthermore, even if a particular effect is both statistically significant and experimentally relevant, it still needs to be established whether the finding depends on the specific way in which the data were analyzed, and whether it can be reproduced and generalized to other circumstances. In this way, potential problems with the experimental design and analysis, for example excessive researcher degrees of freedom or p-hacking, can be ruled out.
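The dependence of power on effect size can be made concrete with the standard normal-approximation formula for a two-sided, two-sample comparison of means. The sketch below is a textbook approximation, not a replacement for a proper power analysis; it uses only the Python standard library.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per group for a two-sided, two-sample
    comparison of means, via the normal approximation:
        n ~ 2 * ((z_{1 - alpha/2} + z_{power}) / d) ** 2
    where d is the standardized effect size (Cohen's d)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# A 'medium' effect (d = 0.5) at alpha = 0.05 and 80% power needs
# roughly 63 participants per group under this approximation;
# halving the effect size roughly quadruples the requirement.
print(n_per_group(0.5))
print(n_per_group(0.25))
```

Because the required n scales with 1/d², an overestimated effect size (for example, one inflated by a previously underpowered study) leads directly to an underpowered replication attempt.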

This Research Topic focuses on evaluating the reliability, robustness, and replicability of existing or new experimental paradigms within all areas of human neuroscience, with particular interest in brain imaging and in cognitive, sensory, and motor neuroscience. We seek Original Research, Review, Perspective, Clinical Trial, and Opinion articles that cover, but are not limited to, the following topics:

• Reliability of quantitative assessment methods, including test-retest reliability, method comparison, and intra- and inter-observer agreement studies.
• Robustness of experimental findings, including reanalysis and interpretation of existing data and multiverse analysis.
• Replication of existing human neuroscience studies, encouraging multi-laboratory collaborations.

Although it is not a prerequisite for submission, we encourage authors to follow open science guidelines, including pre-registration of hypotheses and experimental designs and open access to data and analysis scripts.


Keywords: Replication, Open science, Pre-registration, Variability, Effect size


Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.




Submission Deadlines

Manuscript: 30 September 2021

Participating Journals

Manuscripts can be submitted to this Research Topic via the following journals:

