About this Research Topic

Submission closed.

Reproducibility is critical to scientific inquiry, which relies on the independent verification of results. Progress in science also requires that we determine whether conclusions were obtained through a rigorous process and whether results are robust to small changes in conditions. Computational approaches present unique challenges for these requirements. As models and data analysis routines become more complex, verification that is completely independent of the original implementation may not be practical, since re-implementation often requires significant time and resources. Model complexity also makes it more difficult to share all details of a model, hindering transparency.
Although true reproducibility should be the goal whenever possible, resources and tools that promote the replication of computational results using the original code are extremely valuable to the community. Platforms such as open-source code-sharing sites and model databases increase the impact of models and other computational approaches by enabling re-use, further development, and improvement. Simulator-independent model descriptions provide a further step toward reproducibility and transparency. Despite this progress, best practices for the verification of computational neuroscience research have not yet been established.
Increasing the impact of modeling across neuroscience also requires better descriptions of model assumptions, constraints, and validation. For data-driven models, better reporting is needed on which data were used to constrain model development, on the details of the data-fitting process, and on the quantitative evaluation of how well the emergent properties of the model dynamics match empirical data. When model development is driven by theoretical or conceptual constraints, modelers must carefully describe their assumptions and their process for model development and validation in order to improve transparency and rigor.
With this Research Topic, we aim to describe the challenges of reproducibility and rigor across areas of quantitative neuroscience, as well as efforts to address them. We include descriptions of resources and platforms that promote the replication and re-use of computational models, along with resources that aid in validating models against empirical studies. We also include examples that successfully address the verification of computational neuroscience research and may contribute to establishing best practices.

Keywords: Reproducibility, Model sharing, Interoperability, Validation, Replicability


Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

