## About this Research Topic

For many years, there has been a growing appreciation of the importance of theory in ecology and evolution, where theory is understood as an organized complex of models. Functionally, science works by comparing models. Science progresses when old (possibly good) models are replaced with better models.

Classical error statistics doesn’t really compare multiple models simultaneously. In a Fisherian significance test, a single model is tested, and if it is rejected, one of many plausible alternatives is inferred. Neyman-Pearson tests appear to compare two models, but they are really just significance tests constructed along the axis most likely to reject the “Null” if the “Alternative” is true. Significance tests are calculated as if the null hypothesis were true. Philosophically, this is very awkward. George Box wrote “All models are wrong”. If so, what does a significance test get you? If you reject, you haven’t learned anything, as you knew the model was wrong a priori. If you fail to reject, it tells you only that you lacked sufficient sample size or specified an inappropriate effect size.

Along with classical statistics, Bayesian statistics also suffers from a true-model assumption. A subjective Bayes analysis is biased by the personal belief embodied in the prior distribution, while an objective Bayes analysis is influenced by the particular transformation in which the model is presented. Both approaches suffer from a cryptic true-model assumption: priors summing or integrating to 1 implies that the truth lies in the domain of the prior. Barnard insightfully said: “To speak of the probability of a hypothesis implies the possibility of an exhaustive enumeration of all possible hypotheses, which implies a degree of rigidity foreign to the true scientific spirit. We should always admit the possibility that our experimental results may be best accounted for by a hypothesis which never entered our own heads.” This apparently innocuous fact levies a serious philosophical burden on the Bayesian statistical paradigm.

The difficulties in statistics are not just philosophical, but very practical too. A large proportion of published conclusions are believed to be erroneous. The public distrusts science, and even scientists are having a “crisis of faith” in their methods. Hundreds of papers have been written severely criticizing either classical or Bayesian statistics. Only a small number of papers have asked: given the problems with both paradigms, how should we make inferences in this new theory- and model-centric view of science? Evidential statistics answers this question. Evidential statistics is an honest approach whose founding premise is to avoid the true-model assumption. The core inferential entity is the evidence function, a data-based estimate of the relative distance of two models to truth or reality. In classical error statistics, the strength of evidence is conceived of as synonymous with the error probability. In evidential statistics, the evidence and the error probabilities are distinct statistical entities, both of inferential interest.
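To make the idea of an evidence function concrete, here is a minimal, hypothetical sketch (not from this Research Topic): the log-likelihood ratio between two rival normal models serves as an evidence function, estimating the difference in Kullback-Leibler divergence of each model from the data-generating process. The data, models, and parameter values are all invented for illustration; note that the generating process need not match either candidate model.

```python
import math
import random

random.seed(1)
# Hypothetical data: generated by a process that matches neither candidate exactly
data = [random.gauss(0.3, 1.0) for _ in range(100)]

def normal_loglik(xs, mu, sigma=1.0):
    # Log-likelihood of the sample xs under a Normal(mu, sigma) model
    return sum(
        -0.5 * math.log(2 * math.pi * sigma ** 2)
        - (x - mu) ** 2 / (2 * sigma ** 2)
        for x in xs
    )

# Two rival models of the same data: Normal(0, 1) vs Normal(0.5, 1)
ll_a = normal_loglik(data, 0.0)
ll_b = normal_loglik(data, 0.5)

# Evidence function: the log-likelihood ratio. Positive values favor
# model B as relatively closer to the generating process; negative
# values favor model A. Neither model is assumed to be true.
evidence = ll_b - ll_a
print(f"log-evidence for model B over model A: {evidence:.2f}")
```

The sign and magnitude of `evidence` quantify relative support, which is the key contrast with a significance test: both models are on an equal footing, and neither plays the privileged role of a "null."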

As different as this all seems from classical and Bayesian statistics, most inference tools can be viewed evidentially in a single coherent framework, including model identification, model uncertainty estimation, parameter estimation, parameter uncertainty estimation, pre-data error control, post-data strength of evidence and the design of experiments. Despite all these advantages, the evidential statistics approach is largely unknown to working scientists.

The aim of this research topic is to rectify this widespread ignorance by informing working scientists of the utility and flexibility of evidential statistics. We will solicit both papers that convey basic concepts and papers that convey technical subtleties sufficient to conduct real scientific research, as well as practical advice that can be easily incorporated into the teaching of undergraduate and graduate courses. The topic will consist of a mix of new original research, reviews, commentaries and perspectives on topics related to evidential statistics (see article types). New statistical work is encouraged; nevertheless, all papers will need to spend significant effort explaining the goals, utility, and application of their methods to working scientists. To further this goal, collaboration between statisticians and more empirical scientists is also encouraged.

**Important Note**:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

