%A Dennis, Brian
%A Ponciano, José Miguel
%A Taper, Mark L.
%A Lele, Subhash R.
%D 2019
%J Frontiers in Ecology and Evolution
%G English
%K Model misspecification, Evidential statistics, evidence, error rates in model selection, Kullback-Leibler divergence, hypothesis testing, Akaike's information criterion (AIC), Model selection
%R 10.3389/fevo.2019.00372
%N 372
%8 2019-October-21
%9 Original Research
%! Model misspecification and statistical inference
%T Errors in Statistical Inference Under Model Misspecification: Evidence, Hypothesis Testing, and AIC
%U https://www.frontiersin.org/article/10.3389/fevo.2019.00372
%V 7
%0 JOURNAL ARTICLE
%@ 2296-701X
%X The methods for making statistical inferences in scientific analysis have diversified even within the frequentist branch of statistics, but comparison has been elusive. We approximate analytically and numerically the performance of Neyman-Pearson hypothesis testing, Fisher significance testing, information criteria, and evidential statistics (Royall, 1997). This last approach is implemented in the form of evidence functions: statistics for comparing two models by estimating, based on data, their relative distance to the generating process (i.e., truth) (Lele, 2004). A consequence of this definition is the salient property that the probabilities of misleading or weak evidence, error probabilities analogous to Type 1 and Type 2 errors in hypothesis testing, all approach 0 as sample size increases. Our comparison of these approaches focuses primarily on the frequency with which errors are made, both when models are correctly specified and when they are misspecified, but also considers ease of interpretation. The error rates in evidential analysis all decrease to 0 as sample size increases, even under model misspecification. Neyman-Pearson testing, on the other hand, exhibits great difficulties under misspecification. The real Type 1 and Type 2 error rates can be less than, equal to, or greater than the nominal rates, depending on the nature of the model misspecification. Under some reasonable circumstances, the probability of Type 1 error is an increasing function of sample size that can even approach 1! In contrast, under model misspecification an evidential analysis retains the desirable properties of always having a greater probability of selecting the best model over an inferior one and of having the probability of selecting the best model increase monotonically with sample size.
We show that the evidence function concept fulfills the seeming objectives of model selection in ecology, in both a statistical and a scientific sense, and that evidence functions are intuitive and easily grasped. We find that consistent information criteria are evidence functions, but that the MSE-minimizing (or efficient) information criteria (e.g., AIC, AICc, TIC) are not. The error properties of the MSE-minimizing criteria switch between those of evidence functions and those of Neyman-Pearson tests, depending on the models being compared.