AUTHOR=Schnack, Hugo G.; Kahn, René S.
TITLE=Detecting Neuroimaging Biomarkers for Psychiatric Disorders: Sample Size Matters
JOURNAL=Frontiers in Psychiatry
VOLUME=7
YEAR=2016
URL=https://www.frontiersin.org/journals/psychiatry/articles/10.3389/fpsyt.2016.00050
DOI=10.3389/fpsyt.2016.00050
ISSN=1664-0640
ABSTRACT=Recently it was suggested that much larger cohorts are needed to prove the diagnostic value of neuroimaging biomarkers in psychiatry. While diagnostic accuracy for schizophrenia has been shown to increase with the number of subjects (N) within a sample, the relationship between N and accuracy differs markedly between studies. Using data from a meta-analysis of machine learning in imaging schizophrenia, we found that while low-N studies can reach accuracies of 90% and higher, above N/2 = 50 the maximum accuracy achieved drops steadily, falling below 70% for N/2 > 150. We investigate the role N plays in the wide variability of accuracy results (63-97%). We hypothesize that the underlying cause of the decrease in accuracy with increasing N is sample heterogeneity: while smaller studies can more easily include a homogeneous group of subjects (strict inclusion criteria are easily met; subjects live close to the study site), larger studies inevitably need to relax the criteria and recruit from larger geographic areas. A schizophrenia prediction model trained on a heterogeneous group of patients, with a presumably heterogeneous pattern of structural or functional brain changes, cannot capture the whole variety of changes and is thus limited to patterns shared by most patients. In addition to heterogeneity, we investigate other factors influencing accuracy and introduce a machine learning effect size. We derive a simple model of how the different factors, such as sample heterogeneity, determine this effect size, and we explain the variation in prediction accuracies found in the literature, both in cross-validation and in independent-sample testing.
From this we argue that smaller-N studies may reach high prediction accuracy at the cost of lower generalizability to other samples, whereas higher-N studies have more generalization power but at the cost of lower accuracy. In conclusion, when comparing results from different machine learning studies, the sample sizes should be taken into account. To assess the generalizability of the models, the prediction models should be validated in independent samples. Predicting more complex measures such as outcome, which are expected to reflect an underlying pattern of more subtle brain abnormalities, will require large (multicenter) studies.
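The trade-off the abstract describes can be sketched with a toy simulation. This is not the authors' model: all choices below (50 synthetic "imaging features", a fixed shared effect along one pattern, per-subject deviations scaled by a `heterogeneity` parameter, a nearest-mean linear classifier, and the specific sample sizes) are illustrative assumptions. The sketch contrasts a small homogeneous cohort with a large heterogeneous one, reporting within-sample (training) accuracy and accuracy on an independent heterogeneous sample standing in for "other sites".

```python
import numpy as np

P = 50  # number of synthetic imaging features (hypothetical)

def make_sample(n_per_group, heterogeneity, rng):
    """Controls ~ N(0, I). Patients get a shared shift along a fixed
    'abnormality pattern' plus a subject-specific random deviation
    whose size is set by `heterogeneity`."""
    base = np.ones(P) / np.sqrt(P)                      # shared pattern
    controls = rng.standard_normal((n_per_group, P))
    patients = (rng.standard_normal((n_per_group, P))
                + 1.5 * base                            # fixed shared effect
                + heterogeneity * rng.standard_normal((n_per_group, P)))
    X = np.vstack([controls, patients])
    y = np.r_[np.zeros(n_per_group), np.ones(n_per_group)]
    return X, y

def fit_mean_diff(X, y):
    """Nearest-mean linear classifier: project onto the difference
    of class means, threshold halfway between them."""
    m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
    w = m1 - m0
    b = -0.5 * w @ (m0 + m1)
    return w, b

def accuracy(w, b, X, y):
    return float(((X @ w + b > 0) == (y == 1)).mean())

def experiment(n_per_group, heterogeneity, rng, reps=20):
    """Average within-sample (training) accuracy and accuracy on an
    independent, heterogeneous test sample over `reps` repetitions."""
    acc_in, acc_out = [], []
    for _ in range(reps):
        X, y = make_sample(n_per_group, heterogeneity, rng)
        w, b = fit_mean_diff(X, y)
        Xt, yt = make_sample(200, 2.0, rng)             # heterogeneous test set
        acc_in.append(accuracy(w, b, X, y))
        acc_out.append(accuracy(w, b, Xt, yt))
    return float(np.mean(acc_in)), float(np.mean(acc_out))

rng = np.random.default_rng(0)
small_in, small_out = experiment(25, heterogeneity=0.5, rng=rng)   # small, homogeneous
large_in, large_out = experiment(150, heterogeneity=2.0, rng=rng)  # large, heterogeneous

print(f"small/homogeneous  : within={small_in:.2f}  independent={small_out:.2f}")
print(f"large/heterogeneous: within={large_in:.2f}  independent={large_out:.2f}")
```

Under these assumptions the small homogeneous cohort yields higher within-sample accuracy, but its accuracy drops more sharply when the model is carried to the independent heterogeneous sample, mirroring the abstract's argument that low-N accuracy comes at the cost of generalizability.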