AUTHOR=Li Yi-Fan, Ying Haojiang
TITLE=Disrupted visual input unveils the computational details of artificial neural networks for face perception
JOURNAL=Frontiers in Computational Neuroscience
VOLUME=Volume 16 - 2022
YEAR=2022
URL=https://www.frontiersin.org/journals/computational-neuroscience/articles/10.3389/fncom.2022.1054421
DOI=10.3389/fncom.2022.1054421
ISSN=1662-5188
ABSTRACT=Deep Convolutional Neural Networks (DCNNs), with their strong performance, have attracted the attention of researchers from many disciplines. However, researchers still lack a thorough understanding of their specific mechanisms and principles. In this study, we used psychophysical methods to examine the functional performance of DCNNs and to unveil their computational details. We trained and tested several typical DCNNs (AlexNet, VGG11, VGG13, VGG16, DenseNet, MobileNet, and EfficientNet) on a face ethnicity categorization task. We measured the performance of the DCNNs by testing them with original and lossy visual inputs (various kinds of image occlusion) and compared their performance with that of human participants. Moreover, the class activation map (CAM) method allowed us to visualize and compare the foci of the "attention" of these DCNNs. The results suggested that VGG13 performed best: (1) its performance closely resembled that of human participants in terms of psychophysical measurements, (2) it utilized areas of the visual input similar to those used by humans, and (3) its performance was the most consistent across the various kinds of input impairment. This study also offers a new paradigm for studying and developing DCNNs using human perception as a functional benchmark.
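
The abstract's pipeline (occluding a face image, feeding it to a DCNN, and visualizing where the network "attends" via a class activation map) can be illustrated with a minimal PyTorch sketch. This is not the authors' code: it uses an ImageNet-pretrained VGG13 as a stand-in for their fine-tuned ethnicity classifier, a hypothetical input file "face.jpg", and a Grad-CAM-style computation as a common stand-in for the CAM method the paper names.

import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# ImageNet-pretrained VGG13; the paper fine-tuned on face ethnicity instead.
model = models.vgg13(weights="IMAGENET1K_V1").eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
img = preprocess(Image.open("face.jpg").convert("RGB")).unsqueeze(0)

# Occlude a central patch, loosely mimicking the paper's lossy inputs.
occluded = img.clone()
occluded[:, :, 80:144, 80:144] = 0.0

# Hook the last conv layer to capture activations and gradients (Grad-CAM).
store = {}
last_conv = [m for m in model.features if isinstance(m, torch.nn.Conv2d)][-1]
last_conv.register_forward_hook(lambda m, i, o: store.update(act=o))
last_conv.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0]))

logits = model(occluded)
cls = logits.argmax(dim=1).item()
logits[0, cls].backward()

weights = store["grad"].mean(dim=(2, 3), keepdim=True)    # channel importance
cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=(224, 224), mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]

The resulting heat map marks the image regions that drove the classification; comparing such maps against the regions humans rely on, across several occlusion types, is the kind of functional benchmark the paper proposes.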