AUTHOR=Baker Nicholas, Garrigan Patrick, Phillips Austin, Kellman Philip J.
TITLE=Configural relations in humans and deep convolutional neural networks
JOURNAL=Frontiers in Artificial Intelligence
VOLUME=Volume 5 - 2022
YEAR=2023
URL=https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2022.961595
DOI=10.3389/frai.2022.961595
ISSN=2624-8212
ABSTRACT=Deep convolutional neural networks (DCNNs) have attracted considerable interest as useful devices and as possible windows into understanding perception and cognition in biological systems. In earlier work, we showed that DCNNs differ dramatically from human perceivers in that they have no sensitivity to global object shape. Here, we investigated whether those findings are symptomatic of broader limitations of DCNNs regarding use of relations. We tested learning and generalization of DCNNs (AlexNet and ResNet-50) for several relations. One involved classifying two shapes as same or different. Another involved enclosure. Every display contained a closed figure among contour noise fragments and one dot; correct responding depended on whether the dot was inside or outside the figure. The third relation we tested involved a classification that depended on which of two polygons had more sides. We used both restricted and unrestricted transfer learning for DCNNs that had been trained on the ImageNet database. For same-different with a constant set of 20 amoeboid shapes that varied in position and size, there was little restricted transfer learning (54.7% accuracy) and somewhat better unrestricted transfer learning (82.2%). Generalization tests with new shapes showed near chance performance. Results for enclosure were at chance for restricted transfer learning and somewhat better for unrestricted (74%). Generalization with two new kinds of shapes showed above-chance performance, but follow-up studies indicated that the networks did not access the enclosure relation in their responses. For the relation of more or fewer sides of polygons, DCNNs showed successful learning with polygons having 3-5 sides under unrestricted transfer learning, but showed chance performance in generalization tests to polygons having 6-10 sides. Experiments with human observers showed learning from relatively few examples of all of the relations tested and complete generalization of relational learning to new stimuli. These results using several different relations suggest that DCNNs have crucial limitations that derive from their lack of computations involving abstraction and relational processing of the sort that are fundamental in human perception.
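The abstract contrasts restricted transfer learning (only a new classification head is trained on the relational task) with unrestricted transfer learning (all pretrained weights are fine-tuned). The following is a minimal sketch of that distinction, assuming PyTorch/torchvision and an ImageNet-pretrained ResNet-50; the two-class labels, data loader, epochs, and learning rate are illustrative placeholders, not the authors' actual training setup.

```python
# Sketch (not the authors' code): restricted vs. unrestricted transfer
# learning from an ImageNet-pretrained ResNet-50 to a two-class relational
# task such as same vs. different. Hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import models


def build_model(restricted: bool) -> nn.Module:
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    if restricted:
        # Restricted transfer learning: freeze all pretrained weights so that
        # only the newly added classification head receives gradient updates.
        for param in model.parameters():
            param.requires_grad = False
    # Replace the 1000-way ImageNet head with a 2-way head (e.g., same / different).
    model.fc = nn.Linear(model.fc.in_features, 2)
    return model


def train(model: nn.Module, loader, epochs: int = 10, lr: float = 1e-4) -> nn.Module:
    # Unrestricted transfer learning updates every parameter; in the
    # restricted case only model.fc is optimized, because all other
    # parameters have requires_grad=False.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device)
    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=lr
    )
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:  # loader yields (image batch, 0/1 label batch)
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```

Usage would be, for example, train(build_model(restricted=True), loader) for the restricted condition and train(build_model(restricted=False), loader) for the unrestricted condition; generalization would then be assessed by evaluating the trained model on displays built from shapes not seen during training.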