ORIGINAL RESEARCH article
Front. Neurosci.
Sec. Brain Imaging Methods
Volume 19 - 2025 | doi: 10.3389/fnins.2025.1606801
This article is part of the Research Topic: AI-enabled processing, integrating, and understanding neuroimages and behaviors.
Neural decoding of Aristotle tactile illusion using deep learning-based fMRI classification
Provisionally accepted
- 1Ewha Womans University, Seoul, Republic of Korea
- 2Brown University, Providence, Rhode Island, United States
- 3Sungkyunkwan University, Jongno-gu, Seoul, Republic of Korea
- 4Ulsan National Institute of Science and Technology, Eonyang, Ulsan, Republic of Korea
Background: The Aristotle illusion is a well-known tactile illusion that causes a single object to be perceived as two. Electroencephalography (EEG) has been used to investigate the neural correlates of the Aristotle illusion, but its low spatial resolution limits localization. This study aimed to identify the brain regions involved in the Aristotle illusion using functional magnetic resonance imaging (fMRI) and deep learning-based analysis of fMRI data.
Methods: While three types of tactile stimuli (Aristotle, Reverse, Asynchronous) were applied to the fingers of thirty participants, we collected fMRI data and recorded the number of stimuli each participant perceived. Four convolutional neural network (CNN) models were trained on perception-based classification tasks (occurrence of the Aristotle illusion vs. the Reverse illusion; occurrence vs. absence of the Reverse illusion) and stimulus-based classification tasks (Aristotle vs. Reverse, Reverse vs. Asynchronous, and Aristotle vs. Asynchronous).
Results: The simple fully convolutional network (SFCN) achieved the highest classification accuracies: 68.4% for the occurrence of the Aristotle illusion vs. the Reverse illusion, and 80.1% for the occurrence vs. absence of the Reverse illusion. For the stimulus-based classification tasks, all CNN models yielded accuracies around 50%, failing to distinguish among the three types of applied stimuli. Gradient-weighted class activation mapping (Grad-CAM) analysis revealed salient brain regions of interest (ROIs) for the perception-based classification tasks, including the somatosensory cortex and parietal regions.
Discussion: Our findings demonstrate that perception-driven neural responses are classifiable using fMRI-based CNN models. Saliency analysis of the trained CNNs reveals the involvement of the somatosensory cortex and parietal regions in the classification decisions, consistent with previous research. Other salient ROIs include the orbitofrontal cortex, middle temporal pole, supplementary motor area, and middle cingulate cortex.
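As an aside on the saliency method named in the abstract: the core Grad-CAM computation reduces to weighting each feature map of a convolutional layer by the spatially averaged gradient of the class score, summing, and applying a ReLU. The minimal NumPy sketch below illustrates only that weighting step; the array names (`acts`, `grads`) and shapes are illustrative assumptions, and in practice both would be extracted from the trained CNN by a deep-learning framework, not computed by hand.

```python
import numpy as np

def grad_cam(acts: np.ndarray, grads: np.ndarray) -> np.ndarray:
    """Grad-CAM weighting step (illustrative sketch).

    acts:  feature maps of a conv layer, shape (channels, x, y, z)
    grads: gradients of the class score w.r.t. those maps, same shape
    Returns a saliency volume with the feature maps' spatial shape.
    """
    # Channel importance: global-average-pool the gradients over space.
    weights = grads.mean(axis=(1, 2, 3))                 # shape (channels,)
    # Weighted sum of activation maps over the channel axis.
    cam = np.tensordot(weights, acts, axes=([0], [0]))   # shape (x, y, z)
    # ReLU keeps only features with positive influence on the class.
    cam = np.maximum(cam, 0.0)
    # Normalize to [0, 1] for overlay on an anatomical image.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: 4 channels over a 3x3x3 feature grid.
rng = np.random.default_rng(0)
acts = rng.standard_normal((4, 3, 3, 3))
grads = rng.standard_normal((4, 3, 3, 3))
cam = grad_cam(acts, grads)
print(cam.shape)  # (3, 3, 3)
```

In a real pipeline the resulting volume is upsampled to the input fMRI resolution and intersected with an anatomical atlas to name the salient ROIs, as the authors do here.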
Keywords: somatosensory, tactile illusion, fMRI, deep learning, brain mapping
Received: 06 Apr 2025; Accepted: 30 May 2025.
Copyright: © 2025 Lee, Kim, Park, Kim and Shin. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Sung-Phil Kim, Ulsan National Institute of Science and Technology, Eonyang, 689-798, Ulsan, Republic of Korea
Taehoon Shin, Ewha Womans University, Seoul, Republic of Korea
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.